id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.17348 | Intrinsic Biologically Plausible Adversarial Robustness | Artificial Neural Networks (ANNs) trained with Backpropagation (BP) excel in
different daily tasks but have a dangerous vulnerability: inputs with small
targeted perturbations, also known as adversarial samples, can drastically
disrupt their performance. Adversarial training, a technique in which the
training dataset is augmented with exemplary adversarial samples, is proven to
mitigate this problem but comes at a high computational cost. In contrast to
ANNs, humans are not susceptible to misclassifying these same adversarial
samples. Thus, one can postulate that biologically-plausible trained ANNs might
be more robust against adversarial attacks. In this work, we chose the
biologically-plausible learning algorithm Present the Error to Perturb the
Input To modulate Activity (PEPITA) as a case study and investigated this
question through a comparative analysis with BP-trained ANNs on various
computer vision tasks. We observe that PEPITA has a higher intrinsic
adversarial robustness and, when adversarially trained, also has a more
favorable natural-vs-adversarial performance trade-off. In particular, for the
same natural accuracies on the MNIST task, PEPITA's adversarial accuracies
decrease on average only by 0.26% while BP's decrease by 8.05%. | Matilde Tristany Farinha, Thomas Ortner, Giorgia Dellaferrera, Benjamin Grewe, Angeliki Pantazi | 2023-09-29T15:55:17Z | http://arxiv.org/abs/2309.17348v5 | # Efficient Biologically Plausible Adversarial Training
###### Abstract
Artificial Neural Networks (ANNs) trained with Backpropagation (BP) show astounding performance and are increasingly often used in performing our daily life tasks. However, ANNs are highly vulnerable to adversarial attacks, which alter inputs with small targeted perturbations that drastically disrupt the models' performance. The most effective method to make ANNs robust against these attacks is adversarial training, in which the training dataset is augmented with exemplary adversarial samples. Unfortunately, this approach has the drawback of increased training complexity since generating adversarial samples is very computationally demanding. In contrast to ANNs, humans are not susceptible to adversarial attacks. Therefore, in this work, we investigate whether biologically-plausible learning algorithms are more robust against adversarial attacks than BP. In particular, we present an extensive comparative analysis of the adversarial robustness of BP and _Present the Error to Perturb the Input To modulate Activity_ (PEPITA), a recently proposed biologically-plausible learning algorithm, on various computer vision tasks. We observe that PEPITA has higher intrinsic adversarial robustness and, with adversarial training, has a more favourable natural-vs-adversarial performance trade-off as, for the same natural accuracies, PEPITA's adversarial accuracies decrease on average by \(0.26\%\) and BP's by \(8.05\%\).
## 1 Introduction
State-of-the-art ANNs trained with Backpropagation (BP) [1; 2] are vulnerable to adversarial attacks [3]. Adversarial attacks produce adversarial samples, a concept first described by Szegedy et al. [4], which are input samples with small perturbations that can trick a trained ANN into misclassification. Although this phenomenon was first observed in the context of image classification [4; 5], it has since been observed in several other tasks such as natural language processing [6; 7], audio processing [8; 9], and deep reinforcement learning [10; 11]. Nowadays, making real-world decisions based on the suggestions provided by ANNs has become an integral part of our daily lives [12]. Therefore, these models' vulnerability to adversarial attacks severely threatens the safe deployment of artificial intelligence in everyday-life applications [13]. For example, in real-world autonomous driving, adversarial attacks have been successful in deceiving road sign recognition systems [14]. Researchers have proposed several solutions to address this problem, and adversarial training emerged as the state-of-the-art approach [15]. In adversarial training, the original dataset, consisting of pairs of input samples with their respective ground-truth labels, is augmented with adversarial data, where the original ground-truth labels are paired with adversarial samples. This additional training data allows the model to learn to correctly classify adversarial samples as well [5; 3]. Although adversarial training increases the networks' robustness to adversarial attacks, generating numerous training adversarial samples is computationally costly. To reduce this additional computational burden, researchers have developed new methods for generating adversarial samples more efficiently [16; 17; 18; 19]. For example, weak adversarial samples created with the Fast Gradient Sign Method (FGSM), which are easy to compute, are used for fast adversarial training [5]. However, if stronger computationally-heavy adversarial attacks, such as the Projected Gradient Descent (PGD) [20], are used to attack a model trained with
fast adversarial training, overfitting to the classification of FGSM adversarial samples can occur. In this case, the model trained with fast adversarial training can correctly classify FGSM adversarial samples, but its performance drops significantly (or to zero in the case of "catastrophic overfitting") for PGD adversarial samples [21]. Several adjustments have been proposed [22; 21; 23; 24] to circumvent this problem and make fast adversarial training effective, yet it remains an active area of research. Another caveat to consider when using adversarial training is the trade-off between natural performance (classification accuracy of unperturbed samples) and adversarial performance (classification accuracy of perturbed samples) [25; 26; 27]. This natural-vs-adversarial performance trade-off is a consequence of the fact that while naturally trained models focus on highly predictive features that may not be robust to adversarial attacks, adversarially trained models instead select for robust features that may not be highly predictive [18].
While adversarial attacks can easily trick ANNs into misclassification, they appear ineffective for humans [28]. BP's learning algorithm differs drastically from biological learning mechanisms [29; 30; 31] and, given that humans are not vulnerable to adversarial attacks, a fundamental research question is whether biologically-plausible learning algorithms are more robust to adversarial attacks. Researchers have made a significant effort in using the known learning principles of the brain to develop biologically-inspired algorithms as alternatives to BP [32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. Thus, we investigate in detail for the first time whether biologically-inspired algorithms are robust against adversarial attacks. In this work, we chose _Present the Error to Perturb the Input To modulate Activity_ (PEPITA), a recently proposed biologically-plausible learning algorithm [41], as a case study. In particular, we compare BP and PEPITA's learning algorithms in the following aspects:
* their intrinsic adversarial robustness (i.e., when trained solely on natural samples),
* their natural-vs-adversarial performance trade-off when trained with adversarial training,
* and their adversarial robustness against strong adversarial attacks when trained with weak adversarial samples (i.e., quality of fast adversarial training).
With this comparison, we open the door to drawing inspiration from biologically-plausible learning algorithms to develop more adversarially robust models.
## 2 Background - PEPITA
PEPITA is a learning algorithm developed as a biologically-inspired alternative to BP [41]. Its core difference from BP is that it does not require a separate backward pass to compute the gradients used to update the trainable parameters. Instead, a second forward pass is introduced (see Figure 1). In BP, the network processes the inputs \(\mathbf{x}\) with one forward pass (indicated with black arrows) to produce the outputs \(\mathbf{h}_{L}\), which are then compared to the target outputs \(\mathbf{y}^{*}\) through a loss function. The error signal \(\mathbf{e}\) computed by the loss function is then backpropagated through the entire network and used to train its parameters (indicated with red arrows). In PEPITA, the first forward pass is identical to BP. However, unlike BP, PEPITA feeds the error signals only to the softmax layer (directly) and to the input layer via a fixed random feedback matrix, \(F\). This modulatory feedback is then added to the original input \(\mathbf{x}\), producing the modulated inputs \(\mathbf{x}+F\mathbf{e}\) that are processed in the second forward pass (illustrated with orange arrows). The difference between the activations of the neurons in the first and second forward passes is then used to train the parameters of the network. This procedure sidesteps the biologically-implausible requirement of BP to back-propagate gradient information through the network layers, allowing the training of the synaptic weights to be based on spatially local information with a two-factor Hebbian-like learning rule. Therefore, while BP uses exact gradients for learning, that is, the exact derivative of the loss function with respect to its trainable parameters, PEPITA uses a very different learning mechanism that leads to approximations of BP's exact gradients. Similarly to BP, FGSM and PGD adversarial attacks rely on using the exact derivatives of the loss function to perturb the input samples in the most harmful way. As PEPITA-trained models do not use these exact derivatives for learning, they form excellent candidates to be explored in the context of adversarial robustness.
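To make the two-forward-pass mechanics concrete, the following is a minimal NumPy sketch of one PEPITA update for a single-hidden-layer network as in Figure 1. The shapes, learning rate, and loss details are illustrative assumptions rather than the authors' reference implementation (which omits, e.g., dropout and the bias terms discussed below).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def pepita_step(x, y_star, W1, W2, F, lr=0.01):
    """One illustrative PEPITA update (single hidden layer, ReLU + softmax)."""
    # First forward pass on the clean input x.
    h1 = np.maximum(0.0, W1 @ x)       # hidden activations
    h2 = softmax(W2 @ h1)              # network output h_L
    e = h2 - y_star                    # error signal from the loss

    # Second forward pass on the error-modulated input x + F e.
    x_mod = x + F @ e
    h1_mod = np.maximum(0.0, W1 @ x_mod)

    # Hebbian-like updates from differences between the two passes.
    W1 -= lr * np.outer(h1 - h1_mod, x_mod)  # hidden layer: (h_1 - h_1^mod)(x^mod)^T
    W2 -= lr * np.outer(e, h1_mod)           # output layer receives e directly
    return W1, W2
```

Note how no gradient information is propagated backwards: both updates use only locally available pre- and post-synaptic quantities plus the globally broadcast error.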
## 3 Results
### Model training details
For our comparative study, we use four benchmark computer vision datasets: MNIST [42], Fashion-MNIST [43], CIFAR-10 and CIFAR-100 [44]. For both BP and PEPITA, we used the same network
architectures and training schemes as described by [41] except for introducing a bias parameter, which improves performance across tasks. The learning rule for this bias is similar to the one for the synaptic weights, but the pre-synaptic activation is fixed to one, i.e., \(\mathbf{h}_{i-1}^{mod}\coloneqq\mathbf{1}\). Similarly to the update rule for the synaptic weights (see Algorithm 1), the bias update rule can be written as \(\Delta\mathbf{b}_{i}=(\mathbf{h}_{i}-\mathbf{h}_{i}^{mod})\) for \(i<L\) and \(\Delta\mathbf{b}_{L}=\mathbf{e}\). The network architecture consists of a single fully connected hidden layer with 1024 ReLU neurons and a softmax output layer (as represented in Figure 1). We used the mean-squared-error loss, trained the network for \(100\) epochs with early stopping, and optimized with momentum Stochastic Gradient Descent (SGD) [45]. Furthermore, we used a mini-batch size of 64, neuronal dropout of 10%, learning rate decay at epochs 60 and 90 with a rate of 0.1, and the He uniform initialization [46] with the feedback matrix \(F\) initialization scaled by \(0.05\). 1
Footnote 1: PyTorch implementation of all methods will be available at a public repository.
We optimized the learning rate hyperparameter through a grid search over 50 different values, and we defined the best-performing model as the model with the best natural accuracy on the validation dataset. We chose this model selection criterion because, in real-world applications, the networks' natural performance is most important to the user, and adversarial samples are outside of the norm. Thus, unless stated otherwise, we do not select the models based on the best adversarial validation accuracy, as we found this significantly worsens the natural performance of the model. The values reported throughout this section are the mean \(\pm\) standard deviation of the test accuracy for \(5\) random seeds. For adversarial training, we used the open-source library _advertorch.attacks_ [47] for the adversarial attacks, which follows the original implementations of FGSM and PGD, as introduced in [5] and [20], respectively. We defined an attack step size of \(0.1\) to create the FGSM and PGD adversarial samples and used \(40\) iterations for PGD. Note that the maximum and minimum pixel values of the adversarial images are the same as for the original natural images.
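For reference, FGSM and PGD samples of the kind used here can be generated with advertorch roughly as follows. The model and data tensors are placeholders, and the inner PGD step size is an assumption; advertorch's default cross-entropy loss is used rather than the MSE loss of our training setup.

```python
import torch
import torch.nn as nn
from advertorch.attacks import GradientSignAttack, LinfPGDAttack

# Placeholder classifier; any nn.Module mapping images to logits works.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1024),
                      nn.ReLU(), nn.Linear(1024, 10))

# FGSM: a single gradient-sign step (the "weak", cheap attack).
fgsm = GradientSignAttack(model, eps=0.1, clip_min=0.0, clip_max=1.0)

# PGD: 40 iterated steps inside an L-inf ball (the "strong" attack).
pgd = LinfPGDAttack(model, eps=0.1, nb_iter=40, eps_iter=0.01,
                    rand_init=True, clip_min=0.0, clip_max=1.0)

x = torch.rand(64, 1, 28, 28)          # a batch of natural samples
y = torch.randint(0, 10, (64,))        # their ground-truth labels
x_fgsm = fgsm.perturb(x, y)            # adversarial batch (FGSM)
x_pgd = pgd.perturb(x, y)              # adversarial batch (PGD)
```

The `clip_min`/`clip_max` arguments enforce that adversarial pixel values stay within the range of the original natural images, as stated above.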
### Baseline natural and adversarial performance
Table 1 shows the natural performance of the models when trained without adversarial data and with natural validation accuracy as the hyperparameter selection criterion. In line with the results reported in the literature [41], PEPITA achieves a lower natural performance than BP because, while BP uses exact derivatives of the loss to compute the gradients used for learning, PEPITA uses approximations of these gradients. Notably, neither model is robust to adversarial attacks, since the models were neither adversarially trained nor selected via a hyperparameter criterion that values adversarial robustness.
### PEPITA's intrinsic higher adversarial robustness
When using the same training procedure as in the section above (natural training) but selecting the hyperparameter search criterion to be the best adversarial validation accuracy, PEPITA shows a higher intrinsic adversarial robustness compared to BP, see Table 2. Although this model selection criterion leads to a certain level of adversarial robustness for PEPITA (see row 4 in Table 2), this comes at the cost of worse natural performance (compared to row 2 in Table 1) because of the
Figure 1: **Comparison of training in BP and PEPITA.** Schematic presentation of BP and PEPITA’s single hidden layer networks and training algorithms.
natural-vs-adversarial performance trade-off. However, when comparing the performance of BP with PEPITA for the MNIST dataset in Table 2, the natural performance of PEPITA decreases less than BP, and PEPITA is significantly more adversarially robust. Furthermore, BP cannot train adversarially robust models for more complex tasks, such as Fashion-MNIST, CIFAR-10, and CIFAR-100. During the hyperparameter search of BP, it was observed that the learning rates tended to be much larger with the current selection criterion (best adversarial validation accuracy). Consequently, the models either did not converge during learning, and the results were highly variable (see Table 2), or they did not learn at all, and the natural and adversarial performances were random.
### PEPITA's advantageous adversarial training
When the models are adversarially trained and the hyperparameter selection criterion is defined as the natural validation accuracy, we observe that PEPITA achieves a better adversarial testing performance and less natural performance degradation compared to BP (see Table 3), except for CIFAR-100, where neither model is significantly adversarially robust. Moreover, for the MNIST and Fashion-MNIST datasets, BP has a better natural test accuracy for these adversarially trained models. Hence, although Table 3 suggests that PEPITA offers a better natural-vs-adversarial performance trade-off, a direct comparison of the adversarial robustness between BP and PEPITA becomes difficult for these datasets. To better understand this trade-off, we selected the most adversarially robust BP-trained and PEPITA-trained models for different fixed natural accuracy values on the MNIST task. We plotted these results in Figure 2A, which shows that PEPITA performs significantly better than BP for similar values of natural performance. Specifically, the average decrease in adversarial performance for the same values of natural performance is \(0.26\%\) for PEPITA and \(8.05\%\) for BP. Moreover, we verified that even if we double the number of training epochs for BP, its natural and adversarial accuracies remain approximately the same, indicating that the model has converged in its learning dynamics (see Figure 2B). Hence, even after extensive hyperparameter searches and increased training epochs, we could not find BP-trained models with a better natural-vs-adversarial performance trade-off.
| | MNIST | Fashion-MNIST | CIFAR-10 | CIFAR-100 |
|---|---|---|---|---|
| BP, natural [%] | \(98.58^{\pm 0.05}\) | \(90.52^{\pm 0.03}\) | \(57.05^{\pm 0.35}\) | \(27.54^{\pm 0.25}\) |
| PEPITA, natural [%] | \(98.16^{\pm 0.04}\) | \(86.46^{\pm 0.66}\) | \(52.15^{\pm 0.25}\) | \(25.88^{\pm 0.26}\) |

Table 1: Natural test accuracy. The hyperparameter selection criterion is the natural validation accuracy.
| | | MNIST | Fashion-MNIST | CIFAR-10 | CIFAR-100 |
|---|---|---|---|---|---|
| BP | natural [%] | \(94.22^{\pm 0.40}\) | \(43.82^{\pm 34.27}\) | \(10.003^{\pm 0.04}\) | \(9.078^{\pm 0.33}\) |
| BP | PGD [%] | \(92.72^{\pm 0.36}\) | \(22.89^{\pm 10.63}\) | \(9.98^{\pm 0.05}\) | \(0.33^{\pm 0.24}\) |
| PEPITA | natural [%] | \(97.69^{\pm 0.16}\) | \(80.65^{\pm 0.74}\) | \(41.82^{\pm 1.57}\) | \(17.10^{\pm 0.72}\) |
| PEPITA | PGD [%] | \(97.56^{\pm 0.18}\) | \(80.48^{\pm 0.73}\) | \(41.73^{\pm 1.49}\) | \(16.76^{\pm 0.65}\) |

Table 2: Natural test accuracy and PGD adversarial test accuracy. The hyperparameter selection criterion is the adversarial validation accuracy.
| | | MNIST | Fashion-MNIST | CIFAR-10 | CIFAR-100 |
|---|---|---|---|---|---|
| BP (PGD) | natural [%] | \(98.73^{\pm 0.06}\) | \(85.16^{\pm 0.17}\) | \(35.83^{\pm 0.37}\) | \(12.45^{\pm 0.38}\) |
| BP (PGD) | PGD [%] | \(89.93^{\pm 0.03}\) | \(67.42^{\pm 0.21}\) | \(8.58^{\pm 0.16}\) | \(2.11^{\pm 0.15}\) |
| PEPITA (PGD) | natural [%] | \(98.17^{\pm 0.10}\) | \(83.73^{\pm 0.76}\) | \(45.12^{\pm 0.89}\) | \(22.30^{\pm 0.16}\) |
| PEPITA (PGD) | PGD [%] | \(96.93^{\pm 0.41}\) | \(83.19^{\pm 0.68}\) | \(44.94^{\pm 0.83}\) | \(2.88^{\pm 1.74}\) |

Table 3: Natural test accuracy and PGD adversarial test accuracy. All models are adversarially trained with PGD adversarial samples. The hyperparameter selection criterion is the natural validation accuracy.
### PEPITA's advantageous fast adversarial training
After demonstrating PEPITA's intrinsic adversarial robustness and beneficial natural-vs-adversarial performance trade-off, we now investigate PEPITA's capabilities in fast adversarial training [5]. Table 4 reports the results obtained when using fast adversarial training, i.e., when using FGSM samples for adversarial training, with the hyperparameter selection criterion defined as the natural validation accuracy. We observe that when attacking the trained model with strong attacks, such as with PGD adversarial samples, the decrease in adversarial performance is much less significant for PEPITA than for BP, indicating that the PEPITA-trained models overfit less to the FGSM attacks. Moreover, neither model suffers from catastrophic overfitting for this specific network architecture, since the PGD testing accuracies do not drop to zero. This is the case for two reasons: first, our network is over-parameterized, i.e., the network has more trainable parameters than there are samples in the dataset, so our large network width (\(1024\) neurons) improves adversarial robustness; and second, we use He weights initialization, so our shallow network (a single hidden layer) also prevents a decrease in adversarial robustness [48]. To conclude, even if these models do not suffer from catastrophic overfitting, PEPITA has a more advantageous fast adversarial training since the gap between the FGSM and the PGD accuracies is much smaller for PEPITA than for BP.
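For concreteness, a fast adversarial training epoch of the kind evaluated above can be sketched as follows for a BP-trained PyTorch model. The model, data loader, optimiser, and FGSM attack object (e.g. created with advertorch as shown earlier) are placeholders, and the equal weighting of natural and adversarial losses is an illustrative choice, not the exact configuration used in our experiments.

```python
import torch.nn.functional as F

def fast_adversarial_epoch(model, loader, optimizer, fgsm_attack):
    """One epoch of fast adversarial training: augment batches with FGSM samples."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_attack.perturb(x, y)   # cheap one-step adversarial samples
        optimizer.zero_grad()
        # Train on natural and adversarial samples with shared ground-truth labels.
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```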
### Investigations on PEPITA's adversarial robustness
Given our observations that PEPITA is adversarially more robust than BP, we aimed to investigate why this is the case. A central difference between both ANN learning algorithms is their gradient computation. While BP uses exact derivatives of the loss to compute the gradients used for learning, PEPITA uses alternative feedback and learning mechanisms that lead to approximations of these exact gradients [49]. To test the hypothesis that the error feedback signal shaping the approximate gradients of PEPITA is not just a random signal but contains essential information for enhancing adversarial robustness, we added random noise to BP's weight gradients and studied its adversarial robustness. We generated noise from a normal distribution with zero mean and a tunable standard deviation. We tested several hyperparameter combinations, including the standard deviation of the random noise,
| | | MNIST | Fashion-MNIST | CIFAR-10 | CIFAR-100 |
|---|---|---|---|---|---|
| BP (FGSM) | natural [%] | \(98.93^{\pm 0.05}\) | \(84.90^{\pm 0.03}\) | \(51.56^{\pm 0.43}\) | \(26.59^{\pm 0.08}\) |
| BP (FGSM) | FGSM [%] | \(91.04^{\pm 0.13}\) | \(66.31^{\pm 0.25}\) | \(45.06^{\pm 3.38}\) | \(2.51^{\pm 0.37}\) |
| BP (FGSM) | PGD [%] | \(86.25^{\pm 0.09}\) | \(57.95^{\pm 0.33}\) | \(0.05^{\pm 0.04}\) | \(1.19^{\pm 0.08}\) |
| PEPITA (FGSM) | natural [%] | \(98.00^{\pm 0.14}\) | \(80.70^{\pm 0.95}\) | \(41.22^{\pm 2.01}\) | \(17.89^{\pm 0.52}\) |
| PEPITA (FGSM) | FGSM [%] | \(97.91^{\pm 0.13}\) | \(80.68^{\pm 0.96}\) | \(41.22^{\pm 2.22}\) | \(17.68^{\pm 0.44}\) |
| PEPITA (FGSM) | PGD [%] | \(97.81^{\pm 0.12}\) | \(80.27^{\pm 1.05}\) | \(41.00^{\pm 2.20}\) | \(17.53^{\pm 0.48}\) |

Table 4: Natural test accuracy and PGD and FGSM adversarial test accuracies. Adversarially trained models trained with FGSM adversarial samples. The hyperparameter selection criterion is the natural validation accuracy.
Figure 2: **PEPITA’s advantageous adversarial training.** The results presented here are for BP and PEPITA models trained adversarially with PGD samples on the MNIST task. (A) Natural-vs-adversarial performance trade-off: for different natural accuracy values, the adversarial accuracies of the most adversarially robust BP and PEPITA-trained models are reported. (B) Natural (represented by the full lines) and adversarial (represented by the dashed lines) accuracies of BP and PEPITA-trained models trained for double the number of epochs.
and none of the parameter settings led to increased adversarial robustness for BP. In particular, BP's performance went from underperforming on classifying natural and adversarial samples for lower noise values to not being able to learn at all for higher noise values. Hence, we conclude that the critical factor leading to adversarial robustness is how the gradients are computed during learning. As many biologically-plausible learning algorithms use different feedback mechanisms and learning dynamics than BP to compute gradients, we can speculate that the resulting trained models possess better robustness against gradient-based adversarial attacks. Thus, an in-depth study of these could benefit the design of more adversarially robust models.
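For reference, the noise-injection control experiment described above can be sketched as a standard PyTorch training step; `sigma`, the loss, and the optimiser are illustrative placeholders rather than the exact settings we swept over.

```python
import torch

def noisy_bp_step(model, loss_fn, optimizer, x, y, sigma=0.01):
    """BP update with zero-mean Gaussian noise added to every weight gradient."""
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                # Perturb the exact BP gradients with tunable Gaussian noise.
                p.grad += sigma * torch.randn_like(p.grad)
    optimizer.step()
```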
## 4 Discussion
Our paper demonstrates for the first time that biologically-inspired learning algorithms can lead to more adversarially robust ANNs than BP. We found that, unlike BP, PEPITA possesses intrinsic adversarial robustness, i.e., naturally trained PEPITA models can be robust against adversarial attacks without the computationally heavy burden of adversarial training. A similar finding of intrinsic adversarial robustness has been demonstrated by Akrout [50] for the biologically-plausible learning algorithm Feedback Alignment (FA) [32]. However, this previous work [50] used an uncommon practice that leads to much weaker adversarial attacks: the attacker uses FA's random feedback matrices to generate adversarial samples instead of the transposed feedforward pathway. Hence, the analysis in Akrout [50] differs from our approach, where we let the attacker fully access the network architecture and craft its adversarial samples through the transposed forward pathway. Moreover, we found that PEPITA does not suffer from the natural-vs-adversarial performance trade-off as severely as BP, as its models can be more adversarially robust than BP while losing less natural performance. Lastly, we found that PEPITA benefits much more from fast adversarial training than BP, i.e., when trained with weaker adversarial attacks, it reports much better adversarial robustness against strong attacks.
### Limitations and future work
Although the link between adversarial robustness and PEPITA has been well established, a theoretical understanding of PEPITA's advantageous adversarial training and intrinsic adversarial robustness is still missing. Understanding this phenomenon's theoretical foundation could also help identify the exact properties that improve the natural-vs-adversarial performance trade-off. Moreover, PEPITA has recently been extended to deeper networks (up to five hidden layers) and tested with different parameter initialization schemes [49], so studying the impact of other characteristics of the model, such as width, depth, and initialization, on PEPITA's adversarial robustness would be beneficial (as done in [48]). PEPITA has also recently been combined with weight mirroring so that its feedback projection matrix can be learned [49], so it would be interesting to study whether this improves not only natural performance but also adversarial robustness. While PEPITA is, to our knowledge, the first such model to be investigated regarding adversarial robustness, this kind of analysis should also be performed for the several other BP-alternative biologically-plausible learning algorithms that have been proposed [33; 34; 35; 36; 38]. Hence, our work paves the way for a systematic assessment of the properties that lead to adversarially robust models.
### Conclusion
We demonstrated that ANNs trained with PEPITA, a recently proposed biologically-inspired learning algorithm, are more adversarially robust than BP-trained ANNs. In particular, we showed through several computational experiments that PEPITA performs significantly better than BP in an adversarial setting. Our analysis opens the door to drawing inspiration from biologically-plausible learning algorithms for designing more adversarially robust models. Thus, our work contributes to the future development of more adversarially robust ANNs and, consequently, to the creation of safer and more trustworthy artificial intelligence systems.
#### Acknowledgments and Disclosure of Funding
This work was supported by the Swiss National Science Foundation (315230_189251 1). We thank the IBM Zurich research group 'Emerging Computing and Circuits' for all the fruitful discussions during the development of this work. We would like to thank Anh Duong Vo, Sander de Haan, and Federico Villani for their feedback and Pau Vilimelis Aceituno for the insightful discussions.
|
2309.08268 | The Dispersion Measure Contributions of the Cosmic Web | The large-scale distribution of baryons is sensitive to gravitational
collapse, mergers, and galactic feedback. Known as the Cosmic Web, its
large-scale structure (LSS) can be classified as halos, filaments, and voids.
Fast Radio Bursts (FRBs) are extragalactic sources that undergo dispersion
along their propagation paths. They provide insight into ionised matter along
their sightlines via their dispersion measures (DMs), and have been
investigated as probes of the LSS baryon fraction, the diffuse baryon
distribution, and of cosmological parameters.
We use the cosmological simulation IllustrisTNG to study FRB DMs accumulated
while traversing different types of LSS.
We combine methods for deriving electron density, classifying LSS, and
tracing FRB sightlines. We identify halos, filaments, voids, and collapsed
structures along random sightlines and calculate their DM contributions.
We analyse the redshift-evolving cosmological DM components of the Cosmic
Web. We find that the filamentary contribution dominates, increasing from ~71%
to ~80% on average for FRBs originating at z=0.1 vs z=5, while the halo
contribution falls, and the void contribution remains consistent to within ~1%.
The majority of DM variance originates from halos and filaments, potentially
making void-only sightlines more precise probes of cosmological parameters. We
find that, on average, an FRB originating at z=1 will intersect ~1.8 foreground
collapsed structures, increasing to ~12.4 structures for a z=5 FRB. The impact
parameters between our sightlines and TNG structures of any mass appear
consistent with those reported for likely galaxy-intersecting FRBs. However, we
measure lower average accumulated DMs from these structures than the
$\sim90\;{\rm pc\;cm^{-3}}$ DM excesses reported for these literature FRBs,
indicating some DM may arise beyond the structures themselves. | Charles R. H. Walker, Laura G. Spitler, Yin-Zhe Ma, Cheng Cheng, M. Celeste Artale, Cameron Hummels | 2023-09-15T09:23:24Z | http://arxiv.org/abs/2309.08268v1 | # The Dispersion Measure Contributions of the Cosmic Web
###### Abstract
Context: The large-scale distribution of baryons is sensitive to gravitational collapse, mergers, and galactic feedback processes. Known colloquially as the Cosmic Web, its large-scale structure (LSS) can be classified as halos, filaments, and voids. Fast Radio Bursts (FRBs) are extragalactic transient radio sources that undergo dispersion along their propagation paths. These systems provide insight into ionised matter along their sightlines by virtue of their dispersion measures (DMs), and have been investigated as probes of the LSS baryon fraction, the diffuse baryon distribution, and of cosmological parameters. Such efforts are highly complementary to the study of the intergalactic medium (IGM) through X-ray observations, the Sunyaev-Zeldovich effect, and galaxy populations.
Aims: We use the cosmological simulation IllustrisTNG to study FRB DMs accumulated while traversing different types of LSS.
Methods: We combine methods for deriving electron density, classifying LSS, and tracing FRB sightlines through TNG300-1. We identify halos, filaments, voids, and collapsed structures along randomly selected sightlines, and calculate their DM contributions.
Results: We present a comprehensive analysis of the redshift-evolving cosmological DM components of the Cosmic Web. We find that the filamentary contribution to DM dominates, increasing from \(\sim 71\%\) to \(\sim 80\%\) on average for FRBs originating at \(z=0.1\) vs \(z=5\), while the halo contribution falls, and the void contribution remains consistent to within \(\sim 1\%\). The majority of DM variance between sightlines originates from halo and filamentary environments, potentially making void-only sightlines more precise probes of cosmological parameters. We find that, on average, an FRB originating at \(z=1\) will intersect \(\sim 1.8\) foreground collapsed structures of any mass, increasing to \(\sim 12.4\) structures for an FRB originating at \(z=5\). The measured impact parameters between our sightlines and TNG structures of any mass appear consistent with those reported for likely galaxy-intersecting FRBs. However, we measure lower average accumulated DMs from these structures than the \(\sim 90\) pc cm\({}^{-3}\) DM excesses reported for these literature FRBs, indicating some of this DM may arise beyond the structures themselves.
Conclusions:
## 1 Introduction
Since their discovery (Lorimer et al., 2007), the microsecond to millisecond-duration luminous signals known as fast radio bursts (FRBs) are being increasingly detected1, and their natures heavily investigated (Zhang, 2018; Platts et al., 2019). Though the catalogue size of localised sources expands (Heintz et al., 2020), the underlying progenitor(s) of these radio transients is not yet unambiguously determined (Platts et al., 2019). One uncontroversial observation, however, is that most FRBs traverse extragalactic distances2, accumulating large dispersion measures (DMs), which can be measured from the frequency-dependent delay in their arrival times (\(\Delta t\sim\text{DM}/\nu^{2}\)). DM is considered an acceptable proxy for the integrated electron density to the distant source of emission along the line-of-sight (Kulkarni, 2020)
Footnote 1: For various FRB catalogues, see: www.frbcat.org (Petroff et al., 2016), [https://cdsarc.cs.unistra.fr/viz-bin/cat/J/ApJS/257/59](https://cdsarc.cs.unistra.fr/viz-bin/cat/J/ApJS/257/59) (CHIME/FRB Collaboration et al., 2021), and [https://www.wis-tns.org/](https://www.wis-tns.org/) (Yaron et al., 2020).
Footnote 2: Just one FRB-like source has thus far been associated with a Galactic counterpart (Bochenek et al., 2020).
\[\text{DM}=\int_{0}^{z}\frac{n_{\text{e}}(z)}{(1+z)}\,\text{d}l, \tag{1}\]
where \(n_{\text{e}}\) is the physical number density of free electrons at redshift \(z\), and \(\text{d}l\) is the proper distance increment (Macquart et al., 2020). Because of this definition, the total observed DM for an FRB can be decomposed into three parts (Walters et al., 2018)
\[\text{DM}_{\text{obs}}=\text{DM}_{\text{MW}}+\text{DM}_{\text{cos}}+\frac{ \text{DM}_{\text{host}}}{1+z}, \tag{2}\]
where \(\text{DM}_{\text{MW}}\), \(\text{DM}_{\text{cos}}\) and \(\text{DM}_{\text{host}}\) are the DM contributions from our Milky Way, the cosmological ionised medium and the FRB host galaxy respectively.
Work is underway to constrain \(\text{DM}_{\text{MW}}\), which is direction-dependent, and encompasses the contributions of both the Milky Way's disk and its halo (Cook et al., 2023). Large values of \(\text{DM}_{\text{host}}\) have been predicted (Walker et al., 2020) and observed (Spitler et al., 2014; Niu et al., 2022) for some FRBs, but \(\text{DM}_{\text{host}}\) is suppressed by a factor of \((1+z)^{-1}\) for distant galaxies due to
cosmological redshift and time dilation effects (Yang & Zhang, 2016). Therefore a major component of DM for a source at a large cosmological distance will often be DM\({}_{\rm cos}\), which is attributed to a combination of the ionised matter in diffuse cosmological structures (also denoted DM\({}_{\rm IGM}\)), and the circumgalactic media (CGM) of any foreground galaxies (often denoted DM\({}_{\rm halos}\)) along the line-of-sight (Prochaska & Zheng, 2019). In general, DM\({}_{\rm cos}\) can be written as
\[{\rm DM}_{\rm cos}(z)=c\int_{0}^{z}\frac{n_{\rm e}(z){\rm d}z}{(1+z)^{2}H(z)}, \tag{3}\]
with speed of light \(c\) and Hubble parameter for a spatially-flat universe

\[H(z)\equiv H_{0}E(z)=H_{0}\sqrt{\Omega_{\rm m}(1+z)^{3}+\Omega_{\Lambda}}, \tag{4}\]
where \(H_{0}\) is the Hubble constant. In Eq. (4), \(\Omega_{\rm m}\) and \(\Omega_{\Lambda}\) are the fractional densities of matter and dark energy respectively. From Eq. (3), one can see that \(n_{\rm e}\) depends on the ionisation state of the baryons along the sightline (Inoue, 2004). Even for two FRBs originating at the same redshift, the different collapsed structures along their propagation paths can cause different values of DM\({}_{\rm cos}\). However the average \(\langle\)DM\(\rangle\) for many FRBs on different sightlines has a predictable relationship with redshift, and Macquart et al. (2020) have used five localised FRBs to directly measure the baryon density of the Universe, finding a result consistent with measurements from the cosmic microwave background (Planck Collaboration et al., 2018) and Big-Bang nucleosynthesis (Cooke et al., 2018).
Although Macquart et al. (2020) determined the total baryon density, the detailed location of these baryons in the Cosmic Web are not well known. Due to galactic feedback processes (Cen & Ostriker, 2006; Bregman, 2007), a significant amount of baryonic matter is anticipated to reside in a so-called warm-hot intergalactic medium (hereafter WHIM) with temperatures \(\sim 10^{5}\)-\(10^{7}\) Kelvin, which can be diffuse around halos, filaments and even in voids (Haider et al., 2016; Martizzi et al., 2019). In recent years, methods leveraging the Sunyaev-Zeldovich effect (Ma et al., 2015; Hojjati et al., 2015, 2017; Tanimura et al., 2019; Ma, 2017; Tanimura et al., 2019; Ma et al., 2021) and Oxygen absorption lines (OVII; Nicastro et al., 2018) have been developed to probe the WHIM. As well as counting all electrons along FRB sightlines, FRB DMs may prove complementary to these distribution-sensitive probes (see, e.g., Inoue, 2004; Akahori et al., 2016). McQuinn (2014) showed that the distribution of DM\({}_{\rm cos}\) is sensitive to the location of baryons within, or beyond galactic halos. Walters et al. (2019) showed that with \(10^{2}-10^{3}\) FRBs, the diffuse baryon fraction could be determined with a few percent error. Lee et al. (2021) proposed to use FRBs, together with foreground matter distribution mapping and Bayesian reconstruction techniques, to constrain the baryon fractions contained in the CGM of galactic halos and other cosmological structures (see also Walker et al., 2020).
In this work we therefore deconstruct the contributions to FRB DMs from different types of cosmological large-scale structure using the results of the numerical simulation suite IllustrisTNG. IllustrisTNG has already proven a valuable tool for studying the Cosmic Web's formation (Martizzi et al., 2019), its influence on galaxies (Donnan et al., 2022; Malavasi et al., 2022), and the portion of its matter which remains difficult to detect observationally (Parmbelli et al., 2022). We will thus use IllustrisTNG to analyse the contributions to DM from halos, filaments and voids, and provide comprehensive empirical distributions of these as a function of redshift. In Sect. 2, we introduce TNG300-1, our simulation run of choice, and discuss the process of obtaining electron densities from it. In Sect. 3, we focus on the significance of LSS, and define our criteria for classifying it within the simulation. In Sect. 4, we discuss our method to calculate FRB DMs given the sightlines. We review our results in Sect. 5, and discuss their implications in the context of other studies in Sect. 6. We conclude in Sect. 7.
To maintain consistency with IllustrisTNG (Nelson et al., 2019), unless otherwise stated we adopt the _Planck_-2015 cosmological parameters (Planck Collaboration et al., 2016): fractional baryon density \(\Omega_{\rm b}=0.0486\), fractional matter density \(\Omega_{\rm m}=0.3089\), fractional dark energy density \(\Omega_{\Lambda}=0.6911\), spectral index \(n_{\rm s}=0.9667\), _rms_ fluctuation amplitude \(\sigma_{8}=0.8159\), and reduced Hubble constant \(h=0.6774\).
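Before turning to the simulations, Eq. (3) can be evaluated numerically for a homogeneous universe to obtain the mean \(\langle\mathrm{DM}_{\rm cos}\rangle(z)\). The sketch below does this with the Planck-2015 parameters above; the constant diffuse-baryon and free-electron fractions (\(\approx 0.84\) and \(\approx 0.88\)) are illustrative assumptions for this estimate, not values derived in this work.

```python
import numpy as np

# Planck-2015 parameters quoted above.
h, Om, OL, Ob = 0.6774, 0.3089, 0.6911, 0.0486
H0 = h * 100.0 * 3.2408e-20      # Hubble constant [s^-1]
c = 2.9979e10                    # speed of light [cm/s]
G = 6.674e-8                     # gravitational constant [cgs]
m_p = 1.6726e-24                 # proton mass [g]

# Mean comoving electron density; 0.84 * 0.88 is an assumed, roughly
# redshift-independent product of diffuse and ionised fractions.
n_e0 = (3 * Ob * H0**2) / (8 * np.pi * G * m_p) * 0.84 * 0.88   # [cm^-3]

def mean_dm_cos(z_src, n_steps=10000):
    """Numerically integrate Eq. (3) for the mean DM_cos in pc cm^-3."""
    z = np.linspace(0.0, z_src, n_steps)
    E = np.sqrt(Om * (1 + z)**3 + OL)
    # n_e(z) = n_e0 (1+z)^3 divided by (1+z)^2 leaves a single factor (1+z).
    integrand = c * n_e0 * (1 + z) / (H0 * E)
    return np.trapz(integrand, z) / 3.0857e18   # convert cm to pc

print(mean_dm_cos(1.0))   # ~900 pc cm^-3, an order-of-magnitude check
```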
## 2 Simulations
In general, a cosmological hydrodynamic simulation can be treated as a three-dimensional box of evolving universe, which is initialised at an early time with a specific set of initial conditions and certain cosmological parameters. This box of universe is then evolved forward in time until reaching the present day \(t_{0}\). At specific redshift intervals, instances of the simulation commonly referred to as 'snapshots' are captured and stored for study. Individual snapshots are comprised of data (e.g., average mass density, star formation rate, dark matter density) recorded for smaller sub-volumes (hereafter 'cells') of the simulation box. These cells dictate simulation resolution and can be fixed or variable in size (e.g. in grid- or particle-based simulations, respectively), or a hybrid of the two (e.g. in so-called 'moving-mesh' simulations). The recorded physical parameters for these cells are hereafter referred to as 'fields'. For a review of simulation outputs, please refer to Hummels et al. (2017).
| Snapshot Number | Redshift |
|---|---|
| 99 | 0.00 |
| 91 | 0.10 |
| 84 | 0.20 |
| 78 | 0.30 |
| 72 | 0.40 |
| 67 | 0.50 |
| 59 | 0.70 |
| 50 | 1.00 |
| 40 | 1.50 |
| 33 | 2.00 |
| 25 | 3.01 |
| 21 | 4.01 |
| 17 | 5.00 |

Table 1: The 'full' TNG300-1 snapshots used in this analysis.
For FRB studies, various simulations such as the Magneticum Pathfinder Simulation (Dolag et al. 2015) and the MICE Onion Simulation (Pol et al. 2019) have been used to estimate cosmological DM contributions. Simulations can also help to analyse DM contributions from host and foreground galaxies (e.g. RAMSES, Zhu & Feng 2021). Other uses include FRB progenitor identification (Dolag et al. 2015), cosmological parameter estimation (e.g. Illustris, Jaroszynski 2019) and the inference of the strength and origins of magnetic fields (e.g. CRPROPA, Hackstein et al. 2019, 2020). In this work, we analyse FRB DM contributions from the Cosmic Web using IllustrisTNG, a state-of-the-art cosmological hydrodynamic simulation suite.
### IllustrisTNG overview
IllustrisTNG (Nelson et al. 2019; Pillepich et al. 2018; Springel et al. 2018; Naiman et al. 2018; Nelson et al. 2018; Marinacci et al. 2018) is a moving-mesh cosmological hydrodynamic simulation suite, composed of Voronoi cells of density-dependent resolution. The simulations include sub-grid models to account for star formation, radiative metal cooling, stellar and supermassive black hole feedback, and chemical enrichment from SNII, SNIa, and AGB stars. The sub-grid models are calibrated to reproduce observational constraints such as the galaxy stellar mass function at \(z=0\), the cosmic star formation rate density, and the galaxy stellar size distribution, among others (see Pillepich et al. 2018, for further details).
The suite assumes Planck 2015 cosmology and is evolved from \(z=127\) to \(z=0\). In total, 100 snapshots are recorded for a given TNG run. Of these, 20 snapshots between \(z=12\) and \(z=0\) are fully recorded and provide complete field information for every cell in the simulation. The rest are recorded as 'mini' snapshots and omit information about certain fields3.
Footnote 3: The full list of snapshots, and available fields, can be found at [https://www.tng-project.org/data/docs/specifications/](https://www.tng-project.org/data/docs/specifications/) (see Nelson et al. 2019 for further details).
TNG is composed of multiple runs, each differing in volume and/or resolution4. Together, these runs span cosmological (Sunseri et al. 2023) to sub-galactic (Boecker et al. 2023) spatial scales, making the suite suitable for investigating DM contributions due to various cosmic structures. The largest volume run, TNG300, consists of a box of approximately \(\sim(300\,\mathrm{cMpc})^{3}\) and contains the largest number of galaxies (Nelson et al. 2019), making it the most suitable for our purposes. In this work, we therefore utilise 'full' snapshots recorded for TNG300-1, the highest-resolution version of this run. The respective redshifts of these snapshots are provided in Table 1.
Footnote 4: A full list of available TNG runs may be found at: [https://www.tng-project.org/data/docs/background/](https://www.tng-project.org/data/docs/background/) (see Nelson et al. (2019) for further details).
### The electron density in IllustrisTNG
In Eq. (3), the physical electron density \(n_{\mathrm{e}}\) can be written as
\[n_{\mathrm{e}}=\frac{3\Omega_{\mathrm{b}}H_{0}^{2}}{8\pi Gm_{\mathrm{p}}}(1+z)^{3}f_{\mathrm{d}}(z)f_{\mathrm{e}}(z), \tag{5}\]
where \(f_{\mathrm{d}}\) and \(f_{\mathrm{e}}\) are the fraction of baryons in the IGM and fraction of free electrons respectively, and \(m_{\mathrm{p}}\) is the proton mass. According to Walters et al. (2018), \(f_{\mathrm{e}}\) can be written as (see also, e.g. Deng & Zhang 2014; Zhang 2018; Walters et al. 2019; Macquart et al. 2020; Zhang et al. 2021; Batten et al. 2021):
\[f_{\mathrm{e}}(z)=\left[(1-Y_{\mathrm{p}})\chi_{\mathrm{e,H}}(z)+Y_{\mathrm{p}}\chi_{\mathrm{e,He}}(z)/2\right], \tag{6}\]
where \(Y_{\mathrm{p}}\simeq 0.24\) is the primordial helium abundance, \((1-Y_{\mathrm{p}})\simeq 0.76\) is therefore the hydrogen abundance, and \(\chi_{\mathrm{e,H}}\) and \(\chi_{\mathrm{e,He}}\) are the redshift-evolving ionisation fractions of hydrogen and helium.
In TNG300-1, the electron density is not a directly provided field, but can be calculated (Jaroszynski 2020; Zhang et al. 2020, 2021; Takahashi et al. 2021). Star-forming cells in TNG300-1 comprise a mixture of gas in both cold and warm phases (Katz
Figure 1: Large-scale structure of the Cosmic Web. _Left panel:_ The density of gas (greyscale) lying along a two-dimensional (\(75\,h^{-1}\mathrm{cMpc}\))\({}^{2}\), infinitely thin slice across the \(z=0\) snapshot of TNG100-3. _Right panel:_ The same data, separated into halo (yellow), filament (cyan) and void (purple) substructures according to the metric used in this work.
et al., 1996; Springel and Hernquist, 2003; Pakmor et al., 2018). Since only ionised matter contributes to DM, we only incorporate warm-phase gas into our calculations for these cells, with the assumption that this gas is completely ionised. Thus, for any given TNG300-1 cell, we calculate \(n_{\rm e}\) as follows (Zhang et al., 2021):
\[n_{\rm e}=(w\eta_{\rm e})X_{\rm H}\frac{\rho}{m_{\rm p}}(1+z)^{3}, \tag{7}\]
where \(\eta_{\rm e}\) is the cell's fractional electron number density with respect to its total hydrogen number density, provided by the TNG300-1 "ElectronAbundance" field, \(X_{\rm H}=(1-Y_{\rm p})\), and \(\rho\) is the gas cell comoving mass density. The warm-phase gas mass fraction, \(w\), is defined as
\[w=\begin{cases}1-x,&\text{for star forming cells}\\ 1,&\text{otherwise}\end{cases}, \tag{8}\]
where the cold-phase gas mass fraction, \(x\), is given by
\[x=\frac{u_{\rm h}-u}{u_{\rm h}-u_{\rm c}}. \tag{9}\]
Here, \(u\) is the cell's thermal energy per unit of gas mass and is provided by the TNG300-1 "InternalEnergy" field. Likewise, \(u_{\rm h}\) and \(u_{\rm c}\) are the thermal energy per unit mass of hot- (\(T_{\rm h}\sim 10^{7}\,\rm K\)) and cold- (\(T_{\rm c}\sim 10^{3}\,\rm K\)) phase gas respectively (Marinacci et al., 2017; Springel and Hernquist, 2003). We can calculate these values from their respective temperatures using5:
Footnote 5: [https://www.tng-project.org/data/docs/faq/](https://www.tng-project.org/data/docs/faq/)
\[u(T)=\frac{k_{\rm B}T}{\mu(\gamma-1)}\times\frac{\rm Unit\,Mass}{\rm Unit\, Energy}, \tag{10}\]
where \(k_{\rm B}\) is the Boltzmann constant in CGS units, \(\gamma=5/3\) is the adiabatic index, \(\rm Unit\,Mass/Unit\,Energy=10^{10}\) in TNG300-1 units and \(\mu\) is the mean atomic weight:
\[\mu=\frac{4m_{\rm p}}{1+(3+4\eta_{\rm e})X_{\rm H}}. \tag{11}\]
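Combining Eqs. (7)-(11), the per-cell electron density can be computed from the raw TNG fields along the following lines. The function below is a simplified sketch rather than our exact pipeline: array names mirror the TNG field names, the cgs constants and unit conversions are assumptions spelled out in the comments, and the overall unit scale of \(u\) cancels in the cold-phase fraction of Eq. (9).

```python
import numpy as np

KB = 1.380649e-16      # Boltzmann constant [erg/K]
MP = 1.6726e-24        # proton mass [g]
XH = 0.76              # hydrogen mass fraction, 1 - Y_p
GAMMA = 5.0 / 3.0
CGS_PER_TNG_U = 1e10   # 1 (km/s)^2 = 1e10 erg/g, the factor in Eq. (10)

def u_of_T(T, eta_e):
    """Thermal energy per unit mass at temperature T, Eqs. (10)-(11),
    expressed in TNG 'InternalEnergy' units of (km/s)^2."""
    mu = 4.0 * MP / (1.0 + (3.0 + 4.0 * eta_e) * XH)   # Eq. (11)
    return KB * T / (mu * (GAMMA - 1.0)) / CGS_PER_TNG_U

def electron_density(rho, eta_e, u, sfr, z, h=0.6774):
    """Physical n_e [cm^-3] per gas cell via Eqs. (7)-(9).

    rho:   'Density' field [1e10 Msun/h / (ckpc/h)^3]
    eta_e: 'ElectronAbundance' field
    u:     'InternalEnergy' field [(km/s)^2]
    sfr:   'StarFormationRate' field [Msun/yr]
    """
    # Cold-phase mass fraction x for star-forming cells, Eq. (9).
    u_h, u_c = u_of_T(1e7, eta_e), u_of_T(1e3, eta_e)
    x = np.clip((u_h - u) / (u_h - u_c), 0.0, 1.0)
    w = np.where(sfr > 0.0, 1.0 - x, 1.0)              # Eq. (8)
    # Convert comoving TNG density to comoving g/cm^3.
    rho_cgs = rho * 1e10 * 1.989e33 * h**2 / (3.0857e21)**3
    return w * eta_e * XH * rho_cgs / MP * (1.0 + z)**3  # Eq. (7)
```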
## 3 Large-Scale Structure
The matter distribution and evolution of the Universe depends on the interplay between cosmological expansion, gravitational collapse, and cooling and feedback processes (Springel et al., 2006). The resulting LSS, known as the Cosmic Web, has been mapped using both galaxy redshift and peculiar velocity surveys (see e.g. Gott et al., 2005; Tully et al., 2014, 2015). The Cosmic Web can be broken down into sub-structures such as dense halos, connecting filaments, and under-dense voids (Martizzi et al., 2019). Fig. 1 shows a slice of TNG100-3 at \(z=0\) along with these sub-structures.
Cosmological hydrodynamic simulations predict that at low redshifts, most baryons reside in a 'warm-hot intergalactic medium' (WHIM) phase. This phase is induced through gravitational shock-heating to \(10^{5}\)-\(10^{7}\,\rm K\) temperatures and \(\sim 50\rho_{\rm cr}\Omega_{\rm baryon}\) densities, where \(\rho_{\rm cr}=3H_{0}^{2}/8\pi G\) is the critical density of the Universe at \(z=0\), and \(G\) is the gravitational constant (Martizzi et al., 2019). The mass fraction in this phase increases from \(z\sim 3\), may equal that of cold gas by \(z\sim 1\) (Cen and Ostriker, 1999; Medlock and Cen, 2021), and may make up the majority of missing baryons by \(z=0\) (Cen and Ostriker, 2006; Martizzi et al., 2019). At \(z=0\), most of the WHIM is predicted to be tied up in filaments (Bregman, 2007; Martizzi et al., 2019). The exact evolution of these structures is sensitive to AGN feedback, the modelling of which is of active interest (Haider et al., 2016; Takahashi et al., 2021). Observational evidence for gas in filamentary structures between galaxies, within superclusters, and on larger (ten to hundred-Mpc) scales, is growing statistically via Sunyaev-Zel'dovich studies and X-ray emission (see e.g., Tanimura et al., 2019, 2020).
Meanwhile, FRB observations have been used to confirm the existence of the missing baryons (Macquart et al., 2020), and experiments have been proposed to constrain the amount of this matter tied up in galaxies' CGM vs the diffuse IGM (Walters et al., 2019; Ravi, 2019; Lee et al., 2021). Simulations have been used to study the DM contribution of the WHIM (Akahori et al., 2016; Medlock and Cen, 2021). It is therefore attractive to examine whether the DM contributions of LSS could prove to be a complementary tracer.
### LSS Classification in IllustrisTNG
Several different ways to classify LSS are implemented in the literature (see Libeskind et al., 2018 for a review). Due to the influence of dark matter on LSS formation (Haider et al., 2016), one simple method for identifying cosmic structures uses the local dark matter density of the simulation. We follow this method, as described by Artale et al. (2022), building upon work by Haider et al. (2016) and Martizzi et al. (2019), to classify TNG300-1 cells. According to this classification scheme, any given cell may be categorised as belonging to a halo, filament or void using the following metric:
\[\frac{\rho_{\rm c}}{\rho_{\rm cr}}=\begin{cases}<0.1,&\text{for voids}\\ 0.1-57,&\text{for filaments}\\ >57,&\text{for halos}\end{cases} \tag{12}\]
where \(\rho_{\rm c}\) is the cold dark matter density of the cell, provided by the TNG300-1 "SubfindDMDensity" field. The above halo
Figure 2: Mass distribution of large-scale structures. Following Artale et al. (2022), the mass distribution within LSS substructures in the \(z=0\) snapshot of TNG100-3 when binned according to our metric. Yellow, cyan, and purple portions of the curve indicate mass within halos, filaments, and voids respectively according to our boundaries (dashed and dot-dashed lines). \(\rho_{\rm c}\) is the cold dark matter density, and \(\rho_{\rm cr}\) is the present day’s critical density of the Universe.
filament boundary is based on predictions from the Navarro-Frenk-White (NFW) models, and the filament-void boundary is selected to encompass the definition of halos and sheets according to each cell's gravitational potential (Artale et al. 2022; Martizzi et al. 2019; Forero-Romero et al. 2009). A slice from the simulation coloured according to LSS type can be seen in Fig. 1. Following Artale et al. (2022), Fig. 2 illustrates the amount of matter allocated to each type of structure in the simulation at \(z=0\) according to our metric. Fig. 3 shows the evolution of baryonic mass and electron density within TNG300-1 as a function of LSS and redshift along with previous results by Artale et al. (2022).
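In code, the classification of Eq. (12) reduces to two density thresholds; a minimal sketch, assuming the cell dark matter densities have already been divided by \(\rho_{\rm cr}\):

```python
import numpy as np

def classify_lss(rho_dm_over_rho_cr):
    """Label each cell as void (0), filament (1), or halo (2) via Eq. (12)."""
    ratio = np.asarray(rho_dm_over_rho_cr)
    # np.digitize with bin edges [0.1, 57] maps <0.1 -> 0, 0.1-57 -> 1, >57 -> 2.
    return np.digitize(ratio, bins=[0.1, 57.0])

print(classify_lss([0.01, 3.0, 100.0]))   # -> [0 1 2]
```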
### Further classification
The practice of reconstructing matter densities along FRB sightlines is a burgeoning field of research. Li et al. (2019), Prochaska & Zheng (2019) and Simha et al. (2020) investigated the environments along sightlines to individual sources using information about spatially-proximate galaxies along with various matter density reconstruction techniques. Lee et al. (2021) propose using similar methods for multiple sources to constrain CGM and IGM parameters. Therefore tracking the details of collapsed structures along sightlines in IllustrisTNG is also prudent.
TNG provides finely resolved details that are useful for this purpose in the form of "sub-halos", defined as virialised substructures within a dark matter halo (Wechsler & Tinker 2018). In TNG, sub-halos are classified using the SUBFIND algorithm (Springel et al. 2001; Nelson et al. 2019). All TNG-simulated galaxies are associated with a sub-halo. Through sub-halo analysis, any galaxies in proximity to FRB sightlines may therefore be identified. While sub-halo identification is not directly provided for individual TNG cells via any field, the "sub-halo ID" for a given cell may be reconstructed by combining the cell's "ParticleID" field, the entire TNG sub-halo catalog, and information supplied by relevant TNG sub-halo offset tables6.
Footnote 6: TNG offset file information may be found at: [https://www.tng-project.org/data/docs/specifications/#sec3a](https://www.tng-project.org/data/docs/specifications/#sec3a)
Not all TNG cells are associated with sub-halos. These cells are flagged with a Sub-halo ID\(=-1\). Additionally, sub-halos do not always contain galaxies. Non-galaxy sub-halos include sub-galactic, low-mass clumps of baryonic matter formed via, e.g., disk instabilities; these can be distinguished via a flag (Nelson et al. 2019). They may also be differentiated by enforcing cell requirements, e.g. containing non-zero stellar mass, or a certain number of stellar particles7. These requirements may vary between TNG runs. In this work, where appropriate, we classify collapsed structures both by considering all sub-halos, and by separating these sub-halos into bins according to their masses. We note, therefore, that statistics derived in upcoming sections which consider sub-halos of any mass may be influenced by environments containing some sub-halos which are small and not well resolved.
Footnote 7: [https://www.tng-project.org/data/form/topic/235/how-to-identify-subhalos-containing-well-formed-ga/](https://www.tng-project.org/data/form/topic/235/how-to-identify-subhalos-containing-well-formed-ga/)
## 4 Methods
As with electron density, the ability to trace sightlines through IllustrisTNG is not directly provided. However, multiple techniques to approximate the environments along sightlines exist. In principle, simulation-agnostic absorption line tools (e.g. TRIDENT, Hummels et al. 2017) should be adaptable for use with IllustrisTNG. In practice, we adapt here the methods first described in Zhang et al. (2021), whereby FRB sightlines are assembled from segments created from each simulation snapshot. To create a given segment, a random sub-volume is extracted from the total simulation volume, a line of sight defined, and the cells closest to this sightline identified. The "true" line of sight of the FRB is then approximated to propagate through these cells.
Figure 3: Matter evolution in large-scale structures. _Left panel_: Evolution of the baryonic mass fraction \(f_{b}\) in halos (yellow), filaments (cyan) and voids (purple) as a function of redshift in TNG300-1 (solid lines), compared to the results of Artale et al. (2022) (dashed lines). For halos, the solid and dashed lines overlap with each other. _Right panel_: Evolution of the total mean electron density \(\langle n_{e}\rangle\) in TNG300-1 (grey line), and in each type of LSS (coloured lines), as a function of redshift.
Section 4.1 summarises this process, and the process of obtaining average electron density and structural information for our segments. Calculating the DM accumulated along an individual segment is described in Sect. 4.2. Finally, Sect. 4.3 describes the process of combining segments to study different DM contributions out to cosmological distances.
### Line of sight segments
Following Zhang et al. (2021), we generate a single line of sight segment for a given TNG300-1 snapshot as follows. Initially, header data detailing snapshot size and redshift is loaded, alongside a matchlist which allows simulation cells to be related to their parent sub-halos, as discussed in Sect. 3.2. The segment's physical dimensions are then defined. A random Cartesian coordinate of the form \((0,y,z)\) is generated as the starting point of the line of sight through the segment. The segment's rectangular extent, of \((205,0.2,0.2)\,h^{-1}\)cMpc, where the \(x\)-dimension spans the entire length of the simulation box, is selected so that the segment extends out to \(0.1\,h^{-1}\)cMpc either side of the origin in the \(y\) and \(z\) directions. The corresponding end-point of the sightline will be \((205,y,z)\,h^{-1}\)cMpc.
Once the segment coordinates are defined, all relevant simulation fields are loaded or otherwise created for this snapshot. For clarity, these fields are listed in Table 2. Warm and cold-phase gas mass fractions are calculated as described in Sect. 2.2. Field values along the segment sightline are obtained from the cells which lie closest to 10,000 linearly spaced, \(\sim 20\,h^{-1}\)ckpc-wide bins defined to span the length of the sightline. The LSS type associated with each bin is defined as discussed in Sect. 3.1. The number of cells along the segment sightline belonging to each structure type are calculated. Electron densities of every cell along the segment sightline are calculated as discussed in Sect. 2.2. These are averaged to obtain the average total electron density traversed along the segment. The average electron densities associated with each LSS type across the segment sightline are also calculated separately.
Additionally, Sub-halo IDs associated with cells along the sightline are identified as discussed in Sect. 3.2. Unique sub-halos are stored, and all bins along the segment associated with these sub-halos are identified. In this way, all DM attributed to these sub-halos may also be calculated. The sightline's coordinates of closest approach to the central positions of these sub-halos are also stored for further analysis. An example segment can be seen in the left panel of Fig. 4. An example of impact parameter analysis can be seen in the right panel of Fig. 4.
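For concreteness, the nearest-cell lookup at the heart of this procedure can be sketched in a few lines of Python. This is an illustrative reconstruction rather than our production pipeline: the array names (`cell_pos`, `n_e_cells`, `lss_type_cells`) are placeholders for the loaded TNG fields, and for brevity the tree is built over all cells rather than over a pre-extracted sub-volume.

```python
import numpy as np
from scipy.spatial import cKDTree

BOX = 205.0e3    # simulation box side in h^-1 ckpc (= 205 h^-1 cMpc)
N_BINS = 10_000  # linearly spaced bins along each segment sightline

def build_segment(cell_pos, n_e_cells, lss_type_cells, rng):
    """Trace one (205, 0.2, 0.2) h^-1 cMpc segment along the x-axis.

    cell_pos       : (N, 3) Voronoi cell coordinates in h^-1 ckpc
    n_e_cells      : (N,)   electron densities of the cells
    lss_type_cells : (N,)   integer LSS labels (halo / filament / void)
    """
    # Random starting point (0, y, z); the end point is (BOX, y, z).
    y, z = rng.uniform(0.0, BOX, size=2)

    # Centres of the ~20 h^-1 ckpc-wide bins spanning the sightline.
    x_bins = (np.arange(N_BINS) + 0.5) * BOX / N_BINS
    bin_centres = np.column_stack(
        [x_bins, np.full(N_BINS, y), np.full(N_BINS, z)])

    # The sightline is approximated by the cell closest to each bin
    # centre, consistent with the Voronoi tessellation of the volume.
    idx = cKDTree(cell_pos).query(bin_centres)[1]
    return n_e_cells[idx], lss_type_cells[idx], idx
```

Averaging `n_e_cells[idx]` over all bins, or over the bins carrying a given LSS label, then yields the segment's total and per-structure mean electron densities described above.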
### DM within a segment
Using the information recorded for each segment, the DM accumulated along its line of sight can be calculated. We compute the total DM along a given segment sightline by converting the continuous Eq. (1) into a discrete sum over its bins assuming a negligible redshift change over the length of a snapshot:
\[\mathrm{DM}=\widetilde{\mathrm{DM}}/(1+z), \tag{13}\]
where \(\widetilde{\mathrm{DM}}\) is the total DM along the segment's sightline in its rest frame,
\[\widetilde{\mathrm{DM}}=\sum_{n=1}^{10,000}\frac{\mathrm{d}\,\mathrm{DM}}{ \mathrm{d}l}\Big{|}_{n}\times\mathrm{d}l. \tag{14}\]
In Eq. (14), \((\mathrm{d}\,\mathrm{DM}/\mathrm{d}l)\big{|}_{n}=n_{e}\) is the physical electron density of the \(n^{\mathrm{th}}\) bin as described by Eq. (7), and \(\mathrm{d}l=a\,\mathrm{d}\eta\) is the physical distance increment of propagation through the bin, where \(a\) is the scale factor at the redshift of the segment, and \(\mathrm{d}\eta\) is the comoving distance increment, equal to the bin width in comoving IllustrisTNG units8. It thus follows that the DM accumulated by a limited portion of a sightline traversing a particular sub-halo can be calculated by constraining Eq. (14) to only those bins associated with its specific Sub-halo ID.
Footnote 8: For more information on converting TNG coordinates, see: [https://www.tng-project.org/data/docs/specifications/](https://www.tng-project.org/data/docs/specifications/)
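As an illustration, Eqs. (13)-(14) amount to the following short computation. This is a minimal sketch with illustrative variable names: `n_e` is assumed to hold the physical electron density of each bin in cm\({}^{-3}\), and \(h\) is set to the TNG Planck value.

```python
import numpy as np

def segment_dm(n_e, z_snap, box_cmpc=205.0, h=0.6774, n_bins=10_000):
    """Rest-frame and observed DM of one segment, in pc cm^-3 (Eqs. 13-14)."""
    a = 1.0 / (1.0 + z_snap)             # scale factor of the snapshot
    d_eta_mpc = box_cmpc / h / n_bins    # comoving bin width in Mpc
    dl_pc = a * d_eta_mpc * 1.0e6        # physical bin width dl = a * d_eta, in pc
    dm_rest = np.sum(n_e) * dl_pc        # Eq. (14)
    return dm_rest, dm_rest / (1.0 + z_snap)   # Eq. (13)

def subhalo_dm(n_e, subhalo_ids, target_id, z_snap):
    """DM restricted to the bins associated with one Sub-halo ID."""
    return segment_dm(np.where(subhalo_ids == target_id, n_e, 0.0), z_snap)
```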
### Combining DM from different segments
Following Zhang et al. (2021), to form complete sightlines between sources at redshift \(z\) and an observer at \(z=0\), we exploit the lack of redshift evolution within snapshots, approximating them as infinitesimally narrow redshift slices in order to convert Eq. (3), the continuous theoretical integral for DM\({}_{\mathrm{cosmic}}\), into a discrete cumulative sum over our snapshots which we denote DM\({}_{\mathrm{total}}^{\mathrm{TNG}}\):
\[\mathrm{DM}_{\mathrm{total}}^{\mathrm{TNG}}(z_{i+1})=\mathrm{DM}_{\mathrm{total}}^{\mathrm{TNG}}(z_{i})+\frac{1}{2}\bigg{(}\frac{\mathrm{d}\,\mathrm{DM}_{\mathrm{TNG}}}{\mathrm{d}z}\bigg{|}_{z_{i}}+\frac{\mathrm{d}\,\mathrm{DM}_{\mathrm{TNG}}}{\mathrm{d}z}\bigg{|}_{z_{i+1}}\bigg{)}(z_{i+1}-z_{i}), \tag{15}\]
where \(z_{i}\) is the redshift of each snapshot, DM\({}_{\mathrm{total}}^{\mathrm{TNG}}(z)|_{z=0}=0\), and for a given segment of a given snapshot,
\[\frac{\mathrm{d}\,\mathrm{DM}_{\mathrm{TNG}}}{\mathrm{d}z}\bigg{|}_{z=z_{i}}=\frac{c\,n_{e}(z_{i})}{H_{0}(1+z_{i})^{2}\sqrt{\Omega_{m}(1+z_{i})^{3}+\Omega_{\Lambda}}}, \tag{16}\]
where \(n_{e}\) is the average electron density computed as described in Sect. 4.1. Additionally, if any cell along the segment sightline is associated with a dark matter sub-halo (see Sect. 3.2) then we consider the sub-halo to have been traversed by the sightline. By recording the number of unique sub-halos which are traversed
\begin{table}
\begin{tabular}{c c} \hline \hline Field & TNG units \\ \hline Coordinates & \(h^{-1}\)ckpc \\ Density & \(\frac{10^{10}h^{-1}\mathrm{M}_{\odot}}{(h^{-1}\mathrm{ckpc})^{3}}\) \\ ElectronAbundance & \(-\) \\ InternalEnergy & \((\mathrm{km\,s^{-1}})^{2}\) \\ Masses & \(10^{10}h^{-1}\mathrm{M}_{\odot}\) \\ ParticleIDs & \(-\) \\ StarFormationRate & \(\mathrm{M}_{\odot}\) yr\({}^{-1}\) \\ SubfindDMdensity & \(\frac{10^{10}h^{-1}\mathrm{M}_{\odot}}{(h^{-1}\mathrm{ckpc})^{3}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The IllustrisTNG fields drawn upon in this work. Uses include electron density and DM calculation, LSS classification, sub-halo identification and impact parameter analysis. The unit “ckpc” refers to “comoving kiloparsec”, a comoving quantity (Nelson et al., 2019).
by each segment, the number of sub-halos \(N_{\rm sub}\) traversed as a function of redshift for a single sightline may then be calculated similarly to Eq. (15) using:
\[N_{\rm sub}(z_{i+1})=N_{\rm sub}(z_{i})+\frac{1}{2}\left(\frac{{\rm d}N_{\rm sub}}{{\rm d}z}\bigg{|}_{z_{i}}+\frac{{\rm d}N_{\rm sub}}{{\rm d}z}\bigg{|}_{z_{i+1}}\right)(z_{i+1}-z_{i}), \tag{17}\]
with the initial condition \(N_{\rm sub}(z)|_{z=0}=0\), and where \({\rm d}N_{\rm sub}/{\rm d}z\) is the number of sub-halos traversed by a given segment.
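Both cumulative sums, Eq. (15) for DM and Eq. (17) for \(N_{\rm sub}\), reduce to the same trapezoid-rule update, which might be implemented as in the minimal sketch below (`scipy.integrate.cumulative_trapezoid` would serve equally well).

```python
import numpy as np

def cumulative_over_snapshots(z_snaps, dq_dz):
    """Trapezoid-rule cumulative sum of dq/dz over snapshot redshifts.

    z_snaps : (M,) snapshot redshifts in increasing order
    dq_dz   : (M,) d DM/dz (Eq. 16) or d N_sub/dz, one value per snapshot
    Returns Q(z_i) with the initial condition Q(z=0) = 0.
    """
    q = np.zeros_like(dq_dz, dtype=float)
    dz = np.diff(z_snaps)
    q[1:] = np.cumsum(0.5 * (dq_dz[:-1] + dq_dz[1:]) * dz)
    return q
```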
Although full TNG300\(-\)1 snapshots extend out to \(z=12\), and FRBs may be detectable by current instruments out to at least \(z=10\) (Zhang, 2018), the ionisation information which informs our electron densities may be inaccurate at \(z>6\) (Nelson et al., 2018). We thus generate segments as discussed in Sect. 4.1 for only the snapshots listed in Table 1, which extend out to \(z=5\). We calculate \(N_{\rm sub}\) and the cosmological DM\({}_{\rm total}^{\rm TNG}\) for these, and calculate the portions of DM acquired only from individual LSS types (DM\({}_{\rm halos}^{\rm TNG}\), DM\({}_{\rm filaments}^{\rm TNG}\), DM\({}_{\rm voids}^{\rm TNG}\)) by substituting \(n_{\rm e}\) in Eq. (16) with the average electron densities associated only with their corresponding structures. Following Zhang et al. (2021), we create a total of 5125 unique segments of dimensions \((205,0.2,0.2)\,h^{-1}\)cMpc for each snapshot; these are randomly sampled to create 10,000,000 individual FRB lines of sight. We also observe the convention of constructing our FRB sightlines along the \(x\)-axes of our snapshots due to computing constraints (Jaroszynski, 2019; Zhang et al., 2021). Including sightlines constructed along all three axes would better sample the entire simulation, and hence improve results. In principle, adapting the absorption line tool TRIDENT (Hummels et al., 2017) for use with IllustrisTNG, perhaps in conjunction with techniques discussed in Batten et al. (2021) for bridging gaps between simulation snapshots, could prove another powerful way to study not just average, but individual sightlines across the full breadth of the IllustrisTNG simulation suite.
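The assembly of a single Monte Carlo sightline from the segment pool can then be sketched as follows, reusing the `cumulative_over_snapshots` helper above; the pool arrays are illustrative stand-ins for the pre-computed per-segment quantities.

```python
import numpy as np

def draw_sightline(z_snaps, ddm_dz_pool, dn_dz_pool, rng):
    """One sightline from pools of shape (M snapshots, 5125 segments)."""
    n_seg = ddm_dz_pool.shape[1]
    pick = rng.integers(0, n_seg, size=len(z_snaps))  # one segment per snapshot
    rows = np.arange(len(z_snaps))
    dm = cumulative_over_snapshots(z_snaps, ddm_dz_pool[rows, pick])
    n_sub = cumulative_over_snapshots(z_snaps, dn_dz_pool[rows, pick])
    return dm, n_sub

# Repeating this draw 10,000,000 times (vectorised over sightlines in
# practice) builds the DM(z) and N_sub(z) distributions of Figs. 5 and 9.
```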
Figure 5 presents normalised histograms showing the evolution of the distribution of DM\({}_{\rm total}^{\rm TNG}\) values calculated for our sightlines, and for their LSS contributions, as a function of redshift. We find that DM\({}_{\rm filaments}^{\rm TNG}\) and DM\({}_{\rm voids}^{\rm TNG}\) are both best fit by lognormal distributions9 of the form
Footnote 9: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html)
\[P(y,a)=\frac{1}{s}\frac{1}{ay\sqrt{2\pi}}\exp\left(-\frac{\ln^{2}(y)}{2a^{2}} \right), \tag{18}\]
and that DM\({}_{\rm halos}^{\rm TNG}\) can be approximated using the positive range of a log-logistic distribution10 fitted to the natural logarithm of the DM data:
Footnote 10: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisk.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisk.html)
\[P(y,a)=\frac{1}{s}\frac{ay^{a-1}}{(1+y^{a})^{2}}. \tag{19}\]
In each case, \(a\) refers to the distribution's shape parameter, and
\[y=\frac{(x-l)}{s}\qquad\mbox{ where }\begin{cases}x=\mbox{DM},&\mbox{for voids, filaments}\\ x=\ln(\mbox{DM}),&\mbox{for halos}\end{cases} \tag{20}\]
with shifting parameters \(l\) and scaling parameters \(s\) respectively. The best-fit parameters for each distribution at each redshift are provided in Table 3. Histograms of data drawn from these fits are included in Fig. 5 for comparison purposes. The draws from the void and filamentary fits are a good match to the TNG data distributions, while draws from the halo fits are a relatively good match, but slightly overestimate the probability of low-redshift sightlines containing high (\(\gtrsim 10^{3}\) pc cm\({}^{-3}\)) DM\({}_{\rm halo}^{\rm TNG}\) contributions.
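The fits of Eqs. (18)-(20) map directly onto the SciPy distributions referenced above. A minimal sketch follows, with `dm_filaments`, `dm_voids` and `dm_halos` standing in for the per-sightline DM arrays at one source redshift.

```python
import numpy as np
from scipy import stats

# Lognormal fits for the filament and void contributions (Eq. 18);
# fit() returns the shape a, shift l ("loc") and scale s of Eq. (20).
a_f, l_f, s_f = stats.lognorm.fit(dm_filaments)
a_v, l_v, s_v = stats.lognorm.fit(dm_voids)

# Log-logistic (Fisk) fit to the natural logarithm of the halo DMs (Eq. 19).
a_h, l_h, s_h = stats.fisk.fit(np.log(dm_halos))

# Draws from the fits, as used for the grey comparison histograms of Fig. 5;
# halo draws are exponentiated back to DM space.
rng = np.random.default_rng(42)
new_filaments = stats.lognorm.rvs(a_f, l_f, s_f, size=100_000, random_state=rng)
new_halos = np.exp(stats.fisk.rvs(a_h, l_h, s_h, size=100_000, random_state=rng))
```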
Using our distributions, we calculate various statistics (mean, median, standard deviation) for the relationship between DM\({}_{\rm total}^{\rm TNG}\) and redshift. In Fig. 6, we compare these to similar results reported in previous literature. These comparisons are further discussed in Sect. 6.1. In Fig. 7 (left), we compare the average behaviours of DM\({}_{\rm halos}^{\rm TNG}\), DM\({}_{\rm filaments}^{\rm TNG}\) and DM\({}_{\rm voids}^{\rm TNG}\). We present more detailed breakdowns of the fractional contributions to FRB DMs by each type of large-scale structure as a function of redshift in Fig. 7 (right), and provide computed values at our redshift intervals for all measurements in Table 4.
Figure 4: Tracing structure along sightlines. _Left panel_: A (205,0.2,0.2) \(h^{-1}\)cMpc line of sight segment traversing a TNG snapshot at \(z=0\). The grey line indicates the sightline through the segment. Blue squares projected onto the \((xy,yz,xz)\) planes represent the number density of all cells within the segment in the \((x,y,z)\) directions. Dots, coloured according to their LSS type, represent the central locations of all cells identified as closest to one of the 10,000 bins along the sightline. The sightline is approximated to traverse these cells due to the Voronoi tessellation of the simulation. _Right panel_: A second sightline, complete with a sub-halo it traverses and the impact parameter measured between the sightline and sub-halo.
## 5 Results
In previous sections, we have reviewed the ingredients and methods required to approximate both LSS and DM for IllustrisTNG. Together, these elements may be combined to provide detailed insight into the redshift-evolving impact of cosmological large-scale structures on FRB signals. In this Section, we present the results of our analysis.
### LSS analysis
Figure 5 presents individually normalised distributions for \(\rm DM_{total}^{TNG}\) values which are accumulated along our sightlines by FRBs originating at various redshifts. Individually normalised distributions for (\(\rm DM_{halos}^{TNG}\), \(\rm DM_{filaments}^{TNG}\), \(\rm DM_{voids}^{TNG}\)), the contributions to the total DM by each type of LSS, are also shown. The similarity between the total and filamentary DM distributions (Fig. 5 top left, bottom left) is immediately apparent, indicating that, for most sightlines, the filamentary contribution dominates
Figure 5: FRB DM evolution with redshift. _Clockwise from top left_: Histogrammed probability distributions (solid coloured lines) of the total cosmological DM (\(\rm DM_{total}^{TNG}\)), and contributions by large-scale structures (\(\rm DM_{halos}^{TNG}\), \(\rm DM_{filaments}^{TNG}\), \(\rm DM_{voids}^{TNG}\)) for FRBs originating at various redshifts (see Table 1 and colourbar), calculated for TNG300\(-\)1. For comparison, grey lines are new data drawn from fits to these LSS distributions. These new data are combined to create the comparison histograms for \(\rm DM_{total}^{TNG}\). The slight mismatch between our low-redshift TNG data and drawn \(\rm DM_{total}^{TNG}\) values arises due to our fits slightly overestimating the probability of sightlines containing very high (\(\gtrsim 10^{3}\) pc cm\({}^{-3}\)) \(\rm DM_{halos}^{TNG}\) contributions at low redshifts.
DM. The filament and halo distributions also display the widest DM ranges, and high-DM tails, including for FRBs originating at low redshifts. This behaviour indicates that the ionised electron distributions in halo and filament environments may significantly vary between FRB sightlines, and that in rare instances, an FRB may propagate through a region of much higher electron density, which will contribute large amounts of observed DM. In contrast, the narrower distributions of the void components indicate that voids contribute a smaller range of possible DM values to FRB sightlines, and thus that voids may remain generally more homogeneous environments from sightline to sightline.
The differing behaviours of the contributions to DM by each type of LSS as a function of redshift are quantified using various statistics presented in Fig. 7 and Table 4. In Fig. 7 (left), we compare the evolution of the average total observed DM from TNG, \(\langle\rm DM_{total}^{\rm TNG}\rangle\), to the evolution of its LSS subcomponents, \(\langle\rm DM_{halos}^{\rm TNG}\rangle\), \(\langle\rm DM_{filaments}^{\rm TNG}\rangle\), and \(\langle\rm DM_{voids}^{\rm TNG}\rangle\). Here, the dominance of the filamentary contribution to the average FRB sightline is more apparent. It can be seen that filaments contribute an increasingly large portion of DM to the total for FRBs originating at higher redshifts, while halos and voids contribute to a lesser extent. We find that for FRBs originating at \(z=0.1\) vs \(z=5\), the average contributions to DM from halos, filaments, and voids will grow from \(\sim[13.64/65.82/13.13]\,\rm pc\ cm^{-3}\) to \(\sim[327.61/3494.53/653.02]\,\rm pc\ cm^{-3}\) respectively. The standard deviations of these components quantify the DM variability due to each type of LSS along our sightlines. It can be seen that the largest DM scatter arises due to material in halos, followed by filaments, and that voids have very little scatter. A more detailed breakdown of the evolution of these average components can be found in Table 4.
In Fig. 7 (right), we quantify the relative evolution of our LSS DM components as a function of redshift using their average fractional contributions to the total DM. We again see that \(\langle\rm DM_{filaments}^{\rm TNG}\rangle\) is the dominant contribution to \(\langle\rm DM_{total}^{\rm TNG}\rangle\), and that, on average, the fraction of DM accumulated from filaments, \(\langle f_{\rm filaments}^{\rm DM}\rangle\), rises from \(\sim 71\)% to \(\sim 80\)% for an FRB originating at \(z=0.1\) vs \(z=5\). Meanwhile, the average fraction accumulated from halos, \(\langle f_{\rm halos}^{\rm DM}\rangle\), halves, from \(\sim 15\)% to \(\sim 8\)%. The average fractional contribution from voids, \(\langle f_{\rm voids}^{\rm DM}\rangle\), remains relatively consistent, to within \(\sim 1\)%, regardless of source redshift. Here, too, the significant scatter in \(\rm DM_{halos}^{\rm TNG}\) and \(\rm DM_{filaments}^{\rm TNG}\) can be seen, and all calculated values can be found in Table 4. These results support previous predictions that denser structures, such as galaxy groups and clusters, may be major culprits of any observed sightline-to-sightline variance in FRB DMs (Prochaska & Zheng, 2019; Zhu & Feng, 2021; Connor & Ravi, 2022).
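The fractional contributions shown in Fig. 7 (right) follow from a simple per-sightline ratio; a minimal sketch, again with illustrative array names for the LSS DM contributions at one source redshift:

```python
import numpy as np

dm_total = dm_halos + dm_filaments + dm_voids
for name, dm in (("halos", dm_halos),
                 ("filaments", dm_filaments),
                 ("voids", dm_voids)):
    f = dm / dm_total                       # per-sightline DM fraction
    print(f"<f_{name}^DM> = {f.mean():.3f} +/- {f.std():.3f}")
```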
### Sub-halo analysis
Table 5 quantifies the number of unique sub-halos which are intersected by our segments at various redshifts. We provide the total number of sub-halos intersected for a given snapshot, and bin these according to their masses. Fig. 8 visualises these data, along with the impact parameters \(b_{\perp,\,\rm sub}\) and accumulated dispersion measures \(\rm DM_{sub}\) which are associated with these intersections. It is immediately apparent that the general trend is towards fewer sub-halo intersections at lower redshifts, presumably due to the merging of lower mass sub-halos to form higher mass sub-halos over cosmic time. We can also see that larger potential impact parameters are possible for low-redshift sub-halos, presumably due to their increased sizes. In addition, for a given redshift, higher mass sub-halos generally contribute larger values of \(\rm DM_{sub}\), and allow for larger potential impact parameters, indicating potentially larger radii. We anticipate that the exact cutoffs in \(\rm DM_{sub}\)-\(b_{\perp,\,\rm sub}\) space for sub-halos of different mass bins result from the radius-density profile of these sub-halos.
We quantify the average evolution of these intersections for complete FRB sightlines as a function of redshift in Fig. 9 and Tables 6 and 7. Fig. 9 (left) and Table 6 detail the average number of sub-halos that an FRB originating at a given redshift will traverse according to Eq. (17). We present the mean and standard deviations calculated for our 10,000,000 sightlines. Fig. 9 (right) and Table 7 detail the average impact parameters which
Figure 6: FRB DM evolution with redshift continued. _Both panels:_ The average total DM-\(z\) relationship derived for TNG300–1. The mean (solid grey line), median (dotted grey line) and standard deviation (shaded grey region) around the mean of \(\rm DM_{total}^{\rm TNG}\) values calculated for our sightlines are shown. _Left panel:_ These are compared to \(\rm(DM_{\rm cosmic})\) (dashed black line), and the results of analytical studies from previous literature (dashed coloured lines). _Right panel:_ Compared to results derived from other cosmological simulations in previous literature (dashed coloured lines).
we measure between our segments and these sub-halos at the redshifts of our snapshots. Due to the non-Gaussianity which we observe for these distributions, we present median, interquartile range (IQR) and interpercentile range (IPR) values calculated for these data. For comparison, we overplot derived positions in impact parameter - redshift space for a number of observed FRBs which are considered likely to have intersected galaxies including M31 and M33, according to Prochaska et al. (2019), Connor et al. (2020) and Connor & Ravi (2022). Where not otherwise provided by the literature, we have converted the galaxies' impact parameters from physical to comoving distances using their preferred redshifts, which we obtained using the NASA Extragalactic Database (NED). Where a galaxy was reported to have a blueshift, \(z=0\) was used in the conversion process. In both Fig. 9 subplots, we present the statistics obtained when considering all sub-halos traversed by our sightlines, and those we obtain when considering sub-halos binned by mass according to the bins in Fig. 8. We note that we only provide impact parameter statistics for a given mass bin and redshift in Table 7 when the total number of sub-halos traversed by segments in the snapshot for this mass bin is \(\geq 40\) (see Table 5).
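Given the non-Gaussianity of these distributions, the quoted statistics are rank-based; a minimal sketch of their computation for one mass bin follows, with `b_perp` and `m_sub` as illustrative arrays of the recorded impact parameters and sub-halo masses within a snapshot.

```python
import numpy as np

def robust_stats(x):
    q10, q25, q50, q75, q90 = np.percentile(x, [10, 25, 50, 75, 90])
    return {"median": q50, "IQR": (q25, q75), "IPR_10-90": (q10, q90)}

# e.g. the M_sub = [10^12, 10^14] h^-1 Msun bin; statistics are only
# reported when the bin contains >= 40 traversed sub-halos (see text).
mask = (m_sub >= 1e12) & (m_sub < 1e14)
if mask.sum() >= 40:
    print(robust_stats(b_perp[mask]))
```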
The left panel of Fig. 9 shows that, on average, FRBs originating beyond \(0.5<z<0.7\) will intersect at least one sub-halo of any mass during propagation. The total number of sub-halos which are on average traversed will increase from \(\langle N_{\rm sub}\rangle=1.8\) for an FRB originating at \(z=1\), to \(\langle N_{\rm sub}\rangle\simeq 12.4\) for an FRB originating at \(z=5\). By studying these intersected sub-halos according to their masses, one sees that the traversed sub-halos are dominated by those of mass \(M_{\rm sub}=[10^{8},10^{12}]\,h^{-1}{\rm M}_{\odot}\). On average, less than one sub-halo of mass \(M_{\rm sub}\geq 10^{12}\,h^{-1}{\rm M}_{\odot}\) will be traversed, even by an FRB originating at \(z=5\). When comparing the relative behaviours of the mass-binned halos as a function of redshift, one sees that FRBs originating at \(z\lesssim 3\) are more likely to traverse higher-mass sub-halos (\(M_{\rm sub}=[10^{10},10^{12}]\,h^{-1}{\rm M}_{\odot}\)), but for FRBs originating at \(z\gtrsim 4\), the most commonly intersected collapsed structures are of lower mass (\(M_{\rm sub}=[10^{8},10^{10}]\,h^{-1}{\rm M}_{\odot}\)), with the transition occurring at around \(z\sim 3.8\). This transition in the encounter rates of lower vs higher mass sub-halos likely stems from the population of less massive sub-halos dwindling at later times due to continuous mergers.
By studying the impact parameters between our sightlines and mass-binned traversed sub-halos in Fig. 9 (right), it can be seen that the average impact parameter between our sightlines and sub-halos increases with mass, from a median \(b_{\perp,\rm sub}\simeq 36.1\,h^{-1}{\rm kpc}\) for sub-halos of mass \(M_{\rm sub}=[10^{8},10^{10}]\,h^{-1}{\rm M}_{\odot}\), to \(b_{\perp,\rm sub}\simeq 950\,h^{-1}{\rm kpc}\) for sub-halos of mass \(M_{\rm sub}=[10^{14},10^{16}]\,h^{-1}{\rm M}_{\odot}\). These average impact parameters remain relatively consistent for sub-halos with masses \(M_{\rm sub}=[10^{8},10^{14}]\,h^{-1}{\rm M}_{\odot}\), regardless of the redshift of intersection. However, the average impact parameter measured when considering sub-halos of any mass falls after \(z>1\). This is presumably due to a deficit of sub-halos of the highest masses (\(M_{\rm sub}>10^{14}\,h^{-1}{\rm M}_{\odot}\)) at higher redshifts, which have not yet had time to assemble through mergers.
Finally, we quantify the average DMs which are accumulated by our sightlines during the traversal of the sub-halos in Fig. 10 and Table 8. Fig. 10 (left) details the average dispersion measure which will be accumulated by traversing any sub-halos at a given redshift in the rest frame of the sub-halos, \(\widetilde{\rm DM}_{\rm sub}\) (see also, Sect. 4.2). Fig. 10 (right) and Table 8 detail the average DM which will be _observed_ due to traversing sub-halos at these redshifts. Here, too, we present statistics obtained when considering traversed sub-halos of any mass, and when considering sub-halos binned by their masses. It can be seen that although \(\widetilde{\rm DM}_{\rm sub}\) increases with redshift in all cases, this increase is greatly suppressed when considering observed DM, due to the \((1+z)^{-1}\) factor. Again, we only provide DM statistics for a given mass bin and redshift in Table 8 when the total number of sub-halos traversed by segments in the snapshot for this mass bin is \(\geq 40\) (see Table 5).
Figure 7: Redshift evolution of DM contributions as a function of large-scale structure. _Left panel:_ The mean, median, and standard deviation around the mean of the DM-redshift relationship for TNG300-1 halos (DM\({}^{\rm TNG}_{\rm halos}\); yellow), filaments (DM\({}^{\rm TNG}_{\rm filaments}\); cyan), and voids (DM\({}^{\rm TNG}_{\rm voids}\); purple), derived using our metric. These are compared to the \(\langle\rm DM_{cosmic}\rangle\) subcomponents (see Sect. 6.1): \(\langle\rm DM_{IGM}\rangle\) (dashed black line) and \(\langle\rm DM_{CGM}\rangle\) (dot-dashed black line). _Right panel:_ The means (coloured lines) and standard deviations (shaded regions) of the equivalent fractional DM contributions \(\langle f_{\rm DM}\rangle\) accumulated due to traversing each type of structure by our FRBs originating at given redshifts.
## 6 Discussion
The study of matter distributions in difficult-to-observe phases is a burgeoning field of research. Observationally, Sunyaev-Zel'dovich and X-ray studies are already being used (Tanimura et al. 2019c,a, 2020a,b); as are cosmological hydrodynamic simulations, to within the constraints afforded by, e.g., their feedback models. In the future, large galaxy catalogs provided by surveys such as LSST (Ivezic et al. 2019) might also be useful in conjunction with matter density reconstruction techniques (see, e.g., Wang et al. 2009, 2012, 2016) for reconstructing large volumes of LSS. FRBs, with their isotropic sky distribution (Petroff et al. 2022) and the insight they provide into the ionised environments which they traverse, may yet prove a valuable way to augment such techniques, and aid our understanding of the ionised matter distribution of the Universe.
In view of this, we have investigated the evolution with redshift of the cosmological DM contribution; the contributions of its constituent halos, filaments and voids; the likelihood of intercepting foreground collapsed structures; and the impact parameters and DM contributions associated with these interceptions, for FRBs within TNG300-1. In this Section, we compare our results to previous investigations utilising different analytical, em-
Figure 8: Collapsed structures traversed by our segments. _Each panel:_ Scatter points indicating the distributions in impact parameter (\(b_{\perp,\rm sub}\)) - accumulated dispersion measure (DM\({}_{\rm sub}\)) parameter space for any sub-halos considered traversed by our segments, in the snapshot of redshift \(z\) described in the legend. Each scatter point colour is determined using its sub-halo mass \(M_{\rm sub}\), according to the mass bin ranges described by the colour bar. The total number of sub-halos \(N_{\rm sub}\) traversed by all segments _within a given snapshot_ is provided in the legend.
pirical and theoretical methods, different simulations and LSS classification schemes, and real FRB observations.
### LSS analysis
In Fig. 6, we compare \(\langle\)DM\({}_{\rm total}^{\rm TNG}\rangle\), the average total DM that the cosmological component will contribute to an FRB originating at a given redshift according to our simulation, to similar results from previous literature. In most cases, the results appear reasonably consistent, with many remaining within one standard deviation of the TNG result out to \(z=4\).
Using the publicly available FRUITBAT repository (Batten et al., 2021)11 we reproduce the results of Ioka (2003), Inoue (2004) and Zhang (2018) in Fig. 6 (left). These are analytical estimates which approximate Eq. (5), and thus DM, using different IGM baryon fractions, \(f_{\rm d}\), and free electron fractions, \(f_{\rm e}\), and assume these to be constant with redshift (see Batten et al. 2021 for a brief review). These values evolve with redshift in TNG (see Eq. (7)), which will account for some of the differences between the analytical results and our analysis. Conversely, the evolution of \(\langle\)DM\({}_{\rm cosmic}\rangle\) with redshift (see, e.g., Prochaska & Zheng 2019; Macquart et al. 2020), which we recreate using the public FRB github repository12 for Fig. 6 (left), is derived by combining theoretical predictions and empirical techniques, and allows for redshift-evolving IGM baryon and ionisation factors. It appears much closer to our result.
Footnote 11: [https://github.com/abatten/fruitbat](https://github.com/abatten/fruitbat)
Footnote 12: [https://github.com/FRBs/FRB](https://github.com/FRBs/FRB)
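For reference, a constant-\((f_{\rm d},f_{\rm e})\) estimate of this kind can be reproduced in a few lines by integrating a Macquart-style \({\rm d}\,{\rm DM}/{\rm d}z\) (cf. Eq. 16). The sketch below is illustrative only: the diffuse baryon and electron fractions and the cosmological parameters are placeholder values, not the exact choices of the cited works.

```python
import numpy as np
from scipy.integrate import quad

MPC_CM, PC_CM = 3.0857e24, 3.0857e18
H0 = 67.74 * 1.0e5 / MPC_CM              # Hubble constant in s^-1
C_CGS, G_CGS, M_P = 2.9979e10, 6.674e-8, 1.6726e-24
OM, OL, OB = 0.3089, 0.6911, 0.0486      # assumed flat LambdaCDM parameters

def ddm_dz(z, f_d=0.84, f_e=0.875):
    """dDM/dz in pc cm^-3 for constant diffuse and electron fractions."""
    rho_c = 3.0 * H0**2 / (8.0 * np.pi * G_CGS)   # critical density, g cm^-3
    n_e0 = f_d * f_e * OB * rho_c / M_P           # comoving n_e today, cm^-3
    return C_CGS * n_e0 * (1.0 + z) / (H0 * np.sqrt(OM * (1 + z)**3 + OL)) / PC_CM

def mean_dm(z_src):
    """<DM_cosmic>(z_src) in pc cm^-3."""
    return quad(ddm_dz, 0.0, z_src)[0]

print(f"<DM>(z=1) ~ {mean_dm(1.0):.0f} pc cm^-3")  # of order 10^3, cf. Fig. 6
```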
We also compare our results to the most probable DM-\(z\) relationships obtained using various cosmological simulations in Fig. 6 (right). These include the medium-resolution Magneticum Pathfinder simulation (Dolag et al., 2015), the original Illustris simulation (Jaroszynski, 2019), the RAMSES simulation (Zhu & Feng, 2021), the EAGLE simulation (Batten et al., 2021), and IllustrisTNG, when interpolating the results of Zhang et al. (2021). It is notable that our recreation of the Zhang et al. (2021) result (see Fig. 6 (right), yellow dashed line), which we achieve by interpolating their fits, yields systematically lower DMs as a function of redshift than our own result, despite both being derived using the same run of the same simulation (TNG300-1). We posit that these differences arise for two reasons. Firstly, a systematic shift towards lower DMs in the peak of the Zhang et al. (2021) fitted distributions compared with their data distributions can be seen when comparing each for a given redshift. Secondly, Zhang et al. (2021) elect to discard cells containing measurable star formation during their DM calculations, whereas we apply a correction factor (see Eq. (8)) to these cells, and still include some portion of their material into our calculations. Minor differences between our result and the majority of the other simulations likely arise due to technical differences and underlying assumptions which inform the simulations. These could include the sizes, resolutions, and AGN feedback models of the simulations, and the exact values of the cosmological parameters which they are initialised with.
All considered, the majority of the other curves lying within one standard deviation of our result demonstrates a broad consistency with previous literature, albeit highlighting the non-negligible effect that underlying cosmological parameters and galaxy evolution mechanisms have on the true DM-redshift relationship.
In the left panel of Fig. 7, we compare the evolution of \(\langle\)DM\({}_{\rm total}^{\rm TNG}\rangle\) to the evolution of its large-scale structure subcomponents, \(\langle\)DM\({}_{\rm halos}^{\rm TNG}\rangle\), \(\langle\)DM\({}_{\rm filaments}^{\rm TNG}\rangle\), and \(\langle\)DM\({}_{\rm voids}^{\rm TNG}\rangle\). For reference, we plot these against another interpretation of the cosmological DM component. In this Figure, \(\langle\)DM\({}_{\rm cosmic}\rangle\), and its subcomponents, \(\langle\)DM\({}_{\rm CGM}\rangle\)13 and \(\langle\)DM\({}_{\rm IGM}\rangle\), are calculated using the public
Figure 9: Collapsed structures traversed with redshift. _Left panel_: Statistics (mean, standard deviation) describing the average number of sub-halos which are traversed by FRB sightlines out to given redshifts within TNG300-1. Black lines and grey shaded regions describe the total number of sub-halos traversed; coloured lines and shaded regions describe the number of sub-halos when binned by mass according to the mass bins of Fig. 8. _Right panel_: Statistics (median, interquartile range) describing the average impact parameters between our sightlines and these structures within each snapshot. Overplotted crosses are the impact parameters between observed FRBs and galaxies as calculated by Connor & Ravi (2022), Connor et al. (2020), and Prochaska et al. (2019).
FRB github repository. The value \(\langle\mathrm{DM}_{\mathrm{CGM}}\rangle\) describes the average DM contributed to \(\langle\mathrm{DM}_{\mathrm{cosmic}}\rangle\) by ionised material residing within the CGM of any galactic halos which lie along FRB sightlines as a function of redshift, and is computed using the Aemulus Halo Mass Function (McClintock et al., 2019). Thus the value \(\langle\mathrm{DM}_{\mathrm{IGM}}\rangle\), which is defined as \(\langle\mathrm{DM}_{\mathrm{cosmic}}\rangle-\langle\mathrm{DM}_{\mathrm{CGM}}\rangle\), must attribute any remaining DM to matter within the diffuse IGM, but outside of foreground galactic halos. When compared with our derived values, it can be seen that \(\langle\mathrm{DM}_{\mathrm{CGM}}\rangle\) consistently accounts for a larger portion of \(\langle\mathrm{DM}_{\mathrm{cosmic}}\rangle\) than \(\langle\mathrm{DM}_{\mathrm{halos}}^{\mathrm{TNG}}\rangle\) contributes to \(\langle\mathrm{DM}_{\mathrm{total}}^{\mathrm{TNG}}\rangle\). This likely results from the Aemulus Halo Mass Function incorporating all matter classified as belonging to halos according to our metric, as well as some of the denser matter lying closer to these halos, which we might classify as belonging to filaments. Likewise, it can be seen that \(\langle\mathrm{DM}_{\mathrm{IGM}}\rangle\) consistently contributes a smaller portion to \(\langle\mathrm{DM}_{\mathrm{cosmic}}\rangle\) than \(\langle\mathrm{DM}_{\mathrm{filaments}}^{\mathrm{TNG}}\rangle\) contributes to \(\langle\mathrm{DM}_{\mathrm{total}}^{\mathrm{TNG}}\rangle\), presumably because it includes only the less dense portion of the matter which we classify as filamentary, along with any matter we classify as belonging to voids.
Zhu & Feng (2021) have previously used the RAMSES simulation to investigate the DM contributions of large-scale structures between \(0<z<1\), assuming different definitions for large scale structure types. They report values at \(z=0\), which we can compare to our results at \(z=0.1\). Assuming equivalence between Zhu & Feng (2021)'s definition of nodes and our definition of halos, and that our definition of filaments encompasses Zhu & Feng (2021) filaments and walls, their results translate to \(\sim 23.5\%\), \(\sim 61.4\%\) and \(\sim 15\%\) of cosmological DM being attributed to halos, filaments, and voids respectively at \(z=0\). These values are consistent to within one standard deviation of our results at \(z=0.1\).
Conversely, Akahori et al. (2016) have previously studied the contributions to DM by large-scale structures out to \(z\sim 5\) using the \(\Lambda\)CDM universe simulations. They adopt a different classification scheme, defining their structures to be voids, sheets, filaments (which they also define as the WHIM) and clusters according to gas temperature (\(T<10^{4}\,\mathrm{K}\); \(10^{4}\,\mathrm{K}<T<10^{5}\,\mathrm{K}\); \(10^{5}\,\mathrm{K}<T<10^{7}\,\mathrm{K}\); and \(T>10^{7}\,\mathrm{K}\) respectively). Their work yields different results when compared with our Table 4. While the fractional contributions to total DM by matter which we classify as halos, and Akahori et al. (2016) classify as clusters, both decrease with redshift, our filamentary fraction grows between \(0.1<z<5\), and our void fraction decreases. In Akahori et al. (2016) these two contributions evolve in the opposite direction. Akahori et al. (2016) voids increase their relative DM contribution with redshift, overtaking the DM contributed by their filaments by \(z\sim 2\). By \(z\sim 5\), voids exceed filaments in their fractional contribution to total DM by a factor \(>2\), and by over \(\sim 1000\,\mathrm{pc\,cm^{-3}}\), which is more than all matter which we classify as voids ever contributes. Assuming that our definition of filaments encompasses the Akahori et al. (2016) definitions of both filaments and sheets (see, e.g., Martizzi et al., 2019, who show that filaments defined using local dark matter density are equivalent to filaments plus sheets when defined using the tidal tensor method) may reduce this discrepancy to some extent. However this still does not reconcile the two sets of results. As with our comparison of \(\langle\mathrm{DM}_{\mathrm{total}}^{\mathrm{TNG}}\rangle\) to previous literature, the root of the discrepancy between the trends for our LSS contributions likely derives from fundamental differences between our simulations (e.g. underlying galaxy feedback models), along with our different methods for constructing FRB sightlines and our categorisation of LSS according to gravitational collapse rather than temperature.
When analysing the relative contributions to DM accumulated from our different LSS types, the relatively low variance in the DM contributed by voids between sightlines presents an interesting notion. Walters et al. (2018) has previously shown that the constraining power of FRBs on cosmological parameters is limited by the inhomogeneity of the IGM from sightline to sightline, and that minimising this variance may be necessary in order to use FRBs to measure the dark energy equation of state. It thus follows that, could a sample of FRBs propagating along low-structure sightlines traversing only (or mostly) void material be assembled, these low-variance sources might offer the ability to obtain tighter cosmological constraints than by using the full FRB population. This idea has been touched on by Baptista et al. 2023, who, building on the \(P(\mathrm{DM}|z)\) model of Macquart et al. 2020, parameterised the halo gas contribution to its deviation from a Gaussian in terms of a fluctuation parameter, \(F\). While investigating the degeneracy of this parameter with \(H_{0}\), they note that the low DM end of the \(P(\mathrm{DM}|z)\) distribution is less affected by the choices of \(F\) and \(H_{0}\), implying that lower-structure sightlines (e.g. through voids) may better constrain \(H_{0}\).
As the measurable parameter for observed FRBs is total DM, to obtain such a sample, one would have to inventively select probable low-structure sightline FRBs from the overall population. Potential methods might include selecting well-localised FRBs with observed DMs which are lower than expected when considering their \(P(\mathrm{DM}|z)\) redshift distributions (see, e.g., Walker et al., 2020), or using, e.g., photometric techniques (Simha et al., 2021) to identify structureless sightlines. Such work could also potentially help constrain the baryon distribution within voids.
### Sub-halo analysis
In Fig. 9 (left) and Table 6, we quantify the average number of collapsed structures traversed by an FRB originating at a given redshift, according to our simulations. In Fig. 9 (right) and Table 7, we quantify the average impact parameters \(b_{\perp}\) which we measure between any collapsed structures traversed by our sightlines, and the sightlines which traversed them, at the redshifts of our snapshots. For reference, we have compared these results to real observational FRB data. Prochaska et al. (2019) have studied the halo gas of a galaxy intersected by an ASKAP-localised FRB, and Connor et al. (2020) have reported on a single FRB which intersects regions of both M33 and M31. Additionally, Connor & Ravi (2022) discuss unlocalised, non-repeating, CHIME-detected FRBs which they deem likely to have intersected galaxies (but unlikely to have passed directly through their disks) during their propagation through the IGM. The FRB-galaxy intersection discussed by Prochaska et al. (2019) occurs at \(z\sim 0.37\) with the \(\sim 10^{12}\,h^{-1}\mathrm{M}_{\odot}\) galaxy FG-181112 at \(b_{\perp}\sim 27\,h^{-1}\mathrm{ckpc}\). This impact parameter lies outside of the IQR (\(=[39.88,128.27]\,h^{-1}\mathrm{ckpc}\)) which we measure at \(z=0.4\) when considering impact parameters between all sub-halos and our sightlines, and when considering impact parameters to sub-halos of mass \(M_{\mathrm{sub}}=[10^{12},10^{14}]\,h^{-1}\mathrm{M}_{\odot}\) (\(=[184.85,438.88]\,h^{-1}\mathrm{ckpc}\)). It additionally lies outside of the 10th to 90th percentiles (\(=[112.76,663.54]\,h^{-1}\mathrm{ckpc}\)) which we measure for intersections with sub-halos of mass \(M_{\mathrm{sub}}=[10^{12},10^{14}]\,h^{-1}\mathrm{M}_{\odot}\), though it lies within these percentiles when considering sub-halos of any mass. These results support Prochaska et al. (2019)'s analysis that this is a particularly rare intersection event (which they place at a probability of \(\sim 0.5\%\)).
Of the 28 FRB-galaxy impact parameters discussed by Connor et al. (2020) and Connor & Ravi (2022), all are associated
with galaxies which lie at \(z\simeq 0\). Of these 28 impact parameters, we find that 19 lie within the IQR (\(=[41.33,131.86]\,h^{-1}\)ckpc) which we measure when considering the impact parameters between sub-halos of all masses and our sightlines at \(z=0\). Additionally, we find that 26 of these 28 lie between the 10th and 90th percentiles (\(=[23.06,303.08]\,h^{-1}\)ckpc) which we measure for our impact parameters when considering sub-halos of all masses. The FRB-galaxy impact parameters which lie outside these ranges are associated with FRB20190430B (Connor and Ravi 2022) and FRB191108 (Connor et al. 2020), in their purported intersections with the galaxies NGC6015 and M33 at \(b_{\perp}\simeq 12.9\,h^{-1}\)ckpc and \(b_{\perp}\simeq 12.2\,h^{-1}\)ckpc respectively. As demonstrated by Prochaska et al. (2019), FRBs may impact intervening galaxies at closer distances on rare occasions, thus we see no discrepancy between our analysis and this result. However, we note that every galaxy considered by Connor and Ravi (2022) is reported to have \(M_{\rm vir}>10^{12}\,h^{-1}\)M\({}_{\odot}\), placing them in our \([10^{12},10^{14}]\,h^{-1}\)M\({}_{\odot}\) sub-halo mass bin. Many of the Connor and Ravi (2022) \(b_{\perp}\) values fall outside our measured ranges when considering sub-halos in this mass bin alone.
Figure 10 and Table 8 quantify the average dispersion measures accumulated within individual snapshots by our sightlines due to traversing these sub-halos. Fig. 10 (left) presents the average dispersion measures \(\widetilde{\rm DM}_{\rm sub}\) which are accumulated within snapshots of given redshifts by our segments due to traversing their sub-halos, in the rest frame of the sub-halos. We find these increase with redshift, both when considering all traversed sub-halos, and when considering sub-halos binned by mass. Fig. 10 (right) presents the average _observed_ dispersion measures \(\rm DM_{\rm sub}\) due to these intersections. We find the observed DMs to be strongly suppressed as a function of redshift by the \((1+z)^{-1}\) factor. These trends generally align well with previous literature which examines the DM contributions of FRB host galaxies, e.g. Jaroszynski (2020), Zhang et al. (2020), and Kovacs et al. (in prep.), who also report increasing, albeit suppressed, average \(\rm DM_{host}\) contributions as a function of redshift. These works report systematically larger average \(\rm DM_{host}\) contributions (of order tens to hundreds of pc cm\({}^{-3}\); Jaroszynski 2020; Zhang et al. 2020) than our average \(\rm DM_{\rm sub}\) values (which have medians of order \(\rm DM_{\rm sub}\simeq 1\) pc cm\({}^{-3}\) for sub-halos of any mass, increasing for more massive sub-halos). This, however, is to be expected, given that FRBs are embedded in their host galaxies, and thus more likely to be exposed to denser material, including from their local environments, than when traversing foreground sub-halos.
While the evolution of the DM contribution due to collapsed structures as a function of redshift may be consistent with previous simulations, we find that, for a given redshift, the DM distributions themselves are highly skewed towards lower DMs, and that on average, the DM amplitudes which we observe do not align well with observational results. Connor and Ravi (2022) report an average observed DM excess of \(\sim 90\) pc cm\({}^{-3}\) for their likely galaxy-intersecting sources, compared to those unlikely to intersect galaxies. Their purported galaxies of intersection are all nearby (\(z\simeq 0\)), with masses \(\sim 10^{12}\,h^{-1}\)M\({}_{\odot}\). The \(\sim 90\) pc cm\({}^{-3}\) DM excess lies outside of our measured IQR (\(=[2.03,24.52]\) pc cm\({}^{-3}\)) for \(M_{\rm sub}=[10^{12},10^{14}]\,h^{-1}\)M\({}_{\odot}\) sub-halos, and also just above the range we measure between the 10th and 90th percentiles for these data (\(=[10.56,69.09]\) pc cm\({}^{-3}\)). Indeed, the reported DM excess lies above the interquartile ranges, and 90th percentiles which we measure for \(\rm DM_{\rm sub}\) for all but the most massive (\(M_{\rm sub}>10^{14}\,h^{-1}\)M\({}_{\odot}\)) TNG sub-halos (see Table 8).
When considering results derived from TNG sub-halos of any mass, some non-galaxy sub-halos could potentially skew statistics towards lower DMs (see Sect. 3.2). Equally, however, some of this DM discrepancy could be attributed to matter lying outside of sub-halos as defined by TNG. Connor and Ravi (2022) themselves note that their observational excess DM is unexpectedly large, citing expected means of \(<40\) pc cm\({}^{-3}\). They combine their unexpectedly large DM excess, with the observation that many of the purportedly intersected galaxies reside within galaxy groups, to conclude that FRBs have the potential to become effective probes of diffuse ionised matter within dark matter halos, between galaxy groups and clusters; and to allow for
Figure 10: DM accumulated within individual snapshots due to traversing collapsed structures. _Left panel:_ Statistics (median, interquartile range) describing the average \(\overline{\rm DM}\) accumulated within snapshots by sightlines traversing our sub-halos in the rest frames of the snapshots, and thus, the rest frames of the sub-halos. _Right panel:_ These values weighted by redshift, effectively describing the average DM which would be observed by an observer at \(z=0\) due to these sub-halos. Colours indicate the mass ranges described in Fig. 9.
the testing of models describing dark matter halos and the feedback which informs these matter distributions. Our results therefore provide a further incentive to continue investigating the DM contributions of cosmological LSS. As more FRBs are detected, well-localised, and observed to intersect galaxies at a range of redshifts, the statistics of these intersections may prove valuable for testing both IllustrisTNG, and feedback models.
## 7 Concluding remarks
In this work, we have studied the redshift-evolving contributions to DM by large-scale and collapsed structures along FRB sightlines using TNG300-1.
We find that filaments dominate the cosmological DM contribution, increasingly so for FRBs originating at larger redshifts. The filamentary contribution rises from \(\sim 71\%\) to \(\sim 80\%\) for FRBs originating at \(z=0.1\) and \(z=5\), respectively, while the halo contribution falls from \(\sim 15\%\) to \(\sim 8\%\), and the void contribution remains consistent to within \(\sim 1\%\). We find that the majority of the scatter in DM from sightline to sightline originates from halo and filamentary matter. Conversely, the DM contribution from voids varies less from sightline to sightline, indicating voids may be homogeneous environments. As it has been shown that constraining cosmological parameters using the DM-redshift relationship is limited by the variance in dispersion measures from sightline to sightline (Walters et al., 2018), we posit that leveraging void-only, and other relatively low-structure sightlines may prove an effective method for probing void baryon distributions, and more precisely constraining cosmological parameters using FRBs.
We find that, on average, an FRB sightline will intersect one foreground collapsed dark matter structure, or sub-halo, of any mass by between \(0.5<z<0.7\). An FRB originating at \(z=1\) will on average intersect \(\langle N_{\rm sub}\rangle\simeq 1.8\) sub-halos along its propagation path. FRBs originating at \(z=5\) will on average intersect \(\langle N_{\rm sub}\rangle\simeq 12.4\) sub-halos. The impact parameters between our simulated sightlines and TNG sub-halos of any mass appear consistent with those measured for purported galaxy-intersecting FRBs from the literature. We find that of 28 purported FRB-galaxy impact parameters at \(z\sim 0\) (Connor et al., 2020; Connor and Ravi, 2022), 19 lie within our measured IQR (\(=[41.33,131.86]\,h^{-1}\)ckpc) for impact parameters at \(z=0\), and 26 lie within the 10th and 90th percentiles (\(=[23.06,303.08]\,h^{-1}\)ckpc) of our data. The remaining two \(z\sim 0\) sightlines, and an FRB-galaxy intersection at \(z\sim 0.37\) (Prochaska et al., 2019) may simply be rarer cases of FRB sightlines more closely impacting galaxies along their propagation paths. However, we find higher \(b_{\perp}\) values than observed for the purported intersected galaxies when considering a subset of TNG sub-halos closer to their masses.
We find that in the rest frame of our traversed sub-halos, the average accumulated DM\({}_{\rm sub}\) increases with the redshift of the sub-halos, but this increase is observationally suppressed by \((1+z)^{-1}\). This behaviour mirrors similar analyses of the DM\({}_{\rm host}\) contributions of host galaxies according to other cosmological simulations (Jaroszynski, 2020; Zhang et al., 2020; Kovacs et al., in prep.). However, we find that, on average, the accumulated DM due to traversing these sub-halos is lower than the \(\sim 90\) pc cm\({}^{-3}\) average excess DM which has been calculated for the purported likely galaxy-intersecting FRBs of Connor and Ravi (2022), lending weight to suggestions that FRBs could probe ionised matter within galaxy groups and clusters, and the feedback which informs it. Exploring the nature of this DM deficit may be an interesting avenue of research for future work. For example, Simha et al. (2020) and Lee et al. (2021) have already demonstrated that combining density reconstruction techniques with information about galaxies intersecting FRB sightlines may allow for better constraints on IGM DM contributions than the average values and variance offered by DM\({}_{\rm cosmic}\). Ravi (2019) and Lee et al. (2021) both advocated complementing well-localised FRB observations with analysis of intervening halos along their sightlines, in order to constrain the fraction of baryons in the IGM, \(f_{\rm d}\), and the distributions of baryons in CGM gas. In addition, by combining simulations of future dark matter halo catalogues, FRB redshift distributions, and free electron models for the IGM and galaxy halos, Shirasaki et al. (2022) have performed a detailed investigation of the joint constraining power of FRBs and dark matter halos. They conclude that cross-correlations between FRB DMs and galaxy cluster-sized halos (\(M\sim 10^{14}\,h^{-1}\)M\({}_{\odot}\)) could potentially constrain multiple cosmological parameters including \(S_{8}(\equiv\sigma_{8}(\Omega_{\rm m}/0.3)^{0.5})\), \(\Omega_{\rm b}\), \(h\), and \(f_{\rm e}\); while cross-correlations with main sequence galaxy-sized halos (\(M\sim 10^{12}\,h^{-1}\)M\({}_{\odot}\)) could inform models of galaxy formation and AGN feedback.
Aside from simply offering a better understanding of the matter distribution of the Universe, learning more about cosmological LSS and the numbers, and natures, of collapsed structures along FRB sightlines may prove useful in hitherto unanticipated ways. To aid future work, we provide our results in various forms. For easy comparison to future analyses which will inevitably use other simulations as they are developed and refined, and for simple analysis of FRB observations, we provide average DM and sub-halo information derived from Figs. 7, 9 and 10 in Tables 4, 6, 7 and 8. For more in-depth analyses which may require them, we provide our best-fit parameters for our Eqs. (18) and (19) fits to our DM\({}_{\rm halos}^{\rm TNG}\), DM\({}_{\rm filaments}^{\rm TNG}\) and DM\({}_{\rm voids}^{\rm TNG}\) distributions in Table 3. We hope this information may prove useful in future analyses of cosmological ionised matter distributions, AGN feedback models, cosmological parameters, and FRBs themselves.
###### Acknowledgements.
CRHW acknowledges invaluable help with TNG from Dylan Nelson, and from MPI and MPCDF staff including Markus Rampp, Thorsten Naab, Rüdiger Pakmor, and members of the TRIDENT and YT project communities, including Britton Smith, Maan Hani, and John ZuHone. CRHW also thanks Stefan Hackstein, Xavier Prochaska, Sunil Simha and Mohit Bhardwaj for useful discussions. CRHW is a member of the Max Planck Lise Meitner group, and acknowledges support from the Max Planck Society. LGS is a Max Planck Lise Meitner group leader and acknowledges support from the Max Planck Society. YZM is supported by the National Research Foundation of South Africa under Grants No. 1505800, No. 120385 and No. 120378, and the NITheCS program "New Insights into Astrophysics and Cosmology with Theoretical Models Confronting Observational Data". MCA acknowledges partial financial support from the Seal of Excellence @UNIPD 2020 program under the ACRGOL project. CRH is supported by NSF grant AAG-1911233, and NASA grants HST-AR-15800, HST-AR-16633, and HST-GO-16703. The authors thank the anonymous referee for valuable feedback during submission of this work. All data processing for this work was performed on the MPCDF high-power computing cluster, RAVEN14.
Footnote 14: [https://www.mpcdf.mpg.de/services/supercomputing/raven](https://www.mpcdf.mpg.de/services/supercomputing/raven)
|
2309.14899 | Anisotropy and effective medium approach in the optical response of 2D
material heterostructures | 2D materials offer a large variety of optical properties, from transparency
to plasmonic excitation. They can be structured and combined to form
heterostructures that expand the realm of possibility to manipulate light
interactions at the nanoscale. Appropriate and numerically efficient models
accounting for the high intrinsic anisotropy of 2D materials and
heterostructures are needed. In this article, we retrieve the relevant
intrinsic parameters that describe the optical response of a homogeneous 2D
material from a microscopic approach. Well-known effective models for vertical
heterostructure (stacking of different layers) are retrieved. We found that the
effective optical response model of horizontal heterostructures (alternating
nano-ribbons) depends on the thickness. In the thin layer model, well adapted
for 2D materials, a counter-intuitive in-plane isotropic behavior is predicted.
We confront the effective model formulation with exact reference calculations
such as ab-initio calculations for graphene, hexagonal boron nitride (hBN), as
well as corrugated graphene with larger thickness but also with classical
electrodynamics calculations that exactly account for the lateral
structuration. | Bruno Majérus, Emerick Guillaume, Pascal Kockaert, Luc Henrard | 2023-09-26T12:56:36Z | http://arxiv.org/abs/2309.14899v1 | # Anisotropy and effective medium approach in the optical response of 2D material heterostructures
###### Abstract
2D materials offer a large variety of optical properties, from transparency to plasmonic excitation. They can be structured and combined to form heterostructures that expand the realm of possibility to manipulate light interactions at the nanoscale. Appropriate and numerically efficient models accounting for the high intrinsic anisotropy of 2D materials and heterostructures are needed. In this article, we retrieve the relevant intrinsic parameters that describe the optical response of a homogeneous 2D material from a microscopic approach. Well-known effective models for vertical heterostructures (stacking of different layers) are retrieved. We found that the effective optical response model of horizontal heterostructures (alternating nano-ribbons) depends on the thickness. In the thin layer model, well adapted for 2D materials, a counter-intuitive in-plane isotropic behavior is predicted. We confront the effective model formulation with exact reference calculations such as _ab-initio_ calculations for graphene, hexagonal boron nitride (hBN), as well as corrugated graphene with larger thickness, but also with classical electrodynamics calculations that exactly account for the lateral structuration.
## I Introduction
The extraordinary optical and electromagnetic (EM) properties of two-dimensional (2D) materials have been broadly investigated from visible light to microwaves [1; 2; 3; 4], leading to developments in various domains such as photovoltaics [5; 6], biosensors [7; 8], superabsorbers [9; 10] and transparent conducting films [5; 11]. The description of the EM response of a single layer has been debated recently [12; 13; 14; 15; 16; 17; 18; 19; 20; 21] based on a thin film model which assigns an effective permittivity to a layer with a given thickness, or on a 2D model which sets a surface susceptibility or conductivity at the interface between two media [16; 17; 19].
It is clear that anisotropy is essential in the description of 2D materials. As an example, recent ellipsometry results on MoS\({}_{2}\) and graphene have shown that the out-of-plane response plays a crucial role in their optical response [15; 19]. In particular, the comparison between the thin film and surface susceptibility models required this out-of-plane response to be carefully handled as the materials are not periodic in that direction [17; 21].
The stacking of 2D layers modifies their electronic properties as shown for the stacking order of multilayer graphene [22; 23; 24; 25] or for the transition from direct to indirect band gap for TMDs [26; 27]. These effects mainly occur close to the Fermi level and affect less the EM response in the visible or U.V. range. The stacking also results in long-range (electrostatic) interactions that modify the optical response in the absence of change in the electronic structure of the system [12; 21]. This long-range interaction should also be accounted for to retrieve the single-layer optical response from quantum simulation based on supercell techniques (periodic repetition of the single-layer separated with vacuum) [12].
The high number of possible heterostructures and their atomic complexity, as well as their intrinsic anisotropy, demand a robust but computationally tractable approach. Vertical heterostructures, the stacking of identical or different 2D layers, have been widely investigated in the last decade [9; 28; 29; 30; 31; 32; 33; 34; 35; 36], in particular with an effective medium approach [17; 37; 28]. However, the thickness range of validity of thin-film or surface-susceptibility effective models has not been explored. On the other hand, horizontal heterostructures of single-layer materials have been less studied [38; 39; 31; 35] and the effective models have not been confronted with exact methods that account for the structuring.
In this paper, first, we investigate the validity of the effective model for vertical 2D heterostructures with a graphene-hBN bilayer and we explore the limits of this approach with respect to the number of layers. Second, turning to horizontal heterostructures, we show that the optical response of 2D material nanoribbons is correctly described by a different effective model than that of thick ribbons and nanorods [35; 40]. In particular, we show that horizontal heterostructures of 2D materials optically behave like a uniform uniaxial material, isotropic in the plane.
In section II, starting from a microscopic framework, we define surface susceptibilities of 2D materials which are independent of the thickness, both for in-plane and out-of-plane polarisation, retrieving within a microscopic approach, one that includes among others smooth transitions between layers, the results obtained from a macroscopic point of view in [21]. On this basis, the effective models for vertical and horizontal heterostructures are derived. The reference numerical methods, with which the effective models are compared, are described in section III. A first-principles quantum approach (time-dependent density functional theory, TDDFT) is used when the size of the system permits, e.g. for vertical heterostructures, and a classical electrodynamics approach that exactly accounts for the structuring (rigorous coupled wave analysis, RCWA) is used for horizontal heterostructures. In section IV we apply the effective models to various heterostructures and we compare the results with reference simulations. Graphene multilayers and a graphene-hBN bilayer are investigated as vertical heterostructures, while graphene nanoribbons and graphene-hBN hetero-nanoribbons are studied as horizontal heterostructures.
## II Effective medium theory for 2D materials and heterostructures
In this section, macroscopic response functions are related to microscopic response functions and effective models are deduced for heterostructures.
### The irreducible and external susceptibilities
The time dependencies of the potentials are assumed to be harmonic, i.e. \(V(t)\propto e^{i\omega t}\); this factor is omitted below for conciseness. \(V_{app}\left(\mathbf{r}\right)\) is a periodic potential of uniform amplitude \(\tilde{V}_{app}\) applied to the material at a microscopic (atomic) scale,
\[V_{app}\left(\mathbf{r}\right)=\tilde{V}_{app}\,e^{i\mathbf{k}\cdot\mathbf{r}}. \tag{1}\]
The induced and total potentials \(V_{ind}\left(\mathbf{r}\right)\) and \(V_{tot}\left(\mathbf{r}\right)\) can be written as follows:
\[V_{ind}\left(\mathbf{r}\right) = \tilde{V}_{ind}\left(\mathbf{r}\right)e^{i\mathbf{k}\cdot\mathbf{ r}}, \tag{2}\] \[V_{tot}\left(\mathbf{r}\right) = \tilde{V}_{tot}\left(\mathbf{r}\right)e^{i\mathbf{k}\cdot\mathbf{ r}}, \tag{3}\]
where the spatial variations of the functions \(\tilde{V}_{tot}(\mathbf{r})\) and \(\tilde{V}_{ind}(\mathbf{r})\) are related to the local fields (LF) and have the same spatial periodicity as the unit cell of the material. We define here the irreducible susceptibility \(\chi\left(\mathbf{r},\mathbf{r}^{\prime}\right)\) and the external susceptibility \(\xi\left(\mathbf{r},\mathbf{r}^{\prime}\right)\) by:
\[\tilde{V}_{ind}\left(\mathbf{r}\right) =-\int\chi\left(\mathbf{r},\mathbf{r}^{\prime}\right)\tilde{V}_{ tot}\left(\mathbf{r}^{\prime}\right)d^{3}\mathbf{r}^{\prime}, \tag{4}\] \[\tilde{V}_{ind}\left(\mathbf{r}\right) =-\int\xi\left(\mathbf{r},\mathbf{r}^{\prime}\right)\tilde{V}_{ app}\,d^{3}\mathbf{r}^{\prime}. \tag{5}\]
where the integrals span the whole space. These susceptibilities can be calculated from the more usual irreducible and external polarizabilities [41; 42]. The macroscopic dielectric function of a material is obtained through the average of the potential over the unit cell \(\Omega\) [43; 44; 45]:
\[\varepsilon_{M}=\frac{\tilde{V}_{app}}{\left\langle\tilde{V}_{tot}\left( \mathbf{r}\right)\right\rangle}. \tag{6}\]
Averaging the total potential, using eq. (5) and the fact that the total field is the sum of the applied and induced fields, we obtain:
\[\frac{1}{\varepsilon_{M}} =1-\frac{1}{V}\int_{\Omega}\int\xi\left(\mathbf{r},\mathbf{r}^{ \prime}\right)d^{3}\mathbf{r}^{\prime}d^{3}\mathbf{r}, \tag{7}\] \[\frac{1}{\varepsilon_{M}} \equiv 1-\xi_{M}, \tag{8}\]
with \(V\) the volume of the unit cell. The macroscopic external susceptibility \(\xi_{M}\) defined by eq. (8) relates the total displacement field to the polarization field
\[\mathbf{P}=\xi_{M}\mathbf{D}. \tag{9}\]
This susceptibility has already been defined in [21] as the displacement susceptibility. In contrast, the macroscopic irreducible susceptibility \(\chi_{M}\), the usual susceptibility of electromagnetism, is defined by
\[\mathbf{P}=\varepsilon_{0}\chi_{M}\mathbf{E}, \tag{10}\]
and is obtained from eq. (8),
\[\chi_{M}=\varepsilon_{M}-1=\frac{\xi_{M}}{1-\xi_{M}}. \tag{11}\]
In general, \(\chi_{M}\) cannot be obtained directly by averaging eq. (4), because of the spatial variations of the total field \(\tilde{V}_{tot}\) in the unit cell, associated with the local-field effects. However, if the material is homogeneous, the LF are negligible and the total field can be replaced by its spatial average in eq. (4) [43; 46] and
\[\varepsilon_{M} =1+\frac{1}{V}\int_{\Omega}\int\chi\left(\mathbf{r},\mathbf{r}^{ \prime}\right)d^{3}\mathbf{r}^{\prime}d^{3}\mathbf{r}, \tag{12}\] \[\varepsilon_{M} \equiv 1+\chi_{M}, \tag{13}\]
where \(\chi_{M}\) is equal to that obtained using eq. (11).
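The macroscopic relations above are straightforward to check numerically. The following minimal Python sketch (our own toy construction, not taken from the paper) discretizes a one-dimensional unit cell, builds an assumed short-ranged external susceptibility kernel, and verifies that eqs. (7), (8), (11) and (13) are mutually consistent.

```python
import numpy as np

# Toy check of eqs. (7), (8), (11) and (13); the kernel below is an assumed
# short-ranged, homogeneous response, not a material computation.
N = 200
dx = 1.0 / N                      # 1D unit cell of length V = 1 (arbitrary units)
x = np.arange(N) * dx

# assumed external susceptibility kernel xi(x, x')
xi_kernel = 0.4 * np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)

xi_M = xi_kernel.sum() * dx * dx          # eq. (7): double average over the cell
eps_M = 1.0 / (1.0 - xi_M)                # eq. (8)
chi_M = xi_M / (1.0 - xi_M)               # eq. (11)
assert np.isclose(eps_M, 1.0 + chi_M)     # consistency with eq. (13)
print(f"xi_M = {xi_M:.4f}, eps_M = {eps_M:.4f}, chi_M = {chi_M:.4f}")
```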
### Permittivity and surface susceptibilities of 2D materials
Bidimensional materials cannot be considered strictly 2D because the electronic wave function extends in the normal direction. Therefore, the microscopic dielectric function varies along this direction. In order to determine this permittivity both numerically (e.g. using TDDFT [44; 12]) and experimentally (e.g. using ellipsometry [47; 19; 15]), 2D materials can be modelled as a layer with a constant permittivity over a thickness \(L\) [44; 12]. In the most general case, the layer of thickness \(L\) is embedded between two media of different permittivities \(\varepsilon_{a}\) and \(\varepsilon_{b}\), as represented in fig. 1 (left). Following [21], this layer can be described as a 2D material of permittivity \(\varepsilon_{2D}\) with thickness \(d\) surrounded by vacuum (fig. 1, right). This vacuum may for example represent the vacuum layers of the supercell used in DFT, or the interlayer distance with another 2D material in the case of heterostructures. The spatial variation of the permittivity in this layered system (vacuum - 2D material - vacuum) is schematically represented in fig. 2. When \(d=L\), it corresponds to the thin film model. When \(d\to 0\), the 2D material is infinitely thin and the permittivity is represented using a Dirac distribution, as for a finite surface polarization at the interface between two materials. These two approaches have been formally combined in [21]. Other models can be imagined, such as a continuous permittivity with a maximum value at the center of the atomic layer, as represented in fig. 2c. However, we show in the following that at a macroscopic scale all these descriptions are equivalent.

Figure 1: Model of a 2D material layer between two media.
To rigorously define the surface susceptibilities of 2D materials, we use the effective model describing the three-layer system (vacuum - 2D material - vacuum) represented on the right side of fig. 1 and in fig. 2a. This effective model comes, for in-plane polarization, from the conservation over the whole system of the tangential component of the electric field, in which case we can use the parallel capacitors equation, eq. (S9). For out-of-plane polarization, it comes from the conservation of the normal component of the displacement field, in which case we can use the series capacitors equation eq. (S12) [48; 49; 41]. For in-plane polarization, it gives:
\[\varepsilon_{eff}^{\parallel}=\frac{L-d}{L}\varepsilon_{vac}+\frac{d}{L} \varepsilon_{2D}^{\parallel}, \tag{14}\]
and for out-of-plane polarization
\[\frac{1}{\varepsilon_{eff}^{\perp}}=\frac{L-d}{L}\frac{1}{\varepsilon_{vac}}+\frac{d}{L}\frac{1}{\varepsilon_{2D}^{\perp}}. \tag{15}\]

The permittivities \(\varepsilon_{2D}^{\parallel}\) and \(\varepsilon_{2D}^{\perp}\) are the permittivities of the layer of thickness \(d\) for in-plane and out-of-plane polarizations respectively.
The fact that the effective permittivity model differs for the \(\parallel\) and \(\perp\) directions is related to the role of the local fields in the optical properties of stratified media and 2D materials. LF mostly affect fields polarized perpendicularly to the sheets, while they may be neglected for in-plane polarization [50; 12]. Since LF are negligible for this polarization, \(\varepsilon_{M}\) of eq. (12) can be used to evaluate \(\varepsilon_{2D}^{\parallel}\) in eq. (14) and, with \(\varepsilon_{vac}=1\), we obtain
\[\varepsilon_{eff}^{\parallel}=1+\frac{1}{LS}\int_{\Omega}\int\chi^{ \parallel}\left(\mathbf{r},\mathbf{r}^{\prime}\right)d^{3}\mathbf{r}^{\prime }d^{3}\mathbf{r}, \tag{16}\]
with \(S=V/d\) the surface of the unit cell.
We can define the surface irreducible susceptibility for in-plane polarization of a 2D material as
\[\chi_{S}^{\parallel}=\frac{1}{S}\int_{\Omega}\int\chi^{\parallel}\left( \mathbf{r},\mathbf{r}^{\prime}\right)d^{3}\mathbf{r}^{\prime}d^{3}\mathbf{r} \tag{17}\]
such that
\[\varepsilon_{eff}^{\parallel}=1+\frac{\chi_{S}^{\parallel}}{L}. \tag{18}\]
The second term on the right-hand side of eq. (16) is the average value of the microscopic susceptibility \(\chi^{\parallel}\left(\mathbf{r},\mathbf{r}^{\prime}\right)\) over a surface \(S\) and a height \(L\). Therefore, it does not depend directly on the variation profile of the permittivity in the layer of thickness \(L\) or on the distance \(d\), and eq. (18) is valid for other models such as those represented in fig. 2b and c. While the in-plane surface susceptibility \(\chi_{S}^{\parallel}\) is independent of the chosen thickness \(L\), as it accounts only for the response of the 2D material in the volume of thickness \(d\), we note that the effective permittivity \(\varepsilon_{eff}^{\parallel}\) depends on \(L\).
The same reasoning can be performed for the out-of-plane polarization. Because the LF cannot be neglected, eq. (8) is used to obtain the effective permittivity of the layer
\[\frac{1}{\varepsilon_{eff}^{\perp}}=1-\frac{1}{L}\frac{1}{S}\int_{\Omega}\int \xi^{\perp}\left(\mathbf{r},\mathbf{r}^{\prime}\right)d^{3}\mathbf{r}^{\prime }d^{3}\mathbf{r}. \tag{19}\]
As before, the second term on the right-hand side of eq. (19) is the average value of the microscopic susceptibility \(\xi^{\perp}\left(\mathbf{r},\mathbf{r}^{\prime}\right)\) over a surface \(S\) and a height \(L\). The surface external susceptibility for out-of-plane polarization is then
\[\xi_{S}^{\perp}=\frac{1}{S}\int_{\Omega}\int\xi^{\perp}\left(\mathbf{r}, \mathbf{r}^{\prime}\right)d^{3}\mathbf{r}^{\prime}d^{3}\mathbf{r}, \tag{20}\]
and the effective out-of-plane permittivity of the layer is
\[\varepsilon_{eff}^{\perp}=\frac{1}{1-\frac{\xi_{S}^{\perp}}{L}}. \tag{21}\]
As for the in-plane response, the surface external susceptibility is independent of the thickness, but the effective permittivity \(\varepsilon_{eff}^{\perp}\) depends on the thickness of the layer.
Figure 2: Permittivity of a 2D material as a function of the position across the layer within: (a) a thin-film model; (b) a surface polarization model; (c) a more general description of the spatial variation.
We emphasize, as stated above, that the surface susceptibilities of eqs. (17) and (20) are model-independent, in the sense that they do not depend on the exact variation profile of the permittivity (see fig. 2). They describe the average response of the 2D material at this interface and they are the relevant quantities to describe the optical response of 2D materials at a macroscopic scale. In particular, they are related to the surface polarization field by
\[\mathbf{P}_{\mathbf{S}}^{\parallel}=\varepsilon_{0}\chi_{S}^{\parallel} \mathbf{E}^{\parallel}, \tag{22}\]
and
\[P_{S}^{\perp}=\xi_{S}^{\perp}D^{\perp}. \tag{23}\]
Note that \(\chi_{S}^{\perp}\) has been used to characterize the out-of-plane polarization of 2D materials [13; 17; 19; 20]. However, from eq. (11), we see that it is not an intrinsic quantity, as it depends on the thickness \(L\):
\[\chi_{S}^{\perp}=\frac{\xi_{S}^{\perp}}{1-\frac{\xi_{S}^{\perp}}{L}}. \tag{24}\]
Moreover, when the constitutive relation \(P_{S}^{\perp}=\varepsilon_{0}\chi_{S}^{\perp}E^{\perp}\) is used instead of eq. (23), the obtained surface response functions are found to depend, problematically, on the surrounding media [13; 17; 19; 20]. In the case of a 2D layer in vacuum, or when the out-of-plane susceptibility is neglected, the derived optical responses are not affected. But recent ellipsometry measurements on graphene and MoS\({}_{2}\) in the visible range [19] cannot be interpreted with a model that neglects the out-of-plane response of the 2D material, reinforcing the need to define truly intrinsic quantities for the 2D layer, independent of the external media [21].
The microscopic susceptibilities of 2D materials can be obtained numerically _ab initio_ for a periodic system in a supercell approach [45; 46], with a vacuum layer of a few nanometers separating the repeated layers to avoid short-range interactions between them. The thickness \(L\) then corresponds to the height of the supercell. If, in the _ab initio_ codes, the macroscopic permittivity is directly given as the result of the quantum calculations, the surface susceptibilities can be derived from eqs. (18) and (21). The permittivities or susceptibilities can then be used in classical electrodynamics approaches, implementing each 2D material with a 2D or 3D model.
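As a practical illustration, the conversion between supercell effective permittivities and the intrinsic surface quantities of eqs. (18), (21) and (24) can be written in a few lines. The sketch below uses placeholder susceptibility values (not computed data) and checks that \(\chi_{S}^{\parallel}\) and \(\xi_{S}^{\perp}\) come out independent of the supercell height \(L\), while \(\chi_{S}^{\perp}\) does not.

```python
import numpy as np

def surface_susceptibilities(eps_par, eps_perp, L):
    """Invert eqs. (18) and (21): thickness-independent surface quantities."""
    chi_s_par = (eps_par - 1.0) * L           # from eq. (18)
    xi_s_perp = (1.0 - 1.0 / eps_perp) * L    # from eq. (21)
    return chi_s_par, xi_s_perp

# assumed (placeholder) intrinsic surface susceptibilities, in nm
chi_s, xi_s = 0.8 + 0.3j, 0.12 + 0.0j

for L in (1.70, 2.50):                        # two supercell heights, in nm
    eps_par = 1.0 + chi_s / L                 # eq. (18)
    eps_perp = 1.0 / (1.0 - xi_s / L)         # eq. (21)
    print(L, surface_susceptibilities(eps_par, eps_perp, L))  # identical output

# eq. (24): chi_S^perp is *not* intrinsic -- it changes with L
for L in (1.70, 2.50):
    print(L, xi_s / (1.0 - xi_s / L))
```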
### Effective model for 2D-material heterostructures
In this section, the in-plane and out-of-plane surface susceptibilities of horizontal and vertical heterostructures are related to the bulk effective permittivities of a thin film of finite thickness or to the surface susceptibilities of a surface polarization as shown in fig. 3, where (a) and (b) represent the heterostructures and (c) and (d) the effective models (3D or 2D).
#### ii.3.1 Effective model for vertical heterostructures
A vertical heterostructure is modelled here as alternating layers of 2D materials and vacuum (fig. 3a), which can be seen as a generalisation of the approach of the previous section. The effective permittivity of a multilayer can be found using the parallel capacitors equation (eq. (S9)) [28; 37]. Moreover, the effective surface irreducible susceptibility of a purely 2D material equivalent to the multilayer can be deduced from eq. (S9) using eq. (18):
\[\chi_{S,eff}^{\parallel}=\sum_{i}\chi_{S,i}^{\parallel}, \tag{25}\]

where the sum spans over the layers, indexed by \(i\). A further analysis of the validity of this approach is proposed in [21].
Similarly, from the series capacitors equation (eq. (S12)) and using eq. (21), an effective surface external susceptibility for the out-of-plane polarization is obtained as:
\[\xi_{S,eff}^{\perp}=\sum_{i}\xi_{S,i}^{\perp}. \tag{26}\]
Equations (25) and (26) are equivalent to eqs. (53) and (56) of [21], validating the coherence of the two approaches.
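A minimal numerical sketch of this additive rule, with placeholder single-layer values standing in for TDDFT data, is given below.

```python
# Vertical heterostructure, eqs. (25)-(26): surface susceptibilities add.
# The numbers are placeholders at one photon energy, not computed values.
layers = [
    {"chi_s_par": 0.9 + 0.4j, "xi_s_perp": 0.12},   # e.g. a graphene sheet
    {"chi_s_par": 0.5 + 0.1j, "xi_s_perp": 0.10},   # e.g. an hBN sheet
]
chi_s_eff = sum(l["chi_s_par"] for l in layers)     # eq. (25)
xi_s_eff = sum(l["xi_s_perp"] for l in layers)      # eq. (26)

# equivalent thin-film permittivities for a chosen model thickness (eqs. 18, 21)
L = 0.68                                            # nm, bilayer thickness
eps_par = 1 + chi_s_eff / L
eps_perp = 1 / (1 - xi_s_eff / L)
print(eps_par, eps_perp)
```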
#### ii.3.2 Effective model for lateral heterostructures
A horizontal heterostructure corresponds to alternating ribbons of 2D materials (fig. 3b). This kind of geometry was not considered in [21], and no previously published model captures the counter-intuitive results obtained below for thin ribbons.
Following the same approach as for vertical heterostructures, the effective susceptibilities of thick ribbons are found, reproducing well-known results [35; 51]:
\[\xi_{S,eff}^{x}=\sum_{i}f_{i}\xi_{S,i}^{x}, \tag{27}\]

\[\chi_{S,eff}^{y,z}=\sum_{i}f_{i}\chi_{S,i}^{y,z}, \tag{28}\]

where \(f_{i}\) is the volume filling fraction of each type of ribbon.

Figure 3: Schematic representation of the considered systems and the effective models. Vertical heterostructures (a) and horizontal heterostructures (b) of 2D materials can be represented by a thin film (c) or a surface polarization (d).
In the case of 2D materials, particular care must be taken owing to their extremely small thickness. We first consider the displacement field \(\mathbf{D}\) and the electric field \(\mathbf{E}\) in a unit cell composed of two distinct materials (fig. 4a, yellow rectangle). The incident medium and the substrate have large thicknesses compared to the thickness \(L\) of the 2D ribbon. The total thickness of the system, noted \(H\), satisfies the condition \(H\ll\lambda\).
If the electric field is polarized along the ribbon and parallel to the interface (i.e. along the \(y\)-axis), it is conserved at the interfaces (namely at points \(A\), \(B\), \(C\), \(D\) and \(E\) depicted in fig. 4b). Therefore, as the wavelength of the field is much larger than \(H\), the electric field is uniform over the whole structure, which is the condition to apply the parallel capacitors equation eq. (S9) (see [41]). Accordingly, the effective susceptibility of the layer is
\[\chi_{S,eff,2D}^{y}=\sum_{i}f_{i}\chi_{S,i}^{y}, \tag{29}\]

similarly to eq. (28), with the subscript \(2D\) indicating that this equation is valid for 2D materials.
When the electric field is polarized across the ribbon in the material plane (\(x\)-axis), the electric field (\(E^{\parallel}\)) is conserved at the interfaces at points \(A\), \(B\), \(C\) and \(D\). At point \(E\), the displacement field normal to the surface is conserved and the electric field is discontinuous. Therefore, neither the electric field nor the displacement field is constant over the whole structure, and the formal conditions to strictly apply the parallel or series capacitors equations are not fulfilled.
Nonetheless, as the thickness \(L\) of each ribbon is much smaller than its width, one can consider that the electric field does not vary between \(A\) and \(B\) (or between \(C\) and \(D\)); the field truly varies only close to the interface between the ribbons. Consequently, in a first approximation, we consider the electric field constant over the two ribbons, neglecting its variation in the small volume around the interface between them. The parallel capacitors equation eq. (S9) then applies and:
\[\chi_{S,eff,2D}^{x}=\sum_{i}f_{i}\chi_{S,i}^{x}, \tag{30}\]

which gives the same expression as the effective susceptibility for the \(y\) polarization (if the 2D materials are isotropic in the plane). The 2D heterostructure is then isotropic in the 2D plane. This is a counter-intuitive result, and a different conclusion from the one obtained for thick heterostructures (eq. (27)), which has been used in previous works on 2D materials [51; 35]. We will analyse these results numerically later in the paper.
Finally, for electric fields across the 2D materials (along the \(z\)-axis), the displacement field normal to the interface is conserved at the interfaces \(A\), \(B\), \(C\), \(D\) but not at \(E\), where the electric field parallel to the interface is conserved. As in the previous case, the displacement field can be approximated as constant across the ribbons, and eq. (S12) leads to:
\[\xi_{S,eff,2D}^{z}=\sum_{i}f_{i}\xi_{S,i}^{z}. \tag{31}\]
As for the \(x\) direction, this is not the same result as for the thick ribbon effective model, which will also be numerically tested later in the paper.
This effective model for horizontal heterostructures is valid even for ribbon widths of the same order of magnitude as the wavelength. The only necessary condition is that the width must be much larger than the layer thickness, such that the fields vary only slightly within the ribbon.
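To make the contrast between the two regimes explicit, the sketch below evaluates both effective models for a two-component lateral heterostructure; the single-layer susceptibilities are placeholders, not computed values. Note how, in the thin-ribbon model, the across-ribbon direction mixes \(\chi_{S}\) (eq. (30)) whereas the thick-ribbon model mixes \(\xi_{S}\) (eq. (27)), and vice versa along \(z\).

```python
import numpy as np

f = np.array([0.75, 0.25])                  # filling fractions of the two ribbons
chi_s = np.array([0.9 + 0.4j, 0.5 + 0.1j])  # assumed in-plane surface susceptibilities
xi_s = np.array([0.12, 0.10])               # assumed out-of-plane external susceptibilities

# thin ribbons (single-layer 2D materials): eqs. (29)-(31)
chi_x_thin = np.sum(f * chi_s)              # eq. (30), across the ribbons
chi_y_thin = np.sum(f * chi_s)              # eq. (29), along the ribbons
xi_z_thin = np.sum(f * xi_s)                # eq. (31)
print(chi_x_thin == chi_y_thin)             # True: in-plane isotropic (uniaxial)

# thick ribbons: eqs. (27)-(28); x and z responses mix the other quantity
xi_x_thick = np.sum(f * xi_s)               # eq. (27)
chi_yz_thick = np.sum(f * chi_s)            # eq. (28), along y and z
```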
As a consequence of the in-plane isotropy of the effective model, the optical response of such structured 2D materials at normal incidence does not depend on the polarization, except if there are features that cannot be captured by the effective model. For instance, as surface plasmon resonances appear due to the structuring of the material, they cannot be described by the effective layer model. Therefore, comparing the spectra of the effective system with those of the actual structured system and inspecting the discrepancies can highlight the plasmonic resonances taking place in the ribbons.
## III Numerical methods
In this section, we describe the reference numerical methods employed to illustrate the range of applicability of the surface susceptibilities and the effective models presented in section II.3. An _ab initio_ atomistic method is used to determine the susceptibilities of 2D materials and of vertical heterostructures. The optical response is then obtained using the surface susceptibility model of a vertical heterostructure. On the other hand, a classical electrodynamics method is used to determine the absorption of horizontal heterostructures based on the susceptibilities of the individual components. In the latter case, _ab initio_ approaches are not feasible due to the large number of atoms in the unit cell, but the horizontal structuring of the materials is fully accounted for, as well as the anisotropy. These two methods are thus complementary in order to verify the relevance of the effective models.

Figure 4: a) Schematic view of a horizontal heterostructure. The yellow box of height \(H\) includes the ribbons of 2D materials of thickness \(L\) and a part of the substrate and incident medium. b) Close-up view of the yellow box with the different interfaces labelled from A to E.
### Time-dependent density functional theory (TDDFT)
The surface susceptibilities of graphene, hBN and a graphene-hBN bilayer have been calculated using the GPAW implementation of TDDFT [52], within the random-phase approximation. Highly corrugated graphene, with a thickness larger than that of a single layer of flat graphene, was also investigated for comparison. The height of the unit cell of graphene and hBN is 1.70 nm. For the bilayer, the graphene and hBN layers were separated by 0.34 nm of vacuum, which corresponds to the interlayer distance in graphene and is close to the average interlayer distance of graphene-hBN heterostructures accounting for van der Waals corrections [53]. In this case, the total height of the cell is 2.0 nm. The ground states of graphene, hBN, and the bilayer were calculated using a GGA-PBE functional [54], a k-point grid of \(256\times 256\times 1\) and an energy cut-off of 350 eV. For the TDDFT calculation, a cut-off energy of 250 eV was used. Corrugated graphene is modelled using a unit cell containing 50 atoms forming a hill of height 0.25 nm in a cell of height 2.50 nm, as described in [55]. Its ground state is calculated using the LDA, a \(48\times 48\times 1\) k-point grid and a cut-off energy of 400 eV. The cut-off energy for the TDDFT calculation is 20 eV. GW and BSE corrections are not included due to computational limitations, which could result in inaccurate optical spectra [56]. However, as we focus here on the effect of the anisotropy on the optical response of heterostructures, we ensure that the same level of approximation is used when comparing systems. The link between microscopic and macroscopic response functions and effective quantities remains valid at all levels of approximation.
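For orientation, a condensed GPAW workflow analogous to the one just described might look as follows (with a much smaller k-grid for tractability). This is a hedged sketch: the calls follow the GPAW dielectric-response tutorials, but exact parameter names and the accepted frequency format vary between GPAW versions.

```python
import numpy as np
from ase import Atoms
from gpaw import GPAW, PW, FermiDirac
from gpaw.response.df import DielectricFunction

# Graphene in a supercell; the cell height sets the L of eqs. (18) and (21).
a, Lz = 2.46, 17.0                       # Angstrom
atoms = Atoms('C2',
              scaled_positions=[(0, 0, 0.5), (1/3, 2/3, 0.5)],
              cell=[(a, 0, 0), (-a / 2, a * np.sqrt(3) / 2, 0), (0, 0, Lz)],
              pbc=True)
atoms.calc = GPAW(mode=PW(350), xc='PBE',
                  kpts={'size': (64, 64, 1)},        # reduced vs. 256x256x1 above
                  occupations=FermiDirac(0.026))
atoms.get_potential_energy()
atoms.calc.write('graphene_gs.gpw', mode='all')      # keep wavefunctions for response

# RPA dielectric function, in-plane ('x') and out-of-plane ('z') directions.
df = DielectricFunction(calc='graphene_gs.gpw',
                        frequencies=np.linspace(0.0, 20.0, 201),  # eV (assumed format)
                        eta=0.1, ecut=250)
eps_nlf_x, eps_lf_x = df.get_dielectric_function(direction='x')
eps_nlf_z, eps_lf_z = df.get_dielectric_function(direction='z')
# Surface susceptibilities then follow from eqs. (18) and (21) with L = Lz.
```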
### Rigorous coupled wave analysis
A homemade code based on the rigorous coupled wave analysis (RCWA) method [57] was used. The RCWA method solves Maxwell's equations for a series of layers of finite thickness with lateral structuring, such as horizontal heterostructures. The method was adapted to account for the intrinsic anisotropy of the materials in order to accurately model 2D materials [41]. The RCWA approach was used for modelling thin films and ribbons, with the dielectric functions obtained by TDDFT.
The thicknesses of the layers representing the 2D materials (structured or not) were arbitrarily fixed to \(L=0.34\) nm for monolayers, or \(L=0.68\) nm for bilayers. As long as these thicknesses are consistent with the thicknesses used to obtain the effective permittivity, the results are independent of this choice [17].
## IV Results and discussions
The surface susceptibilities of graphene and hBN are shown in fig. 5. It was first verified that \(\chi_{S}^{\parallel}\) and \(\xi_{S}^{\perp}\) are independent of the supercell thickness \(L\) (not shown). For the in-plane susceptibility \(\chi_{S}^{\parallel}\) (fig. 5, left panels) we observe the \(\pi\) and the \(\pi+\sigma\) plasmons around 4.5 and 14 eV respectively, as expected without the GW and BSE corrections [58]. The GW correction tends to blueshift the plasmon energy and the BSE correction has the inverse effect; together they produce a global blueshift of less than 0.5 eV [59; 18; 60; 61]. The imaginary part of the out-of-plane susceptibility \(\xi_{S}^{\perp}\), responsible for the absorption, is exactly zero below 10 eV (fig. 5, right panels). For corrugated graphene, the out-of-plane susceptibility is not negligible below 10 eV, due to the atomic structure extending in the normal direction. This highlights the role of the valence bonds in the normal direction in the out-of-plane response of 2D materials.
The absorption spectra of the systems described by an effective surface polarization were obtained following the method published in our previous papers [21; 17], adapted to incorporate the surface external susceptibility for the out-of-plane polarization [21]. The absorption spectra of the thin films and the ribbons were obtained using the RCWA method.

Figure 5: In-plane surface irreducible susceptibility (left) and out-of-plane surface external susceptibility (right) of (from top to bottom) graphene, hBN and corrugated graphene. The solid line is the real part, the dashed line is the imaginary part.
For all the following absorption calculations of 2D materials, the incident medium is air (\(n_{a}=1\)) and the refractive index of the substrate is \(n_{b}=1.5\). The angle of incidence is fixed at \(70^{\circ}\) in TM polarization to probe the effects of the anisotropy.
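For reference, the absorption in this geometry can be estimated with a standard thin-film characteristic-matrix calculation. The sketch below is a simplified isotropic version (it uses a single film permittivity, so the out-of-plane anisotropy central to this paper is neglected), with a placeholder permittivity built from an assumed \(\chi_{S}^{\parallel}\) via eq. (18).

```python
import numpy as np

def tm_absorption(eps_film, L_nm, wavelength_nm, n_a=1.0, n_b=1.5, theta0_deg=70.0):
    """Absorbed fraction for TM light on one thin film (characteristic matrix)."""
    k0 = 2 * np.pi / wavelength_nm
    n_f = np.sqrt(eps_film + 0j)
    s = n_a * np.sin(np.radians(theta0_deg))       # conserved n*sin(theta)
    cos_a = np.sqrt(1 - (s / n_a) ** 2 + 0j)
    cos_f = np.sqrt(1 - (s / n_f) ** 2 + 0j)
    cos_b = np.sqrt(1 - (s / n_b) ** 2 + 0j)
    eta_a, eta_f, eta_b = n_a / cos_a, n_f / cos_f, n_b / cos_b   # TM admittances
    delta = k0 * n_f * L_nm * cos_f                # phase thickness of the film
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / eta_f],
                  [1j * eta_f * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, eta_b])
    r = (eta_a * B - C) / (eta_a * B + C)          # reflection coefficient
    T = 4 * eta_a.real * eta_b.real / abs(eta_a * B + C) ** 2     # transmittance
    return 1 - abs(r) ** 2 - T

# placeholder graphene-like film: eps = 1 + chi_S/L (eq. (18)), chi_S assumed
print(tm_absorption(eps_film=1 + (0.8 + 0.35j) / 0.34, L_nm=0.34, wavelength_nm=275))
```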
### Vertical heterostructures
We consider two types of vertical heterostructures. First, a system of identical layers (graphene) is used to analyse the limits of the 2D model compared to the thin film model. Such structures may be synthesized experimentally up to a few layers using transfer techniques on CVD-grown graphene, for example [47]. Second, we test the validity of the effective model for vertical heterostructures (section II.3.1) on a graphene-hBN bilayer, whose optical properties have already been reported for in-plane polarizations [32; 33; 60].
In fig. 6, the absorption of a single sheet of graphene (black), a multilayer of 10 sheets (blue) and a multilayer of 20 sheets (red) is calculated using the thin film model (solid lines) and the surface polarization model (dotted lines), showing the robustness of the strictly 2D model even for 10 layers. For 20 layers, the discrepancies between the models become significant at high energy, in particular around the \(\pi+\sigma\) plasmon. In this case, the wavelength of the electromagnetic wave inside the layer is no longer much larger than the thickness \(L\), and the phase shift cannot be neglected, which was an assumption of the 2D model. However, this result shows that the 2D model is not restricted to single-layer 2D materials and that few-layer 2D materials and heterostructures can also be modelled as a surface polarization. The maximum number of layers is determined by the small phase-shift condition and thus does not depend only on the thickness of the 2D material but also on its permittivity.
In fig. 7, the TDDFT surface susceptibilities (the reference) of a graphene-hBN bilayer are compared to the effective susceptibilities of eqs. (25) and (26), with the single-layer susceptibilities also obtained by TDDFT. The effective model replicates \(\chi_{S}^{\parallel}\) well, except around 5 eV, which suggests a coupling between the \(\pi\)-plasmons of the two 2D materials. For \(\xi_{S}^{\perp}\), the effective model fails to reproduce the TDDFT result above 10 eV, though the global trend is conserved. This is due to long-range electronic interactions that are significant, even with large vacuum layers, for out-of-plane polarization, as highlighted in [12].
### Horizontal heterostructure
The first structure that we consider in this section is a 2D pattern alternating graphene nanoribbons (15 nm wide) and vacuum (5 nm), leading to a filling factor of 0.75. To assess the domain of validity of the thin-ribbon (eqs. (29)-(31)) and thick-ribbon (eqs. (27) and (28)) models, we compare a single-layer ribbon to a stack of 100 ribbons with a total thickness of 35 nm.
Using the RCWA method, the absorption of the graphene nanoribbons is obtained for two polarizations of the light at an incident angle of \(70^{\circ}\), namely with the in-plane component of the electric field \(E_{tan}\) either parallel or perpendicular to the ribbons (fig. 8). To the best of our knowledge, this is the first investigation of 2D-material nanoribbons fully accounting for the intrinsic anisotropy of the 2D material. While for a large thickness (100 layers) the absorption depends on the polarisation, the aligned 2D nanoribbons have an isotropic response. To confirm this result, we also considered incident light with a different azimuthal angle, for which the tangential component of the electric field is neither parallel nor perpendicular to the ribbons. No modification of the polarization of the transmitted light has been observed, confirming the isotropy of the system. The uniaxial response of the thin horizontal heterostructure evidenced by our effective medium model is thus confirmed by the RCWA calculation, which fully describes the horizontal structure of the system.

Figure 6: Absorption by a single layer (black), 10 layers (blue) and 20 layers (red) of graphene, considering an incident angle of \(70^{\circ}\), within the thin-film model (solid line) and the surface polarization model (dotted line).

Figure 7: Real (left) and imaginary (right) parts of the surface susceptibilities of a graphene-hBN bilayer heterostructure from the reference model, i.e. TDDFT calculation (red lines), and from the effective model (black lines).
For thick layers, the effective model (section II.3) predicts that this uniaxial character disappears; consequently, when \(E_{tan}\) is perpendicular to the ribbons, the thick- and thin-ribbon effective models should differ. In fig. 9, the reference RCWA simulations are compared to the thin-ribbon and thick-ribbon effective approaches for a single layer of graphene (top) and for a multilayer of 100 sheets of graphene (bottom). The thin-ribbon model perfectly reproduces the reference results for ribbons made of a single sheet of 2D material, while the thick-ribbon model better reproduces the results for the large multilayer. Conversely, the thick-ribbon model, which has sometimes been used for 2D materials [35; 51], gives inaccurate results for single-layer 2D materials.
To better understand the transition between the two models, fig. 10 displays the relative error between each effective model and the reference (here the RCWA results) as a function of the number of layers. This error is calculated as the normalized area between the absorption curves \(A\):
\[Error\left(\%\right)=100\times\frac{\int\left|A_{eff}-A_{ref}\right|\,dE}{ \int A_{ref}\,dE} \tag{32}\]
with \(E\) the incident photon energy. It confirms that, as shown before, the thin-ribbon model is accurate for very few layers while the thick-ribbon model works better for several tens of layers. In between, for 3 to 30 layers, the error is above 15% for both models and a full description of the system is necessary. As mentioned in section II.3, this result is also valid for larger ribbons, as the effective model only depends on the filling factor, as was verified numerically up to a width of 1500 nm (not shown).
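A minimal implementation of this error metric, with synthetic stand-in spectra in place of the actual RCWA and effective-model curves:

```python
import numpy as np

# Eq. (32): normalized area between an effective-model absorption curve and
# the reference curve, integrated over photon energy (trapezoidal rule).
def relative_error(E, A_eff, A_ref):
    return 100.0 * np.trapz(np.abs(A_eff - A_ref), E) / np.trapz(A_ref, E)

E = np.linspace(0.5, 20.0, 400)                  # photon energy (eV)
A_ref = np.exp(-((E - 4.5) / 1.0) ** 2)          # stand-in reference spectrum
A_eff = np.exp(-((E - 4.7) / 1.1) ** 2)          # stand-in effective-model spectrum
print(f"error = {relative_error(E, A_eff, A_ref):.1f} %")
```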
We now investigate a lateral repetition of graphene and hBN nanoribbons, of 15 nm and 5 nm width respectively. Such nanoribbons have already been produced using CVD growth and etching [62; 63], and may sustain plasmons [39; 64], which could be detected by comparing the results of the effective model with the RCWA calculation. In fig. 11, three models are compared: the reference model, the effective thin-film model and the effective surface polarization model. The three models agree almost perfectly, except at two specific energies. Around \(5\,\mathrm{eV}\), the two effective models fail to reproduce the details of the RCWA results because of the plasmonic resonance, which cannot be captured by effective models. This discrepancy was not observed in the graphene-air system, which suggests that this plasmon originates from a coupling between the \(\pi\)-plasmon of graphene and that of hBN, similarly to the case of vertical heterostructures. At high energy, above \(15\) eV, the RCWA and the effective thin-film model results are similar but the surface polarization model slightly differs. In this range, the wavelength is so small that the small phase-shift approximation is no longer valid, invalidating the strictly 2D model.

Figure 8: Absorption by graphene ribbons of 15 nm width with a filling fraction of 0.75, calculated using the RCWA for one layer (top) and one hundred layers (bottom). The incident angle is \(70^{\circ}\) and the tangential component of the electric field is either perpendicular (solid red) or parallel (dotted black) to the ribbons.

Figure 10: Relative error of the effective models (thin ribbon and thick ribbon) compared to the reference, as a function of the number of layers.

Figure 9: Absorption by graphene ribbons with a filling factor of 0.75 at an incident angle of \(70^{\circ}\); the tangential part of the in-plane electric field is perpendicular to the ribbons, for one layer (top) and one hundred layers (bottom). The reference model (RCWA, 15 nm wide ribbons) is in red, and the effective models are in dotted (thin-ribbon model) and solid (thick-ribbon model) black.
## V Conclusions
We have proposed an original approach to develop effective models for vertical and horizontal heterostructures of 2D materials, based on the formal link between the microscopic and macroscopic descriptions of the response functions of the materials. The importance of using two different surface susceptibilities, the irreducible susceptibility \(\chi_{S}^{\parallel}\) and the external susceptibility \(\xi_{S}^{\perp}\), defined independently of any arbitrary thickness of the layer, has been highlighted. In particular, this makes it possible to avoid the problematic definition of a surface response function that depends on the dielectric properties of the surrounding media.
It is only recently that experimental optical characterisation has highlighted the role of the out-of-plane susceptibility (\(\xi_{S}^{\perp}\)) in a coherent interpretation of the measurements [19]. It will become even more important with the rapid development of the study of heterostructures. This anisotropy can be taken into account together with the structuring of the material, as in the RCWA method or in an effective medium approach. In some cases, the out-of-plane response can be neglected, for example at normal incidence. Also, below \(10\) eV, the imaginary part of \(\xi_{S}^{\perp}\) is negligible for graphene and hBN while the real part is constant. As most of the optical features (absorption, plasmonic excitations,...) depend mainly on the imaginary part of \(\xi_{S}^{\perp}\), the optical spectra are weakly dependent on the out-of-plane response. This justifies a posteriori the use of models without out-of-plane response in many previous studies [65; 10; 47; 66].
For vertical heterostructures, we recovered the well-established effective medium model and expressed it in terms of the surface susceptibilities. With this effective approach, we quantitatively reproduce TDDFT calculations, although special care must be taken with plasmonic resonances and with long-range interactions for out-of-plane polarization. We also demonstrated that the vertical-heterostructure effective approach is robust up to tens of layers of graphene (more generally, as long as the phase shift of the EM fields is negligible in the heterostructure).
The counter-intuitive uniaxial response of thin alternating nano-ribbons (horizontal heterostructures) is an unexpected outcome of the effective medium approach based on the surface susceptibilities. We successfully confronted these predictions with RCWA numerical investigations and illustrated the transition towards an anisotropic behavior as the thickness increases (thick-layer model). This led to an effective model for ribbons of 2D materials different from the effective model for thick ribbons or nanorods. In practice, the validity of the thin-layer model is already questionable for three-layer systems. In both cases, interface excitations, such as surface plasmons, cannot be captured by effective model approaches. As the RCWA method for horizontal heterostructures is numerically very efficient, a full description of the heterostructure rather than an effective medium approach is recommended when the limit of validity is in doubt. However, the accurate description of systems composed of different 2D materials by a simple homogeneous thin film or surface current has the obvious advantage of simplicity, and allows experimental data to be analysed without numerical effort.
###### Acknowledgements.
This research used resources of the "Plateforme Technologique de Calcul Intensif (PTCI)" ([http://www.ptci.unamur.be](http://www.ptci.unamur.be)) located at the University of Namur, Belgium, which is supported by the FNRS-FRFC, the Walloon Region, and the University of Namur (Conventions No. 2.5020.11, GEO U.G006.15, 1610468, RW/GEQ2016 and U.G011.22). The PTCI is a member of the "Consortium des Equipements de Calcul Intensif (CECI)".
Figure 11: Absorption from alternating ribbons of graphene and hBN, with filling factors of \(0.75\) and \(0.25\) respectively, at an incident angle of \(70^{\circ}\), polarized perpendicularly to the ribbons, for three different models: reference model with RCWA (solid red line, width \(=15\) nm), effective thin-film model (solid grey line), effective surface current model (dotted black line).
## References
* Rouhi _et al._ [2012]N. Rouhi, S. Capdevila, D. Jain, K. Zand, Y. Y. Wang, E. Brown, L. Jofre, and P. Burke, Terahertz graphene optics, Nano Res. **5**, 667 (2012).
* Grigorenko _et al._ [2012]A. N. Grigorenko, M. Polini, and K. S. Novoselov, Graphene plasmonics, Nat. Photonics **6**, 749 (2012).
* Bozzi _et al._ [2015]M. Bozzi, L. Pierantoni, and S. Bellucci, Applications of Graphene at Microwave Frequencies, Radioengineering **24**, 661 (2015).
* Low and Avouris [2014]T. Low and P. Avouris, Graphene Plasmonics for Terahertz to Mid-Infrared Applications, ACS Nano **8**, 1086 (2014), arXiv:1403.2799.
* Park _et al._ [2012]H. Park, P. R. Brown, V. Bulovic, and J. Kong, Graphene As Transparent Conducting Electrodes in Organic Photovoltaics: Studies in Graphene Morphology, Hole Transporting Layers, and Counter Electrodes, Nano Lett. **12**, 133 (2012).
* Das _et al._ [2019]S. Das, D. Pandey, J. Thomas, and T. Roy, The Role of Graphene and Other 2D Materials in Solar Photovoltaics, Adv. Mater. **31**, 1 (2019).
* Justino _et al._ [2017]C. I. Justino, A. R. Gomes, A. C. Freitas, A. C. Duarte, and T. A. Rocha-Santos, Graphene based sensors and biosensors, TrAC Trends Anal. Chem. **91**, 53 (2017).
* Chattopadhyay _et al._ [2015]S. Chattopadhyay, M.-S. Li, P. Kumar Roy, and C. T. Wu, Non-enzymatic glucose sensing by enhanced Raman spectroscopy on flexible 'as-grown' CVD graphene, Analyst **140**, 3935 (2015), arXiv:arXiv:1310.8002v1.
* Batrakov _et al._ [2015]K. Batrakov, P. Kuzhir, S. Maksimenko, A. Paddubskaya, S. Voronovich, P. Lambin, T. Kaplas, and Y. Svirko, Flexible transparent graphene/polymer multilayers for efficient electromagnetic field absorption, Sci. Rep. **4**, 7191 (2015).
* Lobet _et al._ [2016]M. Lobet, B. Majerus, L. Henrard, and P. Lambin, Perfect electromagnetic absorption using graphene and epsilon-near-zero metamaterials, Phys. Rev. B **93**, 235424 (2016).
* Kasry _et al._ [2010]A. Kasry, M. A. Kuroda, G. J. Martyna, G. S. Tulevski, and A. A. Bol, Chemical Doping of Large-Area Stacked Graphene Films for Use as Transparent, Conducting Electrodes, ACS Nano **4**, 3839 (2010).
* Condens. Matter Mater. Phys. **92**, 1 (2015).
* Matthes _et al._ [2016]L. Matthes, O. Pulci, and F. Bechstedt, Influence of out-of-plane response on optical properties of two-dimensional materials: First principles approach, Phys. Rev. B **94**, 205408 (2016).
* Merano [2016]M. Merano, Fresnel coefficients of a two-dimensional atomic crystal, Phys. Rev. A **93**, 013832 (2016).
* Jayaswal _et al._ [2018]G. Jayaswal, Z. Dai, X. Zhang, M. Bagnarol, A. Martucci, and M. Merano, Measurement of the surface susceptibility and the surface conductivity of atomically thin MoS 2 by spectroscopic ellipsometry, Opt. Lett. **43**, 703 (2018).
* Li and Heinz [2018]Y. Li and T. F. Heinz, Two-dimensional models for the optical response of thin films, 2D Mater. **5**, 10.1088/2053-1583/aab0cf (2018).
* Majerus _et al._ [2018]B. Majerus, E. Dremetsika, M. Lobet, L. Henrard, and P. Kockaert, Electrodynamics of two-dimensional materials: Role of anisotropy, Phys. Rev. B **98**, 125419 (2018).
* Guilhon _et al._ [2019]I. Guilhon, M. Marques, L. K. Teles, M. Palummo, O. Pulci, S. Botti, and F. Bechstedt, Out-of-plane excitons in two-dimensional crystals, Phys. Rev. B **99**, 161201 (2019).
* Xu _et al._ [2021]Z. Xu, D. Ferraro, A. Zaltron, N. Galvanetto, A. Martucci, L. Sun, P. Yang, Y. Zhang, Y. Wang, Z. Liu, J. D. Elliott, M. Marsili, L. Dell'Anna, P. Umari, and M. Merano, Optical detection of the susceptibility tensor in two-dimensional crystals, Commun. Phys. **4**, 215 (2021).
* Dell'Anna _et al._ [2022]L. Dell'Anna, Y. He, and M. Merano, Reflection, transmission, and surface susceptibility tensor of two-dimensional materials, Phys. Rev. A **105**, 053515 (2022).
* Majerus _et al._ [2023]B. Majerus, L. Henrard, and P. Kockaert, Optical modeling of single and multilayer two-dimensional materials and heterostructures, Phys. Rev. B **107**, 45429 (2023).
* Latil and Henrard [2006]S. Latil and L. Henrard, Charge Carriers in Few-Layer Graphene Films, Phys. Rev. Lett. **97**, 036803 (2006).
* Cao _et al._ [2018]Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Unconventional superconductivity in magic-angle graphene superlattices, Nature **556**, 43 (2018).
* Yelgel [2016]C. Yelgel, Electronic Structure of ABC-stacked Multilayer Graphene and Trigonal Warping:A First Principles Calculation, J. Phys. Conf. Ser. **707**, 12022 (2016).
* Hagymasi _et al._ [2022]I. Hagymasi, M. S. M. Isa, Z. Tajkov, K. Marity, L. Oroszlany, J. Koltai, A. Alassaf, P. Kun, K. Kandrai, A. Palinkas, P. Vancso, L. Tapaszto, and P. Nemes-Incze, Observation of competing, correlated ground states in the flat band of rhombohedral graphite, Sci. Adv. **8**, eab66879 (2022).
* Mann _et al._ [2014]J. Mann, Q. Ma, P. M. Odenthal, M. Isarraraz, D. Le, E. Preciado, D. Barroso, K. Yamaguchi, G. Von Son Palacio, A. Nguyen, T. Tran, M. Wurch, A. Nguyen, V. Klee, S. Bobek, D. Sun, T. F. Heinz, T. S. Rahman, R. Kawakami, and L. Bartels, 2-Dimensional transition metal dichalcogenides with tunable direct band gaps: MoS2(1-x)Se2x monolayers, Adv. Mater. **26**, 1399 (2014).
* Kadantsev and Hawrylak [2012]E. S. Kadantsev and P. Hawrylak, Electronic structure of a single MoS 2 monolayer, Solid State Commun. **152**, 909 (2012).
* Poddubny _et al._ [2013]A. Poddubny, I. Iorsh, P. Belov, and Y. Kivshar, Hyperbolic metamaterials, Nat. Photonics **7**, 948 (2013).
* Bludov _et al._ [2013]Y. V. Bludov, N. M. R. Peres, and M. I. Vasilevskiy, Unusual reflection of electromagnetic radiation from a stack of graphene layers at oblique incidence, J. Opt. **15**, 114004 (2013).
* Novoselov _et al._ [2016]K. S. Novoselov, A. Mishchenko, A. Carvalho, and A. H. Castro Neto, 2D materials and van der Waals heterostructures, Science **353**, aac9439 (2016).
* Li _et al._ [2017]F. Li, W. Wei, P. Zhao, B. Huang, and Y. Dai, Electronic and Optical Properties of Pristine and Vertical and Lateral Heterostructures of Janus MoSSe and WSSe, J. Phys. Chem. Lett. **8**, 5959 (2017).
* Wang _et al._ [2017]J. Wang, F. Ma, W. Liang, R. Wang, and M. Sun, Optical, photonic and optoelectronic properties of graphene, h-BN and their hybrid materials, Nanophotonics **6**, 943 (2017).
* Farmani _et al._ [2017]A. Farmani, M. Yavarian, A. Alighanbari, M. Miri, and M. H. Sheikhli, Tunable graphene plasmonic Y-branch switch in the terahertz region using hexagonal boron nitride with electric and magnetic biasing, Appl. Opt. **56**, 8931 (2017).
* [34] K. Ren, M. Sun, Y. Luo, S. Wang, Y. Xu, J. Yu, and W. Tang, Electronic and optical properties of van der Waals vertical heterostructures based on two-dimensional transition metal dichalcogenides: First-principles calculations, Phys. Lett. Sect. A Gen. At. Solid State Phys. **383**, 1487 (2019).
* [35] P. Li, G. Hu, I. Dolado, M. Tymchenko, C.-W. Qiu, F. J. Alfaro-Mozaz, F. Casanova, L. E. Hueso, S. Liu, J. H. Edgar, S. Velez, A. Alu, and R. Hillenbrand, Collective near-field coupling and nonlocal phenomena in infrared-phononic metasurfaces for nano-light canalization, Nat. Commun. **11**, 3663 (2020).
* [36] Q. Zhang, G. Hu, W. Ma, P. Li, A. Krasnok, R. Hillenbrand, A. Alu, and C. W. Qiu, Interface nano-optics with van der Waals polaritons, Nature **597**, 187 (2021).
* Condens. Matter Mater. Phys. **86**, 1 (2012).
* [38] J. Christensen, A. Manjavacas, S. Thongrattanasiri, F. H. L. Koppens, and F. J. Garcia De Abajo, Graphene plasmon waveguiding and hybridization in individual and paired nanoribbons, ACS Nano **6**, 431 (2012).
* [39] T. Das, S. Chakrabarty, Y. Kawazoe, and G. P. Das, Tuning the electronic and magnetic properties of graphene/ h -BN hetero nanoribbon: A first-principles investigation, AIP Adv. **8**, 10.1063/1.5030374 (2018).
* [40] D. Correas-Serrano, J. S. Gomez-Diaz, M. Tymchenko, and A. Alu, Nonlocal response of hyperbolic metasurfaces, Opt. Express **23**, 29434 (2015).
* [41] See Supplemental Material at [URL will be inserted by publisher] for details.
* [42] S. Bernadotte, F. Evers, and C. R. Jacob, Plasmons in molecules, J. Phys. Chem. C **117**, 1863 (2013).
* [43] N. Wiser, Dielectric Constant with Local Field Effects Included, Phys. Rev. **129**, 62 (1963).
* Condens. Matter Mater. Phys. **88**, 1 (2013).
* [45] M. S. Hybertsen and S. G. Louie, Ab initio static dielectric matrices from the density-functional approach. I. Formulation and application to semiconductors and insulators, Phys. Rev. B **35**, 5585 (1987).
* [46] J. Yan, J. J. Mortensen, K. W. Jacobsen, and K. S. Thygesen, Linear density response function in the projector augmented wave method: Applications to solids, surfaces, and interfaces, Phys. Rev. B **83**, 245122 (2011).
* [47] B. Majerus, M. Cormann, N. Reckinger, M. Paillet, L. Henrard, P. Lambin, and M. Lobet, Modified Brewster angle on conducting 2D materials, 2D Mater. **5**, 025007 (2018).
* [48] F. Bechstedt and R. Enderlein, Inverse dielectric function of a superlattice including local field effects and spatial dispersion, Superlattices Microstruct. **2**, 543 (1986).
* [49] A. Mohammadi, H. Nadgaran, and M. Agio, Contour-path effective permittivities for the two-dimensional finite-difference time-domain method, Opt. Express **13**, 10367 (2005).
* [50] A. G. Marinopoulos, L. Reining, V. Olevano, A. Rubio, T. Pichler, X. Liu, M. Knupfer, and J. Fink, Anisotropy and Interplane Interactions in the Dielectric Response of Graphite, Phys. Rev. Lett. **89**, 076402 (2002).
* [51] Y. Liu, G. Bartal, and X. Zhang, All-angle negative refraction and imaging in a bulk medium made of metallic nanowires in the visible region, Opt. Express **16**, 15439 (2008).
* [52] J. Enkovaara, C. Rostgaard, J. J. Mortensen, J. Chen, M. Dulak, L. Ferrighi, J. Gavnholt, C. Glinsvad, V. Haikola, H. A. Hansen, H. H. Kristoffersen, M. Kuisma, A. H. Larsen, L. Lehtovaara, M. Ljungberg, O. Lopez-Acevedo, P. G. Moses, J. Ojanen, T. Olsen, V. Petzold, N. A. Romero, J. Stausholm-Moller, M. Strange, G. A. Tritsaris, M. Vanin, M. Walter, B. Hammer, H. Hakkinen, G. K. H. Madsen, R. M. Nieminen, J. K. Norskov, M. Puska, T. T. Rantala, J. Schiotz, K. S. Thygesen, and K. W. Jacobsen, Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method, J. Phys. Condens. Matter **22**, 253202 (2010).
* [53] J. R. M. Sevilla and D. B. Putungan, Graphene-hexagonal boron nitride van der Waals heterostructures: an examination of the effects of different van der Waals corrections, Mater. Res. Express **8**, 085601 (2021).
* [54] M. C. Payne, M. P. Teter, D. C. Allan, T. A. Arias, and J. D. Joannopoulos, Iterative minimization techniques for ab initio total-energy calculations: molecular dynamics and conjugate gradients, Rev. Mod. Phys. **64**, 1045 (1992).
* [55] G. Dobrik, P. Nemes-Incze, B. Majerus, P. Sule, P. Vancso, G. Piszter, M. Menyhard, B. Kalas, P. Petrik, L. Henrard, and L. Tapaszto, Large-area nanoengineering of graphene corrugations for visible-frequency graphene plasmons, Nat. Nanotechnol. **17**, 61 (2022).
* [56] G. Onida, L. Reining, and A. Rubio, Electronic excitations: density-functional versus many-body Green's-function approaches, Rev. Mod. Phys. **74**, 601 (2002).
* [57] M. G. Moharam and T. K. Gaylord, Rigorous coupled-wave analysis of planar-grating diffraction, J. Opt. Soc. Am. **71**, 811 (1981).
* [58] A. G. Marinopoulos, L. Reining, A. Rubio, and V. Olevano, Ab initio study of the optical absorption and wave-vector-dependent dielectric response of graphite, Phys. Rev. B **69**, 245419 (2004).
* [59] P. E. Trevisanutto, C. Giorgetti, L. Reining, M. Ladisa, and V. Olevano, Ab Initio GW many-body effects in graphene, Phys. Rev. Lett. **101**, 1 (2008).
* [60] Z. Chen and X.-Q. Wang, Stacking-dependent optical spectra and many-electron effects in bilayer graphene, Phys. Rev. B **83**, 081405 (2011).
* [61] K. F. Mak, F. H. da Jornada, K. He, J. Deslippe, N. Petrone, J. Hone, J. Shan, S. G. Louie, and T. F. Heinz, Tuning Many-Body Interactions in Graphene: The Effects of Doping on Excitons and Carrier Lifetimes, Phys. Rev. Lett. **112**, 207401 (2014).
* [62] M. P. Levendorf, C.-J. Kim, L. Brown, P. Y. Huang, R. W. Havener, D. A. Muller, and J. Park, Graphene and boron nitride lateral heterostructures for atomically thin circuitry, Nature **488**, 627 (2012).
* [63] Z. Liu, L. Ma, G. Shi, W. Zhou, Y. Gong, S. Lei, X. Yang, J. Zhang, J. Yu, K. P. Hackenberg, A. Babakhani, J.-c. Idrobo, R. Vajtai, J. Lou, and P. M. Ajayan, In-plane heterostructures of graphene and hexagonal boron nitride with controlled domain sizes, Nat. Nanotechnol. **8**, 119 (2013).
* (64) C. De Angelis, A. Locatelli, A. Mutti, and A. Aceves, Coupling dynamics of 1D surface plasmon polaritons in hybrid graphene systems, Opt. Lett. **41**, 480 (2016).
* (65) A. Madani and S. Roshan Entezar, Optical properties of one-dimensional photonic crystals containing graphene sheets, Phys. B Condens. Matter **431**, 1 (2013).
* (66) J. Zhu, C. Li, J. Y. Ou, and Q. H. Liu, Perfect light absorption in graphene by two unpatterned dielectric layers and potential applications, Carbon N. Y. **142**, 430 (2019).
Anisotropy and effective medium approach in the optical response of 2D material heterostructures: Supplementary information
B. Majerus\({}^{1}\), E. Guillaume\({}^{1,2,3}\), P. Kockaert\({}^{4}\) and L. Henrard\({}^{1}\)
\({}^{1}\) Laboratoire de physique du solide (LPS) & Namur Institute of Structured Matters (NISM), University of Namur, 61 rue de Bruxelles, B-5000 Namur, Belgium \({}^{2}\) IMOMEC, IMEC vzw, Wetenschapspark 1, 3590 Diepenbeek, Belgium \({}^{3}\) UHasselt, Institute for Materials Research (IMO-IMOMEC), Agoralaan, 3590 Diepenbeek, Belgium \({}^{4}\) OPERA-photonics, Université libre de Bruxelles (U.L.B.), 50 Avenue F. D. Roosevelt, CP 194/5, B-1050 Bruxelles, Belgium
## I Polarizabilities and susceptibilities
The irreducible polarizability \(\alpha^{0}\) (also noted \(\chi^{0}\) in some references [1; 2]) is the response function of non-interacting charges to a perturbing applied potential \(\tilde{V}_{app}\), or equivalently, the response function to the total potential \(\tilde{V}_{tot}\) (as defined in the main text) [1; 2]:
\[\rho\left(\mathbf{r}\right)=\int\alpha^{0}\left(\mathbf{r},\mathbf{r}^{\prime}\right)\tilde{V}_{tot}\left(\mathbf{r}^{\prime}\right)d\mathbf{r}^{\prime}\] (S1)
where \(\rho\left(\mathbf{r}\right)\) is the change in the charge density due to the applied potential. The external polarizability \(\alpha\) (also noted \(\chi\)) is the response function of interacting charges to a perturbing applied potential \(\tilde{V}_{app}\):
\[\rho\left(\mathbf{r}\right)=\int\alpha\left(\mathbf{r},\mathbf{r}^{\prime}\right)\tilde{V}_{app}\left(\mathbf{r}^{\prime}\right)d\mathbf{r}^{\prime}\] (S2)
The microscopic dielectric function \(\varepsilon\left(\mathbf{r},\mathbf{r}^{\prime}\right)\) is often calculated from the polarizabilities, and the macroscopic dielectric function is obtained from the averaged values of the dielectric function or its inverse. The method proposed in this paper is slightly different but equivalent. In the random-phase approximation (neglecting the screening from the exchange-correlation term), we define susceptibilities associated with both polarizabilities using Coulomb's law on equations (S1) and (S2):
\[\chi\left(\mathbf{r},\mathbf{r}^{\prime}\right)=-\frac{1}{4\pi\varepsilon_{0}}\int\frac{\alpha^{0}\left(\mathbf{r}^{\prime\prime},\mathbf{r}^{\prime}\right)}{\left|\mathbf{r}-\mathbf{r}^{\prime\prime}\right|}d\mathbf{r}^{\prime\prime}\] (S5)

\[\xi\left(\mathbf{r},\mathbf{r}^{\prime}\right)=-\frac{1}{4\pi\varepsilon_{0}}\int\frac{\alpha\left(\mathbf{r}^{\prime\prime},\mathbf{r}^{\prime}\right)}{\left|\mathbf{r}-\mathbf{r}^{\prime\prime}\right|}d\mathbf{r}^{\prime\prime}\] (S6)
and we recover eqs. (4) and (5) of the main paper.
## II Effective model for multilayers
The effective model for multilayers is well known and rigorously developed [4; 5]. However, to extend it to thin ribbons, we propose a simple physical explanation. For fields parallel to the layers of a multilayer, the electric field is parallel to the interfaces. It is then conserved through the interfaces and, consequently, it is constant over the whole structure. The effective permittivity is the ratio between the average displacement field and the average electric field. We can thus write
\[\varepsilon_{eff}^{\parallel} =\frac{\left<D^{\parallel}\right>}{\varepsilon_{0}\left<E^{ \parallel}\right>}\] (S7) \[\varepsilon_{eff}^{\parallel} =\frac{\sum_{i}f_{i}D_{i}^{\parallel}}{\varepsilon_{0}E^{ \parallel}}\] (S8) \[\varepsilon_{eff}^{\parallel} =\sum_{i}f_{i}\varepsilon_{i}^{\parallel}\] (S9)
with \(\varepsilon_{i}^{\parallel}=\frac{D_{i}^{\parallel}}{\varepsilon_{0}E^{\parallel}}\) the permittivity of the layer of thickness \(d_{i}\), and \(f_{i}=d_{i}/\sum_{j}d_{j}\) its thickness fraction. This equation is called the parallel capacitors equation in the main article. For fields perpendicular to the layers, the displacement field is continuous at the interfaces and thus constant over the whole system, and we have
\[\frac{1}{\varepsilon_{eff}^{\perp}} =\frac{\varepsilon_{0}\left<E^{\perp}\right>}{\left<D^{\perp}\right>}\] (S14) \[\frac{1}{\varepsilon_{eff}^{\perp}} =\frac{\sum_{i}f_{i}\varepsilon_{0}E_{i}^{\perp}}{D^{\perp}}\] (S15) \[\frac{1}{\varepsilon_{eff}^{\perp}} =\sum_{i}f_{i}\frac{1}{\varepsilon_{i}^{\perp}}.\] (S16)
This equation is called the series capacitors equation in the main article. In brief, we observe that if the electric field is constant over the structure, eq. (S13) should be used, and if the displacement field is constant, eq. (S16) should be used.
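As an illustration, the following minimal Python sketch evaluates the two mixing rules, eqs. (S13) and (S16); the layer thicknesses and permittivities are placeholder values, not data from this work.

```python
import numpy as np

def eps_parallel(d, eps):
    """Parallel-capacitors rule (S13): eps_eff = sum_i f_i * eps_i."""
    f = np.asarray(d) / np.sum(d)            # thickness fractions f_i
    return np.sum(f * np.asarray(eps))

def eps_perpendicular(d, eps):
    """Series-capacitors rule (S16): 1/eps_eff = sum_i f_i / eps_i."""
    f = np.asarray(d) / np.sum(d)
    return 1.0 / np.sum(f / np.asarray(eps))

d = [1.0, 2.0]                # layer thicknesses (arbitrary units)
eps = [4.0 + 0.1j, 2.25]      # layer permittivities (illustrative)
print(eps_parallel(d, eps), eps_perpendicular(d, eps))
```

For a lossless stack with positive permittivities this reproduces the expected uniaxial anisotropy, \(\varepsilon_{eff}^{\parallel}\geq\varepsilon_{eff}^{\perp}\), since the arithmetic mean dominates the harmonic mean.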
## III Anisotropic RCWA
The Rigorous Coupled Wave Analysis (RCWA) is a method to investigate the optical properties of inhomogeneous layered systems, first introduced for isotropic media [6]. Here, we present the main steps of the method in the case of anisotropic media, in a similar way as in [7; 8; 9]. Assuming that each layer of permittivity \(\overline{\varepsilon}_{h}\) hosts a finite number of regions \(l\) (e.g. the square- and circle-shaped islands in Fig. 1) of different permittivity (\(\overline{\varepsilon}_{l}\)), we can write each component of the permittivity tensor of the layer as:
\[\varepsilon^{\left(ij\right)}\left(\vec{r}\right)=\varepsilon_{h}^{\left(ij\right)}+\sum_{l}\left(\varepsilon_{l}^{\left(ij\right)}-\varepsilon_{h}^{\left(ij\right)}\right)\Omega^{\left(l\right)}\left(\vec{r}\right)\] (S17)
where \((ij)\) specifies the component of the permittivity tensor, \(\varepsilon_{h}\) and \(\varepsilon_{l}\) are the permittivities of the host medium and of the \(l\)-th island, and \(\Omega^{\left(l\right)}\left(\vec{r}\right)\) is a 2D boolean function delimiting each island.
Now we expand the permittivity of the whole layer as a 2D Fourier series based upon the Fourier decomposition of the Boolean functions \(\Omega^{\left(l\right)}\) that define the regions \(l\). Since the Fourier expansion is performed on the local functions \(\Omega^{\left(l\right)}\left(\vec{r}\right)\), we can also compute the Fourier series
of any function of the permittivity, e.g. the inverse of any of the components of the tensor:
\[\varepsilon^{\left(ij\right)}=\sum_{g}\varepsilon_{g}^{\left(ij\right)}e^{ig\cdot\vec{r}}\text{ \ and \ }\frac{1}{\varepsilon^{\left(ij\right)}}=\sum_{g}\left.\frac{1}{\varepsilon^{\left(ij\right)}}\right|_{g}e^{ig\cdot\vec{r}}\] \[\text{where \ }\varepsilon_{g}^{\left(ij\right)}=\varepsilon_{h}^{\left(ij\right)}\delta_{g,g_{0}}+\sum_{l}\left(\varepsilon_{l}^{\left(ij\right)}-\varepsilon_{h}^{\left(ij\right)}\right)\Omega_{g}^{\left(l\right)}\] (S18) \[\text{and \ \ }\left.\frac{1}{\varepsilon^{\left(ij\right)}}\right|_{g}=\frac{1}{\varepsilon_{h}^{\left(ij\right)}}\delta_{g,g_{0}}+\sum_{l}\left(\frac{1}{\varepsilon_{l}^{\left(ij\right)}}-\frac{1}{\varepsilon_{h}^{\left(ij\right)}}\right)\Omega_{g}^{\left(l\right)}\]
The Fourier coefficients are indexed along a single index \(g\) that runs over all nodes of the 2D reciprocal space, \(g_{0}\) being its origin.
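As a concrete illustration of eq. (S18), the following Python sketch computes the Fourier coefficients of one tensor component, and of its inverse, on a discretized unit cell; the circular island and the permittivity values are placeholder assumptions, not parameters from this work.

```python
import numpy as np

N = 256                                    # grid points per period
x = (np.arange(N) + 0.5) / N               # unit-cell coordinates
X, Y = np.meshgrid(x, x, indexing="ij")

eps_h, eps_l = 2.25, 12.0                  # host and island permittivity
omega = (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.2 ** 2   # boolean Omega^(l)

eps = eps_h + (eps_l - eps_h) * omega      # real-space eps^(ij)(r), eq. (S17)
inv_eps = 1.0 / eps                        # 1/eps is expanded separately

# fft2 with a 1/N^2 normalization yields the coefficients indexed by g;
# the g = g0 entry is the area-weighted average of the permittivity.
eps_g = np.fft.fft2(eps) / N ** 2
inv_eps_g = np.fft.fft2(inv_eps) / N ** 2
print(eps_g[0, 0].real, 1.0 / inv_eps_g[0, 0].real)
```

The printed pair makes the point of expanding \(\varepsilon\) and \(1/\varepsilon\) separately: the two averages differ whenever the layer is structured.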
Expressing the permittivity tensor as a Fourier series in Maxwell's equations allows us to derive the following set of equations:
Figure 1: Single layer material (light grey) on a substrate (blue stripes). Periodic repetitions of circle- and square-shaped islands are depicted.
\[\frac{\mathrm{d}E_{z,g^{\prime\prime\prime}}}{\mathrm{d}z} =-i\sum_{g,g^{\prime}}\left.\frac{1}{\varepsilon^{(zz)}}\right|_{(g^{\prime\prime\prime}-g-g^{\prime})}\dot{\varepsilon}_{g^{\prime}}\vec{E}_{\parallel,g}\cdot\left(\vec{k}_{\parallel}+\vec{g}+\vec{g}^{\prime}\right)\ \ \text{where}\ \ \dot{\varepsilon}_{g}=\left(\begin{matrix}\varepsilon^{(xx)}\Big{|}_{g}&\varepsilon^{(xy)}\Big{|}_{g}\\ \varepsilon^{(yx)}\Big{|}_{g}&\varepsilon^{(yy)}\Big{|}_{g}\end{matrix}\right)\] (S19) \[\frac{\mathrm{d}H_{z,g}}{\mathrm{d}z} =-i\vec{H}_{\parallel,g}\cdot\left(\vec{k}_{\parallel}+\vec{g}\right)\] \[\frac{\mathrm{d}E_{x,g}}{\mathrm{d}z} =\frac{i}{\omega\varepsilon_{0}}\left(k_{x}+g_{x}\right)\left[\sum_{g^{\prime\prime}}\left.\frac{1}{\varepsilon^{(zz)}}\right|_{g-g^{\prime\prime}}\left[H_{x,g^{\prime\prime}}\left(k_{y}+g_{y}^{\prime\prime}\right)-H_{y,g^{\prime\prime}}\left(k_{x}+g_{x}^{\prime\prime}\right)\right]\right]+i\omega\mu_{0}H_{y,g}\] \[\frac{\mathrm{d}E_{y,g}}{\mathrm{d}z} =\frac{i}{\omega\varepsilon_{0}}\left(k_{y}+g_{y}\right)\left[\sum_{g^{\prime\prime}}\left.\frac{1}{\varepsilon^{(zz)}}\right|_{g-g^{\prime\prime}}\left[H_{x,g^{\prime\prime}}\left(k_{y}+g_{y}^{\prime\prime}\right)-H_{y,g^{\prime\prime}}\left(k_{x}+g_{x}^{\prime\prime}\right)\right]\right]-i\omega\mu_{0}H_{x,g}\] \[\frac{\mathrm{d}H_{x,g}}{\mathrm{d}z} =\frac{i}{\omega\mu_{0}}\left(k_{x}+g_{x}\right)\left[E_{y,g}\left(k_{x}+g_{x}\right)-E_{x,g}\left(k_{y}+g_{y}\right)\right]-i\omega\varepsilon_{0}\sum_{g^{\prime\prime}}\left(\left.\varepsilon^{(yx)}\right|_{g-g^{\prime\prime}}E_{x,g^{\prime\prime}}+\left.\varepsilon^{(yy)}\right|_{g-g^{\prime\prime}}E_{y,g^{\prime\prime}}\right)\] \[\frac{\mathrm{d}H_{y,g}}{\mathrm{d}z} =\frac{i}{\omega\mu_{0}}\left(k_{y}+g_{y}\right)\left[E_{y,g}\left(k_{x}+g_{x}\right)-E_{x,g}\left(k_{y}+g_{y}\right)\right]+i\omega\varepsilon_{0}\sum_{g^{\prime\prime}}\left(\left.\varepsilon^{(xx)}\right|_{g-g^{\prime\prime}}E_{x,g^{\prime\prime}}+\left.\varepsilon^{(xy)}\right|_{g-g^{\prime\prime}}E_{y,g^{\prime\prime}}\right)\]
Note that the non-diagonal components of the tensor enhance the coupling between modes (_intrinsic_ anisotropy) in the last two equations, and that, unsurprisingly, coupling terms (_structural_ anisotropy) are involved irrespective of whether the material is isotropic or anisotropic.
The set of equations shown above can be used to calculate the fields at any given depth in a given layer. A stacking of such layers can be described through the Transfer Matrix formalism.
|
2309.12230 | TeV Detection of the Extreme HSP Blazar RBS 1366 by VERITAS | Extreme high-synchrotron-peak blazars (EHSPs) are postulated as the most
efficient and extreme particle accelerators in the universe but remain
enigmatic as a possible new class of TeV gamma-ray blazars. Blazars are active
galactic nuclei (AGNs) with jets of relativistic particles that generate
non-thermal emission pointed along the line-of-sight. Their spectral energy
distribution (SED) are characterized by synchrotron and inverse-Compton peaks,
indicating acceleration of leptonic and possibly hadronic particle populations
in the jet. EHSPs are characterized by a peak synchrotron frequency > 10^17 Hz
with their Compton peak expected to fall in the TeV range. Indeed, the handful
of EHSPs detected by Imaging Air Cherenkov Telescopes (IACTs) have presented
challenges where some may be a high-frequency extension of the blazar sequence
while others peaking around 10 TeV may represent a different class of TeV
emitters. Detections of the high-energy and very-high-energy (HE; E > 100 MeV,
VHE; E > 100 GeV) components of the Compton peak will play an important role in
constraining the acceleration model derived from the SED. We present the
discovery of TeV emission from RBS 1366, a candidate EHSP, by the VERITAS
observatory. Using HE and VHE data from the Fermi-LAT and VERITAS
observatories, respectively, we characterize the detection by providing an SED
and model fit in the context of other EHSP candidates. Our work confirms the
status of RBS 1366 as an EHBL. | Deivid Ribeiro | 2023-09-21T16:25:06Z | http://arxiv.org/abs/2309.12230v1 | # TeV Detection of the Extreme HSP Blazar RBS 1366 by VERITAS
###### Abstract:
Extreme high-synchrotron-peak blazars (EHSPs) are postulated as the most efficient and extreme particle accelerators in the universe but remain enigmatic as a possible new class of TeV gamma-ray blazars. Blazars are active galactic nuclei (AGNs) with jets of relativistic particles that generate non-thermal emission pointed along the line-of-sight. Their spectral energy distributions (SEDs) are characterized by synchrotron and inverse-Compton peaks, indicating acceleration of leptonic and possibly hadronic particle populations in the jet. EHSPs are characterized by a peak synchrotron frequency \(>10^{17}\) Hz with their Compton peak expected to fall in the TeV range. Indeed, the handful of EHSPs detected by Imaging Air Cherenkov Telescopes (IACTs) have presented challenges where some may be a high-frequency extension of the blazar sequence while others peaking around 10 TeV may represent a different class of TeV emitters. Detections of the high-energy and very-high-energy (HE; E > 100 MeV, VHE; E > 100 GeV) components of the Compton peak will play an important role in constraining the acceleration model derived from the SED. We present the discovery of TeV emission from RBS 1366, a candidate EHSP, by the VERITAS observatory. Using HE and VHE data from the _Fermi_-LAT and VERITAS observatories, respectively, we characterize the detection by providing an SED and model fit in the context of other EHSP candidates. Our work confirms the status of RBS 1366 as an EHBL.
## 1 Introduction
Blazars are a subclass of active galactic nuclei with relativistic jets pointed toward the observer, emitting gamma rays in the very-high-energy (VHE; E\(>\)100 GeV) or "TeV" regime. The underlying mechanisms for the emission of these gamma rays are evident in the observed spectral energy distribution (SED), which is modeled to include the underlying particle populations, and acceleration and cooling mechanisms in the jets.
It is commonly understood that the low energy and high energy peaks of the SED are produced by synchrotron and inverse Compton processes, respectively [1]. In the synchrotron self-Compton (SSC) model, a population of electrons emits synchrotron radiation, and the same electron population then emits gamma rays by inverse-Compton scattering of those synchrotron photons.
At a redshift of \(z=0.2365\), RBS 1366 is an AGN in a giant elliptical host galaxy with a central black hole mass of \(\log\left(M_{\rm BH}/\mathrm{M}_{\odot}\right)=9.31\pm 0.32\) [2]. RBS 1366 has been a prime candidate for detection and classification as an EHSP [3, 4, 5]. It was selected for VERITAS observation based on the high synchrotron peak frequency in the Swift XRT X-ray band (\(>10^{17}\) Hz), along with detection in the 20 MeV to 300 GeV _Fermi_-LAT band displaying a hard spectrum (and placement in the 3FHL catalog [6]). RBS 1366 was expected to behave very much like the EHBL 1ES 0229+200 (see Table 1 of [5]), with a measured redshift of \(z=0.237\) based on Ca II, G band, Fe I, Mg I and Na absorption [7]. In the 1-300 GeV band, Toomey (2020) [3] found a power law fit with normalization \(k=(7.20\pm 1.65)\times 10^{-11}\) cm\({}^{-2}\)s\({}^{-1}\)GeV\({}^{-1}\) and index \(\gamma=1.63\pm 0.08\).
It is notable that a VERITAS upper limit has been reported [5, 8], anticipating that a detection of this source with an accompanying spectrum would enable further EHSP characterization. A differential upper limit of \(1.7\times 10^{-11}\) cm\({}^{-2}\)s\({}^{-1}\)erg\({}^{-1}\) at 327 GeV was calculated for this source by VERITAS based on 10 hours of observations in 2016 [8].
## 2 Methods
### Fermi-LAT
The Large Area Telescope (LAT) on board the _Fermi_ satellite has operated since 2008 [9]. It is sensitive to photons between \(\sim\)20 MeV and \(\sim\)1 TeV and has an \(\sim\)60\({}^{\circ}\) field of view, enabling it to survey the entire sky in about 3 hours.
RBS 1366 was observed from 2008 to 2023, and the data were fit with a power law as the base spectral model. The _Fermi_-LAT 4FGL catalog identifies this source as J1417.9+2543, located at \(14^{h}17^{m}58.6^{s}\), \(25^{\circ}43^{\prime}26^{\prime\prime}\).
The publicly available _Fermi_-LAT data were analyzed with the Fermitools suite provided by the _Fermi_ Science Support Center (FSSC). Using the Fermipy analysis package [10], the data were prepared for a binned likelihood analysis in which a spatial and spectral model is fit over the energy bins. The data were selected using the SOURCE class of events, which is optimized for point-source analysis, within an angle of 15\({}^{\circ}\) from the analysis target position. A 90\({}^{\circ}\) zenith angle cut was applied to remove background events caused by the Earth's limb. The standard background models were applied to the test model, incorporating an isotropic background and a galactic diffuse emission model without any modifications (_gll_iem_v07_ and _iso_P8R3_SOURCE_V2_v1_). The standard 4FGL catalog was then queried for sources within the field of view and their default model parameters [11]. With the improvements to _Fermi_-LAT low-energy sensitivity in the Pass 8 reconstruction, the energy range was expanded to 100 MeV - 1 TeV.
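A hedged sketch of such a Fermipy configuration is given below. The file names and binning values are placeholder assumptions; only the cuts quoted above (SOURCE class, 15\({}^{\circ}\) selection, 90\({}^{\circ}\) zenith cut, the two standard diffuse models and the 4FGL catalog) are taken from the text.

```python
from fermipy.gtanalysis import GTAnalysis

# Illustrative configuration only: evfile/scfile paths and binning are
# placeholder assumptions, not the values used by the authors.
config = {
    "data": {"evfile": "ft1_filelist.txt", "scfile": "spacecraft.fits"},
    "binning": {"roiwidth": 15.0, "binsz": 0.1, "binsperdec": 8},
    "selection": {
        "emin": 100, "emax": 1_000_000,      # 100 MeV - 1 TeV (in MeV)
        "zmax": 90,                          # zenith angle cut
        "evclass": 128, "evtype": 3,         # SOURCE class, FRONT+BACK
        "target": "4FGL J1417.9+2543",
    },
    "gtlike": {"edisp": True, "irfs": "P8R3_SOURCE_V2"},
    "model": {
        "src_roiwidth": 15.0,
        "galdiff": "gll_iem_v07.fits",
        "isodiff": "iso_P8R3_SOURCE_V2_v1.txt",
        "catalogs": ["4FGL"],
    },
}

gta = GTAnalysis(config)
gta.setup()        # event selection, livetime and source maps
gta.optimize()     # coarse iterative fit of the ROI
gta.fit()          # full binned likelihood fit
sed = gta.sed("4FGL J1417.9+2543")   # bin-by-bin SED points
```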
### VERITAS
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an Imaging Atmospheric Cherenkov Telescope (IACT) array consisting of four 12 m telescopes separated by approximately 100 m, at the Fred Lawrence Whipple Observatory (FLWO) in southern Arizona, USA [12, 13]. The observatory is sensitive to photons within the energy range \(\sim\)100 GeV to \(\sim\)30 TeV, with the ability to detect 1% of the emission of the Crab Nebula in 25 hours (at 5\(\sigma\)). The instrument has an angular resolution (68% containment) of \(\sim\)0.1\({}^{\circ}\) at 1 TeV, an energy resolution of \(\sim\)15% at 1 TeV, and a 3.5\({}^{\circ}\) field of view.
VERITAS observed RBS 1366 for 56 hr between 2008 and 2021, under dark sky conditions. The data in this paper were taken using "wobble" pointing mode, where the source is offset from the center of the camera by 0.5\({}^{\circ}\). This mode creates space for a radially symmetric "off" region to be used for background estimation in the same field of view, removing the need for dedicated background observations taken under the same observing conditions.
The data were processed with standard VERITAS calibration and reconstruction pipelines, and then cross-checked with a separate analysis chain [14, 15]. Specifically, we used an Image Template Method (ITM) to improve event angular and energy reconstruction [16], with analysis cuts determined a priori and optimized on sources with a soft power law index (from 2.5 to 3).
## 3 Results
### _Fermi_-LAT
Using the _Fermi_-LAT data from this work, RBS 1366 (as 4FGL J1417+2543) was detected with a test statistic TS=353.9 (\(\sim 18\sigma\)) over an observation period of 14.5 years in the energy range from 100 MeV to 1 TeV. The source was fit with a power law model \(dn/dE=(4.13\pm 0.76)\times 10^{-15}(E/10.4\ \mathrm{GeV})^{-1.6\pm 0.1}\ \mathrm{MeV}^{-1}\ \mathrm{cm}^{-2}\ \mathrm{s}^{-1}\). The spectrum is shown in the left of Figure 1.
RBS 1366 was also tested for time variability by constructing a light curve over the entire observing period in 6-month bins, integrating over the same energy range of 100 MeV to 1 TeV. The light curve is binned into periods determined by the Bayesian Blocks algorithm, where a false alarm rate of \(p_{0}=0.0027\) (equivalent to \(3\sigma\)) is used to compute the prior on the number of bins, yielding 3 blocks of relative stability, see Figure 2 [17]. This is consistent with the 4FGL catalog entry for RBS 1366, where the variability index is measured to be 17.4 (an index above 24.72 over 12 intervals indicates a <1% chance of being a steady source [18]).
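The segmentation can be reproduced with the Bayesian Blocks implementation in astropy; a minimal sketch with placeholder flux values, rather than the measured light curve, is:

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
t = np.arange(0.0, 14.5 * 365.25, 182.6)        # 6-month bin centres [days]
flux = rng.normal(1.0e-9, 1.0e-10, t.size)      # toy fluxes [cm^-2 s^-1]
flux_err = np.full(t.size, 1.0e-10)

# 'measures' fitness handles point measurements with errors; p0 = 0.0027
# is the 3-sigma false-alarm probability quoted in the text.
edges = bayesian_blocks(t, flux, flux_err, fitness="measures", p0=0.0027)
print(len(edges) - 1, "blocks")                 # a steady source yields few blocks
```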
### VERITAS
VERITAS has detected RBS 1366 at a significance of 6.5 standard deviations (\(\sigma\)), with an exposure of 56.8 hours. A summary of the detection is shown in Table 1. The observed spectrum was fit with a power law model, shown in Figure 1.
An effective energy threshold of \(\sim\)200 GeV is calculated. Since this source is weakly detected, the spectrum is fit between 200 GeV and 2 TeV using only bins with more than 10 excess counts and a significance above 1\(\sigma\); upper limit points, calculated at a confidence level of 99%, are not included in the fit. The power law fit gives \((2.39\pm 0.52)\times 10^{-12}\times(E/400\mbox{ GeV})^{-2.7\pm 0.5}\mbox{ TeV}^{-1}\mbox{cm}^{-2}\mbox{s}^{-1}\).
VHE photons are absorbed by the extragalactic background light (EBL) throughout the universe; the flux must therefore be corrected for this energy- and redshift-dependent effect. Deabsorption is applied to the flux using the model of [19]. After deabsorbing the spectrum from the EBL, the power law fit is \((7.49\pm 1.7)\times 10^{-12}\times(E/400\mbox{ GeV})^{-1.1\pm 0.6}\) TeV\({}^{-1}\)cm\({}^{-2}\)s\({}^{-1}\). Although these flux points are harder than the overlapping _Fermi_-LAT data, they are consistent within errors. Both observed and EBL-deabsorbed spectral points are shown in Figure 1.
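The deabsorption step can be sketched as follows, assuming the third-party ebltable package for the Dominguez et al. opacities; the observed flux points below are placeholders, not the measured spectrum.

```python
import numpy as np
from ebltable.tau_from_model import OptDepth

z = 0.237
E_TeV = np.array([0.25, 0.4, 0.8, 1.6])              # bin energies [TeV]
F_obs = np.array([8e-12, 2.4e-12, 5e-13, 9e-14])     # toy dN/dE values

tau = OptDepth.readmodel(model="dominguez")          # tabulated tau(z, E)
F_int = F_obs * np.exp(tau.opt_depth(z, E_TeV))      # intrinsic spectrum
print(F_int / F_obs)                                 # attenuation e^tau per bin
```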
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline\hline
Epoch & T [hr] & On & Off & Norm & Excess & Significance [\(\sigma\)] & Flux (\(>\)200 GeV) [\(10^{-12}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\)] & Crab [\%] \\
\hline
2008--2022 & 57 & 3006 & 29180 & 0.0909 & 353 & 6.53 & \(1.7\pm 0.5\) & 0.5 \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of the VERITAS detection. The quality-selected live time, the number of gamma-ray-like events in the on- and off-source regions, the normalization for the larger off-source region, the observed excess of gamma rays and the corresponding statistical significance are shown. The integral flux corresponding to the observed excess is reported above the observation threshold of 200 GeV, and is also given as a percentage of the Crab Nebula flux above the same threshold.
Figure 1: _Left)_ Differential spectrum for _Fermi_-LAT source 4FGL J1417+2543, fit using a power law. _Right)_ VERITAS spectrum. A TS threshold was set to 1\(\sigma\). _Grey open square_: observed differential spectrum with power law fit to data. _Red circle_: EBL deabsorbed differential spectrum using the Dominguez model [19], for redshift z=0.237 [7]. For low significance bins (TS<1), the Rolke upper limit is derived for a power law of index 2.5, and is not included in the overall fit of the larger energy range [20].
The combined HE and VHE spectral fit is shown in Figure 3. Although the VERITAS flux points are hardened by the lowest energy point at 200 GeV, the combined power law fit is consistent within errors, with a \(\chi^{2}\) statistic of 13.49 (ndof=12). These fit results are summarized in Table 2.
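A minimal sketch of such a joint power-law fit, with invented flux points rather than the measured SED, is:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(E, N0, gamma):
    """dN/dE = N0 (E / 0.4 TeV)^-gamma, the form used in Table 2."""
    return N0 * (E / 0.4) ** (-gamma)

rng = np.random.default_rng(1)
E = np.array([0.003, 0.01, 0.05, 0.25, 0.5, 1.0])    # TeV
F = power_law(E, 9.0e-12, 1.62) * rng.normal(1.0, 0.1, E.size)
F_err = 0.15 * F

popt, pcov = curve_fit(power_law, E, F, sigma=F_err,
                       absolute_sigma=True, p0=[1e-11, 2.0])
chi2 = np.sum(((F - power_law(E, *popt)) / F_err) ** 2)
print(popt, chi2 / (E.size - 2))                     # (N0, gamma), chi2/ndof
```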
In addition to the VERITAS SED, a light curve was also computed over 30 day bins, integrating above 300 GeV. This is shown in Figure 2 along with the corresponding _Fermi_-LAT light curve.
## 4 Discussion & Conclusion
The source exhibited a hard HE+VHE spectrum (index < 2) after correcting for EBL absorption. With a synchrotron peak at a log frequency of \(17.7\pm 0.2\) (\(\sim\)2 keV), a redshift of 0.237, and the known hardness of the _Fermi_-LAT power law, the evidence already points toward an EHSP classification. The combined HE+VHE fit in this work supports this classification and merits investigation into the subclassification posited by [5]. The power law index \(1.62\pm 0.05\) of the combined EBL-deabsorbed HE+VHE spectrum in Figure 3 gives a VHE gamma-ray slope (\(S=2-\gamma\), where \(\gamma\) is the power law index) of \(S=0.38\), which places RBS 1366 alongside other blazars such as 1ES
\begin{table}
\begin{tabular}{l l l l}
\hline
Parameter & Value & Uncertainty & Unit \\
\hline
Index & 1.62 & 0.05 & \\
Amplitude & 9.0 & 1.4 & \(10^{-12}\) cm\({}^{-2}\) s\({}^{-1}\) TeV\({}^{-1}\) \\
Reference & 0.4 & -- & TeV \\
\hline
\end{tabular}
\end{table}
Table 2: Power law spectral fit to the combined HE and VHE data.
Figure 2: Light curve for VERITAS (red) and _Fermi_-LAT (blue) observations. VERITAS data is binned into 30 day bins, and the flux is calculated above 350 GeV. The _Fermi_-LAT data is binned into 6 month bins and the flux is calculated above 100 MeV. Dotted vertical lines mark the VERITAS observations on the _Fermi_-LAT light curve. A significance threshold of \(3\sigma\) was used to determine upper limits, and the Bayesian Blocks algorithm was used with a false positive probability equivalent to \(3\sigma\) to determine bin edges in the _Fermi_-LAT light curve.
1101-232, 1ES 0347-121 and 3FGL J0710+5808, which were labeled as Hard-TeV EHBLs in [5]. A full multi-wavelength SED will enable a complete classification in a future study.
The HE-VHE luminosity of RBS 1366 appears to peak above an energy of \(\sim 1\) TeV, where it reaches an intrinsic luminosity of \(3\times 10^{44}\) erg/s (assumed to be isotropic). However, since the turnover of the Compton component is not yet evident, we cannot yet establish whether the Compton component dominates over the synchrotron component, which peaks at a log frequency of \(17.7\pm 0.2\) (\(\sim\)2 keV) with a luminosity of \(\sim 10^{45}\) erg/s [5]. A more sensitive observation from the Cherenkov Telescope Array (CTA), a next generation IACT observatory, may in the future yield a more comprehensive estimation of the Compton peak with improved sensitivity above 1 TeV [21, 22].
A basic estimate of the relativistic boosting in the SSC model fit to the available published data is very high, \(\delta\sim 50-100\), implying the highly efficient acceleration that is common to EHSPs. This indicates that the emission region is probably located in the jet, with a size of the order of \(10^{17}\) cm and a very weak magnetic field. While variability has been noted in the optical band [8], the lack of variability in the HE-VHE band is expected for the rising end of the Compton peak seen here.
Figure 3: Combined _Fermi_-LAT and VERITAS spectrum with a power law fit (dotted line). The peak of the Compton component is not clear, but is above 0.1 TeV and possibly higher.
The observational circumstances above all support the conclusion that RBS 1366 is an EHSP now detected in the TeV energy range. Figure 4 is a reproduction of the index versus synchrotron peak frequency plot sampling the other TeV-detected EHSPs in [5], which cluster into Hard-TeV and HBL-like blazars. Using the results presented in this work, RBS 1366 may be classified as a Hard-TeV EHSP. A complete study using a full multi-wavelength SED to confirm the Hard-TeV EHSP classification is underway.
## Acknowledgments
This work was partially supported by NSF award PHY 2110737. This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, and by the Helmholtz Association in Germany. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument.
|
2303.00109 | Linear Size Universal Point Sets for Classes of Planar Graphs | A finite set $P$ of points in the plane is $n$-universal with respect to a
class $\mathcal{C}$ of planar graphs if every $n$-vertex graph in $\mathcal{C}$
admits a crossing-free straight-line drawing with vertices at points of $P$.
For the class of all planar graphs the best known upper bound on the size of a
universal point set is quadratic and the best known lower bound is linear in
$n$. Some classes of planar graphs are known to admit universal point sets of
near linear size, however, there are no truly linear bounds for interesting
classes beyond outerplanar graphs.
In this paper, we show that there is a universal point set of size $2n-2$ for
the class of bipartite planar graphs with $n$ vertices. The same point set is
also universal for the class of $n$-vertex planar graphs of maximum degree $3$.
The point set used for the results is what we call an exploding double chain,
and we prove that this point set allows planar straight-line embeddings of many
more planar graphs, namely of all subgraphs of planar graphs admitting a
one-sided Hamiltonian cycle. The result for bipartite graphs also implies that
every $n$-vertex plane graph has a $1$-bend drawing all whose bends and
vertices are contained in a specific point set of size $4n-6$, this improves a
bound of $6n-10$ for the same problem by L\"offler and T\'oth. | Stefan Felsner, Hendrik Schrezenmaier, Felix Schröder, Raphael Steiner | 2023-02-28T22:15:38Z | http://arxiv.org/abs/2303.00109v1 | # Linear Size Universal Point Sets for Classes of Planar Graphs
###### Abstract
A finite set \(P\) of points in the plane is \(n\)-universal with respect to a class \(\mathcal{C}\) of planar graphs if every \(n\)-vertex graph in \(\mathcal{C}\) admits a crossing-free straight-line drawing with vertices at points of \(P\).
For the class of all planar graphs the best known upper bound on the size of a universal point set is quadratic and the best known lower bound is linear in \(n\).
Some classes of planar graphs are known to admit universal point sets of near linear size; however, there are no truly linear bounds for interesting classes beyond outerplanar graphs.
In this paper, we show that there is a universal point set of size \(2n-2\) for the class of bipartite planar graphs with \(n\) vertices. The same point set is also universal for the class of \(n\)-vertex planar graphs of maximum degree \(3\). The point set used for the results is what we call an exploding double chain, and we prove that this point set allows planar straight-line embeddings of many more planar graphs, namely of all subgraphs of planar graphs admitting a one-sided Hamiltonian cycle.
The result for bipartite graphs also implies that every \(n\)-vertex plane graph has a \(1\)-bend drawing all whose bends and vertices are contained in a specific point set of size \(4n-6\), this improves a bound of \(6n-10\) for the same problem by Loffler and Toth.
Graph drawing, Universal point set, One-sided Hamiltonian, \(2\)-page book embedding, Separating decomposition, Quadrangulation, \(2\)-tree, Subcubic planar graph
###### Acknowledgements.
We are highly indebted to Henry Forster, Linda Kleist, Joachim Orthaber and Marco Ricci for discussions during GG-Week 2022 that resulted in a solution to the problem of separating 2-cycles in our proof for subcubic graphs.
## 1 Introduction
Given a family \(\mathcal{C}\) of planar graphs and a positive integer \(n\), a point set \(P\subseteq\mathbb{R}^{2}\) is called an \(n\)_-universal point set_ for the class \(\mathcal{C}\) or simply \(n\)_-universal_ for \(\mathcal{C}\) if for every graph \(G\in\mathcal{C}\) on \(n\) vertices there exists a straight-line crossing-free drawing of \(G\) such that every vertex of \(G\) is placed at a point of \(P\).
To determine the minimum size of universal sets for classes of planar graphs is a fundamental problem in geometric graph theory, see e.g. Problem [17] in the Open Problem Garden. More specifically, the quest is for good bounds on the minimum size \(f_{\mathcal{C}}(n)\) of an \(n\)-universal point set for a class \(\mathcal{C}\).
Schnyder [21] showed that for \(n\geq 3\) the \([n-1]\times[n-1]\)-grid forms an \(n\)-universal point set for planar graphs, even if the combinatorial embedding of the planar graph is prescribed. This shows that \(f(n):=f_{\mathcal{P}}(n)\leq n^{2}\in O(n^{2})\), where \(\mathcal{P}\) is the class of all planar graphs. Asymptotically, the quadratic upper bound on \(f(n)\) remains the state of the art. Only the multiplicative constant in this bound has seen some improvement, the current upper bound is \(f(n)\leq\frac{1}{4}n^{2}+O(n)\) by Bannister et al. [5]. For several subclasses \(\mathcal{C}\) of planar graphs, better upper bounds are known: A classical result by Gritzmann et al. [13] is that every outerplanar \(n\)-vertex graph embeds straight-line on _any_ set of \(n\) points in general position, and hence \(f_{\text{out-pl}}(n)=n\). Near-linear upper bounds of \(f_{\mathcal{C}}(n)=O(n\ \text{polylog}(n))\) are known for 2-outerplanar graphs, simply nested graphs, and for the classes of bounded pathwidth [4, 5]. Finally, for the class \(\mathcal{C}\) of planar 3-trees (also known as Apollonian networks or stacked triangulations), \(f_{\mathcal{C}}(n)=O(n^{3/2}\log n)\) has been proved by Fulek and Toth [12].
As for lower bounds, the trivial bounds \(n\leq f_{\mathcal{C}}(n)\leq f(n)\) hold for all \(n\in\mathbb{N}\) and all planar graph classes \(\mathcal{C}\). The current lower bound \(f(n)\geq 1.293n-o(n)\) from [20] has been shown using planar 3-trees, we refer to [6, 8, 9, 15] for earlier work on lower bounds.
Choi, Chrobak and Costello [7] recently proved that point sets chosen uniformly at random from the unit square must have size \(\Omega(n^{2})\) to be universal for \(n\)-vertex planar graphs with high probability. This suggests that universal point sets of size \(o(n^{2})\) (if they exist) will not look nice, e.g., they will have a large ratio between the shortest and largest distances.
In this paper we study a specific ordered point set \(H\) (the exploding double chain) and denote the initial piece of size \(2n-2\) in \(H\) as \(H_{n}\). Let \(\mathcal{C}\) be the class of all planar graphs \(G\) which have a plane straight-line drawing on the point set \(H_{n}\) where \(n=|V(G)|\). That is, \(H_{n}\) forms an \(n\)-universal point set for \(\mathcal{C}\).
A graph is POSH (partial one-sided Hamiltonian) if it is a spanning subgraph of a graph admitting a plane embedding with a one-sided Hamiltonian cycle (for definitions see Section 2). Triangulations with a one-sided Hamiltonian cycle have been studied before by Alam et al. [2] in the context of cartograms. They conjectured that every plane 4-connected triangulation has a one-sided Hamiltonian cycle. Later Alam and Kobourov [3] found a plane 4-connected triangulation on 113 vertices which has no one-sided Hamiltonian cycle.
Our main result (Theorem 3) is that every POSH graph is in \(\mathcal{C}\). We let
\[\mathcal{C}^{\prime}:=\{G:\text{$G$ is POSH}\}.\]
Theorem 3 motivates further study of \(\mathcal{C}^{\prime}\). On the positive side we show that every bipartite plane graph is POSH (proof in Section 4). We proceed to use the construction for bipartite graphs to show that subcubic planar graphs have a POSH embedding in Section 5. On the negative side, we also show that not all 2-trees are POSH. We conclude with some conjectures and open problems in Section 7.
An exploding double chain was previously used by Löffler and Tóth [16]. They show that every planar graph with \(n\) vertices has a 1-bend drawing on a subset \(S_{n}\) of \(H\) with \(|S_{n}|=6n-10\). Our result about bipartite graphs implies a better bound:
There is a point set \(P=H_{2n-2}\) of size \(4n-6\) such that every \(n\)-vertex planar graph admits a 1-bend drawing with bends and vertices on \(P\).
Proof.: The dual of a plane triangulation is a bridgeless 3-regular graph of \(2n-4\) vertices; it has a perfect matching by Petersen's Theorem [19]. Hence, subdividing at most \(n-2\) edges can make any planar graph on \(n\) vertices bipartite. Thus \(H_{n+n-2}\) of size \(2(n+n-2)-2=4n-6\) is sufficient to accomodate 1-bend drawings of all \(n\)-vertex planar graphs.
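As a quick illustration of the counting in this proof (not of the drawing construction itself), one can check Petersen's guarantee on small instances; the sketch below uses networkx with the dodecahedral graph standing in for a bridgeless cubic dual.

```python
import networkx as nx

# The dual of a plane triangulation is bridgeless and cubic, so by
# Petersen's theorem it has a perfect matching. Toy stand-in: the
# dodecahedron, a bridgeless cubic graph on 20 vertices.
D = nx.dodecahedral_graph()
M = nx.max_weight_matching(D, maxcardinality=True)
assert 2 * len(M) == D.number_of_nodes()      # perfect matching found
print(len(M), "matched dual edges ->", len(M), "subdivided primal edges")
```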
Universality for 1-bend and 2-bend drawings with no restriction on the placement of bends has been studied by Kaufmann and Wiese [14], who show that every \(n\)-element point set is universal for 2-bend drawings of planar graphs.
## 2 The point set and the class of POSH graphs
In this section we define the exploding double chain \(H\) and the class \(\mathcal{C}^{\prime}\) of POSH graphs and show that for every \(n\geq 2\) the initial part \(H_{n}\) of size \(2n-2\) of \(H\) is \(n\)-universal for \(\mathcal{C}^{\prime}\).
A sequence \((y_{i})_{i\in\mathbb{N}}\) of real numbers satisfying \(y_{1}=0\), \(y_{2}=0\) is _exploding_ and the corresponding point set \(H=\{p_{i},q_{i}|i\in\mathbb{N}\}\), where \(p_{i}=(i,y_{i}),q_{i}=(i,-y_{i})\), is an _exploding double chain_, if for all \(n\in\mathbb{N}\), \(y_{n+1}\) is large enough that all intersections of lines going through two points of \(H_{n}=\{p_{i},q_{i}|i\in[n]\}\) with the line \(x=n+1\) lie strictly between \(y_{n+1}\) and \(-y_{n+1}\). Since \(p_{1}=q_{1}\) and \(p_{2}=q_{2}\), we have \(|H_{n}|=2n-2\). Figure 1 shows \(H_{6}\). This fully describes the order type of the exploding double chain. Note that the coordinates given here can be made integers, but the largest coordinate of \(H_{n}\) is then exponential in \(n\), which is unavoidable for this order type. However, the ratio of largest to smallest distance does not have to be exponential: we can alter the construction by setting \(y_{i}=i\) and letting the \(x\)-coordinates grow slowly enough to achieve the same order type, but with a linear ratio.
An explicit construction of a point set \(H\) in this order type is given now.
A sequence \(Y=(y_{i})_{i\geq 1}\) of real numbers satisfying \(y_{1}=0\), \(y_{2}=0\), and \(y_{i+1}>2y_{i}+y_{i-1}\) for all \(i\geq 2\) is exploding. Note that if \(\alpha>1+\sqrt{2}\), then \(y_{1}=y_{2}=0\) and \(y_{i}=\alpha^{i-3}\) for \(i\geq 3\) is an exploding sequence, e.g. \(\alpha=3\). Given an exploding sequence \(Y\) let \(P(Y)=(p_{i})_{i\geq 1}\) be the set of points with \(p_{i}=(i,y_{i})\) and let \(\tilde{P}(Y)=(q_{i})_{i\geq 1}\) be the set of points with \(q_{i}=(i,-y_{i})\), i.e., the point set reflected at the \(x\)-axis. The exploding double chain is then \(H(Y)=P(Y)\cup\tilde{P}(Y)\).
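A small Python sketch of this construction, with the choice \(\alpha=3\), is given below; for simplicity it returns the coinciding points \(p_{1}=q_{1}\) and \(p_{2}=q_{2}\) in both lists.

```python
def exploding_double_chain(n, alpha=3.0):
    """Return the points p_1..p_n and q_1..q_n of H_n for y_i = alpha^(i-3)."""
    y = [0.0, 0.0] + [alpha ** (i - 3) for i in range(3, n + 1)]
    p = [(i, y[i - 1]) for i in range(1, n + 1)]     # p_i = (i,  y_i)
    q = [(i, -y[i - 1]) for i in range(1, n + 1)]    # q_i = (i, -y_i)
    return p, q                                      # p_1 = q_1, p_2 = q_2

p, q = exploding_double_chain(6)
print(p)   # [(1, 0.0), (2, 0.0), (3, 1.0), (4, 3.0), (5, 9.0), (6, 27.0)]
```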
Let \(H=H(Y)\) for some exploding sequence \(Y\). For two points \(p\) and \(q\) let \(H(p,q)\) be the set of points of \(H\) in the open right half-plane of the directed line \(\overrightarrow{pq}\). Note that\({}^{1}\)
Footnote 1: In cases where \(i\) or \(j\) are in \(\{1,2\}\) the following may list one of the two points defining the halfspace with its second name as member of the halfspace. For correctness such listings have to be ignored.
\[H(p_{i},q_{j})=\begin{cases}(p_{k})_{k\leq j}\cup(p_{k})_{k>i}\cup(q_{\ell})_ {\ell<j}&\text{if}\quad i>j\\ (p_{k})_{k<i}\cup(q_{\ell})_{\ell<i}&\text{if}\quad i=j\\ (p_{k})_{k<i}\cup(q_{\ell})_{\ell\leq i}\cup(q_{\ell})_{\ell>j}&\text{if}\quad i <j\end{cases}\]
Moreover, if \(i<j\) then \(H(q_{i},q_{j})=H(p_{i},q_{j})\backslash\{q_{i}\}\) and if \(i>j\) then \(H(p_{i},p_{j})=H(p_{i},q_{j})\backslash\{p_{j}\}\). These sidedness conditions characterize the order type of the exploding double chain.
Figure 1: An example of a point set \(H_{6}\) in a rotated coordinate system.
A plane graph \(G\) has a _one-sided Hamiltonian cycle with special edge \(vu\)_ if it has a Hamiltonian cycle \((v=v_{1},v_{2},\ldots,v_{n}=u)\) such that \(vu\) is incident to the outer face and for every \(j=2,\ldots,n\), the two edges incident to \(v_{j}\) in the Hamiltonian cycle, i.e., edges \(v_{j-1}v_{j}\) and \(v_{j+1}v_{j}\), are consecutive in the rotation of \(v_{j}\) in the subgraph induced by \(v_{1},\ldots,v_{j},v_{j+1}\) in \(G\). In particular, the one-sided condition depends on the Hamiltonian cycle, its direction and its special edge. A more visual reformulation of the second condition is obtained using the closed bounded region \(D\) whose boundary is the Hamiltonian cycle. It is that in the embedding of \(G\) for every \(j\) either all the back-edges \(v_{i}v_{j}\) with \(i<j\) are drawn inside \(D\) or in the open exterior of \(D\). We let \(V_{I}\) be the set of vertices \(v_{j}\) which have a back-edge \(v_{i}v_{j}\) with \(i<j-1\) drawn inside \(D\) and \(V_{O}=V\setminus V_{I}\). The set \(V_{I}\) is the set of vertices having back-edges only inside \(D\) while vertices in \(V_{O}\) have back-edges only outside \(D\).
Recall that \(\mathcal{C}^{\prime}\) is the class of planar graphs which are _spanning_ subgraphs of plane graphs admitting a one-sided Hamiltonian cycle. It is worth noting that, in fact, arbitrary subgraphs of POSH graphs are POSH.
Any subgraph of a POSH graph is POSH.
Proof.: As edge deletions preserve the POSH property by definition, it suffices to show that deleting a vertex preserves it as well. Let \(G\) be a POSH graph and let \(G^{\prime}\) be its supergraph with a one-sided Hamiltonian cycle. Now after deleting \(v\) from \(G^{\prime}\), an edge between the two neighbours of \(v\) on the Hamiltonian cycle (if it does not already exist) can be drawn along the two cycle edges previously incident to \(v\). The result is a supergraph of \(G\setminus v\) with a one-sided Hamiltonian cycle.
## 3 The embedding strategy
Our interest in POSH graphs is motivated by the following theorem.
Let \(G^{\prime}\) be POSH and let \(v_{1},\ldots,v_{n}\) be a one-sided Hamiltonian cycle of a plane supergraph \(G\) of \(G^{\prime}\) on the same vertex set. Then there is a crossing-free embedding of \(G^{\prime}\) on \(H_{n}\) with the property that \(v_{i}\) is placed on either \(p_{i}\) or \(q_{i}\).
Proof.: It is sufficient to describe the embedding of the supergraph \(G\) on \(H_{n}\). For the proof we assume that in the plane drawing of \(G\) the sequence \(v_{1},\ldots,v_{n}\) traverses the boundary of \(D\) in counter-clockwise direction. For each \(i\) vertex \(v_{i}\) is embedded at \(\bar{v}_{i}=p_{i}\) if \(v_{i}\in V_{I}\) and at \(\bar{v}_{i}=q_{i}\) if \(v_{i}\in V_{O}\).
Let \(G_{i}=G[v_{1},\ldots,v_{i}]\) be the subgraph of \(G\) induced by \(\{v_{1},\ldots,v_{i}\}\). The path \(\Lambda_{i}=v_{1},\ldots,v_{i}\) separates \(G_{i}\). The _left part_\(GL_{i}\) consists of the intersection of \(G_{i}\) with \(D\), the _right part_\(GR_{i}\) is \(G_{i}\) minus all edges which are interior to \(D\). The intersection of \(GL_{i}\) and \(GR_{i}\) is \(\Lambda_{i}\) and their union is \(G_{i}\). The counter-clockwise boundary walk of \(G_{i}\) consists of a path \(\partial R_{i}\) from \(v_{1}\) to \(v_{i}\) which is contained in \(GR_{i}\) and a path from \(v_{i}\) to \(v_{1}\) which is contained in \(GL_{i}\), let \(\partial L_{i}\) be the reverse of this path.
Figure 2: \(K_{4}\) and a slightly larger graph both with a one-sided Hamiltonian cycle. Red angles indicate a side with no back-edge.
Let \(\bar{G}_{i}\) be the straight-line drawing of the plane graph \(G_{i}\) obtained by placing each vertex \(v_{j}\) at the corresponding \(\bar{v}_{j}\). A vertex \(\bar{v}\) of \(\bar{G}_{i}\) is said to _see a point \(p\)_ if there is no crossing between the segment \(\bar{v}p\) and an edge of \(\bar{G}_{i}\). By induction on \(i\) we show:
1. The drawing \(\bar{G}_{i}\) is plane, i.e., non-crossing.
2. \(\bar{G}_{i}\) and \(G_{i}\) have the same outer boundary walks.
3. Every vertex of \(\partial L_{i}\) in \(\bar{G}_{i}\) sees all the points \(p_{j}\) with \(j>i\) and every vertex of \(\partial R_{i}\) in \(\bar{G}_{i}\) sees all the points \(q_{j}\) with \(j>i\).
For \(i=2\) the graph \(G_{i}\) is just an edge and the three claims are immediate, for Property 3 just recall that the line spanned by \(p_{1}\) and \(p_{2}\) separates the \(p\)-side and the \(q\)-side of \(H_{n}\).
Now assume that \(i\in\{3,\ldots,n\}\), the properties are true for \(\bar{G}_{i-1}\) and suppose that \(v_{i}\in V_{I}\) (the argument in the case \(v_{i}\in V_{O}\) works symmetrically). This implies that all the back-edges of \(v_{i}\) are in the interior of \(D\) whence all the neighbors of \(v_{i}\) belong to \(\partial L_{i-1}\). Since \(v_{i}\in V_{I}\) we have \(\bar{v}_{i}=p_{i}\) and Property 3 of \(\bar{G}_{i-1}\) implies that the edges connecting to \(\bar{v}_{i}\) can be added to \(\bar{G}_{i-1}\) without introducing a crossing. This is Property 1 of \(\bar{G}_{i}\).
Since \(G_{i-1}\) and \(\bar{G}_{i-1}\) have the same boundary walks and \(v_{i}\) (respectively \(\bar{v}_{i}\)) belong to the outer faces of \(G_{i}\) (respectively \(\bar{G}_{i}\)) and since \(v_{i}\) has the same incident edges in \(G_{i}\) as \(\bar{v}_{i}\) in \(\bar{G}_{i}\), the outer walks of \(G_{i}\) and \(\bar{G}_{i}\) again equal each other, i.e., Property 2.
Let \(j\) be minimal such that \(v_{j}v_{i}\) is an edge and note that \(\partial L_{i}\) is obtained by taking the prefix of \(\partial L_{i-1}\) whose last vertex is \(v_{j}\) and appending \(v_{i}\). The line spanned by \(\bar{v}_{j}\) and \(\bar{v}_{i}=p_{i}\) separates all the edges incident to \(\bar{v}_{i}\) in \(\bar{G}_{i}\) from all the segments \(\bar{v}_{\ell}p_{k}\) with \(\ell<j\), \(\bar{v}_{\ell}\in\partial L_{i}\) and \(k>i\). This shows that every vertex of \(\partial L_{i}\) in \(\bar{G}_{i}\) sees all the points \(p_{k}\) with \(k>i\). For the proof of the second part of Property 3 assume some edge \(\bar{v}_{i}\bar{v}_{j}\) crosses the line of sight from \(\bar{v}_{\ell}\) to \(q_{k}\), \(k>i\); we refer to Figure 3. First note that this is only possible if \(\ell\leq j\), since otherwise \(\bar{v}_{j}\bar{v}_{\ell}\) separates \(\bar{v}_{i}=p_{i}\) and \(q_{k}\), because \(p_{i}\) is on the left as can be seen at \(x=i\) and \(q_{k}\) is on the right as can be seen at \(x=k\) by definition. Since \(\ell=j\) is impossible by construction, we are left with the case \(\ell<j\). Then one of \(\bar{v}_{i}\) and \(\bar{v}_{\ell}\), say \(\bar{v}\), lies to the right of the oriented line \(\bar{v}_{j}q_{k}\). However, that implies that \(\bar{v}_{j}\bar{v}\) has \(q_{k}\) on its left, which is a contradiction to the definition of \(q_{k}\) at \(x=k\). This completes the proof of Property 3 and thus the inductive step.
Finally, Property 1 for \(\bar{G}_{n}\) implies the theorem.
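The placement rule of the proof is simple enough to state in a few lines of Python; the list `inside` encoding \(V_{I}\) is assumed to come from the plane embedding, and the \(K_{4}\) assignment below is only illustrative (cf. Figure 2).

```python
def embed_on_H(inside, alpha=3.0):
    """Place v_i at p_i if v_i is in V_I (back-edges inside D), else at q_i."""
    n = len(inside)
    y = [0.0, 0.0] + [alpha ** (i - 3) for i in range(3, n + 1)]
    return [(i, y[i - 1] if inside[i - 1] else -y[i - 1])
            for i in range(1, n + 1)]

# K_4 with Hamiltonian cycle v1 v2 v3 v4: chord v1v3 inside D, v2v4 outside.
coords = embed_on_H([True, True, True, False])
print(coords)   # [(1, 0.0), (2, 0.0), (3, 1.0), (4, -3.0)]
```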
## 4 Plane bipartite graphs
In this section we consider bipartite plane graphs and show that they are POSH.
Every bipartite plane graph \(G=(V,E)\) is a subgraph of a plane graph \(G^{\prime}\) on the same vertex set \(V\) which has a one-sided Hamiltonian cycle, i.e., \(G\) is POSH.
Proof.: Quadrangulations are the plane graphs with all faces of degree four. Equivalently they are the maximal plane bipartite graphs, i.e., any bipartite plane graph except stars is a subgraph of a quadrangulation. Thus since POSH graphs are closed under taking subgraphs, it suffices to prove the theorem for quadrangulations.
Let \(Q\) be a quadrangulation and let \(V_{B}\) and \(V_{W}\) be the _black_ and _white_ vertices of a 2-coloring. Label the two black vertices of the outer face as \(s\) and \(t\). Henceforth, when talking about a quadrangulation we think of an embedded quadrangulation endowed with \(s\) and \(t\). A _separating decomposition_ is a pair \(D=(Q,Y)\) where \(Q\) is a quadrangulation and \(Y\) is an orientation and coloring of the edges of \(Q\) with colors red and blue such that:
1. The edges incident to \(s\) and \(t\) are incoming in color red and blue, respectively.
2. Every vertex \(v\not\in\{s,t\}\) is incident to a non-empty interval of red edges and a non-empty interval of blue edges. If \(v\) is white, then, in clockwise order, the first edge in the interval of a color is outgoing and all the other edges of the interval are incoming. If \(v\) is black, the outgoing edge is the clockwise last in its color (see Figure 4).
Separating decompositions of a quadrangulation \(Q\) have been defined by de Fraysseix and Ossona de Mendez [18]. They show a bijection between separating decompositions and 2-orientations (orientations of the edges of \(Q\) such that every vertex \(v\not\in\{s,t\}\) has out-degree 2) and show the existence of a 2-orientation of \(Q\) with an argument related to flows and matchings. An inductive proof for the existence of separating decompositions was given by Felsner et al. [11], this proof is based on identifying pairs of opposite vertices on faces.
In a separating decomposition the red edges form a tree directed towards \(s\), and the blue edges form a tree directed towards \(t\). Each of the trees connects all the vertices \(v\not\in\{s,t\}\) to the respective root. Felsner et al. ([10, 11]) show that the edges of the two trees can be separated by a curve which starts in \(s\), ends in \(t\), and traverses every vertex and every inner face of \(Q\). This curve is called the _equatorial line_.
If \(Q\) is redrawn such that the equatorial line is mapped to the \(x\)-axis with \(s\) being the left end and \(t\) being the right end of the line, then the red tree and the blue tree become _alternating trees_ ([11], defined below) drawn in the upper respectively lower half-plane defined by the \(x\)-axis. Note that such a drawing of \(Q\) is a 2-page book embedding, we call it an _alternating 2-page book embedding_ to emphasize that the graphs drawn on the two pages of the book are alternating trees.
An _alternating tree_ is a plane tree \(T\) with a plane drawing such that the vertices of \(T\) are placed at different points of the \(x\)-axis and all edges are embedded in the half-plane above the \(x\)-axis (or all below). Moreover, for every vertex \(v\) it holds that all its neighbors are on one side, either they are all left of \(v\) or all right of \(v\). In these cases we call the vertex \(v\) respectively a _right_ or a _left vertex_ of the alternating layout. Note that every vertex is a left vertex in one of the two trees and a right vertex in the other.
Let \(Q\) be a plane quadrangulation on \(n\) vertices and let \(S\) be a separating decomposition of \(Q\). Let \(s=v_{1},v_{2},\ldots,v_{n}=t\) be the spine of the alternating 2-page book embedding of
Figure 4: Edge orientations and colors at white and black vertices.[10]
based on \(S\). Let \(Q^{+}\) be obtained from \(Q\) by adding \(v_{n}v_{1}\) and all the edges \(v_{i}v_{i+1}\) which do not yet belong to the edge set of \(Q\). By construction \(v_{1},v_{2},\ldots,v_{n}\) is a Hamiltonian cycle of \(Q^{+}\) and since the trees are alternating, black vertices have only blue edges to the left and white vertices have only red edges to the left. Thus this Hamiltonian cycle is one-sided with reverse edge \(v_{n}v_{1}=ts\). Hence \(Q\) is POSH.
It is worth noting that the Hamiltonian cycle read in the reverse direction, i.e., as \(v_{n},v_{n-1},\ldots,v_{1}\), is again one-sided, now the reverse edge is \(v_{1}v_{n}=st\).
## 5 Planar subcubic graphs
In this section we identify another large subclass of the \(\mathcal{C}^{\prime}\). Recall that \(3\)-regular graphs are also known as cubic graphs and in subcubic graphs all vertices have degree at most \(3\).
Every planar subcubic graph \(G\) is a spanning subgraph of a planar graph \(G^{\prime}\) which has an embedding with a one-sided Hamiltonian cycle, i.e., \(G\) has a POSH embedding.
Note that we do _not_ claim the theorem for all _plane_ subcubic graphs. However, we are not aware of any connected subcubic plane graph that is not POSH.
To prove this, we use Theorem 4 and the following lemmas:
Let \(G\) be a subcubic graph. Then \(G\) admits a matching \(M\) such that contracting all the edges of \(M\) results in a bipartite multi-graph.
Proof.: Let \((X,Y)\) be a partition of the vertex-set of \(G\) such that the size of the cut, i.e., the number of edges in \(G\) with one endpoint in \(X\) and one endpoint in \(Y\), is maximized. We claim that the induced subgraphs \(G[X]\) and \(G[Y]\) of \(G\) are matchings. Suppose that a vertex \(v\in X\) has at least two neighbors in \(G[X]\). Then \(v\) has at most one neighbor in \(Y\), and hence moving \(v\) from \(X\) to \(Y\) increases the size of the cut by at least one, a contradiction. The same argument works for \(G[Y]\).
Let \(M\) be the matching in \(G\) consisting of all the edges in \(G[X]\) and \(G[Y]\). Contracting the edges in \(M\) transforms \(G[X]\) and \(G[Y]\) into independent sets, and hence results in a bipartite multi-graph \(G/M\).
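The max-cut argument translates directly into a local search; the sketch below (using networkx, with the Petersen graph as a stand-in subcubic input) moves vertices while this enlarges the cut. At a local optimum every vertex of a subcubic graph has at most one same-side neighbour, so the leftover edges inside the parts form the matching \(M\).

```python
import networkx as nx

def bipartizing_matching(G):
    X = set(G)                                       # start with all vertices in X
    improved = True
    while improved:
        improved = False
        for v in list(G):
            same = sum(1 for u in G[v] if (u in X) == (v in X))
            if 2 * same > G.degree(v):               # moving v enlarges the cut
                X.symmetric_difference_update({v})
                improved = True
    # edges inside G[X] or G[Y]; a matching when G is subcubic
    return [(u, v) for u, v in G.edges() if (u in X) == (v in X)]

M = bipartizing_matching(nx.petersen_graph())
print(M)   # contracting M yields a bipartite multi-graph
```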
A _separating \(k\)-cycle_ of a plane graph \(D\) is a simple cycle of length \(k\), i.e., \(k\) edges, such that there are vertices of \(D\) inside the cycle.
Figure 5: A quadrangulation \(Q\) with a separating decomposition \(S\), and the alternating 2-page book embedding induced by the equatorial line of \(S\) [10].
**Lemma 8**.: _Let \(G\) be a subcubic planar graph. Then \(G\) admits a plane embedding \(D_{G}\) and a matching \(M\) such that contracting all the edges of \(M\) in \(D_{G}\) results in a bipartite multi-graph without separating 2-cycles._
Proof.: Let \(G\) be a subcubic planar graph. Without loss of generality \(G\) is connected, otherwise we just deal with the components first, then embed \(G\) in a way that all components are incident to the outer face.
Note that a 2-cycle can only arise by contracting one matching edge of a triangle or two matching edges of a quadrilateral. Consider an embedding \(D\) of \(G\) which minimizes the number of separating 3-cycles and among those minimizes the number of separating 4-cycles.
**Claim 1**.: \(D\) _has no separating 3-cycle._
Proof.: For illustration, see Figure 6. We will first show that \(D\) has no _separating diamond_, that is, two triangles sharing an edge \(e=uv\), at least one of which is a separating 3-cycle. Otherwise place \(u\) very close to \(v\). Now \(e\) is short and we reroute the other two edges of \(u\) such that they stay close to the corresponding edges of \(v\). Since one of the triangles containing \(e\) was assumed to be separating, the new drawing has fewer separating 3-cycles, a contradiction.
We are ready to show that \(D\) has no separating 3-cycle. If \(T\) is a separating 3-cycle, some edge has to go from a vertex \(v\) of \(T\) into its interior. Since \(v\) has degree at most 3, it has no edge to the outside of \(T\). We can then redraw the edge \(e\) of \(T\) not incident to \(v\) outside of \(T\), close to its two other edges. Again the new drawing has fewer separating 3-cycles: indeed, if the redrawn edge were part of another 3-cycle, \(T\) would be part of a separating diamond.
Now choose an edge set \(M\) of minimum cardinality, such that contracting it yields a bipartite multi-graph. The proof of Lemma 7 implies that \(M\) is a matching. Among those matchings, we choose \(M\) such that the number of separating 4-cycles which have 2 edges in \(M\) is minimized. Such separating 4-cycles are said to be _covered_ by \(M\).
**Claim 2**.: \(M\) _covers no separating 4-cycle._
Proof.: Suppose \(Q=v_{1}v_{2}v_{3}v_{4}\) is a separating 4-cycle such that \(v_{1}v_{2}\) and \(v_{3}v_{4}\in M\) and \(v_{1}\) has an edge \(e_{I}\) to the inside, thus no edge to the outside.
Figure 6: Procedure to eliminate triangles with an inner vertex. The procedure on the left eliminates isolated separating triangles, while the one on the right deals with separating diamonds.
Figure 7: Procedure to eliminate quadrilaterals with an inner vertex. The redrawing (left) cannot be applied in the right case, where we are changing the blue matching to avoid a separating 2-cycle.
If \(v_{4}\) has no edge to the outside either, we change \(D\) to a drawing \(D^{\prime}\) by redrawing the part \(\Gamma\) of \(D\) inside \(Q\) outside of it reflected across \(v_{1}v_{4}\), see Figure 7. In \(D^{\prime}\) the original separating 4-cycle is no longer separating. We claim that no new separating 3-cycle or 4-cycle that is covered by \(M\) was created. The claim contradicts the choice of \(D\) or \(M\).
To prove the claim note that \(S=\{v_{2},v_{3}\}\) is a 2-separator, unless \(Q\) is the outer face of \(D\), so let's assume first that it is not. Thus a separating 3- or 4-cycle has to live on one side of \(S\), since the shortest path between them in \(Q\cup\Gamma\) except their edge is of length 3 except if both \(v_{2}\) and \(v_{3}\) are adjacent to the same vertex of \(\Gamma\), in which case \(Q\) is the outer face, a contradiction. Let \(X\) be the component of \(G\setminus S\) containing \(\Gamma\). Then the number of vertices inside 3- or 4-cycles that are not part of \(X\) is unchanged in \(D^{\prime}\), since the face \(X\) is located in is still the same. The only 3- or 4-cycles in \(X\cup S\) that were not reflected in their entirety are the ones containing the edge \(v_{2}v_{3}\). Since \(Q\) is assumed not to be the outer face, at least one of \(v_{2}\) and \(v_{3}\) is not connected to \(\Gamma\). Thus such a cycle \(C\) is a 4-cycle consisting of \(v_{2},v_{3}\), one of \(v_{1}\) or \(v_{4}\) as well as a common neighbour of \(v_{2}\) and \(v_{4}\) or \(v_{1}\) and \(v_{3}\) in \(\Gamma\). However \(v_{1}v_{2}\) or \(v_{3}v_{4}\) respectively would be the only edge in \(M\cap C\). This is a contradiction to the fact that contracting \(M\) yields a bipartite graph.
Now if \(Q\) is the outer face of \(D\), it is still true that the only cycles not reflected in their entirety contain \(v_{2}v_{3}\). However \(v_{2}\) and \(v_{3}\) could both be adjacent to a vertex in \(\Gamma\), either a common neighbour for a 3-cycle or two adjacent neighbours for a 4-cycle. Since \(v_{2}\) and \(v_{3}\) are already covered by \(M\), this 3-cycle would contain no edge in \(M\), whereas the 4-cycle would contain at most one. Therefore both of these contradict the definition of \(M\).
Therefore, we know that \(v_{4}\) has an edge \(e_{O}\) to the outside. This edge does not go to any vertex of the quadrilateral, because the only candidate left would be \(v_{2}\), and then one of the triangles \(v_{2}v_{3}v_{4}\) and \(v_{1}v_{2}v_{4}\) would be separating.
Change the matching \(M\) to an edge set \(M^{\prime}\) by removing \(v_{1}v_{2}\) and \(v_{3}v_{4}\) from it and adding \(e_{O}\) and \(e_{I}\). Contracting \(M^{\prime}\) still results in a bipartite graph, because each of the four facial cycles that contained one of the removed edges contains exactly one new edge as well, so their sizes after contraction do not change. Thus \(M^{\prime}\) is a matching, because it has the same cardinality as \(M\) and is therefore minimal as well. We conclude that \(M^{\prime}\) does not cover \(v_{2}\) or \(v_{3}\), because \(M\) did not contain any edge other than \(v_{1}v_{2}\) and \(v_{3}v_{4}\) at them either. Since \(M^{\prime}\) does not contain two edges of the quadrilateral \(v_{1},\ldots,v_{4}\) but \(M\) was chosen to minimize the number of covered separating 4-cycles, there has to be a separating quadrilateral of which \(M^{\prime}\) contains two edges but \(M\) does not. If such a separating quadrilateral \(Q^{\prime}\) contains \(e_{I}\), then it has to contain another edge incident to \(v_{1}\). It cannot contain \(v_{1}v_{2}\), because we know \(v_{2}\) is not covered by \(M^{\prime}\). Therefore it contains \(v_{1}v_{4}\) and consequently \(e_{O}\). The same argumentation shows that if it contains \(e_{O}\), then it also contains \(e_{I}\). This is a contradiction to the existence of such a quadrilateral, because the endpoints of \(e_{O}\) and \(e_{I}\) lie on the outside and the inside of the quadrilateral \(v_{1},\ldots,v_{4}\) respectively, and are therefore non-adjacent.
So we proved that our choice of \(M\) makes sure that no separating 2-cycles will be present in the contracted plane bipartite multi-graph.
The embedding \(D\) and the matching \(M\) can be constructed starting from an arbitrary embedding and matching by iterative application of the operations used in the proof.
Proof of Theorem 5.: Now let \(B\) be the plane bipartite multi-graph obtained from \(G\) by contracting the edges in \(M\) without changing the embedding any further. Let \(B^{\prime}\) be the underlying simple graph of \(B\) and let \(Q\) be a quadrangulation or a star which has \(B^{\prime}\) as a spanning subgraph. The proof of Theorem 4 shows that there is a left to right placement
\(v_{1},\ldots,v_{s}\) of the vertices of \(Q\) on the \(x\)-axis such that for each \(i\in[s]\) all the edges \(v_{j}v_{i}\) with \(j<i-1\) are in one half-plane and all edges \(v_{i}v_{j}\) with \(j>i+1\) are in the other half-plane. Delete all the edges from \(Q\) which do not belong to \(B^{\prime}\), and duplicate the multi-edges of \(B\) in the drawing. This yields a 2-page book embedding \(\Gamma\) of \(B\).
Let \(v\) be a contracted vertex of \(B\). Vertex \(v\) was obtained by contracting an edge \(uw\in M\). If \(u\) and/or \(w\) did not have degree 3, we add edges at the appropriate places into the embedding that end in leaves, see Figure 8. To add an edge to \(u\) for instance, choose a face \(f\) incident to \(u\) that is not contracted into a 2-cycle. Let \(e\) and \(e^{\prime}\) be the two edges incident to both \(v\) and \(f\). If the angle between \(e\) and \(e^{\prime}\) contains part of the spine (the \(x\)-axis), we put the leaf on the spine close to \(v\), connected to \(v\) with a short edge below or above the spine, in a way that accommodates the local vertex condition of \(v\). If it doesn't, assume without loss of generality that it is in the upper half-plane and that edge \(e\) is the edge closer to the spine. This edge is unique because both edges at \(v\) delimiting \(f\) go upwards and therefore both to the same side, say right of \(v\). Route the new edge closely along \(e\), then put the leaf just next to the other endpoint \(x\) of \(e\). Edges that would cross this new edge cannot cross \(e\); thus the only candidates are edges incident to \(x\) that emanate into the upper half-plane. However, those edges have to go to the left of \(x\) by its local vertex condition. These edges do not exist, as any such edge would have to cross \(e^{\prime}\), see the dashed line in Figure 8. Thus the new edge is uncrossed. This procedure will be applied to every vertex first. Note that the resulting graph stays bipartite and the local vertex conditions are still fulfilled, but now every contracted vertex has degree 4. This makes the case distinction for splitting the vertices easier.
We now show how to undo the contractions, i.e., _split_ vertices, in the drawing \(\Gamma\) in such a way that at the end we arrive at a one-sided 2-page book drawing \(\Gamma^{\star}\) of \(G\), that is, a 2-page book embedding of \(G\) with vertex-sequence \(v_{1},\ldots,v_{n}\) such that for every \(j\in\{1,\ldots,n\}\) the incident back-edges \(v_{i}v_{j}\) with \(1\leq i<j\) are all drawn either on the spine or on the same page of the book embedding (all above or all below the spine). Once we have obtained such a book embedding, we can delete the artificially added leaves, and then add the spine edges (including the back edge from the rightmost to the leftmost vertex) to \(G\) to obtain a supergraph \(G^{+}\) of \(G\) which has a one-sided Hamiltonian cycle, showing that \(G\) is POSH.
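For concreteness, the one-sidedness condition just defined can be checked mechanically. The following sketch is our own illustration (the function name and the edge/page encoding are our assumptions, not notation from this paper):

```python
def is_one_sided(n, edges, page):
    """Check the one-sidedness condition of a 2-page book embedding.

    n     -- number of vertices, labelled 0..n-1 by spine position
    edges -- iterable of pairs (i, j) with i < j
    page  -- dict mapping each non-spine edge (i, j) to +1 (above the
             spine) or -1 (below); spine edges (j == i + 1) are ignored
    """
    for j in range(n):
        # pages used by the back-edges of vertex j (spine edges excluded)
        back_pages = {page[(i, k)] for (i, k) in edges
                      if k == j and i < j - 1}
        if len(back_pages) > 1:
            return False
    return True

# the 4-cycle with spine order 0,1,2,3: the single back-edge (0, 3)
# can be placed on either page, so every vertex is one-sided
print(is_one_sided(4, [(0, 1), (1, 2), (2, 3), (0, 3)], {(0, 3): 1}))
```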
Figure 8: How to add leaves. The leaf is drawn as a square; its new adjacent edge is drawn bold.

Before we show how to split a single vertex \(v\) of degree four into an edge \(uw\in M\), we first give an overview of the order in which the different splits, the _far splits_ and the _local splits_, are applied. To split all the degree-four vertices we proceed as follows:
First we split all vertices which are subject to a far split, from the outside inwards. More precisely, define a partial order on the edges incident to vertices subject to a far split in the following way: every edge \(e\) defines a region \(R_{e}\) which is enclosed by \(e\) and the spine. Now order the edges by the containment order of the regions \(R_{e}\). From this poset, choose a maximal edge and then a vertex that needs a far split incident to that edge. When no further far split is possible we do all the local splits. These splits are purely local, so they cannot conflict with each other; therefore their order can be chosen arbitrarily.
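Since the regions \(R_{e}\) of same-page edges nest exactly when their spine intervals nest, a valid outside-in processing order is easy to compute. A minimal sketch, our own illustration, assuming each edge is recorded by its two spine positions and its page:

```python
def far_split_order(edges):
    """Process edges from the outside in: each edge e = (left, right, page)
    spans a spine interval, and the region R_e of a same-page edge nests
    inside R_f only if its interval nests inside that of f.  Since proper
    nesting strictly shrinks the interval, sorting by decreasing interval
    length is a valid 'maximal elements first' linear extension.
    """
    return sorted(edges, key=lambda e: e[1] - e[0], reverse=True)

print(far_split_order([(2, 4, +1), (0, 5, +1), (1, 3, -1)]))
# [(0, 5, 1), (2, 4, 1), (1, 3, -1)] -- the outermost edge comes first
```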
We label the edges of \(v\) in clockwise order as \(e_{1},e_{2},e_{3},e_{4}\) such that in \(G\) the edges \(e_{1},e_{2}\) are incident to \(u\) and \(e_{3},e_{4}\) are incident to \(w\). If the two angles \(\angle e_{2}e_{3}\) and \(\angle e_{4}e_{1}\) together intersect both half-planes defined by the spine, then it is possible to select two points left and right of the point representing \(v\) in \(\Gamma\) and to slightly detour the edges \(e_{i}\) such that no crossings are introduced and one of the two points is incident to \(e_{1},e_{2}\) and the other to \(e_{3},e_{4}\). The addition of an edge connecting the two points completes the split of \(v\) into the edge \(uw\in M\). Figure 9 shows a few instances of this _local_ split.
The above condition about the two angles is not fulfilled if and only if all four edges of \(v\) emanate into the same half-plane, say the upper one, and the clockwise numbering starting at the \(x\)-axis is either \(e_{4},e_{1},e_{2},e_{3}\) or \(e_{2},e_{3},e_{4},e_{1}\). The two cases are the same up to exchanging the names of \(u\) and \(w\), so we can assume the first one. A more important distinction is whether most \(e_{i}\) end to the left or to the right of \(v\). Note that in the ordering given by \(\Gamma\), all \(e_{i}\) go to the same side, since they are all in the same half-plane. However, if \(v\) is not the first vertex we are splitting, it may happen that a single edge on the spine goes to the other side, see Figure 10. For all \(i\in[4]\) let \(v_{i}\) be the endpoint of \(e_{i}\) other than \(v\). While it can happen that some of the \(v_{i}\) coincide due to multi-edges, we first discuss the case that they do not. In the left case we put \(u\) slightly left of \(v_{1}\), while in the right case \(u\) is put slightly right of \(v_{2}\), connecting \(u\) to this close vertex by a spine edge. In both cases we leave \(w\) at the former position of \(v\). Figure 10 shows the right case and Figure 11 the left.
Figure 10: Far split with \(v_{i}\) to the right except for the spine edge neighbour.

Figure 9: Four cases for the local split of a vertex \(v\).

To see that in the left case edges \(uv_{2}\) and \(uw\) are completely free of crossings, observe that we can route them close to the path \(v_{2}vv_{1}\) and the edge \(v_{1}v\) respectively in the original drawing (dashed in Figure 11). It is important to note here that, due to the order in which we chose to do the splits, \(v_{1}\) and \(v_{2}\) are still original vertices of \(B\); that is, they have not been split in the upper half-plane and thus still do not have two edges emanating into the upper half-plane to both sides. Therefore, similarly to the argument for adding leaves, no edge incident to \(v_{1}\) crosses \(uw\) or \(uv_{2}\). The right case is analogous; just exchange the roles of \(v_{1}\) and \(v_{2}\).
This kind of split is a _far_ split. For the purposes of incidence in the poset structure mentioned above, vertices are not only considered incident to any edge they are an endpoint of; the spine neighbour of \(u\) (\(v_{1}\) or \(v_{2}\)) is also considered to be incident to the edge \(uw\). For illustration, consider the outermost black edge in Figure 10 (left); it is considered incident to \(v\).
In the following we describe how the different kinds of splits are affected by the presence of multi-edges. The first thing to note is that local splits can be done in the same way, since we did not mention the end vertices at all.
Concerning the far splits, we first consider the case that exactly two edges go from one vertex to another. As depicted in Figures 10 and 11, the case \(v_{2}=v_{3}\) and/or \(v_{4}=v_{1}\) is unproblematic; in this case we keep the dashed line(s) in the drawing. Double-edges are consecutive, because non-consecutive double-edges are separating 2-cycles, which we avoided in the construction. Thus the last case of a double-edge to consider is \(v_{1}=v_{2}\). In this case, we follow the same strategy for the placement of \(u\) and \(w\), but this results in a double-edge on the spine between \(u\) and \(v_{1}=v_{2}\), see Figure 12. Since in later local splits we may need to know which half-plane the angle between the two spine edges belongs to, we interpret one of these edges as a spine edge and the other as an edge above or below the spine, depending on the right vertex of the two. This might be \(u\) or \(v_{1}\), depending on whether we are in the left or the right case. For the one-sidedness condition it is important to choose this direction so that all left neighbours of the right vertex of the two are reached by edges emanating into the same half-plane and/or spine edges.
Figure 11: Far split within the gray region with \(v_{i}\) to the left in the upper half-plane.

Figure 12: If \(v_{1}=v_{2}\), a double spine edge is created. Here \(e_{3}=vv_{3}\) is a spine edge.

Secondly, if there are three edges between a left vertex \(v_{\ell}\) and a right vertex \(v_{r}\), say in the upper half-plane, we will split both simultaneously; for an illustration, see Figure 13. Since three edges go between these two vertices, there is just one more edge \(e\) left at \(v_{\ell}\). Therefore we can find a free place on the spine just to the right or to the left of \(v_{\ell}\), because the edge \(e\) is on the other side. Now we split \(v_{\ell}\) into \(u_{\ell}\) and \(w_{\ell}\) and \(v_{r}\) into \(u_{r}\) and \(w_{r}\) simultaneously, where \(w_{\ell}\) and \(w_{r}\) are the vertices with the edge that goes somewhere else on both sides. From left to right we put \(u_{r}\) and then \(u_{\ell}\) just left of the position of \(v_{\ell}\), which is the new position of \(w_{\ell}\). The three of them are connected by spine edges; in addition, \(u_{r}\) and \(w_{\ell}\) share an edge in the lower half-plane. These edges are not crossed, because the vertices are close enough together. Finally we put \(w_{r}\) at the position of \(v_{r}\) and add edges to \(w_{r}\) and \(w_{\ell}\) in the upper half-plane. These edges are not crossed, because any edge crossing them would have crossed the triple edge in the original drawing.
This kind of split is a _double_ split. These splits are purely local, so they can be performed together with the local splits in the end.
The last case is that all four edges of a given vertex go to the same vertex; this is then a whole connected component of the bipartite graph, because it has maximum degree 4. This component goes back to a \(K_{4}\) component in the cubic graph that had two independent edges contracted. A one-sided Hamiltonian cycle of \(K_{4}\) is illustrated in Figure 2. We apply another local double split, which consists of replacing the 4 parallel edges by this drawing, embedded close to the place of one of the original vertices.
This completes the proof of Theorem 4.2.
## 6 2-Trees
From the positive results in Sections 4 and 5 one might expect that "sufficiently sparse" planar graphs are POSH. This section shows that 2-trees are not.
A 2-_tree_ is a graph which can be obtained, starting from a \(K_{3}\), by repeatedly selecting an edge of the current graph and adding a new vertex which is made adjacent to the endpoints of that edge. We refer to this operation as _stacking_ a vertex over an edge.
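As a small illustration only (the representation by adjacency sets is our choice, not the paper's), the stacking operation can be written as:

```python
def stack(graph, edge, new_vertex):
    """Stack new_vertex over edge (u, v): add it adjacent to both endpoints."""
    u, v = edge
    graph[new_vertex] = {u, v}
    graph[u].add(new_vertex)
    graph[v].add(new_vertex)

# start from a K_3 given as adjacency sets, then stack one vertex on edge (0, 1)
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
stack(g, (0, 1), 3)
n_edges = sum(len(nbrs) for nbrs in g.values()) // 2
assert n_edges == 2 * len(g) - 3   # 5 edges on 4 vertices, as expected
```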
From the recursive construction it follows that a 2-tree on \(n\) vertices is a planar graph with \(2n-3\) edges. We also mention that 2-trees are series-parallel planar graphs. Another well studied class which contains 2-trees as a subclass is the class of (planar) Laman graphs.
Fulek and Toth have shown that planar 3-trees admit \(n\)-universal point sets of size \(O(n^{3/2}\log n)\). Since every 2-tree is an induced subgraph of a planar 3-tree the bound carries over to this class.
There is a 2-tree \(G\) on 499 vertices that is not POSH.
Throughout the proof we assume that a 2-tree \(G\) is given together with a left to right placement \(v_{1},\ldots,v_{n}\) of the vertices on the \(x\)-axis such that adding the spine edges and the reverse edge \(v_{n}v_{1}\) to \(G\) yields a plane graph with a one-sided Hamiltonian cycle.
For an edge \(e\) of \(G\) we let \(X(e)\) be the set of vertices which are stacked over \(e\) and \(S(e)\) the set of edges which have been created by stacking over \(e\), i.e., each edge in \(S(e)\) has one vertex of \(e\) and one vertex in \(X(e)\). We partition the set \(X(e)\) of an edge \(e=v_{i}v_{j}\) with \(i<j\) into a
left part \(\mathit{X\!L}(e)=\{v_{k}\in X(e):k<i\}\), a middle part \(\mathit{X\!M}(e)=\{v_{k}\in X(e):i<k<j\}\), and a right part \(\mathit{X\!R}(e)=\{v_{k}\in X(e):j<k\}\).

Figure 13: Doing a double split means splitting two vertices simultaneously.
Claim 3. For every edge \(e\), \(|\mathit{X\!R}(e)|\leq 2\).
Suppose that \(|\mathit{X\!R}(e)|\geq 3\). Each vertex in this set has all its back-edges on the same side, so by the pigeonhole principle two of them use the same side for the back-edges to the vertices of \(e\). This implies a crossing pair of edges, a contradiction.
Claim 4. If for all \(e^{\prime}\in S(e)\) we have \(|X(e^{\prime})|\geq 3\), then \(|\mathit{X\!M}(e)|\leq 3\).
Suppose that \(e=v_{i}v_{j}\) with \(i<j\) is in the upper half-plane and there are four vertices \(x_{1},x_{2},x_{3},x_{4}\) in \(\mathit{X\!M}(e)\). One-sidedness implies that the four edges \(x_{k}v_{j}\) are in the upper half-plane. Thus if \(x_{1},x_{2},x_{3},x_{4}\) is the left to right order, then the edges \(v_{i}x_{2}\), \(v_{i}x_{3}\), and \(v_{i}x_{4}\) have to be in the lower half-plane. Now let \(e^{\prime}=v_{i}x_{3}\) and consider the three vertices in \(X(e^{\prime})\). Two of them, say \(y_{1},y_{2}\), are on the same side of \(x_{3}\). First suppose \(y_{1},y_{2}\in X(e^{\prime})\) are left of \(x_{3}\). The edges of \(v_{i}x_{2}\) and \(x_{2}v_{j}\) enforce that \(y_{1},y_{2}\) are between \(x_{2}\) and \(x_{3}\). Due to edge \(x_{2}v_{j}\) the edges \(v_{i}y_{1},v_{i}y_{2}\) are in the lower half-plane. One-sidedness at \(x_{3}\) requires that \(y_{1}x_{3}\) and \(y_{2}x_{3}\) are also in the lower half-plane. This makes a crossing unavoidable.
Now suppose that \(y_{1},y_{2}\in X(e^{\prime})\) are right of \(x_{3}\). The edges \(v_{i}x_{4}\) and \(x_{4}v_{j}\) enforce that \(y_{1},y_{2}\) are between \(x_{3}\) and \(x_{4}\). Due to the edge \(x_{3}v_{j}\) the edges \(v_{i}y_{1}\) and \(v_{i}y_{2}\) are in the lower half-plane. Now let \(y_{1}\) be left of \(y_{2}\). One-sidedness at \(y_{2}\) requires that \(x_{3}y_{2}\) is also in the lower half-plane, whence, there is a crossing between \(v_{i}y_{1}\) and \(x_{3}y_{2}\). This completes the proof of the claim.
Claim 5. If \(|\mathit{X\!L}(e)|\geq 2\) and \(x\) is the rightmost element of \(\mathit{X\!L}(e)\), then \(|\mathit{X\!L}(e^{\prime})|\leq 1\) for some edge \(e^{\prime}\in S(e)\) incident with \(x\), and \(\mathit{X\!R}(e^{\prime})=\emptyset\) for both such edges.
Suppose that \(e=v_{i}v_{j}\) with \(i<j\) is in the upper half-plane and there are two vertices \(x_{1},x_{2}\) in \(\mathit{X\!L}(e)\). We assume that \(x_{2}\) is the rightmost element of \(\mathit{X\!L}(e)\). From one-sidedness at \(v_{j}\) we know that \(x_{1}v_{j}\) and \(x_{2}v_{j}\) are in the upper half-plane. Now \(x_{1}v_{i}\) and hence also \(x_{2}v_{i}\) are in the lower half-plane. All the vertices of \(X(x_{2}v_{i})\) and \(X(x_{2}v_{j})\) are in the region bounded by \(x_{1}v_{j},v_{j}v_{i},v_{i}x_{1}\); in particular \(\mathit{X\!R}(e^{\prime})=\emptyset\) for both. Suppose for contradiction that we have \(y_{1},y_{2}\in\mathit{X\!L}(x_{2}v_{i})\) and \(z_{1},z_{2}\in\mathit{X\!L}(x_{2}v_{j})\). By one-sidedness the edges from \(x_{2}\) to the four vertices \(y_{1},y_{2},z_{1},z_{2}\) are in the same half-plane. If they are in the lower half-plane and \(y_{1}\) is left of \(y_{2}\), there is a crossing between \(y_{1}x_{2}\) and \(y_{2}v_{i}\). If they are in the upper half-plane and \(z_{1}\) is left of \(z_{2}\), there is a crossing between \(z_{1}x_{2}\) and \(z_{2}v_{j}\). The contradiction shows that \(|\mathit{X\!L}(x_{2}v_{i})|\leq 1\) or \(|\mathit{X\!L}(x_{2}v_{j})|\leq 1\); since \(x=x_{2}\), this completes the proof of the claim.
Figure 14: Illustrating the proofs of the claims.

We are ready to define the graph \(G\) and then use the claims to prove that \(G\) is not POSH. The graph \(G\) contains a _base edge_ \(e\) and seven vertices stacked on \(e\), i.e., \(|X(e)|=7\). For each edge \(e^{\prime}\in S(e)\) there are five vertices stacked on \(e^{\prime}\). Finally, for each edge \(e^{\prime\prime}\) introduced like that, three vertices are stacked on \(e^{\prime\prime}\). Note that there are \(7\cdot 2=14\) edges \(e^{\prime}\), \(14\cdot 5\cdot 2=140\) edges \(e^{\prime\prime}\) and \(140\cdot 3\cdot 2=840\) edges introduced by stacking on an edge \(e^{\prime\prime}\). In total the number of edges is \(995=2n-3\); hence, the graph has \(499\) vertices.
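As a sanity check, the edge and vertex counts above can be verified mechanically; this snippet is our own illustration:

```python
# Edges created at each level of the stacking construction described above.
e1 = 7 * 2          # stacking 7 vertices on the base edge e gives 14 edges e'
e2 = e1 * 5 * 2     # 5 vertices on each of the 14 edges e' -> 140 edges e''
e3 = e2 * 3 * 2     # 3 vertices on each of the 140 edges e'' -> 840 edges
edges = 1 + e1 + e2 + e3
vertices = 2 + 7 + e1 * 5 + e2 * 3   # endpoints of e, then the stacked vertices
assert (edges, vertices) == (995, 499)
assert edges == 2 * vertices - 3     # the edge count of a 2-tree
```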
Now suppose that \(G\) is POSH and let \(v_{1},\ldots,v_{n}\) be the order of vertices on the spine of a certifying 2-page book embedding. Let \(e=v_{i}v_{j}\) with \(i<j\) be the base edge. Assume by symmetry that \(e\) is in the upper half-plane. From Claim 3 we get \(|\mathit{X\!R}(e)|\leq 2\) and from Claim 4 we get \(|\mathit{X\!M}(e)|\leq 3\); it follows that \(|\mathit{X\!L}(e)|\geq 2\). Let \(x_{1}\) and \(x_{2}\) be elements of \(\mathit{X\!L}(e)\) such that \(x_{2}\) is the rightmost element of \(\mathit{X\!L}(e)\). Let \(e^{\prime}=x_{2}v_{i}\) and \(e^{\prime\prime}=x_{2}v_{j}\); then \(\mathit{X\!R}(e^{\prime})=\emptyset=\mathit{X\!R}(e^{\prime\prime})\) by Claim 5. From Claim 4 applied to \(e^{\prime}\) and \(e^{\prime\prime}\) we deduce that \(|\mathit{X\!M}(e^{\prime})|\leq 3\) and \(|\mathit{X\!M}(e^{\prime\prime})|\leq 3\). Hence \(|\mathit{X\!L}(e^{\prime})|\geq 2\) and \(|\mathit{X\!L}(e^{\prime\prime})|\geq 2\). This contradicts Claim 5. Thus there is no spine ordering for \(G\) which leads to a one-sided crossing-free 2-page book embedding.
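For very small graphs, POSH-ness as characterised above (a one-sided crossing-free 2-page book embedding once the spine edges and the closing edge are added) can be tested by exhaustive search. The following sketch is our own illustration with exponential running time, not a construction from this paper:

```python
from itertools import permutations, product

def interleave(a, b):
    """Two same-page chords cross iff their spine endpoints interleave."""
    (i, j), (k, l) = a, b
    return (i < k < j < l) or (k < i < l < j)

def is_posh(n, edge_list):
    """Exhaustive POSH test: search for a spine order and page assignment
    giving a crossing-free 2-page book embedding that is one-sided after
    the spine edges and the closing edge are added.  Tiny graphs only."""
    for order in permutations(range(n)):
        pos = {v: p for p, v in enumerate(order)}
        chords = {tuple(sorted((pos[u], pos[v]))) for u, v in edge_list}
        chords.add((0, n - 1))                            # closing cycle edge
        chords = [c for c in chords if c[1] - c[0] > 1]   # spine edges are free
        for pages in product((1, -1), repeat=len(chords)):
            assignment = dict(zip(chords, pages))
            one_sided = all(
                len({assignment[c] for c in chords
                     if c[1] == j and c[0] < j - 1}) <= 1
                for j in range(n))
            planar = not any(
                assignment[a] == assignment[b] and interleave(a, b)
                for a in chords for b in chords if a < b)
            if one_sided and planar:
                return True
    return False

# K_4 is POSH (cf. the one-sided Hamiltonian cycle of K_4 mentioned above)
print(is_posh(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```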
## 7 Concluding remarks
We have examined the exploding double chain as a special point set (order type) and shown that the initial part \(H_{n}\) of size \(2n-2\) is \(n\)-universal for graphs on \(n\) vertices that are POSH. We believe that the class of POSH graphs is quite rich. On the sparse side, the result on bipartite graphs might be generalized, while for triangulations, the sheer number of Hamiltonian cycles in 5-connected graphs [1] makes it likely that one of them is one-sided.

Conjecture. Every triangle-free planar graph is POSH.

Conjecture. Every 5-connected planar triangulation is POSH.
We have shown that 2-trees and their superclasses series-parallel and planar Laman graphs are not contained in the class \(\mathcal{C}^{\prime}\) of POSH graphs. The question whether these classes admit universal point sets of linear size remains intriguing.
|
2309.15239 | Degenerate interpretations of O$_3$ spectral features in exoplanet
atmosphere observations due to stellar UV uncertainties: a 3D case study with
TRAPPIST-1e | TRAPPIST-1e is a potentially habitable terrestrial exoplanet orbiting an
ultra-cool M Dwarf star and is a key target for observations with the James
Webb Space Telescope (JWST). One-dimensional photochemical modelling of
terrestrial planetary atmospheres has shown the importance of the incoming
stellar UV flux in modulating the concentration of chemical species, such as
O$_3$ and H$_2$O. In addition, three-dimensional (3D) modelling has
demonstrated anisotropy in chemical abundances due to transport in tidally
locked exoplanet simulations. We use the Whole Atmosphere Community Climate
Model Version 6 (WACCM6), a 3D Earth System Model, to investigate how
uncertainties in the incident UV flux, combined with transport, affect
observational predictions for TRAPPIST-1e (assuming an initial Earth-like
atmospheric composition). We use two semi-empirical stellar spectra for
TRAPPIST-1 from the literature. The UV flux ratio between them can be as large
as a factor of 5000 in some wavelength bins. Consequently, the
photochemically-produced total O$_3$ columns differ by a factor of 26. Spectral
features of O$_3$ in both transmission and emission spectra vary between these
simulations (e.g. differences of 19 km in transmission spectra effective
altitude for O$_3$ at 0.6 $\mu$m). This leads to potential ambiguities when
interpreting observations, including overlap with scenarios that assume
alternative O$_2$ concentrations. Hence, to achieve robust interpretations of
terrestrial exoplanetary spectra, characterisation of the UV spectra of their
host stars is critical. In the absence of such stellar measurements,
atmospheric context can still be gained from other spectral features (e.g.
H$_2$O), or by comparing direct imaging and transmission spectra in
conjunction. | Gregory Cooke, Dan Marsh, Catherine Walsh, Allison Youngblood | 2023-09-26T20:05:40Z | http://arxiv.org/abs/2309.15239v1 | Degenerate interpretations of O\({}_{3}\) spectral features in exoplanet atmosphere observations due to stellar UV uncertainties: a 3D case study with TRAPPIST-1e
###### Abstract
TRAPPIST-1e is a potentially habitable terrestrial exoplanet orbiting an ultra-cool M Dwarf star and is a key target for observations with the James Webb Space Telescope (JWST). One-dimensional photochemical modelling of terrestrial planetary atmospheres has shown the importance of the incoming stellar UV flux in modulating the concentration of chemical species, such as O\({}_{3}\) and H\({}_{2}\)O. In addition, three-dimensional (3D) modelling has demonstrated anisotropy in chemical abundances due to transport in tidally locked exoplanet simulations. We use the Whole Atmosphere Community Climate Model Version 6 (WACCM6), a 3D Earth System Model, to investigate how uncertainties in the incident UV flux, combined with transport, affect observational predictions for TRAPPIST-1e (assuming an initial Earth-like atmospheric composition). We use two semi-empirical stellar spectra for TRAPPIST-1 from the literature. The UV flux ratio between them can be as large as a factor of 5000 in some wavelength bins. Consequently, the photochemically-produced total O\({}_{3}\) columns differ by a factor of 26. Spectral features of O\({}_{3}\) in both transmission and emission spectra vary between these simulations (e.g. differences of 19 km in transmission spectra effective altitude for O\({}_{3}\) at 0.6 um). This leads to potential ambiguities when interpreting observations, including overlap with scenarios that assume alternative O\({}_{2}\) concentrations. Hence, to achieve robust interpretations of terrestrial exoplanetary spectra, characterisation of the UV spectra of their host stars is critical. In the absence of such stellar measurements, atmospheric context can still be gained from other spectral features (e.g. H\({}_{2}\)O), or by comparing direct imaging and transmission spectra in conjunction.
Exoplanets (498) -- Exoplanet atmospheres (487) -- Transmission spectroscopy (2133) -- Exoplanet atmospheric composition (2021)

G. J. Cooke, D. R. Marsh, C. Walsh, A. Youngblood
## 1 Introduction
On account of their frequency and relative ease of characterisation, planetary systems orbiting M dwarf stars are prime targets in the search for potentially habitable exoplanets. TRAPPIST-1 is an M8.5V star, orbited by seven terrestrial exoplanets, and each of them could be tidally locked to their host star (Gillon et al., 2017). TRAPPIST-1e is of particular interest because with current knowledge it is more likely than the close-in planets to have retained its atmosphere due to lower predicted escape rates (Dong et al., 2018), and is probably more able to sustain surface liquid water when compared to the outer planets which receive less stellar irradiation (Gillon et al., 2017; Wolf, 2017). To begin to characterise the TRAPPIST-1 exoplanets and determine the composition of their atmospheres, at the time of writing, several observational programs with JWST are scheduled, including the observation of four transits of TRAPPIST-1e in 2023 (see program 1331)1.
Footnote 1: [https://www.stsci.edu/jwst/science-execution/program-information.html?id=1331](https://www.stsci.edu/jwst/science-execution/program-information.html?id=1331), accessed Wed April 12 2023
In general, the detection of molecular oxygen (O\({}_{2}\)) on an exoplanet is of profound interest because of its importance for life on Earth (Segura et al., 2003; Meadows et al., 2018). Ozone (O\({}_{3}\)) is produced in exoplanetary atmospheres by
UV radiation which is able to dissociate O\({}_{2}\), and it has been calculated that in some situations the detection of O\({}_{3}\) is easier to achieve than a detection of O\({}_{2}\); for example, for low O\({}_{2}\) concentrations like those potentially present during Earth's Proterozoic eon, but where O\({}_{3}\) concentrations are still detectable (Kozakis et al., 2022). Thus, in such cases, it has been proposed that the detection of O\({}_{3}\) may be used as a proxy to confirm the presence of O\({}_{2}\) (Leger et al., 1993; Segura et al., 2003; Meadows et al., 2018; Quanz et al., 2021).
One-dimensional (1D) photochemical modelling has demonstrated that planetary atmospheric composition (including O\({}_{3}\) and H\({}_{2}\)O) is influenced by the strength and shape of the incoming ultraviolet (UV) radiation from the host star (see Grenfell et al., 2014; Rugheimer et al., 2013; Kozakis et al., 2022; Meadows et al., 2018, and references therein). For example, Teal et al. (2022) used MUSCLES Treasury survey M-dwarf spectra combined with UV spectra reconstructions as stellar spectra input to Atmos (a coupled 1D photochemistry and climate model), and similarly demonstrated that UV irradiation can modulate hydrocarbon haze concentrations. Because the atmospheric composition with respect to altitude affects molecular detectability in remote sensing, the link between O\({}_{3}\) abundance and O\({}_{2}\) abundance will be difficult to ascertain because it depends on several parameters, including the catalytic cycles that remove O\({}_{3}\) (e.g. HO\({}_{\rm x}\), NO\({}_{\rm x}\), ClO\({}_{\rm x}\) and BrO\({}_{\rm x}\) chemical families), and atmospheric pressure. A well-characterised spectrum of the host star is required for confident modelling of planetary climate (Eager-Nash et al., 2020), atmospheric chemistry (Kozakis et al., 2022), and atmospheric escape (Dong et al., 2018). However, the host star's spectral energy distribution may not be known to high precision when analysing and interpreting exoplanet observations.
The UV flux from TRAPPIST-1 remains uncertain because of the intrinsic faintness of the star (\(V=18.798\) mag; Costa et al., 2006). Peacock et al. (2019), henceforth known as P19, modelled the spectrum of TRAPPIST-1. To do this, they used the PHOENIX stellar atmospheric code (Baron and Hauschildt, 2007; Hauschildt, 1993; Hauschildt and Baron, 2006), and added a treatment of the chromosphere to produce synthetic stellar spectra of cool dwarf stars, including TRAPPIST-1, whose ultraviolet emission has a negligible contribution from the photosphere. More recently, Wilson et al. (2021), hereafter known as W21, used new observations of TRAPPIST-1 to create a semi-empirical spectrum for use in atmospheric modelling simulations. Whilst neither spectrum wholly represents the true stellar irradiation environment of the TRAPPIST-1 planets, the W21 spectrum is in significantly better agreement with available observations of TRAPPIST-1.
In addition to the incoming stellar spectrum, the 3D transport and chemistry of the exoplanet's atmosphere is important for understanding the distribution and abundance of chemical species. Chen et al. (2019) investigated exoplanets orbiting at the inner edge of the habitable zone using a 3D chemistry climate model (WACCM4), showing how different assumed UV spectra can influence atmospheric mixing ratios of species such as H\({}_{2}\)O, O\({}_{3}\), and H. Additionally, 3D simulation studies have demonstrated the influence of UV radiation on 3D transport (Chen et al., 2019), stratospheric temperature (Godolt et al., 2015; Chen et al., 2019), and the distribution and abundance of chemical species (Chen et al., 2018, 2019, 2021; Braam et al., 2022; Ridgway et al., 2022). Tidally locked terrestrial exoplanets modelled in 3D exhibit atmospheric jets that transport heat and chemical constituents to the night side (Showman and Polvani, 2011; Eager-Nash et al., 2020; Yates et al., 2020). Proedrou and Hocke (2016), Chen et al. (2018) and Yates et al. (2020) found that O\({}_{3}\), which is photochemically generated on the dayside, can be transported to the night side, where its lifetime increases due to the lack of UV irradiation and a reduction in catalytic cycle destruction. This body of previous work motivates the need to use 3D models when investigating the climate and chemistry of specific exoplanets, in particular, with respect to their molecular observability linked to the oxygenation state of the atmosphere.
In this study we simulate the atmosphere of TRAPPIST-1e using the WACCM6 Earth System Model and the stellar spectra from P19 (model 1A, version 1) and W21 (version 7), which differ in UV flux in some wavelength bins by up to a factor of 5000. Our aim is to quantify the effects of such uncertainties in the strength of UV from the host star on the climate and composition of the atmosphere. This is the first time a 3D global climate model has been used to simultaneously assess the influence of uncertain UV spectra and transport on the climate and chemistry of TRAPPIST-1e. Additionally, we simulate possible future observations (transmission and emission spectra) of TRAPPIST-1e using the outputs from the WACCM6 simulations. We discuss the implications of uncertainties in the stellar UV spectrum in the interpretation of future observations of terrestrial exoplanets.
## 2 Methods
### UV spectra input
This work employs two different assumed stellar spectra as input to the simulations. P19 (Peacock et al., 2019) generated three different models of TRAPPIST-1's spectrum (Model 1A, 2A, and 2B). Model 1A was created such that the emission was benchmarked to the Ly-\(\alpha\) reconstruction presented in Bourrier et al. (2017), who used the Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph instrument to observe the TRAPPIST-1 Ly-\(\alpha\) line. Using an alternative approach to construct Model 2A and Model 2B, P19 aligned the stellar emission to be within the range of distance-corrected Galaxy Evolution Explorer NUV photometry of stars with a spectral type akin to TRAPPIST-1, whilst remaining compatible with FUV upper limits. The EUV estimates were extracted from empirical scaling relationships based on X-ray and Ly-\(\alpha\) emission.
W21 (Wilson et al., 2021) used new HST (1100–5500 Å) and XMM-Newton (10–50 Å) observations of TRAPPIST-1 from the Mega-MUSCLES (Measurements of the Ultraviolet Spectral Characteristics of Low-Mass Exoplanetary Systems) Treasury Survey (Froning et al., 2019; Wilson et al., 2021) to construct a 5 Å – 100 um spectrum of the star. Four models were used to fill in gaps in wavelength coverage, including a PHOENIX model for wavelengths \(>5500\) Å (\(>0.55\) um). Because of TRAPPIST-1's relatively low luminosity, W21 substituted the noisy 1100–4200 Å HST spectrum with a semi-empirical, noiseless spectrum that reproduced the measured flux of detected UV emission lines and agreed with the upper limits on the stellar continuum established by the HST spectra.
W21 found that whilst the P19 Model 1A shows good agreement with the C II and Ca II stellar lines, several other lines are inconsistent with measured fluxes, exhibiting fluxes \(\sim\)10 times higher than the W21 upper limits permit. In the WACCM6 simulations, we decided to use both the P19 and W21 spectra in order to illustrate important differences that may occur in instances where the exoplanet's photochemical environment is highly uncertain.
### WACCM6
TRAPPIST-1 is an ultra-cool M dwarf at a distance of 12.4 pc. Its stellar properties are summarised in Table 1. We assume that TRAPPIST-1e receives 900 W m\({}^{-2}\) of irradiation (0.66 \(S_{\oplus}\), where \(S_{\oplus}\) is the total insolation received by the Earth). This is consistent with the value used in the TRAPPIST-1 Habitable Atmosphere Intercomparison project (THAI; Fauchez et al., 2020; Turbet et al., 2022), although note that the latest data available in the NASA Exoplanet Archive (Akeson et al., 2013) lists a value of \(0.646\pm 0.025\) \(S_{\oplus}\) (Agol et al., 2021).
We use the Earth System Model WACCM6 to simulate the climate of TRAPPIST-1e with the properties indicated in Table 1, using values from Delrez et al. (2018) and Grimm et al. (2018). Note that the simulations were started in 2020, before Agol et al. (2021) published their work. WACCM6 is a specific configuration of the Community Earth System Model version 2 (CESM2). The model release we use in this work is CESM2.1.3. We use a pre-industrial atmosphere (approximating the atmosphere of 1850 in terms of pollutants and greenhouse gas mixing ratios) with the modern ocean and land configuration, a horizontal resolution of \(1.875^{\circ}\) by \(2.5^{\circ}\) (latitude by longitude), and 70 vertical atmospheric levels. The ocean and atmosphere are fully interactive, meaning that they respond to physical perturbations such as temperature, or in the case of the atmosphere, chemical perturbations. Because TRAPPIST-1e is suspected to be tidally locked (Gillon et al., 2017), we lock the substellar point by fixing the solar zenith angle in each grid cell, and we set the exoplanet's obliquity and orbital eccentricity to zero (note that the eccentricity may be non-zero, albeit \(<0.01\); Luger et al., 2017). The substellar point is placed in the Pacific Ocean at \(180^{\circ}\) longitude and \(0^{\circ}\) latitude. The chemical mechanism, which is described in Emmons et al. (2020), has 98 chemical species and 298 chemical reactions (including photochemical reactions). Absorption by CO\({}_{2}\) and H\({}_{2}\)O in the Schumann-Runge bands is included (Ji et al., 2023). The full details of the model set-up, alongside simulation scripts, are available via GitHub.
Footnote 4: [http://www.cesm.ucar.edu/models/cesm2/](http://www.cesm.ucar.edu/models/cesm2/)
Footnote 5: [https://github.com/exo-cesm/CESM2.1.3/tree/main/Tidally_locked_exoplanets](https://github.com/exo-cesm/CESM2.1.3/tree/main/Tidally_locked_exoplanets)
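The substellar-point locking described above amounts to fixing, in every grid cell, the solar zenith angle at its great-circle angular distance from the substellar point at (0\({}^{\circ}\), 180\({}^{\circ}\)). A minimal sketch of that geometry follows; this is our own illustration and not WACCM6 source code:

```python
import numpy as np

def fixed_cos_sza(lat_deg, lon_deg, sublat_deg=0.0, sublon_deg=180.0):
    """Cosine of the time-independent solar zenith angle for a tidally
    locked planet: the great-circle angle between a grid cell and the
    fixed substellar point, via the spherical law of cosines."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    slat, slon = np.radians(sublat_deg), np.radians(sublon_deg)
    return (np.sin(lat) * np.sin(slat)
            + np.cos(lat) * np.cos(slat) * np.cos(lon - slon))

print(fixed_cos_sza(0.0, 180.0))   # 1.0 at the substellar point
print(fixed_cos_sza(0.0, 0.0))     # -1.0 at the antistellar point
```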
We scale the P19 and W21 spectra to the irradiance received by TRAPPIST-1e (900 W m\({}^{-2}\)), rebinning them to match the wavelength grid required for WACCM6 simulations. The resulting spectra are shown in Fig. 1. For the wavelength regions over which O\({}_{2}\) and O\({}_{3}\) photolyse, the integrated flux is listed in Table 2.
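A minimal sketch of the scaling-and-rebinning step, assuming wavelengths in nm, flux density in W m\({}^{-2}\) nm\({}^{-1}\), and trapezoidal integration (the function name and grid handling are our assumptions, not the tooling actually used):

```python
import numpy as np

def scale_and_rebin(wl_nm, flux, total_wm2, bin_edges_nm):
    """Scale a spectrum to a target bolometric irradiance (e.g. 900 W m-2)
    and rebin it onto coarser bins, conserving the energy in each bin.

    wl_nm must be increasing; flux is the flux density in W m-2 nm-1.
    Returns the mean flux density in each target bin.
    """
    flux = flux * total_wm2 / np.trapz(flux, wl_nm)      # rescale total power
    fine = np.linspace(bin_edges_nm[0], bin_edges_nm[-1], 200_000)
    f = np.interp(fine, wl_nm, flux)
    out = []
    for lo, hi in zip(bin_edges_nm[:-1], bin_edges_nm[1:]):
        sel = (fine >= lo) & (fine < hi)
        out.append(np.trapz(f[sel], fine[sel]) / (hi - lo))
    return np.array(out)
```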
We present ten simulations with WACCM6, five with the P19 spectrum, and five with the W21 spectrum - see Table 3 for a summary. The initial conditions for the five simulations with each spectrum consist of one with the standard initial pre-industrial Earth composition (PI), then three lower O\({}_{2}\) simulations with a composition with 10,
100, and 1000 times less O\({}_{2}\) (10% PAL, 1% PAL, and 0.1% PAL), and one in which the planet is not tidally locked (noTL). These simulations allow us to assess the influence of different strengths of incoming UV spectra, the influence of tidal locking, and the effects of reducing O\({}_{2}\) in order to quantify any degeneracies between O\({}_{2}\), O\({}_{3}\), and incident UV light. Each simulation is run for over 250 model Earth years, and the last ten years of the simulation are used for time-averaged results (Figs 2 - 6 and Figs 9 - 11). The WACCM6 simulations were run on 120 cores at a model cost of 1,332 pe-hours per simulated year. A 250 year simulation therefore costs 333,000 pe-hours.
We use instantaneous WACCM6 output (at a model time step rather than an average over model time steps) for each produced spectrum. The WACCM6 instantaneous data are rebinned to a resolution of 10\({}^{\circ}\) in longitude only and we keep the same latitudinal grid resolution (1.875\({}^{\circ}\)). This is done to reduce the data size so it is compatible with GlobES in PSG, and is the same approach as used in Cooke et al. (2023). The same data at the same time step are used for both transmission spectra and emission spectra in this paper. PSG ingests the data and integrates across the whole observable disk to produce a reflection or emission spectrum. It uses the grid cells at the terminator to produce a transmission spectrum. Because model grid cells on either side of the terminator would contribute to the opacity of the atmosphere, this is not the most realistic way to represent the atmospheric geometry during a planetary transit (Caldas et al., 2019). However, it is adequate for the relatively low temperature contrasts between the day and night sides of the TRAPPIST-1e atmospheres simulated here.
The molecules we use for the computation of the transmission and emission spectra are N\({}_{2}\), O\({}_{2}\), CO\({}_{2}\), H\({}_{2}\)O, O\({}_{3}\), CH\({}_{4}\), and N\({}_{2}\)O, a list that includes possible biosignatures and indicators of habitability (Kaltenegger, 2017). The default HITRAN opacity data (Gordon et al., 2017) are used for each molecule, as well as all available collision-induced absorption coefficients (e.g. O\({}_{2}\)–O\({}_{2}\), N\({}_{2}\)–N\({}_{2}\), and O\({}_{2}\)–N\({}_{2}\)). The radiative transfer model used in PSG is the Planetary and Universal Model of Atmospheric Scattering (PUMAS). The correlated-k method is used by PUMAS at the spectral resolving powers used in this paper (Caldas et al., 2019); if high resolution were required, it would instead use the line-by-line method. Scattering effects are included, as are ice clouds and water clouds. The effective radius of
the cloud particles is assumed to be 5 um for water and 100 um for ice clouds. Scripts and data to generate the PSG files are provided with the data associated with this article.

Table 2: For the two different spectra used in the simulations, the integrated flux (in units of W m\({}^{-2}\)) is given for the total flux and six different wavelength bands: Schumann-Runge continuum (S-RC), Schumann-Runge bands (S-RB), Herzberg continuum (HC), Hartley band (HaB), Huggins band (HuB), and Chappuis band (CB). For reference, the Earth receives 1361 W m\({}^{-2}\) of irradiation. Photons in the Schumann-Runge continuum, Schumann-Runge bands, and Herzberg continuum are able to photolyse O\({}_{2}\); photons in the Hartley, Huggins, and Chappuis bands are able to photolyse O\({}_{3}\).

| Spectrum | Total (10.5–99975 nm) | S-RC (130–175 nm) | S-RB (176–192 nm) | HC (200–240 nm) | HaB (200–310 nm) | HuB (310–340 nm) | CB (400–650 nm) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sun | 1361 | 0.0092 | 0.0373 | 1.2997 | 19.8916 | 23.4556 | 453.7313 |
| TP-1 P19 | 900 | 0.0382 | 0.0239 | 0.0841 | 0.3953 | 0.0829 | 2.1355 |
| TP-1 W21 | 900 | 0.0025 | 0.0001 | 0.0002 | 0.0050 | 0.0133 | 2.3045 |
| Flux ratio \(\frac{\mathrm{P19}}{\mathrm{W21}}\) | 1.00 | 15.54 | 451.81 | 528.85 | 79.59 | 6.21 | 0.93 |

Table 3: Ten simulations have been performed, each with an obliquity of 0\({}^{\circ}\) and a circular orbit, using two different spectra for TRAPPIST-1 from P19 and W21 (see text for UV spectra input details). Eight simulations were set up in a tidally locked configuration with a 6.1 day rotation rate. The “PI” simulations have an initial pre-industrial Earth composition and differ only in using the P19 or W21 spectrum. The “noTL” simulations have an initial pre-industrial Earth composition, are not tidally locked, and have a diurnal cycle (rotational period of 1 day). For the lower O\({}_{2}\) scenarios, the volume mixing ratio of atmospheric O\({}_{2}\) is reduced by a factor of 10, 100, and 1000 (the “10% PAL”, “1% PAL”, and “0.1% PAL” simulations, respectively). PAL means present atmospheric level, where the present atmospheric level of oxygen is a volume mixing ratio of 0.21.

| Simulation | Spectrum | O\({}_{2}\) mixing ratio [PAL] | Orbital parameters |
| --- | --- | --- | --- |
| P19 PI | P19 | 1 | Tidally locked, 6.1 day rotational period |
| P19 10% PAL | P19 | 0.1 | Tidally locked, 6.1 day rotational period |
| P19 1% PAL | P19 | 0.01 | Tidally locked, 6.1 day rotational period |
| P19 0.1% PAL | P19 | 0.001 | Tidally locked, 6.1 day rotational period |
| P19 noTL | P19 | 1 | Not tidally locked, 1 day rotational period |
| W21 PI | W21 | 1 | Tidally locked, 6.1 day rotational period |
| W21 10% PAL | W21 | 0.1 | Tidally locked, 6.1 day rotational period |
| W21 1% PAL | W21 | 0.01 | Tidally locked, 6.1 day rotational period |
| W21 0.1% PAL | W21 | 0.001 | Tidally locked, 6.1 day rotational period |
| W21 noTL | W21 | 1 | Not tidally locked, 1 day rotational period |
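The band-integrated fluxes of Table 2 follow from a spectrum by direct integration over each band; a sketch under the same unit assumptions as above (wavelengths in nm, flux density in W m\({}^{-2}\) nm\({}^{-1}\)):

```python
import numpy as np

BANDS_NM = {                       # wavelength bands of Table 2
    "S-RC": (130, 175), "S-RB": (176, 192), "HC": (200, 240),
    "HaB": (200, 310), "HuB": (310, 340), "CB": (400, 650),
}

def band_fluxes(wl_nm, flux):
    """Integrate a spectrum (flux density in W m-2 nm-1) over each band;
    note that the bands may overlap (HC lies inside HaB)."""
    return {name: np.trapz(flux[(wl_nm >= lo) & (wl_nm <= hi)],
                           wl_nm[(wl_nm >= lo) & (wl_nm <= hi)])
            for name, (lo, hi) in BANDS_NM.items()}
```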
## 3 Results
### Atmospheric chemistry
Atmospheric O\({}_{3}\) is created and destroyed via the Chapman cycle (Chapman, 1930), which is initiated by ultraviolet light (of wavelength \(\lambda\)) dissociating an O\({}_{2}\) molecule:
\[\begin{aligned}
\mathrm{O}_{2}+h\nu\ (175.9\ \mathrm{nm}<\lambda<242.4\ \mathrm{nm}) &\longrightarrow \mathrm{O}+\mathrm{O}, && (1)\\
\mathrm{O}_{2}+h\nu\ (\lambda<175.9\ \mathrm{nm}) &\longrightarrow \mathrm{O}(^{1}\mathrm{D})+\mathrm{O}(^{3}\mathrm{P}), && (2)\\
\mathrm{O}+\mathrm{O}_{2}+\mathrm{M} &\longrightarrow \mathrm{O}_{3}+\mathrm{M}, && (3)\\
\mathrm{O}+\mathrm{O}_{3} &\longrightarrow 2\,\mathrm{O}_{2}, && (4)\\
\mathrm{O}_{3}+h\nu\ (\lambda\geq 320\ \mathrm{nm}) &\longrightarrow \mathrm{O}_{2}(^{3}\Sigma_{g}^{-})+\mathrm{O}(^{3}\mathrm{P}), && (5)\\
\mathrm{O}_{3}+h\nu\ (\lambda\leq 320\ \mathrm{nm}) &\longrightarrow \mathrm{O}_{2}(^{1}\Delta_{g})+\mathrm{O}(^{1}\mathrm{D}). && (6)
\end{aligned}\]
The latter two photolysis reactions do not contribute to O\({}_{3}\) destruction because the O produced rapidly recombines with O\({}_{2}\) to produce O\({}_{3}\) via reaction 3. However, the reaction between O and O\({}_{3}\) (reaction 4) does lead to a loss of O\({}_{3}\), and it can be sped up through catalytic agents, which are denoted below as X. An example of a catalytic cycle is shown here:
\[\begin{aligned}
\mathrm{X}+\mathrm{O}_{3} &\longrightarrow \mathrm{XO}+\mathrm{O}_{2},\\
\mathrm{XO}+\mathrm{O} &\longrightarrow \mathrm{X}+\mathrm{O}_{2},\\
\text{Overall: } \mathrm{O}_{3}+\mathrm{O} &\longrightarrow 2\,\mathrm{O}_{2}. && (7)
\end{aligned}\]

Figure 2: The P19 (left column) and W21 (right column) irradiation scenarios are compared to each other for the PI (top row) and 0.1% PAL (bottom row) cases. The zonal mean (averaged over longitude) of the production of O from O\({}_{2}\) photolysis is displayed, with latitude in \({}^{\circ}\) on the horizontal axis and pressure in hPa on the vertical axis. O\({}_{2}\) photolysis takes place at wavelengths less than 242 nm. The white regions show where the production has dropped below 10\({}^{8}\) molecules m\({}^{-3}\) s\({}^{-1}\).
Catalytic agents may be NO, H, OH, Cl, or Br (Brasseur and Solomon, 2005). These species can be produced through other photolysis reactions (e.g., H\({}_{2}\)O photolysis producing H and OH).
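To make the interplay of reactions 1–7 concrete, a zero-dimensional rate sketch is given below. The structure follows the reactions above, but the rate coefficients in the example call are rough placeholders, not the values used in WACCM6:

```python
def odx_tendencies(O, O3, O2, M, J2, J3, k3, k4, kcat_X=0.0):
    """Zero-dimensional tendencies (molecules m-3 s-1) for O and O3.

    J2, J3  -- photolysis rates of O2 (reactions 1-2) and O3 (5-6) [s-1]
    k3, k4  -- rate coefficients of reactions 3 and 4
    kcat_X  -- lumped catalytic loss frequency, the net effect of cycle 7
               (one O and one O3 removed per cycle)
    """
    prod_O = 2 * J2 * O2 + J3 * O3               # O from photolysis
    dO = prod_O - k3 * O * O2 * M - k4 * O * O3 - kcat_X * O3
    dO3 = k3 * O * O2 * M - J3 * O3 - k4 * O * O3 - kcat_X * O3
    return dO, dO3

# placeholder magnitudes only; not WACCM6 values
print(odx_tendencies(O=1e13, O3=1e18, O2=1e23, M=5e24,
                     J2=1e-11, J3=1e-3, k3=6e-46, k4=8e-21))
```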
Fig. 2 shows the zonal mean of O production from O\({}_{2}\) photolysis in the P19 PI, P19 0.1% PAL, W21 PI, and W21 0.1% PAL simulations. In the two PI simulations, O\({}_{2}\) can be photolysed lower down in the P19 case because there is more incoming UV radiation than in the W21 case. When O\({}_{2}\) is reduced to 0.1% PAL, the peak of O\({}_{2}\) photolysis moves to higher pressures (downward in altitude). The altitude at which O is produced affects the number density of O\({}_{3}\). Fig. 3 shows the zonal mean of the O\({}_{3}\) number density for the same set of simulations as in Fig. 2. Reducing O\({}_{2}\) in the P19 cases causes a drop in O\({}_{3}\) number density at pressures above 1 hPa, whilst the opposite occurs in the W21 cases. This is because of the pressure dependence of the reaction that produces O\({}_{3}\) (reaction 3).
In Fig. 4 we show the global mean vertical profiles of temperature and of the mixing ratios of O, O\({}_{2}\), O\({}_{3}\), H\({}_{2}\)O, CH\({}_{4}\), N\({}_{2}\)O, and CO\({}_{2}\). On global average, the temperature profile shows deviations of up to 9 K in the troposphere and up to 19 K below the thermosphere between the W21 PI and P19 PI simulations. The W21 PI middle atmosphere (between the troposphere and thermosphere) is colder than the P19 PI middle atmosphere due to reduced O\({}_{3}\) heating because of the lower O\({}_{3}\) concentration. The P19 noTL simulation has a lower temperature in the troposphere by up to 23 K and in the middle atmosphere by up to 6 K, resulting in lower concentrations of H\({}_{2}\)O compared to the P19 PI simulation.
Figure 3: The P19 (left column) and W21 (right column) irradiation scenarios are compared to each other for the PI (top row) and 0.1% PAL (bottom row) cases. The zonal mean (average over longitude) of the O\({}_{3}\) number density is shown, with latitude in \({}^{\circ}\) on the horizontal axis and pressure in hPa on the vertical axis. The white regions show where the number density has dropped below \(10^{16}\) molecules m\({}^{-3}\).
Figure 4: The global mean temperature (a) is plotted against pressure for the P19 PI (orange), P19 noTL (dark red), P19 10% PAL (brown), P19 1% PAL (yellow), and P19 0.1% PAL (red) simulations, and the W21 PI (light blue), W21 noTL (black), W21 10% PAL (blue), W21 1% PAL (lilac), and W21 0.1% PAL (grey) simulations. The globally averaged mixing ratios for O (b), O\({}_{2}\) (c), O\({}_{3}\) (d), H\({}_{2}\)O (e), CH\({}_{4}\) (f), N\({}_{2}\)O (g), and CO\({}_{2}\) (h), are also shown.
Generally, between 200 and 10 hPa, the relative H\({}_{2}\)O number density in each simulation correlates with the relative temperature profile in each simulation (i.e. a lower temperature results in less H\({}_{2}\)O).
The O\({}_{3}\) mixing ratio profile in Fig. 4 shows large deviations between the P19 PI (orange) and W21 PI (light blue) simulations, with a difference of a factor of 116 in the mixing ratio at the surface. The O\({}_{3}\) number density peaks at \(2.0\times 10^{19}\) molecules m\({}^{-3}\) at 50 hPa in the P19 PI case and at \(6.3\times 10^{17}\) molecules m\({}^{-3}\) at 50 hPa in the W21 PI case. Despite the amount of O\({}_{3}\) present in the P19 PI simulated atmosphere, the temperature inversion in the middle atmosphere is \(<8\) K because the UV intensity is not sufficient to provide enough O\({}_{3}\) UV heating to create a similar temperature inversion to Earth's stratosphere (\(\approx 60\) K change between the tropopause and stratopause). The P19 noTL simulation (dark red) shows that tidal locking increases the amount of O\({}_{3}\) in the middle atmosphere, whilst reducing the O\({}_{3}\) concentration in the troposphere. The non-tidally locked cases are colder in the lower atmosphere, and warmer in the middle atmosphere. This reduces the rate of catalytic destruction reactions and allows O\({}_{3}\) formation to occur faster.
Fig. 5 displays the longitudinal and latitudinal variation of the O\({}_{3}\) column for the simulations averaged over the last ten years of each scenario. O\({}_{3}\) is inhomogeneously distributed horizontally, which highlights the importance of using 3D models. The P19 PI simulation has a global mean O\({}_{3}\) column of 1310 DU (approximately 4.4 times Earth's global mean value of \(\approx 300\) DU, where 1 DU = \(2.687\times 10^{20}\) molecules m\({}^{-2}\)), whilst the W21 PI simulation predicts a global mean O\({}_{3}\) column of 50 DU. Both of the O\({}_{3}\) column maxima, at 7152 DU and 134 DU for the P19 PI and W21 PI simulations, respectively, occur near the southern pole.
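For reference, the conversion behind these column values is a single division by the Dobson Unit factor quoted above; a trivial sketch:

```python
DU = 2.687e20                      # molecules m-2 per Dobson Unit

def column_to_du(column_molec_m2):
    """Convert an O3 column from molecules m-2 to Dobson Units."""
    return column_molec_m2 / DU

print(round(column_to_du(3.52e23)))   # ~1310, the P19 PI global mean column
```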
Fig. 6 shows the total range of O\({}_{3}\) columns produced across the atmospheric latitude-longitude grid against the O\({}_{2}\) volume mixing ratio. The O\({}_{3}\) column depends on the incident UV radiation, O\({}_{2}\) concentration, atmospheric pressure, temperature, and transport, as well as the rates of destruction. In the W21 PI case, the UV radiation is absorbed high in the atmosphere where the pressure is low, so little O\({}_{3}\) is produced and the O\({}_{x}\) (O + O\({}_{3}\)) is primarily in O. As O\({}_{2}\) is decreased, the O\({}_{3}\) column increases. In contrast, in the P19 simulations, which have larger incoming fluxes of UV, as O\({}_{2}\) increases above 1% PAL, the O\({}_{3}\) column increases, similar to the relationship on Earth. Both of these data sets are rather different when compared to previous results that have simulated the atmospheres of planets around late M dwarf stars (e.g., Rugheimer & Kaltenegger, 2018; Kozakis et al., 2022, although note that the total instellation in those simulations was set to Earth's modern value). Whilst there are likely many interacting parameters which will cause dissimilar O\({}_{3}\) column predictions, the very large discrepancies with 1D models could be due to 3D transport. This seems plausible because Braam et al. (2023) reported a stratospheric circulation in simulations of a terrestrial Proxima Centauri b scenario, in which the winds move O\({}_{3}\) from the day side to the night side, with similar circulation effects occurring in the WACCM6 simulations here (the large-scale dynamics of the atmosphere will be explored in future work).
### Transmission spectra
In Fig. 7 we show idealised transmission spectra between 0.1 – 11 um generated using the WACCM6 simulations with PSG (excluding the non-tidally locked cases). The model date chosen for the transit is arbitrary. Because there are time-dependent fluctuations for several variables in the WACCM6 simulations (e.g. clouds, chemical mixing ratios), we could have investigated time variability in the transmission spectra. However, Fauchez et al. (2022) showed that such variability is within the measurement uncertainties of JWST. The spectra are binned to approximate a resolving power of \(R=250\) to show detail in spectral features, where \(R=\lambda/\Delta\lambda\), \(\lambda\) is the wavelength, and \(\Delta\lambda\) is the width of the wavelength bin. A 5 ppm error bar, corresponding to the lowest achievable noise with JWST instruments, which may be between 5 – 20 ppm as calculated by Matsuo et al. (2019), Schlawin et al. (2020), Schlawin et al. (2021), and Rustamkulov et al. (2022), is indicated.
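Binning to a constant resolving power \(R=\lambda/\Delta\lambda\) means that the bin width grows in proportion to wavelength, so the bin edges form a geometric sequence. A small sketch (our own illustration):

```python
import numpy as np

def constant_R_edges(wl_min, wl_max, R=250):
    """Bin edges for a constant resolving power R = lambda / d_lambda:
    successive edges grow geometrically, wl[i+1] = wl[i] * (1 + 1/R)."""
    n = int(np.ceil(np.log(wl_max / wl_min) / np.log(1 + 1 / R)))
    return wl_min * (1 + 1 / R) ** np.arange(n + 1)

edges = constant_R_edges(0.2, 11.0)   # microns; ~1000 bins over 0.2-11 um
```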
The differences in the effective altitude of O\({}_{3}\) spectral features between the P19 PI (orange) and W21 PI (light blue) transmission spectra are \(-\)4 km, \(+\)19 km, \(+\)15 km, and \(+\)17 km for the 0.25 um, 0.6 um, 4.71 um, and 9.6 um O\({}_{3}\) features, respectively. Despite the W21 PI simulation having an O\({}_{3}\) column \(\approx 26\) times lower than the P19 PI simulation, the W21 PI UV feature (centred at 0.25 um) due to the Hartley band (0.2 - 0.31 um) actually has the largest effective altitude between 0.2 - 0.3 um of all the simulations. This is because the Hartley band saturates quickly and the W21 PI atmosphere has more total O\({}_{3}\) than the P19 PI atmosphere above \(\approx 0.5\) hPa. Between 0.3 - 0.35 um, the temperature dependence of the Hartley and Huggins bands reduces the effective height of the W21 PI transmission spectra due to the colder middle atmosphere in the W21 PI simulation. At 0.6 um, a significant detection of O\({}_{3}\) with JWST in the W21 PI simulation scenario would be improbable given that the noise floor is larger than the
height of the feature. Therefore, assuming that the W21 spectrum is closest to the true spectrum of TRAPPIST-1, or in the case that the true stellar UV emission is weaker, a null detection of the 0.6 um O\({}_{3}\) feature should not rule out the presence of O\({}_{2}\) abundances at levels as high as the present-day Earth's. For H\({}_{2}\)O, the spectral features are stronger in the P19 PI transmission spectra compared to W21 PI by up to 8 km, which is a result of a larger number density of H\({}_{2}\)O in the middle atmosphere. Despite the difference in temperature and O\({}_{3}\) number density profiles between the noTL simulations and the PI simulations, the transmission spectra are remarkably similar (within \(\pm\)4 km effective altitude, not shown for clarity). The P19 0.1% PAL transmission spectrum (red) produces a quantitatively similar transmission spectrum feature at 9.6 um to the W21 PI simulation (light blue), even though there is a factor of 1000 difference in O\({}_{2}\) mixing ratio between the two cases. The same can be said of the following pairs at 9.6 um: W21 1% PAL and P19 PI; W21 0.1% PAL and P19 1% PAL; W21 10% PAL and P19 10% PAL. There is a noticeable difference between the spectra at 4.71 um and 9 um, but demonstrating that the two O\({}_{2}\) scenarios are distinguishable under uncertain stellar UV flux estimates would require reaching the most optimistic 5 ppm noise floor. Also note that the effective height of the O\({}_{2}\)–X collision-induced absorption feature at 6.4 um (see Fauchez et al. 2020b, for more details) is 7 km shallower when O\({}_{2}\) is at 0.1% PAL in the P19 and W21 scenarios, compared to the PI cases.

Figure 5: The O\({}_{3}\) column in Dobson Units [DU] across the simulation's latitudinal and longitudinal grid is plotted for all the simulations used in this work (the P19 scenarios are in the left column and the W21 scenarios are in the right column). 1 DU is equal to \(2.6867\times 10^{20}\) molecules m\({}^{-2}\). In all simulations, O\({}_{3}\) column maxima occur at high latitudes. The substellar point for the tidally locked cases is at 180\({}^{\circ}\) longitude and 0\({}^{\circ}\) latitude on Earth's coordinate grid (in the centre of each Robinson projection). Note that the colour bars are extended for some simulations in order to show the O\({}_{3}\) column structure in each scenario. Each panel has a different colour bar range.
### Emission spectra
In Fig. 8 we show the emission spectra from each simulation for the atmospheric O\({}_{3}\) absorption features at 4.71 um and 9.6 um at 90\({}^{\circ}\) orbital phase (the maximum planet-star separation as viewed in an edge-on system with a circular orbit). The 4.71 um feature overlaps with a CO\({}_{2}\) feature, but O\({}_{3}\) is the dominant absorber at 4.71 um. The P19 PI simulation (orange) predicts higher O\({}_{3}\) columns than the W21 PI case (light blue); hence, the depths of the features relative to the continuum in the W21 PI case are weaker than in the P19 PI scenario by a factor of 4.2 and 4.5 at 4.71 um and 9.6 um, respectively. With respect to the P19 0.1% PAL emission spectrum (red), the P19 PI emission spectrum (orange) has a greater depth by a factor of 1.3 and 1.1 at 4.71 um and 9.6 um, respectively, even though the P19 PI simulation has a mean O\({}_{3}\) column which is 3.2 times higher than the P19 0.1% PAL simulation (1310 DU versus 405 DU). The 3D effects are important here because the largest O\({}_{3}\) columns are found at the poles, but PSG integrates over the whole observable disk. At 4.71 um, the noTL absorption features are deeper than those of the tidally locked PI cases by a factor of 2.0 and 1.2 in the P19 and W21 scenarios, respectively. At 9.6 um, these values are 1.2 and 1.0. In terms of the relative depths of the O\({}_{3}\) features shown in Fig. 8, deeper features generally correspond to larger mean O\({}_{3}\) columns (see Fig. 5).
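The relative feature depths quoted above can be extracted from a modelled emission spectrum by comparing a feature window with a nearby continuum window; this sketch, including the simple choice of continuum estimate, is our own simplification:

```python
import numpy as np

def relative_depth(wl, radiance, feature, continuum):
    """Depth of an absorption feature relative to the local continuum.

    feature, continuum -- (lo, hi) wavelength windows; returns the ratio of
    the mean continuum radiance to the minimum radiance inside the feature.
    """
    in_feat = (wl >= feature[0]) & (wl <= feature[1])
    in_cont = (wl >= continuum[0]) & (wl <= continuum[1])
    return np.mean(radiance[in_cont]) / np.min(radiance[in_feat])
```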
Figure 6: The oxygen (O\({}_{2}\)) concentration is shown against the ozone (O\({}_{3}\)) column in each simulation (orange for P19 scenarios and blue for W21 scenarios). The ‘error bar’ lines show the range between the minimum and maximum O\({}_{3}\) columns in each simulation. The circular dots show the mean O\({}_{3}\) columns. The O\({}_{3}\) columns simulated by WACCM6 for Earth, from Cooke et al. (2022), are shown in black for comparison.
The tidally locked simulations exhibit strong convection and high clouds around the substellar point, whereas the P19 noTL simulation has mainly low clouds with comparatively little high cloud coverage. This results in the P19 noTL simulation (dark red) having the deepest O\({}_{3}\) emission spectral features.
It is important to note that other orbital phases may show quantitatively different results because variability in spectral features may occur due to climate variations throughout the orbit (e.g. Cooke et al., 2023). Additionally, longer term variations may be expected due to the possible presence of a 'longitudinally asymmetric stratospheric oscillation' (Cohen et al., 2022).
## 4 Discussion
We have demonstrated that large differences in assumed stellar UV spectra can lead to different predictions for the strength of O\({}_{3}\) spectral features. These spectral features may overlap at an assumed minimum observational uncertainty of 5 ppm, despite the fact that the O\({}_{2}\) concentration differs by factors of up to 1000. In this section, we compare our results to previous work, consider known uncertainties, and discuss work that should be done in preparation for future exoplanet observations.
1D photochemical modelling of M dwarf terrestrial exoplanet atmospheres has demonstrated that CH\({}_{4}\) and N\({}_{2}\)O could have greater abundances in the middle atmosphere compared to the modern Earth's atmosphere (e.g. Segura et al., 2005; Wunderlich et al., 2019), which we also find in our WACCM6 simulations of TRAPPIST-1e (see Fig. 4). Teal et al. (2022) showed changes of over two orders of magnitude in the middle atmosphere O\({}_{3}\) mixing ratios when modelling a modern Earth-like exoplanet that receives 1 \(S_{\oplus}\) of irradiation around GJ 176 (an M2.5V star), but with various UV irradiation scenarios. They derived transmission spectra predictions from their atmospheric simulations and found the maximum transit depth differences to be \(<2\) ppm, which is below the noise floor (5 ppm or greater) for JWST. This is in contrast to the W21 PI and P19 PI transmission spectra results shown here, where the strengths of
Figure 7: The transmission spectrum atmospheric effective altitude is plotted against wavelength between 0.2 μm and 11 μm for the P19 PI (orange), P19 10% PAL (brown), P19 1% PAL (yellow), and P19 0.1% PAL (red) simulations, and the W21 PI (light blue), W21 10% PAL (blue), W21 1% PAL (illac), and W21 0.1% PAL (grey) simulations. The non-tidally locked cases are excluded for clarity, but show little differences compared to the equivalent tidally locked case. The transit depth, in terms of contrast with respect to the star, is indicated on the right vertical axis in parts per million (ppm). The spectra are binned to a spectral resolving power of \(R=250\). Spectral features are indicated in grey. The wavelength range of the proposed Habitable Worlds Observatory (HWO; blue shaded region), and that of the JWST NIRSpec instrument (yellow shaded region), are shown. They overlap in the green shaded region. The wavelength range of the JWST MIRI instrument is indicated in the magenta shaded region. The wavelength spacing between 0.2 – 1 μm is changed between 1 – 11 μm in order to clearly show the UV and visible regions. The black bar represents the uncertainty that would be present on a measurement that has reached the noise floor of the instrument, where the noise floor is indicated as 5 ppm. Note this error bar is an estimate of the performance of the telescope and does not indicate a measurement.
estimated O\({}_{3}\) features here are distinct at the 5 ppm level. Whilst different atmospheric modelling methods are used in Teal et al. (2022) compared with this work, the difference in observational significance is primarily due to the size of the stars modelled: the radius of GJ 176 is 0.45 \(R_{\odot}\), and the radius of TRAPPIST-1 is 0.1192 \(R_{\odot}\). Teal et al. (2022) also demonstrated that hazy Archean Earth atmospheres were more sensitive to changes in the incoming UV spectra compared to the modern Earth's atmosphere, which warrants future investigations for how uncertainties in the UV spectrum of the host star affect hazy atmospheres in 3D models.
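As a rough illustration of this stellar-size effect, the change in transit depth produced by lifting the effective altitude of a feature can be approximated analytically. The short sketch below is our own illustration: the TRAPPIST-1e radius of 0.92 R\({}_{\oplus}\) is an assumed value, and the 19 km altitude change is taken from the O\({}_{3}\) results quoted in our conclusions. It shows why the same atmospheric signal falls below the 5 ppm noise floor around GJ 176 but not around TRAPPIST-1.

```python
# A back-of-the-envelope estimate (our own illustration): the transit-depth
# change from raising a feature's effective altitude by delta_h is
# approximately 2 * R_p * delta_h / R_star^2 for delta_h << R_p.
R_SUN_KM, R_EARTH_KM = 695_700.0, 6_371.0

def feature_amplitude_ppm(r_star_rsun, r_planet_rearth, delta_h_km):
    r_star_km = r_star_rsun * R_SUN_KM
    r_p_km = r_planet_rearth * R_EARTH_KM
    return 2.0 * r_p_km * delta_h_km / r_star_km**2 * 1e6

# Assumed planet radius of 0.92 Earth radii around both stars, 19 km altitude change.
print(feature_amplitude_ppm(0.1192, 0.92, 19.0))  # TRAPPIST-1: ~32 ppm (> 5 ppm floor)
print(feature_amplitude_ppm(0.45, 0.92, 19.0))    # GJ 176:      ~2 ppm (< 5 ppm floor)
```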
In terms of 3D modelling, the THAI series (Fauchez et al., 2020; Sergeev et al., 2022; Turbet et al., 2022) has investigated the climate of TRAPPIST-1e using four different 3D GCMs, assuming either an N\({}_{2}\) or CO\({}_{2}\) dominated atmosphere and not including interactive chemistry, where the composition evolves depending on chemical and photochemical reactions. The surface temperatures in the WACCM6 simulations are similar although slightly lower (219 - 231 K global mean compared to 230 - 240 K in the THAI simulations; see Appendix A and Fig. 9 for more details), which may be due to differences in assumptions regarding the surface (including the distribution of the continents and the fact that an interactive ocean is used here, in contrast to a slab ocean with no meridional heat transport) or the composition of the atmosphere. With a previous version of WACCM (CESM1), Chen et al. (2019) investigated a planet with a 43.87 day orbital period around a star with an effective temperature of 4000 K and an insolation of 1.9 \(S_{\oplus}\), as opposed to the 0.66 \(S_{\oplus}\) used here, and assessed the impact of uncertain host-star UV flux on the atmosphere. Chen et al. (2019) showed that two different spectra (representing a quiescent and an active M dwarf star) impacted the middle atmospheric concentrations of O\({}_{3}\), OH, N\({}_{2}\)O, CH\({}_{4}\), and H\({}_{2}\)O. They calculated transmission spectra for the two simulated atmospheres, finding that the only observable difference was for the O\({}_{3}\) feature at 9.6 μm (although the UV O\({}_{3}\) feature is not shown in their figure 11). On the other hand, the transmission spectra simulations shown here in Fig. 7 display noticeable spectral differences for O\({}_{3}\) at 0.25, 0.6, 4.7, 9.0 and 9.6 μm, as well as for H\({}_{2}\)O between 5 - 6 μm. The differences in predicted observations between our work and that of Chen et al. (2019) likely arise due to the differences in exoplanetary system setup, the different stellar spectra, and the lower O\({}_{3}\) columns calculated by Chen et al. (2019), compared to the simulated atmospheres here. A 43.87 day period is in the 'slow rotator' regime (for the definition of tidally locked rotation regimes see Haqq-Misra et al., 2018), whereas the 6.1 day period of TRAPPIST-1e can correspond to either the 'Rhines rotator' or 'fast rotator' regime (Sergeev et al., 2022). Thus, our results, alongside those from Chen et al. (2019), demonstrate that 3D modelling results are sensitive to the choice of the assumed stellar UV spectra for potentially habitable tidally locked exoplanets across early and late M dwarf stars and different rotation periods. Future
Figure 8: The left panel shows the PSG simulations of planetary spectral radiance from emission spectra focused on the 4.71 μm O\({}_{3}\) feature (which overlaps with a CO\({}_{2}\) feature, both shown by the grey shaded regions) for the P19 PI (orange), P19 10% PAL (brown), P19 1% PAL (yellow), and P19 0.1% PAL (red) simulations, and the W21 PI (light blue), W21 noTL (black), W21 10% PAL (blue), W21 1% PAL (lilac), and W21 0.1% PAL (grey) simulations. The right panel shows the same for the 9.6 μm O\({}_{3}\) feature, shown by the grey shaded region. Telescopes and their instruments which may be able to probe atmospheres in the infrared include: the Extremely Large Telescope (ELT; Brandl et al., 2021) METIS instrument, the Large Interferometer for Exoplanets (LIFE; Konrad et al., 2022), and the JWST (Morley et al., 2017) NIRSpec and MIRI instruments.
work should also investigate the influence of orbital perturbations away from a synchronous 1:1 spin-orbit resonance (e.g. Chen et al., 2023) on composition.
The anisotropic distribution of chemical species will affect the emission spectra and photometry of terrestrial exoplanets, by modulating when the exoplanet is brightest and dimmest at particular wavelengths (Selsis et al., 2011; Chen et al., 2018). The phase curve photometry amplitude, and where the maximum brightness occurs during the orbit, depend on the particular rotation regime that the exoplanet exists within (Haqq-Misra et al., 2018). Thermal emission features are influenced by molecular abundance, the atmospheric temperature, and the temperature difference between the emitting and absorbing region (e.g., for Earth, the infrared emission emanates from the troposphere, whilst the absorbing region is the O\({}_{3}\) layer in the stratosphere). This temperature difference is larger for M dwarf exoplanets, which exhibit a less pronounced stratospheric temperature inversion due to lower incident UV emission for the same total instellation.
Note that detecting O\({}_{3}\) will be difficult with JWST within the nominal 5 year mission lifetime (although JWST is expected to continue science operations for at least 10 years), even for a modern Earth scenario (Lin et al., 2021), and Fauchez et al. (2019) found that gases other than CO\({}_{2}\) may require hundreds or thousands of transits to be detectable. Simulations of high-resolution observations with the extremely large class of telescopes indicate that O\({}_{2}\) at 0.76 um may be detectable in the case of TRAPPIST-1e within \(\sim 100\) transits (Snellen et al., 2013; Rodler and Lopez-Morales, 2014; Serindag and Snellen, 2019).
The derived Mega-MUSCLES spectrum of TRAPPIST-1 (W21; Wilson et al., 2021) is constrained by more observations than the P19 spectrum, but both spectra have significant flux uncertainties. Whilst neither spectrum used in this study is likely to wholly represent the true stellar irradiation environment of TRAPPIST-1e, there are at least observational constraints on the 'ground truth' of its parent star's spectrum. For many planetary systems, there will only be estimates from stellar models, and this will cause significant problems for predicting the photochemical environment of potentially habitable exoplanets. Furthermore, in each wavelength bin, we have assumed that the flux does not vary with time. Due to M dwarf stellar activity, such an assumption is unlikely to be accurate (Loyd et al., 2018). The O\({}_{3}\) abundance will be perturbed by the inclusion of incident stellar flares (Segura et al., 2010; Tilley et al., 2019; Chen et al., 2021; Ridgway et al., 2022), which we have not investigated here. Based on previous results, it seems that stellar flares will further complicate the interpretation of observed spectra, so future work on incoming UV uncertainties could evaluate the additional impact of stellar flares. The present modelling uncertainties in the O\({}_{2}\)-O\({}_{3}\) non-linear relationship arising from differences in predictions between 1D and 3D models (Cooke et al., 2022; Kozakis et al., 2022; Yassin Jaziri et al., 2022; Ji et al., 2023) will compound this issue. In addition to previous work, our simulations, which focus on the specific target of TRAPPIST-1e, further motivate the need for a dedicated next generation observatory with UV capabilities to characterise exoplanet host stars.
UV flux measurements from a telescope such as the \(\sim 6\) m UV/VIS/NIR telescope (currently referred to as the Habitable Worlds Observatory) that was recommended by the Decadal Survey (National Academies of Sciences, Engineering, and Medicine, 2021) will aid the interpretation of observed exoplanet spectra and help to infer the concentration of O\({}_{2}\) and trace gases in the atmosphere without direct measurements (Kozakis et al., 2022). However, this telescope is not expected to be operational until the late 2030s at the earliest. Determining the EUV fluxes from a host star (which will require a dedicated observatory; Youngblood et al., 2019) will also provide important information about atmospheric escape, habitability, and help to examine the atmospheric history of the exoplanets in the system.
Before next generation telescopes are online, there are other clues available to characterise oxygenated terrestrial atmospheres if the interpretation of the spectral features (e.g. O\({}_{3}\)) leaves degeneracies in the parameter space between O\({}_{2}\) concentration, O\({}_{3}\) concentration, UV irradiation, and O\({}_{3}\) depleting catalytic cycles. For example, the major differences between the P19 PI and the P19 0.1% PAL transmission spectra are between the H\({}_{2}\)O, O\({}_{2}\), and O\({}_{3}\) features. Moreover, the estimated inter-simulation trends with wavelength in transmission spectra are not mirrored in emission spectra predictions. Namely, the depth relative to the continuum in emission spectra for O\({}_{3}\) at 4.71 um and 9.6 um contrasts with the relative strength of associated transmission spectra features between the simulations. This means that if both transmission spectra and emission spectra are acquired with adequate precision, multi-wavelength observations combined with atmospheric retrieval methods (Quanz et al., 2021) will be useful when delineating between possible atmospheric composition scenarios. Nevertheless, Batalha et al. (2018) showed that confident estimates on atmospheric composition from emission spectra observed with JWST MIRI LRS will prove difficult to achieve for temperate exoplanets, using TRAPPIST-1 f as an example.
Finally, future modelling work should explore the impact of chemical boundary conditions on simulated transmission and emission spectra. For example, the choice of boundary conditions, such as the upward flux and abundances of
species in the HO\({}_{\rm x}\), NO\({}_{\rm x}\), ClO\({}_{\rm x}\), and BrO\({}_{\rm x}\) families, will not only affect the O\({}_{3}\) distribution but also the atmospheric temperature and incoming radiation. Provided the atmospheric profile of O\({}_{3}\) derived from atmospheric retrievals is well-constrained, by assuming minimal and maximal O\({}_{3}\) catalytic cycle destruction, limits could be placed on O\({}_{2}\).
## 5 Conclusions
For the first time using a 3D chemistry-climate model (WACCM6) to simulate TRAPPIST-1e (assuming an initial Earth-like composition) and including two different incoming UV spectra, we demonstrated that using a single observed O\({}_{3}\) feature outside of the UV range to extrapolate to undetected molecules, such as O\({}_{2}\), will lead to degeneracies over multiple orders of magnitude in the parameter space for atmospheric composition. The UV spectrum in both of the incoming stellar spectra varies by up to a factor of \(\approx 500\) for important photolysis bands, and up to \(\approx 5000\) for individual wavelength bins. Whilst the atmospheric columns of many species (including O\({}_{2}\) and CO\({}_{2}\)) are virtually unaffected by the difference between the two spectra, for an O\({}_{2}\) mixing ratio of 0.21, the O\({}_{3}\) columns differ by a factor of 26 due to different O\({}_{3}\) production rates that are sensitive to the incoming spectrum. Consequently, the associated O\({}_{3}\) transmission spectral features differ in effective altitude by up to 19 km, whilst the O\({}_{3}\) features in emission spectra differ by a factor of up to 4.5 in relative depth. One implication is that a non-detection of O\({}_{3}\) at visible wavelengths may not indicate the absence of an oxygenated atmosphere. Furthermore, tidal locking of the model results in substantially different emission spectra features which are shallower relative to the emission continuum.
Without the direct detection of O\({}_{2}\), additional context for determining the oxygenation state of the atmosphere can be gained from either 1) future missions that are able to better characterise the UV spectra of faint stars, or 2) sensitive direct imaging observations combined with transmission spectra observations targeting individual features.
## Acknowledgments
G.J.C. acknowledges the studentship funded by the Science and Technology Facilities Council of the United Kingdom (STFC). C.W. acknowledges financial support from the University of Leeds and from the Science and Technology Facilities Council (grant numbers ST/T000287/1 and MR/T040726/1). This work was undertaken on ARC4, part of the High Performance Computing facilities at the University of Leeds, UK.
We would like to acknowledge high-performance computing support from Cheyenne (doi:10.5065/D6RX99HX) provided by NCAR's Computational and Information Systems Laboratory, sponsored by the National Science Foundation. The CESM project is supported primarily by the National Science Foundation (NSF). This material is based upon work supported by the National Center for Atmospheric Research (NCAR), which is a major facility sponsored by the NSF under Cooperative Agreement 1852977.
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
|
2309.13890 | Bitstream-Corrupted Video Recovery: A Novel Benchmark Dataset and Method | The past decade has witnessed great strides in video recovery by specialist
technologies, like video inpainting, completion, and error concealment.
However, they typically simulate the missing content by manual-designed error
masks, thus failing to fill in the realistic video loss in video communication
(e.g., telepresence, live streaming, and internet video) and multimedia
forensics. To address this, we introduce the bitstream-corrupted video (BSCV)
benchmark, the first benchmark dataset with more than 28,000 video clips, which
can be used for bitstream-corrupted video recovery in the real world. The BSCV
is a collection of 1) a proposed three-parameter corruption model for video
bitstream, 2) a large-scale dataset containing rich error patterns, multiple
corruption levels, and flexible dataset branches, and 3) a plug-and-play module
in video recovery framework that serves as a benchmark. We evaluate
state-of-the-art video inpainting methods on the BSCV dataset, demonstrating
existing approaches' limitations and our framework's advantages in solving the
bitstream-corrupted video recovery problem. The benchmark and dataset are
released at https://github.com/LIUTIGHE/BSCV-Dataset. | Tianyi Liu, Kejun Wu, Yi Wang, Wenyang Liu, Kim-Hui Yap, Lap-Pui Chau | 2023-09-25T06:06:26Z | http://arxiv.org/abs/2309.13890v2 | # Bitstream-corrupted Video Recovery:
###### Abstract
The past decade has witnessed great strides in video recovery by specialist technologies, like video inpainting, completion, and error concealment. However, they typically simulate the missing content by manual-designed error masks, thus failing to fill in the realistic video loss in video communication (e.g., telepresence, live streaming, and internet video) and multimedia forensics. To address this, we introduce the bitstream-corrupted video (BSCV) benchmark, the first benchmark dataset with more than 28,000 video clips, which can be used for bitstream-corrupted video recovery in the real world. The BSCV is a collection of 1) a proposed three-parameter corruption model for video bitstream, 2) a large-scale dataset containing rich error patterns, multiple corruption levels, and flexible dataset branches, and 3) a plug-and-play module in video recovery framework that serves as a benchmark. We evaluate state-of-the-art video inpainting methods on the BSCV dataset, demonstrating existing approaches' limitations and our framework's advantages in solving the bitstream-corrupted video recovery problem. The benchmark and dataset are released at [https://github.com/LIUTIGHE/BSCV-Dataset](https://github.com/LIUTIGHE/BSCV-Dataset).
## 1 Introduction
As Cisco's report [10] shows, video traffic is expected to account for 82% of all internet traffic by 2022, making it the most common multimedia type on the internet. However, due to unreliable channels and physical damage to the storage medium, videos are vulnerable to errors caused by packet loss during transmission and data corruption during compression and storage [5]. Meanwhile, malicious attacks on the video decoder ecosystem may cause the risk of severe damage to video bitstreams [45]. Therefore, bitstream damage during the compression, storage, and transmission chain is a common and crucial problem. The various types of damage factors yield different corruption degrees and error patterns in decoded frames, which are irreversible and unpredictable. Recovering the video content in corrupted bitstreams is of vital importance but beset with difficulties.
Researchers have been dedicated to video recovery at the encoding, transmission, and decoding stages. Reed-Solomon codes [50] add redundant information during the encoding stage to enable error correction for the receiver. Checksum [14] is used in the transmission process to detect errors and initiate re-transmission when errors are detected. These methods introduce additional requirements and inflexibility in hardware design and system reliability, and they cannot deal with long sequence loss in bitstreams. More research has focused on visual-based solutions in the decoding stage due to
its intuitive and easy access to images, such as error concealment, completion, frame interpolation, and video inpainting.
Typically, error concealment aims to mitigate the effect of errors on video quality [25]. However, the error patterns are generally simulated by error masks of slice or block shapes applied directly to the decoded video content. This fixed and simple error simulation limits the application scenarios, as the error patterns in realistic scenarios are neither fixed nor simple. Frame interpolation [37, 36, 4, 18, 59] is another visual-based solution. It synthesizes intermediate frames from a given set of correctly decoded frames to replace damaged or lost frames. Nonetheless, interpolation methods are barely satisfactory when there is large scene motion between frames [40], or when errors spread across a sequence of frames. Video inpainting is similar to video completion, which aims to complete missing regions in a given video [53]. Generally, video inpainting takes the surrounding temporal and spatial content as a reference to fill corrupted regions by learning the underlying patterns and structural features of videos [29, 33]. However, the corrupted regions are commonly simulated by user-predefined binary error masks instead of natural errors generated from a real bitstream.
For the application scenarios of video storage, communication, and internet video, manually created masks have difficulty reflecting the shapes and patterns of real corruption. The requirements of video content coherence in temporal and spatial dimensions are hard to meet when large motion and details are missing across frames [29]. Therefore, realistic bitstream and video datasets, as well as video content recovery methods, are highly necessary and urgently needed. So far, there is no large-scale dataset specialized for bitstream-corrupted video recovery. Existing inpainting datasets are limited to simulated error masks, and error concealment datasets are small-scale and may require extracting motion information from the bitstream, which is not always available [23].
In this paper, we construct the first large-scale benchmark to facilitate the research of bitstream-corrupted video (BSCV) recovery. Our BSCV dataset includes more than 28,000 bitstream-corrupted video clips (over 3,500,000 frames), which are extracted and processed from the most popular video inpainting datasets, YouTube-VOS [55] and DAVIS [39]. Specifically, we compress these video clips into bitstreams using the most popular H.264 video codec [21]. Segments in bitstreams are randomly removed to simulate the effect of packet loss errors and storage damage errors on decoded videos, and
Figure 1: Summary of the corruption patterns in the video recovery problem. Compared with the simulated video corruption in existing inpainting or error concealment research, our dataset contains various realistic corruption patterns including **(1)** block artifacts (arfts.), **(2)** color artifacts, **(3)** duplication artifacts, **(4)** misalignment, **(5)** texture loss, **(6)** trailing artifacts, which is closer to the corrupted videos\({}^{1,2,3}\) in the real world.
these error types are common in real-world multimedia communications [3]. The simulated error patterns used in typical video recovery tasks and the real error patterns of our dataset are shown in Fig. 1. It can be observed that the video error types in our dataset are sophisticated and unpredictable, while others are simple and fixed. Therefore, our dataset faithfully reflects the problem of real-world video corruption in multimedia communications. Furthermore, we also provide a specialized recovery method for bitstream-corrupted video. The remaining semantic information in corrupted regions is incorporated with the spatially and temporally adjacent information to recover the corrupted regions.
The main contributions are as follows: (i) We construct BSCV, the first large-scale dataset for bitstream-corrupted video recovery in the real world. The provided videos are decoded from real corrupted bitstreams, which are generated by our three-parameter bitstream corruption model. The dataset contains over 28,000 challenging corrupted video clips with realistic and unpredictable error patterns, multiple corruption levels, and flexible dataset branches. (ii) We propose a plug-and-play module that can be flexibly embedded in existing video inpainting frameworks. It enhances the feature representation capability by extracting residual visual information from the corrupted region, achieving higher recovery quality. (iii) We perform a comprehensive evaluation on our dataset to reveal the limitations of existing video inpainting algorithms and point out future directions.
## 2 Related Works
**Benchmark and dataset.** To the best of our knowledge, there is currently no bitstream-corrupted video benchmark for research on bitstream-corrupted video recovery.
As shown in Table 1, for conventional error concealment and corruption recovery research, using a small set of YUV sequences [1] to test algorithm performance is a common practice. In that case, researchers usually simulate different kinds of stripe or packet-loss-caused masks on those video sequences [9; 51; 26; 25]. However, the scale issue limits the application of deep learning methods. Along with the development of video datasets for different computer vision applications, datasets such as Vimeo90K [54] and REDS [34] were proposed for video restoration tasks including super-resolution, deblurring, and so on. Recently, deep learning-based video completion has assumed a task setting very similar to that of error concealment, which is a sub-task of video inpainting. By accepting arbitrary masks, learning-based video inpainting can be trained on a large number of samples. The mask setting is usually a fixed mask [53] or an object-like mask with limited size, random shape, and motion [33; 58; 29; 60]. Most datasets involve the content of the videos in DAVIS [39] and YouTube-VOS [55]. However, DAVIS is still relatively small in scale with only 150 video clips, and therefore it is usually used for qualitative evaluation in video inpainting research. Recently, YouTube-VOS has been a widely-used large-scale dataset for training various video inpainting algorithms because of its content diversity. Nevertheless, these large-scale datasets never consider real video corruption. With simulated mask settings, video inpainting and other kinds of video recovery research have difficulty performing well on complex and unpredictable video corruption because of the gap between human-predefined binary masks and the unpredictable corruption encountered in practice. Besides, modern video datasets are usually packed as frame sequences, and bitstream-related research still hungers for data in the current deep learning era.
**Video restoration.** As videos can be treated as multiple consecutive images/frames, earlier works [43; 11] simply reuse ideas from image restoration, leaving the temporal redundancy of neighboring frames unexplored. To fully utilize temporal information, Xue _et al_. [54] proposed a task-oriented flow to achieve feature alignment explicitly. Other studies utilized dynamic upsampling filters [20] or deformable convolution [44] to achieve implicit motion compensation. As for feature fusion, either a one-stage direct fusion structure [44; 47] or a multi-stage progressive fusion structure [57] has been used in existing methods.
| Dataset Name | Clip Number | Bitstream Provided | Bitstream Corruption | Mask Provision | Application Scenario* |
| --- | --- | --- | --- | --- | --- |
| YUV Sequence [1] | 26 | – | – | – | VC, VR |
| Vimeo90K [54]† | 90,000+ | ✗ | ✗ | ✗ | SR, ITP |
| REDS [34] | 300 | ✗ | ✗ | ✗ | SR, DB |
| DAVIS [39] | 150 | ✗ | ✗ | ✓ | OS, NP |
| YouTube-VOS [55] | 4,000+ | ✗ | ✗ | ✓ | OS, NP |
| BSCV (Ours) | 28,000+ | ✓ | ✓ | ✓ | VR, SR, ITP, NP |

* VC: Video Coding, SR: Super Resolution, ITP: Interpolation, DB: Deblurring, OS: Object Segmentation, NP: Inpainting, VR: Video Recovery
† Clip length is fixed at 7 frames.
Table 1: Comparisons among video datasets
**Video error concealment.** Video error concealment, a commonly-used post-processing technique at the decoder side, aims to recover the error regions in decoded videos [51]. It can be divided into various categories at the bitstream and pixel levels, including spatial, temporal, and hybrid spatial-temporal methods [23; 25; 56]. Traditionally, at the bitstream level, missing motion information can be estimated from the surrounding motion vectors and the block partitioning in the previous frame [8; 32]. At the pixel level, pixel-wise processing is capable but relies on deficient spatial information [23]. Recently, deep learning-based methods still assume a traditional corruption pattern and use experimental mask settings to simulate stripe or patch loss [42; 52; 51; 9]. This makes these methods unsuitable for recovering bitstream-corrupted videos because the corruption caused by realistic packet loss is generally unpredictable and irregular.
**Video inpainting/completion.** Video inpainting is to generate content in unfilled regions of a video, accepting arbitrarily defined masks to indicate corrupted regions. Traditionally, video inpainting is considered a patch matching or pixel diffusion problem [19; 12; 16; 35]. In the era of deep learning, patch-based methods have also achieved significant success [33; 41]. Flow-guided generative methods are currently mainstream in video inpainting, leveraging motion information for spatial and temporal relationships between frames [53; 13; 58; 28; 22; 29; 61]. DFVI [53] was a pioneering work that formulates the generative video inpainting problem as a pixel propagation task rather than simply filling RGB values in corrupted regions. Li _et al._[29] made the traditional three-stage video inpainting pipeline jointly optimizable and achieved an efficient end-to-end framework for video inpainting. In the context of bitstream-corrupted video recovery, video inpainting is closely related. However, existing research often overlooks the performance of inpainting algorithms when dealing with dynamic and large masks. Consequently, it fails to address complex recovery scenarios with significant corrupted areas and partially remaining content caused by bitstream corruption.
## 3 Bitstream-corrupted Video Dataset
**H.264 bitstream and bitstream corruption.** The most popular video codec, H.264, was used by 85% of video developers in 2022 [2]. The compatibility of H.264 with a variety of devices and platforms empowers the delivery of video content bitstream over the internet. The H.264 bitstream domain is shown in Fig. 2. The typical format of H.264 bitstream consists of successive NALUs (network abstraction layer units). A bitstream contains several bytes of start code prefix and Header. The SPS (sequence parameter sets) and PPS (picture parameter sets) also occupy a small number of bytes. By analyzing the bitstream component, we find that the bytes of SPS, PPS, header, and start code only take up a negligible proportion of NALU bytes. In contrast, the frame data occupy a dominant proportion of an H.264 bitstream (usually more than 99.9%).
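These byte-share statistics are straightforward to reproduce. The sketch below is our own illustration: it assumes an Annex-B formatted H.264 file in which NALUs are delimited by 0x000001/0x00000001 start codes, and `video.264` is a hypothetical input. It tallies the fraction of NALU bytes per unit type, with slice (frame data) NALUs expected to dominate.

```python
# A minimal sketch of the NALU-level statistics described above (our own
# illustration). Splitting on the 3-byte start code also handles 4-byte
# start codes, at the cost of counting one leading zero byte imprecisely.
import re
from collections import Counter

NAL_NAMES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS"}

def split_nalus(bitstream: bytes):
    """Yield (nal_unit_type, payload_offset, payload_length) for each NALU."""
    starts = [m.end() for m in re.finditer(b"\x00\x00\x01", bitstream)]
    for i, s in enumerate(starts):
        end = (starts[i + 1] - 3) if i + 1 < len(starts) else len(bitstream)
        yield bitstream[s] & 0x1F, s, end - s

with open("video.264", "rb") as f:  # hypothetical input file
    data = f.read()
byte_share = Counter()
for nal_type, _, length in split_nalus(data):
    byte_share[NAL_NAMES.get(nal_type, f"type {nal_type}")] += length
total = sum(byte_share.values())
for name, n in byte_share.most_common():
    print(f"{name:>14}: {100 * n / total:6.2f}% of NALU bytes")  # slices dominate
```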
The bitstream segments and packets are possibly corrupted or lost in the chains of video storage, encoding, transmission, and decoding. Therefore, video recovery from corrupted bitstreams is in surging demand. Due to the significant proportion of frame data in a bitstream file, corruption is most likely to occur in the frame data parts, which is the basic assumption in this paper. As shown in the frame domain of Fig. 2, there exist inter-frame correlations among frames of a video when encoding a video into the bitstream. A frame can refer to other previously coded frames for high coding efficiency. The inter-NALU dependencies are accordingly created in a bitstream. Thus, error propagation among frames tends to be irregular and unpredictable.
Figure 2: Left: H.264 bitstream statistics and the proposed corruption model. Right: Inter-frame correlations and error propagation in the frame domain.
**Bitstream-corrupted video generation.** Due to the popularity of H.264, video clips are encoded by the H.264 codec to generate bitstream files of these videos. The coding configuration selects the widely used closed-GOP (group of pictures) structure, and the GOP size adopts 16 frames for long-range reference. We simulate the corruption pattern by removing specific segments of some NALUs in a bitstream, as shown in the bottom part of Fig. 2. As for the start codes and NALU headers, we skip them because of their negligible proportion of less than 0.01% of an H.264 bitstream, and because corruption of these parts may cause severe errors, e.g., sequential frame missing or even decoding failures. In such cases, bitstream-corrupted videos are not recoverable by existing vision-based methods. Based on this analysis, we can randomly corrupt frames at the visual level. Therefore, we parameterize a three-parameter corruption model \((P,L,S)\), where the corrupted fragments are defined by frame corruption probability \(P\), corruption location \(L\), and fragment size \(S\). The corrupted bitstreams are parsed and decoded by the H.264 decoder, generating videos with unpredictable regional errors.
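A minimal sketch of this corruption model is given below (our own illustration; the per-GOP sampling of \(P\) is simplified to an independent per-frame Bernoulli draw). Only slice NALUs are touched, so start codes, NALU headers, SPS, and PPS remain intact.

```python
# Hedged sketch of the three-parameter corruption model (P, L, S): with
# probability P, a slice NALU loses S bytes starting at relative location L
# inside its payload (our own illustration, not the exact dataset tooling).
import random
import re

def corrupt_bitstream(data: bytes, P=1/16, L=0.4, S=4096, seed=0) -> bytes:
    rng = random.Random(seed)
    starts = [m.end() for m in re.finditer(b"\x00\x00\x01", data)]
    out, keep_from = bytearray(), 0
    for i, s in enumerate(starts):
        end = (starts[i + 1] - 3) if i + 1 < len(starts) else len(data)
        length = end - s
        nal_type = data[s] & 0x1F
        if nal_type in (1, 5) and length > 1 and rng.random() < P:  # slices only
            cut_start = s + max(1, int(L * length))  # keep the 1-byte NALU header
            cut_end = min(end, cut_start + S)        # remove up to S bytes
            out += data[keep_from:cut_start]
            keep_from = cut_end
    out += data[keep_from:]
    return bytes(out)
```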
**Dataset construction and statistics.**
Using the above bitstream corruption procedure, we construct a bitstream-corrupted video (BSCV) dataset based on two commonly used datasets in video inpainting, i.e., YouTube-VOS [55] and DAVIS [39]. In detail, we extract 4,132 original videos from the YouTube-VOS and DAVIS datasets to generate the main branch.
DAVIS provides 480P videos with dense object segmentation annotations and is mostly used for evaluation in prior works; we follow this usage in this paper as well. YouTube-VOS is mainly a 720P dataset which also contains several 1080P videos. Our method is also applicable to those 1080P videos, which demonstrates its scalability. We then provide an additional 1080P branch with 256 videos from the YouTube-UGC dataset [48]. It contains longer frame sequences and higher resolutions, which enrich the sources of our dataset and allow further extensions based on it. Further, we provide a small 4K branch using videos from Videezy4K [17] as a reference example for future extension to higher-resolution videos.
By setting the parameter combinations of the proposed corruption model, multiple branches of the BSCV dataset can be generated. We also provide error region masks in the dataset. Specifically, grayscale difference maps are calculated by subtracting the corrupted videos from the corresponding original videos decoded from the corruption-free bitstream. The slight changes below the default threshold are suppressed, and the small outliers inside or outside masks are removed by morphological filtering.
For the BSCV dataset branch with the parameters \((1/16,0.4,4096)\), Fig. 3 illustrates the corruption statistics of the branch and the corruption degree of randomly selected video samples. The area ratio of corrupted regions to their corresponding frame is referred to as the "corrupted area ratio". Ratios in the ranges 0-10%, 10-30%, and above 30% are defined as the minor (min.), moderate (mod.), and severe (sev.) corruption levels, respectively. A ratio of 0 is defined as corruption-free (unc.). We observe that nearly 30% of frames are corrupted for this example dataset branch. Compared with the existing video inpainting tasks with fixed mask area settings (e.g., 1/16), the frame corruption in our dataset is complex, variable, and unpredictable, making it closer to realistic scenarios and more challenging.
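The mask generation described in the previous paragraph and the corruption-level binning above can be sketched as follows (our own illustration using OpenCV; the threshold of 25 and the 5×5 kernel are illustrative choices, not the dataset's documented defaults).

```python
# Sketch of error-mask generation: per-frame grayscale difference between the
# pristine and corrupted decodes, thresholded and cleaned morphologically.
import cv2
import numpy as np

def error_mask(clean_bgr: np.ndarray, corrupt_bgr: np.ndarray, thresh=25):
    diff = cv2.absdiff(cv2.cvtColor(clean_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(corrupt_bgr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small outliers
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill pinholes
    return mask

def corruption_level(mask: np.ndarray) -> str:
    """Bin the corrupted area ratio into the levels used in Fig. 3."""
    ratio = (mask > 0).mean()
    if ratio == 0:
        return "uncorrupted"
    return "minor" if ratio < 0.10 else "moderate" if ratio < 0.30 else "severe"
```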
We further analyze the rich error patterns of our dataset shown in Fig. 1. Color artifacts occur when chrominance information is corrupted, which is more severe than the edge color bleeding in typical compression artifacts [31]. The trailing artifacts come from the corruption of motion information, which causes a floating trailing effect in subsequent frames. The texture information corruption
Figure 3: Taking the BSCV dataset branch in the parameters of \((1/16,0.4,4096)\) as an example for illustration. (a) The statistics of corruption distribution. (b) The corruption level changes among frames for some sampled videos.
and error propagation may cause blocking artifacts. Duplicate artifacts are common in intra-coding regions, which duplicate the error pixels in the adjacent regions. More details on dataset construction and analysis of error patterns can refer to the Supplementary Material.
**Flexibility and extensibility.** The constructed dataset and proposed three-parameter corruption model can provide flexibility in dataset customization and extensibility in application scenarios. By setting different parameter combinations, it is flexible to construct custom datasets to meet specific application scenarios, which is demonstrated in the experiment section. We also developed a video recovery framework without relying on the motion, partition, and residual information in case they are not available in a corrupted bitstream. Thus, the provided dataset and recovery framework can extend to broad bitstream corruption scenarios, such as packet loss during transmission, segment corruption in compression, and deletion of partial data in storage. The application scenarios are not limited to bitstream-related video recovery. It is also suitable for local and cloud video processing tasks, like video inpainting, completion, and manipulation.
## 4 Bitstream-corrupted Video Recovery Framework
In this section, we propose a specialized plug-and-play feature enhancement module and implement it on existing video inpainting frameworks.
**Framework overview.** The overview of the proposed bitstream-corrupted video recovery (BSCVR) framework is illustrated in Fig. 4. We propose a plug-and-play feature enhancement module that provides an additional perception channel to existing video inpainting frameworks. It extracts and fuses local features from corrupted and corruption-free regions. By encoding the residual information inside the corrupted regions into the local features, it can greatly enhance the feature completion and representation capability compared to existing video inpainting frameworks. Consequently, the enhanced feature can provide a solid reference for the subsequent recovery process. Then a flow-guided feature propagation module, as used in [29], propagates the content. Combining this with reference content from non-local frames, the content generation module is implemented by stacking several temporal focal transformers [29].
To be specific, given a corrupted video frame sequence \(\big{\{}X^{t}\in\mathcal{R}^{3\times h\times w}|t=1,2,...,T\big{\}}\), and its corresponding mask sequence indicating the corrupted regions \(\big{\{}M^{t}\in\mathcal{R}^{1\times h\times w}|t=1,2,...,T\big{\}}\). The video recovery framework is expected to recover the corrupted region with spatially and temporally plausible content. According to Fig. 4, for the input corrupted frame sequence, we use a context
Figure 4: Overview of our bitstream-corrupted video recovery (BSCVR) framework. Compared with existing methods, we follow the common practice of inputting the corruption-free content as the basic information source when constructing local features for recovery. We additionally enable a new input channel for the corrupted region and extract the features of its partial contents, which are completely ignored by existing methods. With the transformer-based architecture, the local feature can be enhanced by encoding the features of the partial contents into it.
encoder (\(E\)) [38] to perform region-based encoding. \(\left\{Q^{t}\in\mathcal{R}^{3\times h\times w}|t=1,2,...,T_{l}\right\}\) indicated by masks will be separately input into the recovery framework. Then, we propose to use several transformer encoder layers [46] to fuse and re-encode these two features. By attention-based decoding and channel fusion, an intermediate feature is generated. Consequently, with skip connection and output projection, the representative capability of the resulting feature can be further enhanced by fusing multi-scale and multi-level information. We then follow the approach of flow-guided video inpainting to extract and complete optical flows from neighboring frames to serve as guidance for feature alignment and propagation. Afterward, a content generation module based on temporal focal transformer and soft spliting will combine the enhanced, aligned, and propagated features of local neighboring frames with the reference features of non-local frames' corruption-free regions \(\left\{R^{t}\in\mathcal{R}^{3\times h\times w}|t=1,2,...,T_{nl}\right\}\) to generate content and finally reconstruct a result frame sequence \(\left\{\hat{Y}^{t}\in\mathcal{R}^{3\times h\times w}|t=1,2,...,T_{l}\right\}\) through a decoder (\(D\)) module.
**Training method.** Inspired by existing video inpainting methods, we apply multiple loss terms from different perspectives [29] to our BSCVR framework. The overall loss function is composed of three terms \(\mathcal{L}_{rec}\), \(\mathcal{L}_{adv}\), and \(\mathcal{L}_{flow}\), expressed as:
\[Loss=\mathcal{L}_{rec}+\mathcal{L}_{adv}+\mathcal{L}_{flow}=\left\|\hat{\mathbf{ Y}}-\mathbf{Y}\right\|_{1}+(-E_{z\sim P_{\hat{\mathbf{y}}}(z)}[D(z)])+\mathcal{L}_{ flow}. \tag{1}\]
Meanwhile, since the model is trained in the GAN [15] paradigm, the applied T-PatchGAN [7] based discriminator should minimize the loss, expressed as:
\[\mathcal{L}_{\mathcal{D}}=E_{x\sim P_{\mathbf{Y}}(x)}[\mathrm{ReLU}(1-D(x))]+E_{z\sim P_{\hat{\mathbf{Y}}}(z)}[\mathrm{ReLU}(1+D(z))]. \tag{2}\]
Detailed descriptions of the methodology can be found in the Supplementary Material.
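For concreteness, the reconstruction and adversarial terms of Eqs. (1)-(2) can be sketched in PyTorch as below (our own illustration; the flow loss \(\mathcal{L}_{flow}\) is omitted for brevity).

```python
# Minimal sketch of the training objective: L1 reconstruction, the generator
# adversarial term, and the ReLU hinge loss for the T-PatchGAN discriminator.
import torch
import torch.nn.functional as F

def generator_loss(y_hat, y, d_fake):
    l_rec = F.l1_loss(y_hat, y)  # || Y_hat - Y ||_1
    l_adv = -d_fake.mean()       # -E_{z ~ P_Yhat}[D(z)]
    return l_rec + l_adv

def discriminator_loss(d_real, d_fake):
    # E_x[ReLU(1 - D(x))] + E_z[ReLU(1 + D(z))], as in Eq. (2)
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
```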
## 5 Experiment
The proposed BSCVR and state-of-the-art (SOTA) video inpainting methods are performed on the constructed BSCV dataset. We conduct comprehensive quantitative and qualitative evaluations to
The first four metric columns below refer to the YouTube-VOS (720P) subset and the next four to the DAVIS (480P) subset.

| Test res. | Method | PSNR↑ | SSIM↑ | LPIPS↓ | VFID↓ | PSNR↑ | SSIM↑ | LPIPS↓ | VFID↓ | Runtime (s/frame) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 240P | Input | 18.8749 | 0.8160 | 0.1527 | 0.2015 | 18.4562 | 0.7921 | 0.1541 | 0.4189 | – |
| 240P | STTN [58] | 29.3840 | 0.9174 | 0.0465 | 0.0566 | 26.2172 | 0.8600 | 0.0638 | 0.1589 | 0.120 |
| 240P | STTN [58]* | 29.9172 | 0.9303 | 0.0394 | 0.0544 | 26.5453 | 0.8722 | 0.0575 | 0.1534 | 0.120 |
| 240P | FuseFormer [33] | 28.8012 | 0.9047 | 0.0549 | 0.0641 | 26.2547 | 0.8618 | 0.0659 | 0.1645 | 0.200 |
| 240P | FuseFormer [33]* | 29.8108 | 0.9328 | 0.0381 | 0.0526 | 26.7367 | 0.8834 | 0.0531 | 0.1477 | 0.200 |
| 240P | E2FGVI-HQ [29] | 29.6866 | 0.9228 | 0.0469 | 0.0555 | 26.7850 | 0.8765 | 0.0600 | 0.1513 | 0.160 |
| 240P | E2FGVI-HQ [29]* | 31.0030 | 0.9473 | 0.0341 | 0.0479 | 27.6551 | 0.9018 | 0.0491 | 0.1387 | 0.160 |
| 240P | **BSCVR-S (Ours)*** | 31.8345 | 0.9584 | 0.0262 | 0.0427 | 28.4211 | 0.9180 | 0.0381 | 0.1196 | 0.172 |
| 240P | **BSCVR-P (Ours)*** | **31.9534** | **0.9598** | **0.0258** | **0.0426** | **28.5430** | **0.9199** | **0.0375** | **0.1165** | 0.178 |
| Original | Input | 19.1490 | 0.8244 | 0.1415 | 0.0575 | 18.4384 | 0.7979 | 0.1490 | 0.1999 | – |
| Original | E2FGVI-HQ [29] | 28.5039 | 0.8783 | 0.0453 | 0.0126 | 25.7803 | 0.8236 | 0.0504 | 0.0468 | 0.192 / 0.176 |
| Original | E2FGVI-HQ [29]* | 29.5666 | 0.9023 | 0.0395 | 0.0161 | 26.6723 | 0.8611 | 0.0530 | 0.0577 | 0.192 / 0.176 |
| Original | **BSCVR-S (Ours)*** | **30.2235** | **0.9185** | **0.0335** | 0.0143 | **27.2770** | **0.8809** | 0.0427 | 0.0500 | 0.250 / 0.203 |
| Original | **BSCVR-P (Ours)*** | 29.9943 | 0.9144 | 0.0343 | **0.0104** | 26.3564 | 0.8511 | **0.0416** | **0.0406** | 0.261 / 0.213 |
Table 2: Quantitative results of SOTA pre-trained video inpainting methods, their corresponding models trained on our dataset (denoted by a star mark *), and our method. In our method, BSCVR-S means that the feature enhancement module considers the input feature as a sequence like the traditional Transformer [46], and BSCVR-P indicates that the module considers the input as patches, referring to SwinIR [30]. The comparison is conducted under the 240P setting due to the model capability of previous works. For the methods which are able to handle arbitrary-resolution video, we calculate metrics based on the original frame sequence, and we measure and report the runtime of the model under 720P (former) / 480P (latter) input, respectively.
demonstrate the effectiveness of our dataset and method. The flexibility of our corruption model and the robustness of our BSCVR framework are validated on multiple branches of BSCV.
**Experimental setting.** We adopt the corruption parameters \((1/16,0.4,4096)\), whose corresponding statistics are illustrated in Fig. 3. This setting has moderate difficulty with adequate corruption types, which is suitable for video inpainting methods. The corrupted region is usually recoverable yet challenging. SOTA video inpainting methods are compared with our method to recover corruption-free videos. STTN [58] and FuseFormer [33] downsample videos to 240P due to the limitation of computational complexity. E2FGVI-HQ is an upgraded version of E2FGVI [29] that can take arbitrary-resolution input and generate results at the original input resolution. It can be viewed as the main competitor to our method. Note that previous works set the mask with "random shape and location" to augment inpainting data, and those pre-trained models are trained for 50K iterations. In contrast, training those methods on our dataset requires only 25K iterations to converge. The detailed implementation of our method can be found in the Supplementary Material.
**Evaluation metrics.** Regarding the quantitative evaluation, we measure the performance of inpainting algorithms based on two aspects: reconstruction quality and realism. Among them, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [49], and Learned Perceptual Image Patch Similarity (LPIPS) [62] using a pre-trained AlexNet backbone [27] are mainly used to
Figure 5: Qualitative comparison of our method and SOTA video inpainting methods on low (a) and high (b) resolutions. The involved corruption types include **(1)** blocking artifacts, **(2)** color artifacts, **(3)** duplication artifacts, **(4)** misalignment, **(5)** texture loss, **(6)** trailing artifacts, and their combinations.
measure reconstruction performance. Video Fréchet Inception Distance (VFID) [24] is mainly used to measure performance in terms of realism. It is computed between the set of all recovered videos and the set of all reference videos. The features are extracted from a pre-trained I3D backbone [6].
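As a sketch of how the per-frame metrics can be computed (our own illustration using scikit-image and the `lpips` package with its AlexNet backbone; VFID requires I3D features over whole videos and is omitted here):

```python
# Hedged per-frame metric sketch: PSNR/SSIM via scikit-image, LPIPS via the
# lpips package; inputs are uint8 RGB frames of shape (H, W, 3).
import lpips
import numpy as np
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")

def frame_metrics(pred: np.ndarray, gt: np.ndarray):
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = lpips_fn(to_t(pred), to_t(gt)).item()  # LPIPS expects inputs in [-1, 1]
    return psnr, ssim, lp
```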
**Quantitative evaluation.** The quantitative results on the YouTube-VOS and DAVIS subsets are shown in Table 2. Due to the model capability of some previous works, we first follow the existing experimental setting of video inpainting to downscale the original videos in our dataset to 240P and conduct model training and metric calculation. It can be observed that our methods achieve better results in all metrics. However, compromising on video resolution is not reasonable for the video recovery problem. Thus, we particularly compare our method with the SOTA method E2FGVI-HQ [29], which is currently the only method that can handle the original-resolution scenario. For the metrics of PSNR, SSIM, LPIPS, and VFID, our method achieves significant improvements, which makes the bitstream-corrupted video recoverable, e.g., >30dB PSNR for YouTube-VOS. Our method comprehensively refines the content in corrupted regions to guide plausible content generation and keeps low computational complexity by applying an efficient model architecture referring to E2FGVI-HQ [29]. For the results on different corruption parameters, we provide more experiments in the Supplementary Material.
**Qualitative Evaluation.** For qualitative evaluation, we choose STTN [58], FuseFormer [33], and E2FGVI-HQ [29] trained on our dataset as comparison methods. The evaluation is conducted under 240P and original resolutions. Some representative corruption patterns and their recovery results are visualized in Fig. 5a. The comparison methods struggle to generate plausible content to recover the corrupted region, while our proposed method can generate clearer details with more informative textures and structures. Our method and dataset limit the tendency of object removal when the mask is relatively large, especially under the original resolution; the results comparing our method with the SOTA method E2FGVI-HQ [29] are shown in Fig. 5b. These results demonstrate the limitations of current methods on the proposed dataset and the advantage of our high-quality data and method. More visualized results can be found in the Supplementary Material.
**Flexibility in Dataset construction.** We use the three-parameter corruption model \((P,L,S)\) in our dataset construction. We validate the model by generating further branches of the dataset with the following parameter settings: \((1/16,0.4,2048)\), \((1/16,0.4,8192)\), \((1/16,0.4,4096)\), \((1/16,0.2,4096)\), \((1/16,0.8,4096)\), \((2/16,0.4,4096)\) and \((4/16,0.4,4096)\). These settings vary the corruption probability, location, and fragment size, where \(P=m/l\) implies that the corruption happens in \(m\) random frames out of the \(l\) frames of a GOP.
We analyze the corruption distribution by calculating the ratio of the corrupted region to the frame resolution, as shown in Fig. 6. It indicates that parameter \(P\) has the most significant impact on corruption, leading to a higher number of corrupted frames and more severe damage. For these
| Param. _(P, L, S)_ | Method | PSNR↑ | SSIM↑ | LPIPS↓ | VFID↓ |
| --- | --- | --- | --- | --- | --- |
| (1/16, 0.4, 4096) | Input | 18.4384 | 0.7979 | 0.1490 | 0.1999 |
| | E2FGVI-HQ [29]* | 26.3734 | 0.8415 | 0.0466 | 0.0444 |
| | BSCVR-S (Ours) | **27.2770** | **0.8809** | 0.0427 | 0.0500 |
| | BSCVR-P (Ours) | 26.3564 | 0.8511 | **0.0416** | **0.0406** |
| (1/16, 0.4, 2048) | Input | 18.2283 | 0.7798 | 0.1536 | 0.1920 |
| | E2FGVI-HQ [29]* | 26.0251 | 0.8423 | 0.0477 | 0.0440 |
| | BSCVR-S (Ours) | **26.2437** | **0.8554** | **0.0416** | **0.0401** |
| | BSCVR-P (Ours) | 26.1057 | 0.8525 | 0.0422 | 0.0407 |
| (1/16, 0.2, 4096) | Input | 18.0789 | 0.7649 | 0.1569 | 0.2067 |
| | E2FGVI-HQ [29]* | 24.7468 | 0.7746 | 0.0656 | 0.0546 |
| | BSCVR-S (Ours) | **23.4982** | **0.7898** | **0.0570** | **0.0478** |
| | BSCVR-P (Ours) | 24.8303 | 0.7940 | 0.0579 | 0.0484 |
| (2/16, 0.4, 4096) | Input | 17.9418 | 0.7616 | 0.1592 | 0.1963 |
| | E2FGVI-HQ [29]* | 24.3774 | 0.7934 | 0.0623 | 0.0767 |
| | BSCVR-S (Ours) | **24.5066** | **0.8077** | **0.0553** | **0.0698** |
| | BSCVR-P (Ours) | 24.3808 | 0.8037 | 0.0561 | 0.0705 |
| (1/16, 0.4, 8192) | Input | 18.6665 | 0.8170 | 0.1450 | 0.1849 |
| | E2FGVI-HQ [29]* | 26.0722 | 0.8371 | 0.0486 | 0.0455 |
| | BSCVR-S (Ours) | **26.2708** | **0.8518** | **0.0423** | **0.0417** |
| | BSCVR-P (Ours) | 26.1231 | 0.8487 | 0.0430 | 0.0418 |
| (1/16, 0.8, 4096) | Input | 19.0062 | 0.8419 | 0.1389 | 0.1874 |
| | E2FGVI-HQ [29]* | **32.8311** | 0.9506 | 0.0162 | 0.0264 |
| | BSCVR-S (Ours) | 32.7959 | **0.9514** | **0.0147** | **0.0247** |
| | BSCVR-P (Ours) | 32.7204 | 0.9509 | 0.0149 | 0.0252 |
| (4/16, 0.4, 4096) | Input | 17.8542 | 0.7587 | 0.1616 | 0.1973 |
| | E2FGVI-HQ [29]* | 27.2912 | 0.7480 | 0.0738 | 0.1192 |
| | BSCVR-S (Ours) | 22.7094 | **0.7570** | **0.0679** | **0.1079** |
| | BSCVR-P (Ours) | 22.5480 | 0.7527 | 0.0686 | **0.1079** |
Table 3: Performance comparison with E2FGVI-HQ on different dataset branches under different corruption parameter combinations.
dataset branches, we compare our BSCVR and E2FGVI-HQ under the original resolution on the DAVIS 480P subset. The results are listed in Tab. 3. It shows that the proposed BSCVR consistently outperforms E2FGVI-HQ, validating the robustness of our BSCVR on multiple dataset branches.
## 6 Conclusion
Aiming at the challenging problem of bitstream-corrupted video recovery in the real world, we construct the first large-scale benchmark, BSCV. The BSCV provides a bitstream corruption model, a realistic decoded video dataset, and a video recovery framework, BSCVR. The bitstream corruption model enables to flexibly generate dataset branches by specifying parameter combinations. The dataset contains 28,000 realistic video clips decoded from corrupted bitstreams with unpredictable error patterns and corruption levels. The BSCVR offers a plug-and-play feature enhancement module to achieve high-quality video recovery. Extensive experiments demonstrate that the proposed BSCVR outperforms SOTA video inpainting methods quantitatively and qualitatively. The flexibility of dataset construction and the robustness of our BSCVR framework are also validated on various dataset branches. The benchmark dataset is expected to benefit video recovery in video communication and multimedia forensics. The future work will concentrate on designing more reasonable bitstream corruption models, engaging more dataset sources, and creating more effective recovery frameworks.
## Acknowledgement
This research / project is supported by the National Research Foundation, Singapore, and Cyber Security Agency of Singapore under its National Cybersecurity R&D Programme (NRF2018NCR-NCR0009-0001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Cyber Security Agency of Singapore.
|
2309.14127 | Dual digraphs of finite meet-distributive and modular lattices | We describe the digraphs that are dual representations of finite lattices
satisfying conditions related to meet-distributivity and modularity. This is
done using the dual digraph representation of finite lattices by Craig, Gouveia
and Haviar (2015). These digraphs, known as TiRS digraphs, have their origins
in the dual representations of lattices by Urquhart (1978) and Plo\v{s}\v{c}ica
(1995). We describe two properties of finite lattices which are weakenings of
(upper) semimodularity and lower semimodularity respectively, and then show how
these properties have a simple description in the dual digraphs. Combined with
previous work on dual digraphs of semidistributive lattices (2022), it leads to
a dual representation of finite meet-distributive lattices. This provides a
natural link to finite convex geometries. In addition, we present two
sufficient conditions on a finite TiRS digraph for its dual lattice to be
modular. We close by posing four open problems. | Andrew Craig, Miroslav Haviar, Klarise Marais | 2023-09-25T13:29:15Z | http://arxiv.org/abs/2309.14127v1 | # Dual digraphs of finite meet-distributive and modular lattices
###### Abstract
We describe the digraphs that are dual representations of finite lattices satisfying conditions related to meet-distributivity and modularity. This is done using the dual digraph representation of finite lattices by Craig, Gouveia and Haviar (2015). These digraphs, known as TiRS digraphs, have their origins in the dual representations of lattices by Urquhart (1978) and Ploscica (1995). We describe two properties of finite lattices which are weakenings of (upper) semimodularity and lower semimodularity respectively, and then show how these properties have a simple description in the dual digraphs. Combined with previous work on dual digraphs of semidistributive lattices (2022), it leads to a dual representation of finite meet-distributive lattices. This provides a natural link to finite convex geometries. In addition, we present two sufficient conditions on a finite TiRS digraph for its dual lattice to be modular. We close by posing four open problems.
semimodular lattice, lower semimodular lattice, modular lattice, TiRS digraph, meet-distributive lattice, finite convex geometry
06B15, 06C10, 06C05, 05C20, 06A75
## 1 Introduction
The first dual representation of arbitrary bounded lattices was given by Urquhart in 1978 [13]. Since then, many different authors have attempted to provide dualities and dual representations of classes of lattices that are not necessarily distributive (see the recent survey by the first author [3]).
In this paper we examine representations for finite lattices that satisfy conditions related to meet-distributivity and modularity. The dual structures of these finite lattices will be TiRS digraphs satisfying some additional conditions. It was shown by Craig, Gouveia and Haviar [4] that there is a one-to-one correspondence between the class of finite lattices and finite digraphs known as TiRS digraphs (see Definition 2.4 and Theorem 2.6). We remark that this
correspondence generalises Birkhoff's one-to-one correspondence between finite distributive lattices and finite posets from the 1930s.
We introduce and study lattice-theoretic conditions which generalise both lower semimodularity and (upper) semimodularity for finite lattices and seem to be more natural and simpler than the conditions from [8]. We are also able to provide equivalent conditions to them on the dual TiRS digraph of a finite lattice. We can combine our lattice-theoretic conditions with our previous results [6] to characterise the dual digraphs of finite meet-distributive lattices, which correspond to finite convex geometries.
Currently, the only known dual characterisation of finite modular lattices is given by the theory of Formal Concept Analysis [8]. A rather complicated condition is available for the standard context dual to a finite semimodular lattice [8, Theorem 42]. We are able to provide conditions on the dual digraph of a finite lattice, which are sufficient though not necessary for modularity of the lattice.
The paper is laid out as follows. In Section 2 we provide some background definitions and results that will be needed later on in the paper. Section 3 defines two conditions which generalise, respectively, (upper) semimodularity and lower semimodularity. We focus on the generalisation of lower semimodularity--a condition we call (JM-LSM) (see Definition 3.6). We characterise the dual of (JM-LSM) on the dual digraphs of finite lattices. For completeness we state corresponding conditions and results related to upper semimodularity. In Section 4 we combine the results of Section 3 with results from a recent paper by Craig, Haviar and Sao Joao [6]. There, characterisations were given of the digraphs dual to finite join- and meet-semidistributive lattices (and hence also finite semidistributive lattices). The combination of these dual characterisations gives us a characterisation of the dual digraphs of finite meet-distributive lattices (also known as locally distributive lattices). Furthermore, this allows us to describe a new class of structures that is in a one-to-one correspondence with finite convex geometries. In Section 5 we give two sufficient conditions on a finite TiRS digraph for the dual lattice to be modular. In Section 6 we explicitly list four open problems and we also indicate why the task of describing digraphs dual to finite modular lattices is challenging.
## 2 Preliminaries
Central to the representation of a finite lattice that we will use is the notion of a maximal-disjoint filter-ideal pair. This can, equivalently, be viewed as a maximal partial homomorphism from a lattice \(L\) into the two-element lattice.
**Definition 2.1** ([13, Section 3]).: Let \(L\) be a lattice. Then \(\langle F,I\rangle\) is a _disjoint filter-ideal pair_ of \(L\) if \(F\) is a filter of \(L\) and \(I\) is an ideal of \(L\) such that \(F\cap I=\varnothing\). A disjoint filter-ideal pair \(\langle F,I\rangle\) is said to be a _maximal disjoint filter-ideal pair_ (MDFIP) if there is no disjoint filter-ideal pair \(\langle G,J\rangle\neq\langle F,I\rangle\) such that \(F\subseteq G\) and \(I\subseteq J\).
The following fact was noted by Urquhart. It is needed for our characterisation of MDFIPs in Theorem 3.2.
**Proposition 2.2** ([13, p. 52]).: _Let \(L\) be a finite lattice. If \(\langle F,I\rangle\) is an MDFIP of \(L\) then \(\bigwedge F\) is join-irreducible and \(\bigvee I\) is meet-irreducible._
The set of join-irreducible elements of \(L\) is denoted \(\mathsf{J}(L)\) and the set of meet-irreducible elements is denoted \(\mathsf{M}(L)\).
Given a lattice \(L\), we will add a set of arcs to the set of MDFIPs of \(L\). The use of such digraphs for lattice representation is due to Ploscica [10]. We point out that the original work using (topologised) digraphs used so-called _maximal partial homomorphisms_ (see [10, Section 1]). It is easy to show that these are in a one-to-one correspondence with MDFIPs.
For a lattice \(L\), we now present its dual digraph \(G_{L}=(X_{L},E)\) where the vertices are the MDFIPs of \(L\). Ploscica's relation \(E\), when transferred to the set of MDFIPs, is defined below for two MDFIPs \(\langle F,I\rangle\) and \(\langle G,J\rangle\):
* \(\langle F,I\rangle E\langle G,J\rangle\quad\iff\quad F\cap J=\emptyset\).
For finite lattices every filter is the up-set of a unique element and every ideal is the down-set of a unique element, so we can represent every disjoint filter-ideal pair \(\langle F,I\rangle\) by an ordered pair \(\langle\uparrow a,\downarrow b\rangle\) where \(a=\bigwedge F\) and \(b=\bigvee I\). Hence for finite lattices we have \(\langle\uparrow a,\downarrow b\rangle E\langle\uparrow c,\downarrow d\rangle\) if and only if \(a\not\leqslant d\). For a digraph \(G=(V,E)\) we let \(xE=\{\,y\in V\mid xEy\,\}\) and \(Ex=\{\,y\in V\mid yEx\,\}\). The next lemma is easy to prove and it will be useful later on.
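Since everything above is finite, the definitions can be checked by brute force. The following sketch (in Python; the encoding of \(N_{5}\), the helper names and the representation of the order as a set of pairs are our own illustrative choices, not part of the source) enumerates the MDFIPs \(\langle\uparrow a,\downarrow b\rangle\) of a finite lattice and builds Ploscica's relation \(E\); on \(N_{5}\) it returns the three vertices and the arcs shown in Figure 1.

```python
from itertools import product

# The pentagon N5: 0 < a < 1 and 0 < b < c < 1, with a incomparable to b and c.
N5 = ['0', 'a', 'b', 'c', '1']
order = ({(u, u) for u in N5} | {('0', v) for v in N5}
         | {(v, '1') for v in N5} | {('b', 'c')})
leq = lambda u, v: (u, v) in order

def mdfips(L, leq):
    """All MDFIPs <up(a), down(b)>, encoded as pairs (a, b).

    Disjointness of up(a) and down(b) means a is not <= b; maximality means
    the filter cannot grow (every a' < a already lies below b) and the
    ideal cannot grow (every b' > b already lies above a)."""
    return [(a, b) for a, b in product(L, L)
            if not leq(a, b)
            and all(leq(x, b) for x in L if leq(x, a) and x != a)
            and all(leq(a, y) for y in L if leq(b, y) and y != b)]

verts = mdfips(N5, leq)   # [('a', 'c'), ('b', 'a'), ('c', 'b')] in some order
# Ploscica's relation: <up(a), down(b)> E <up(c), down(d)>  iff  a is not <= d.
E = {(u, v) for u in verts for v in verts if not leq(u[0], v[1])}
```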
**Lemma 2.3**.: _Let \(G_{L}=(X_{L},E)\) be the dual digraph of a finite lattice \(L\). If \(x=\langle\uparrow a,\downarrow b\rangle\) and \(y=\langle\uparrow c,\downarrow d\rangle\), then_
* \(xE\subseteq yE\) _if and only if_ \(a\leqslant c\)_;_
* \(Ex\subseteq Ey\) _if and only if_ \(d\leqslant b\)_._
Figure 1 shows three lattices and their dual digraphs. These three examples will be important throughout this paper. To make the labelling more succinct, we have denoted by \(ab\) the MDFIP \(\langle\uparrow a,\downarrow b\rangle\). We have also left out the loop on each vertex to keep the display less cluttered. Observe that the directed edge set is not a transitive relation. The labels \(L_{4}\) and \(L_{4}^{\partial}\) (as well as \(L_{3}^{\partial}\) which appears later) come from the paper by Davey et al. [7].
The digraphs coming from lattices were described by Craig, Gouveia and Haviar [4].
**Definition 2.4** ([4, Definition 2.2]).: A TiRS digraph \(G=(V,E)\) is a set \(V\) and a reflexive relation \(E\subseteq V\times V\) such that:
* (S) If \(x,y\in V\) and \(x\neq y\) then \(xE\neq yE\) or \(Ex\neq Ey\).
* (R) For all \(x,y\in V\), if \(xE\subset yE\) then \((x,y)\notin E\), and if \(Ey\subset Ex\) then \((x,y)\notin E\).
* (Ti) For all \(x,y\in V\), if \(xEy\) then there exists \(z\in V\) such that \(zE\subseteq xE\) and \(Ez\subseteq Ey\).
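These three conditions are directly algorithmic. Continuing the running Python sketch (the function name and the encoding are ours), a finite reflexive digraph can be tested as follows; the dual digraph of \(N_{5}\) computed above passes, in accordance with Proposition 2.5 below.

```python
def is_tirs(V, E):
    """Check the TiRS conditions (S), (R) and (Ti) for a reflexive digraph."""
    xE = {x: {y for y in V if (x, y) in E} for x in V}   # out-neighbourhoods
    Ex = {x: {y for y in V if (y, x) in E} for x in V}   # in-neighbourhoods
    S = all(xE[x] != xE[y] or Ex[x] != Ex[y]
            for x in V for y in V if x != y)
    R = all((x, y) not in E for x in V for y in V
            if xE[x] < xE[y] or Ex[y] < Ex[x])           # '<' is proper inclusion
    Ti = all(any(xE[z] <= xE[x] and Ex[z] <= Ex[y] for z in V)
             for (x, y) in E)
    return S and R and Ti

assert is_tirs(set(verts), E)   # the dual digraph of N5 is a TiRS digraph
```

The result below gives a description of the dual digraphs of lattices.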
**Proposition 2.5** ([4, Proposition 2.3]).: _For any bounded lattice \(L\), its dual digraph \(G_{L}=(X_{L},E)\) is a TiRS digraph._
We recall from [10] a fact concerning general graphs \(G=(X,E)\). Let \(\underline{2}=(\{0,1\},\leqslant)\) denote the two-element graph. A partial map \(\varphi\colon X\to\underline{2}\) preserves the relation \(E\) if \(x,y\in\operatorname{dom}\varphi\) and \(xEy\) imply \(\varphi(x)\leqslant\varphi(y)\). The set of maximal partial \(E\)-preserving maps (i.e. those that cannot be properly extended) from \(G\) to \(\underline{2}\) is denoted by \(\operatorname{\mathfrak{G}}^{\operatorname{mp}}(G,\underline{2})\). We use the abbreviation MPEs for such partial maps.
For a graph \(G=(X,E)\) and \(\varphi,\psi\in\operatorname{\mathfrak{G}}^{\operatorname{mp}}(G,\underline{2})\), it was shown by Ploscica [10, Lemma 1.3] that \(\varphi^{-1}(1)\subseteq\psi^{-1}(1)\iff\psi^{-1}(0)\subseteq\varphi^{-1}(0)\). This implies that the reflexive and transitive binary relation \(\leqslant\) defined on \(\operatorname{\mathfrak{G}}^{\operatorname{mp}}(G,\underline{2})\) by \(\varphi\leqslant\psi\iff\varphi^{-1}(1)\subseteq\psi^{-1}(1)\) is a partial order. In fact, this is a lattice order [5, Theorem 2.3]. For a graph \(G=(X,E)\), denote by \(\mathbb{C}(G)\) the (complete) lattice of MPEs \((\operatorname{\mathfrak{G}}^{\operatorname{mp}}(G,\underline{2}),\leqslant)\).
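The lattice \(\mathbb{C}(G)\) can likewise be computed by exhaustive search. In the sketch below (our own illustrative encoding) an MPE is stored as the pair \((\varphi^{-1}(1),\varphi^{-1}(0))\); \(E\)-preservation amounts to having no arc from the \(1\)-set to the \(0\)-set. Run on the dual digraph of \(N_{5}\), it returns five MPEs whose order under inclusion of the \(1\)-sets is again \(N_{5}\), illustrating Theorem 2.6 below.

```python
from itertools import chain, combinations

def mpes(V, E):
    """All maximal partial E-preserving maps G -> 2, as pairs (A, B) with
    A = phi^{-1}(1) and B = phi^{-1}(0); V must be a set."""
    def subsets(S):
        S = list(S)
        return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
    result = []
    for A in map(set, subsets(V)):
        for B in map(set, subsets(V - A)):
            if any((a, b) in E for a in A for b in B):
                continue  # an arc from the 1-set to the 0-set: not E-preserving
            rest = V - A - B
            if any(all((v, b) not in E for b in B) for v in rest):
                continue  # some vertex could still be mapped to 1: not maximal
            if any(all((a, v) not in E for a in A) for v in rest):
                continue  # some vertex could still be mapped to 0: not maximal
            result.append((frozenset(A), frozenset(B)))
    return result

assert len(mpes(set(verts), E)) == 5   # C(G_{N5}) has 5 elements, and |N5| = 5
```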
The theorem below gives a one-to-one correspondence between finite lattices and finite TiRS digraphs. This result is essential to the work done in the rest of the current paper.
**Theorem 2.6** ([4, Theorem 1.7 and p. 87]).: _For any finite bounded lattice \(L\) we have that \(L\) is isomorphic to \(\mathbb{C}(G_{L})\) and for any finite TiRS digraph \(G=(V,E)\) we have that \(G\) is isomorphic to \(G_{\mathbb{C}(G)}\)._
## 3 Generalising lower and upper semimodularity
For lattice elements \(a\) and \(b\) we write \(a\prec b\) to denote that \(a\) is covered by \(b\). A lattice is _upper semimodular_ if whenever \(a\wedge b\prec a\) then \(b\prec a\lor b\). It is common to refer to such lattices as _semimodular_. A lattice is _lower semimodular_ if whenever \(a\prec a\lor b\) then \(a\wedge b\prec b\). We use (USM) and (LSM) as abbreviations for these two conditions.

Figure 1: Finite lattices \(N_{5}\), \(L_{4}\), \(L_{4}^{\partial}\) and their dual digraphs.
The lattices in Figure 1 provide useful examples: \(N_{5}\) satisfies neither (USM) nor (LSM), \(L_{4}\) satisfies (USM) but not (LSM), and \(L_{4}^{\partial}\) satisfies (LSM) but not (USM).
We will focus on lower semimodularity, rather than upper semimodularity, because of the connection between lower semimodularity and finite convex geometries (see Section 4). We note that modularity implies both semimodularity and lower semimodularity. If a lattice \(L\) has finite length and is semimodular and lower semimodular, then \(L\) is also modular (cf. [9, Corollary 376]). For further reading we refer to the book by Stern [12].
Figure 2 presents a number of different generalisations of distributivity and modularity (including those presented above) and the relationships between them. Observe that the conditions in the top left and top right, which are weakenings of (LSM) and (USM) respectively, are in fact conditions on the standard context dual to a finite lattice. For the necessary terms and notation, we refer to the book from where Figure 2 is taken [8, p. 234].
We begin by proving some new results about MDFIPs. These will be needed in the proofs of later results.
**Lemma 3.1**.: _Let \(L\) be a finite lattice._
1. _If_ \(b\in\mathsf{M}(L)\) _and_ \(b\prec a\lor b\)_, then_ \(\downarrow\!b\) _is maximal with respect to being disjoint from_ \(\uparrow\!a\)_._
2. _If_ \(a\in\mathsf{J}(L)\) _and_ \(a\wedge b\prec a\)_, then_ \(\uparrow a\) _is maximal with respect to being disjoint from_ \(\downarrow b\)_._

Figure 2: Relationships between generalisations of distributivity.
Proof.: Assume that \(b\in\mathsf{M}(L)\) and \(b\prec a\lor b\). This implies \(b<a\lor b\) and hence \(a\not\leqslant b\) and so \(\uparrow a\cap\downarrow b=\emptyset\). Suppose the ideal \(\downarrow b\) were to be extended to \(\downarrow c\) with \(b<c\) and \(\uparrow a\cap\downarrow c=\emptyset\). Since \(b\in\mathsf{M}(L)\), the element \(a\lor b\) is the unique upper cover of \(b\) and so \(a\lor b\in\downarrow c\). This implies \(a\lor b\in\uparrow a\cap\downarrow c\), a contradiction, showing the maximality of \(\downarrow b\) with respect to being disjoint from \(\uparrow a\).
Now assume that \(a\in\mathsf{J}(L)\) and \(a\wedge b\prec a\). Since \(a\wedge b<a\) we have \(a\not\leqslant b\) and so \(\uparrow a\cap\downarrow b=\emptyset\). If \(\uparrow a\) were extended to \(\uparrow d\) with \(d<a\) and \(\uparrow d\cap\downarrow b=\emptyset\), then \(d\leqslant a\wedge b\) (the unique lower cover of \(a\)). We get \(a\wedge b\in\uparrow d\cap\downarrow b\), which shows that \(\uparrow a\) is maximal with respect to being disjoint from \(\downarrow b\).
The next theorem gives a characterisation of MDFIPs.
**Theorem 3.2**.: _A disjoint filter-ideal pair \(\langle\uparrow a,\downarrow b\rangle\) is an MDFIP if and only if it satisfies the following conditions:_
1. \(a\in\mathsf{J}(L)\)_;_
2. \(b\in\mathsf{M}(L)\)_;_
3. \(b\prec a\lor b\)_;_
4. \(a\wedge b\prec a\)_._
Proof.: If \(\langle\uparrow a,\downarrow b\rangle\) is an MDFIP, by Proposition 2.2, \(a\in\mathsf{J}(L)\) and \(b\in\mathsf{M}(L)\). We also have \(b<a\lor b\), since \(b=a\lor b\) would imply \(a\in\downarrow b\). Suppose there exists \(c\in L\) such that \(b<c<a\lor b\). If \(a\leqslant c\) then \(c\) would be an upper bound for \(\{a,b\}\) and then \(a\lor b\leqslant c\). Therefore \(a\not\leqslant c\). This would make \(\langle\uparrow a,\downarrow c\rangle\) a disjoint filter-ideal pair with \(\downarrow b\subsetneq\downarrow c\), contradicting the maximality of the pair \(\langle\uparrow a,\downarrow b\rangle\). A dual argument can be applied to show that \(a\wedge b\prec a\).
Assume \(\langle\uparrow a,\downarrow b\rangle\) satisfies (i)-(iv). Lemma 3.1 then tells us that \(\downarrow b\) is maximal with respect to being disjoint from \(\uparrow a\) and vice versa. Hence \(\langle\uparrow a,\downarrow b\rangle\) is an MDFIP.
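Theorem 3.2 turns the enumeration of MDFIPs into a purely order-theoretic computation. The sketch below (continuing our running Python example; all names are ours) computes joins, meets, the covering relation and the irreducible elements, and confirms the theorem on \(N_{5}\).

```python
def lattice_ops(L, leq):
    """Join, meet, the covering relation, and the sets J(L), M(L)."""
    def join(x, y):
        ub = [z for z in L if leq(x, z) and leq(y, z)]
        return next(z for z in ub if all(leq(z, w) for w in ub))
    def meet(x, y):
        lb = [z for z in L if leq(z, x) and leq(z, y)]
        return next(z for z in lb if all(leq(w, z) for w in lb))
    def covers(x, y):  # x is covered by y
        return (x != y and leq(x, y) and
                not any(z not in (x, y) and leq(x, z) and leq(z, y) for z in L))
    J = [a for a in L if sum(covers(x, a) for x in L) == 1]  # unique lower cover
    M = [b for b in L if sum(covers(b, y) for y in L) == 1]  # unique upper cover
    return join, meet, covers, J, M

join, meet, covers, J, M = lattice_ops(N5, leq)
via_thm32 = {(a, b) for a in J for b in M
             if covers(b, join(a, b)) and covers(meet(a, b), a)}
assert via_thm32 == set(mdfips(N5, leq))   # Theorem 3.2, verified on N5
```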
The lemmas below will be used in our later investigations.
**Lemma 3.3**.: _Let \(L\) be a finite lattice, \(a,b\in L\). Then the following are equivalent:_
1. \(a\not\leqslant b\)_;_
2. _there exists_ \(j\in\mathsf{J}(L)\) _such that_ \(j\leqslant a\) _and_ \(j\not\leqslant b\)_;_
3. _there exists_ \(m\in\mathsf{M}(L)\) _such that_ \(b\leqslant m\) _and_ \(a\not\leqslant m\)_._
Proof.: It is well-known that in a finite lattice the set \(\mathsf{J}(L)\) is join-dense. Hence \(a\leqslant b\) is equivalent to the condition that for all \(j\in\mathsf{J}(L)\), \(j\leqslant a\) implies \(j\leqslant b\). This settles the equivalence of (i) and (ii). The equivalence of (i) and (iii) follows similarly from the meet-density of \(\mathsf{M}(L)\) in \(L\).
For \(a,b\in L\) we define the set \(T_{ab}:=\{\,m\in\mathsf{M}(L)\mid b\leqslant m,a\not\leqslant m\,\}\). An important consequence of Lemma 3.3 is that \(T_{ab}\) is non-empty whenever \(a\not\leqslant b\). This is needed for our next result.
**Lemma 3.4**.: _Let \(L\) be a finite lattice and \(a,b\in L\), \(a\nleqslant b\). Let \(d\) be a maximal element of \(T_{ab}\). Then \(d\prec d\lor a\)._
Proof.: Firstly, we point out that \(T_{ab}\) is a non-empty finite poset and hence has a maximal element. Since \(a\nleqslant d\), we have \(a\lor d\neq d\), and so \(d<d\lor a\). Suppose there exists \(c\in L\) such that \(d<c<d\lor a\). As \(d\lor a\nleqslant c\), by Lemma 3.3 there exists \(m\in\mathsf{M}(L)\) such that \(c\leqslant m\) but \(d\lor a\nleqslant m\). So \(d<m\). If \(a\leqslant m\) then \(d\lor a\leqslant m\), a contradiction. It follows that \(a\nleqslant m\); moreover \(b\leqslant d<m\), so \(m\in T_{ab}\). Since \(d\) was maximal in \(T_{ab}\) and \(d<m\), we get a contradiction. Hence \(d\prec d\lor a\).
From the previous lemmas one can derive the following result.
**Proposition 3.5**.: _Let \(L\) be a finite lattice with \(a\in\mathsf{J}(L)\) and \(b\in\mathsf{M}(L)\). Then_
1. _there exists_ \(m\in\mathsf{M}(L)\) _such that_ \(\langle\uparrow a,\downarrow m\rangle\) _is an MDFIP;_
2. _there exists_ \(j\in\mathsf{J}(L)\) _such that_ \(\langle\uparrow j,\downarrow b\rangle\) _is an MDFIP._
Proof.: We prove only (i), as then (ii) will follow by a dual argument. Since \(a\in\mathsf{J}(L)\), it has a unique lower cover \(c\). Clearly \(a\nleqslant c\), so by Lemma 3.4, there exists a maximal element \(m\in T_{ac}\) such that \(m\prec m\lor a\). From Lemma 3.1(i) we know that \(\downarrow m\) is maximal with respect to being disjoint from \(\uparrow a\). If it were possible to extend \(\uparrow a\) to \(\uparrow d\) with \(d<a\), then since \(c\) is the unique lower cover of \(a\), we would get \(c\in\uparrow d\cap\downarrow m\). Hence \(\uparrow a\) is maximal with respect to being disjoint from \(\downarrow m\). It follows that \(\langle\uparrow a,\downarrow m\rangle\) is an MDFIP.
We now define a new condition, (JM-LSM), which will be central to the results that follow. We believe it is a more natural weakening of (LSM) than the condition given in the top left of Figure 2. The name of the condition comes from the fact that it is almost identical to the condition (LSM), but the elements involved are quantified over \(\mathsf{J}(L)\) and \(\mathsf{M}(L)\).
**Definition 3.6**.: A finite lattice \(L\) satisfies (JM-LSM) if for any \(a\in\mathsf{J}(L)\) and \(b\in\mathsf{M}(L)\), if \(b\prec a\lor b\) then \(a\wedge b\prec a\).
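On a finite lattice, (JM-LSM) is a finite check over \(\mathsf{J}(L)\times\mathsf{M}(L)\). A minimal sketch, reusing `lattice_ops` from above (all names are ours):

```python
def jm_lsm(L, leq):
    """(JM-LSM): for a in J(L), b in M(L), b -< a v b implies a ^ b -< a."""
    join, meet, covers, J, M = lattice_ops(L, leq)
    return all(covers(meet(a, b), a)
               for a in J for b in M if covers(b, join(a, b)))

def lsm(L, leq):
    """(LSM): a -< a v b implies a ^ b -< b, for all a, b in L."""
    join, meet, covers, _, _ = lattice_ops(L, leq)
    return all(covers(meet(a, b), b)
               for a in L for b in L if covers(a, join(a, b)))

assert not lsm(N5, leq) and not jm_lsm(N5, leq)   # N5 satisfies neither
```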
**Example 3.7**.: Condition (JM-LSM) is a proper weakening of the condition (LSM). Indeed, the lattice in Figure 3 satisfies (JM-LSM) but not (LSM). To see this, observe that \(c\prec c\lor d\) and \(c\wedge d\not\prec d\), yet \(d\notin\mathsf{J}(L)\).
We note that the lattice \(L_{4}\) in Figure 1 does not satisfy (LSM), and also does not satisfy (JM-LSM): \(c\in\mathsf{J}(L)\), \(a\in\mathsf{M}(L)\) and \(a\prec c\lor a\), yet \(c\wedge a\not\prec c\).
Below is a condition that we will prove is equivalent to (JM-LSM). It will assist us in proving that the digraph condition (LTi), given in Definition 3.11, can be used to characterise the dual digraphs of finite (JM-LSM) lattices.
**Definition 3.8**.: Condition (L-abc): Let \(a\in\mathsf{J}(L)\) and \(b\in\mathsf{M}(L)\). If \(a\nleqslant b\) then there exists \(c\geqslant b\) such that \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP.
Notice that if \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP, then Proposition 2.2 (cf. also Theorem 3.2) implies that for the element \(c\) in Definition 3.8 we have \(c\in\mathsf{M}(L)\). Notice also that the finite lattice \(L_{4}\) in Figure 1 does not satisfy (L-abc): we have \(a\in\mathsf{J}(L)\), \(c\in\mathsf{M}(L)\) and \(a\nleqslant c\) and there is no \(m\geqslant c\) such that \(\langle\uparrow a,\downarrow m\rangle\) is an MDFIP.
The following theorem shows that for finite lattices the central property (JM-LSM) can be characterised exactly via the condition (L-abc).
**Theorem 3.9**.: _A finite lattice satisfies (JM-LSM) iff it satisfies (L-abc)._
Proof.: Assume (JM-LSM) and let \(a\in\mathsf{J}(L)\), \(b\in\mathsf{M}(L)\) and \(a\not\leqslant b\). Let \(T_{ab}=\{m\in\mathsf{M}(L)\mid b\leqslant m\ \ \&\ a\not\leqslant m\}\). Then \(T_{ab}\) is a non-empty finite poset. Hence it has a maximal element, say \(c\). So \(c\in\mathsf{M}(L)\), \(b\leqslant c\) and \(\langle\uparrow a,\downarrow c\rangle\) is a disjoint filter-ideal pair. To show that \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP, by Theorem 3.2 we need to show that \(c\wedge a\prec a\) and \(c\prec c\lor a\). By (JM-LSM) we only need to prove \(c\prec c\lor a\), which follows from Lemma 3.4. We have shown that (L-abc) holds.
Now assume (L-abc). To show (JM-LSM), let \(a\in\mathsf{J}(L)\), \(b\in\mathsf{M}(L)\) and \(b\prec a\lor b\). We need to prove \(a\wedge b\prec a\). From \(b\prec a\lor b\) we have \(a\not\leqslant b\). By (L-abc) there exists \(c\geqslant b\) such that \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP. Hence \(c\in\mathsf{M}(L)\) and by Theorem 3.2, \(c\wedge a\prec a\). We claim that \(c=b\). Suppose that \(c>b\). Then, since \(b\in\mathsf{M}(L)\), it has a unique upper cover \(b^{\star}\). As \(b\prec a\lor b\), we get \(b^{\star}=a\lor b\). From \(c>b\) we have \(c\geqslant b^{\star}=a\lor b\geqslant a\). This contradicts the fact that \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP. Hence \(c=b\). This proves \(a\wedge b=c\wedge a\prec a\) as required.
**Remark 3.10**.: We notice that if a finite lattice \(L\) satisfies (L-abc), then in the situation \(a\not\leqslant b\) for \(a\in\mathsf{J}(L)\), \(b\in\mathsf{M}(L)\), an arbitrary maximal element of \(T_{ab}\) can be taken for the element \(c\geqslant b\) such that \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP. Indeed, if \(c\) is any maximal element of \(T_{ab}\), then \(c\in\mathsf{M}(L)\), \(a\not\leqslant c\), \(b\leqslant c\) and so by the assumed condition (L-abc) there is \(c^{\prime}\geqslant c\) such that \(\langle\uparrow a,\downarrow c^{\prime}\rangle\) is an MDFIP. Hence \(c^{\prime}\in\mathsf{M}(L)\), \(a\not\leqslant c^{\prime}\), \(b\leqslant c^{\prime}\), thus \(c^{\prime}\in T_{ab}\). From the maximality of \(c\) in \(T_{ab}\) we get \(c=c^{\prime}\) as required.
Now we present a digraph condition dual to (JM-LSM). The condition below is a strengthening of the (Ti) condition, and because of its connection to lower semimodularity, we have chosen the name (LTi). Later, in Definition 3.16, (UTi) is used for the dual condition related to upper semimodularity.
Figure 3: A finite lattice that satisfies (JM-LSM) but not (LSM). Its dual digraph (right) satisfies (LTi).
**Definition 3.11**.: Consider the following condition on a TiRS digraph \(G=(V,E)\):
\[\text{(LTi)}\qquad uEv\implies(\exists\,w\in V)(wE=uE\,\&\,Ew\subseteq Ev).\]
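As with the TiRS conditions, (LTi) is directly checkable on a finite digraph. A small sketch in the same style (the function name is ours):

```python
def lti(V, E):
    """(LTi): uEv implies there is w with wE = uE and Ew a subset of Ev."""
    xE = {x: {y for y in V if (x, y) in E} for x in V}
    Ex = {x: {y for y in V if (y, x) in E} for x in V}
    return all(any(xE[w] == xE[u] and Ex[w] <= Ex[v] for w in V)
               for (u, v) in E)

# Fails for the dual digraph of N5, matching (via Theorem 3.13 below)
# the failure of (JM-LSM) in N5 checked earlier.
assert not lti(set(verts), E)
```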
Note that (LTi) is not dual to (LSM) as Figure 3 shows. The next two results prove that it is (JM-LSM) that is dual to (LTi).
**Proposition 3.12**.: _A finite TiRS digraph satisfies_ (LTi) _if and only if it is the dual digraph of a lattice that satisfies_ (L-abc)_._
Proof.: Assume a finite lattice \(L\) satisfies (L-abc). To show that the dual digraph \(G_{L}\) satisfies (LTi), let \(u=\langle\uparrow a,\downarrow m\rangle\), \(v=\langle\uparrow j,\downarrow b\rangle\) be vertices of the digraph \(G\) and let \(uEv\), whence \(a\nleq b\). Then by (L-abc) there exists \(c\in\mathsf{M}(L)\) such that \(b\leqslant c\) and \(\langle\uparrow a,\downarrow c\rangle\) is an MDFIP. If we denote \(w=\langle\uparrow a,\downarrow c\rangle\) as a vertex of \(G\), then by Lemma 2.3 we have \(wE=uE\) and \(Ew\subseteq Ev\) as required.
For the converse, assume that a finite TiRS digraph \(G\) satisfies (LTi). To show that its dual lattice \(L\) satisfies (L-abc), let \(a\in\mathsf{J}(L)\), \(b\in\mathsf{M}(L)\) and \(a\nleq b\). Since \(a\in\mathsf{J}(L)\) and \(L\) is finite, by Proposition 3.5(i), there exists an element \(m\in M(L)\) such that \(u=\langle\uparrow a,\downarrow m\rangle\) is an MDFIP. Similarly, since \(b\in\mathsf{M}(L)\), by Proposition 3.5(ii) there exists \(j\in J(L)\) such that \(v=\langle\uparrow j,\downarrow b\rangle\) is an MDFIP. Since \(a\nleq b\), we have \(uEv\). Now, by (LTi), there is a vertex \(w=\langle\uparrow c,\downarrow d\rangle\in V(G)\) satisfying \(wE=uE\) and \(Ew\subseteq Ev\). Since \(wE=uE\), we get \(\uparrow c=\uparrow a\), so \(c=a\). Since \(Ew\subseteq Ev\), Lemma 2.3(ii) tells us that \(d\geqslant b\). This proves that \(d\) is the desired element such that \(\langle\uparrow a,\downarrow d\rangle\) is an MDFIP.
The main theorem of this section follows directly from Theorem 3.9 and Proposition 3.12.
**Theorem 3.13**.: _A finite TiRS digraph is the dual digraph of a finite lattice satisfying_ (JM-LSM) _if and only if it satisfies_ (LTi)_._
For completeness, we now state the conditions and results related to finite upper semimodular lattices and their dual digraphs.
**Definition 3.14**.: Let \(L\) be a finite lattice. We say that \(L\) satisfies the condition (JM-USM) if whenever \(a\in\mathsf{J}(L)\), \(b\in\mathsf{M}(L)\), and \(a\wedge b\prec a\), then \(b\prec a\lor b\). We say that \(L\) satisfies (U-abc) if whenever \(a\in\mathsf{J}(L)\) and \(b\in\mathsf{M}(L)\) and \(a\nleq b\) then there exists \(c\leqslant a\) such that \(\langle\uparrow c,\downarrow b\rangle\) is an MDFIP.
The proposition below connects the two conditions defined above.
**Proposition 3.15**.: _A finite lattice satisfies_ (U-abc) _iff it satisfies_ (JM-USM)_._
Our last definition is the condition (UTi) which is, like (LTi), a strengthening of the (Ti) condition from Definition 2.4.
**Definition 3.16**.: Consider the following condition on a finite TiRS digraph \(G=(V,E)\):
\[\text{(UTi)}\qquad uEv\implies(\exists\,w\in V)(wE\subseteq uE\,\&\,Ev=Ew).\]
**Theorem 3.17**.: _A finite TiRS digraph satisfies_ (UTi) _if and only if it is the dual digraph of a finite lattice that satisfies_ (JM-USM)_._
## 4 Dual digraphs of meet-distributive lattices
In this section we will combine the results from Section 3 with results about dual digraphs of finite join- and meet-semidistributive lattices from [6]. The goal is to give a description of the dual digraphs of finite meet-distributive lattices. This will give a description of a new class of structures that are in a one-to-one correspondence with the class of finite convex geometries. First, we recall some basic definitions.
A lattice \(L\) is _join-semidistributive_ if it satisfies the following quasi-equation for all \(a,b,c\in L\):
\[\text{(JSD)}\qquad a\lor b\approx a\lor c\quad\longrightarrow\quad a\lor b \approx a\lor(b\wedge c).\]
A lattice \(L\) is _meet-semidistributive_ if it satisfies the following quasi-equation for all \(a,b,c\in L\):
\[\text{(MSD)}\qquad a\wedge b\approx a\wedge c\quad\longrightarrow\quad a\wedge b \approx a\wedge(b\lor c).\]
A lattice is _semidistributive_ if it satisfies both (JSD) and (MSD).
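Both quasi-equations are finite checks as well; a sketch continuing the running example (names ours). It confirms that \(N_{5}\) is semidistributive, as observed in the next paragraph.

```python
def jsd(L, leq):
    """(JSD): a v b = a v c implies a v b = a v (b ^ c)."""
    join, meet, _, _, _ = lattice_ops(L, leq)
    return all(join(a, b) == join(a, meet(b, c))
               for a in L for b in L for c in L if join(a, b) == join(a, c))

def msd(L, leq):
    """(MSD): a ^ b = a ^ c implies a ^ b = a ^ (b v c)."""
    join, meet, _, _, _ = lattice_ops(L, leq)
    return all(meet(a, b) == meet(a, join(b, c))
               for a in L for b in L for c in L if meet(a, b) == meet(a, c))

assert jsd(N5, leq) and msd(N5, leq)   # N5 is semidistributive
```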
Considering the lattices in Figure 1 one can see that \(N_{5}\) is semidistributive, \(L_{4}\) is meet-semidistributive but not join-semidistributive, and \(L_{4}^{\partial}\) is join-semidistributive but not meet-semidistributive.
For a finite lattice \(L\) and \(a\in L\), consider \(\mu(a)=\bigwedge\{\,b\in L\mid b\prec a\,\}\). A finite lattice is _meet-distributive_ (also called _locally distributive_) if for any \(a\in L\), the interval \([\mu(a),a]\) is a distributive lattice (cf. [2, Section 5-2]).
The following equivalence is extracted from [2, Theorem 5-2.1].
**Theorem 4.1**.: _Let \(L\) be a finite lattice. Then the following are equivalent:_
1. \(L\) _is meet-distributive;_
2. \(L\) _satisfies_ (JSD) _and_ (LSM)_._
The results below use Theorem 4.1 to provide an additional characterisation of meet-distributive lattices using (JM-LSM), the condition that was central to Section 3. Later, we will use this to characterise their dual digraphs.
**Theorem 4.2**.: _If a finite lattice \(L\) satisfies_ (JM-LSM) _and_ (JSD)_, then it is lower semimodular._
Proof.: Let \(L\) be a finite lattice satisfying (JM-LSM) and (JSD). Let \(a,b\in L\) be arbitrary such that \(a\prec a\lor b\). We are going to show that \(a\wedge b\prec b\). We will proceed by contradiction.
Suppose that \(a\wedge b\not\prec b\). Since \(L\) is finite, there exists \(c\in L\) such that \(a\wedge b<c<b\). Then \(b\not\leqslant c\) and by Lemma 3.3 the set \(S_{cb}=\{\,j\in\mathsf{J}(L)\mid j\leqslant b,j\not\leqslant c\,\}\) is non-empty. Let \(p\) be a minimal element of \(S_{cb}\).
Suppose \(p\leqslant a\); then since \(p\leqslant b\), we get \(p\leqslant a\wedge b\leqslant c\), which is a contradiction, so \(p\nleqslant a\). Then by Lemma 3.3, the set \(T_{pa}=\{\,m\in\mathsf{M}(L)\mid a\leqslant m\text{ and }p\nleqslant m\,\}\) is non-empty. Let \(m\) be a maximal element of \(T_{pa}\). By Lemma 3.4, \(m\prec m\lor p\). Since \(m\in\mathsf{M}(L)\), \(p\in\mathsf{J}(L)\), and \(L\) satisfies (JM-LSM), we obtain \(m\wedge p\prec p\).
It is easy to see that \(c\wedge p<p\) as if \(c\wedge p=p\), then \(p\leqslant c\), which is a contradiction.
We will show in several steps that \(m\wedge p<c\wedge p\). Firstly, we will show \(m\wedge p\leqslant c\wedge p\). Suppose \(m\wedge p\nleqslant c\). By Lemma 3.3 there exists \(j\in J(L)\) satisfying \(j\leqslant m\wedge p\) and \(j\nleqslant c\). Then \(j\leqslant p\leqslant b\), so \(j\leqslant b\). Since \(p\) is a minimal element of \(S_{cb}\), and \(j\) is also in \(S_{cb}\), we obtain \(p=j\). Then \(p\leqslant m\wedge p\), so \(p=m\wedge p\). Hence \(p\leqslant m\), which is a contradiction. Therefore \(m\wedge p\leqslant c\). Since \(m\wedge p\leqslant p\), we have \(m\wedge p\leqslant c\wedge p\).
To show \(m\wedge p<c\wedge p\), suppose to the contrary that \(m\wedge p=c\wedge p\). We will continue by showing that \(a\lor c=a\lor b=a\lor p\).
Since \(p\leqslant b\) we have \(a\leqslant a\lor p\leqslant a\lor b\), and since \(a\prec a\lor b\), we get \(a=a\lor p\) or \(a\lor p=a\lor b\). But \(a\neq a\lor p\) since \(p\nleqslant a\), so \(a\lor b=a\lor p\). Also, since \(c\leqslant b\), we have \(a\leqslant a\lor c\leqslant a\lor b\). If \(a\lor c=a\), then \(c\leqslant a\), whence \(c\leqslant a\wedge b\), which contradicts \(a\wedge b<c\). So \(a\lor c=a\lor b=a\lor p\). Hence, \(m\lor c=(m\lor a)\lor c=m\lor(a\lor c)=m\lor(a\lor p)=(m\lor a)\lor p=m\lor p\).
Now, by (JSD), \(m\lor c=m\lor p=m\lor(c\wedge p)=m\vee(m\wedge p)=m\), which contradicts \(p\nleqslant m\). This shows that \(m\wedge p<c\wedge p\) as required.
Now we have \(m\wedge p<c\wedge p<p\). This contradicts \(m\wedge p\prec p\). Hence the element \(c\) cannot exist, which shows that \(a\wedge b\prec b\).
**Remark 4.3**.: Notice in the proof we actually use a weaker form of (JSD). We will say that a lattice \(L\) is _weakly join-semidistributive_ if it satisfies the following quasi-equation for all \(a\in\mathsf{M}(L)\), \(b\in\mathsf{J}(L)\), \(c\in L\):
\[\text{(W-JSD)}\qquad a\lor b\approx a\lor c\quad\longrightarrow\quad a\lor b \approx a\vee(b\wedge c).\]
Hence in Theorem 4.2 we actually showed that (JM-LSM) and (W-JSD) implies (LSM).
We notice that the lattice in Figure 3 satisfies (JM-LSM) but not (W-JSD): indeed \(c\in\mathsf{M}(L)\), \(b\in\mathsf{J}(L)\) and \(c\lor b=c\lor a\) but \(c\vee(b\wedge a)\neq c\lor a\).
The result below follows from Theorems 4.1 and 4.2.
**Corollary 4.4**.: _A finite lattice is meet-distributive if and only if it satisfies both (JM-LSM) and (JSD)._
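Corollary 4.4 gives an effective test for meet-distributivity (a sketch; the function name is ours):

```python
def meet_distributive(L, leq):
    # By Corollary 4.4: meet-distributive  <=>  (JSD) and (JM-LSM).
    return jsd(L, leq) and jm_lsm(L, leq)

assert not meet_distributive(N5, leq)   # N5 fails (JM-LSM), hence is not meet-distributive
```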
The following theorem provides a characterisation of the dual digraphs of join- and meet-semidistributive lattices. Notice that each of the conditions (i), (ii) and (iii) below is a strengthening of the (S) condition from the definition of TiRS digraphs (Definition 2.4).
**Theorem 4.5** ([6, Theorem 3.6]).: _Let \(G=(V,E)\) be a finite TiRS digraph with \(u,v\in V\). Then_
1. \(G\) _is the dual digraph of a finite lattice satisfying_ (JSD) _if and only if it satisfies the following condition:_ \[\text{(dJSD)}\qquad\text{if $u\neq v$ then $Eu\neq Ev$}.\]
2. \(G\) _is the dual digraph of a finite lattice satisfying_ (MSD) _if and only if it satisfies the following condition:_ \[\text{(dMSD)}\qquad\text{if $u\neq v$ then $uE\neq vE$}.\]
3. \(G\) _is the dual digraph of a finite semidistributive lattice if and only if it satisfies the following condition:_ \[\text{(dSD)}\qquad\text{if $u\neq v$ then $Eu\neq Ev$ and $uE\neq vE$}.\]
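The three digraph conditions amount to injectivity of the neighbourhood maps, so they are one-liners in our running encoding (names ours):

```python
def djsd(V, E):
    """(dJSD): u != v implies Eu != Ev, i.e. in-neighbourhoods are distinct."""
    Ex = [frozenset(y for y in V if (y, x) in E) for x in V]
    return len(set(Ex)) == len(Ex)

def dmsd(V, E):
    """(dMSD): u != v implies uE != vE, i.e. out-neighbourhoods are distinct."""
    xE = [frozenset(y for y in V if (x, y) in E) for x in V]
    return len(set(xE)) == len(xE)

# Consistent with Theorem 4.5(iii): N5 is semidistributive.
assert djsd(set(verts), E) and dmsd(set(verts), E)
```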
The next few results in this section link the properties discussed earlier to distributivity in lattices and transitivity in dual digraphs.
**Theorem 4.6**.: _Let \(G=(V,E)\) be a finite TiRS digraph that satisfies both_ (dMSD) _and_ (LTi)_. Then \(E\) is transitive._
Proof.: We first claim that if a finite TiRS digraph \(G=(V,E)\) satisfies both (dMSD) and (LTi), then for any vertices \(u,v\in V\), \(uEv\) implies \(Eu\subseteq Ev\). Indeed, \(uEv\) by (LTi) implies the existence of \(w\in V\) such that \(wE=uE\) and \(Ew\subseteq Ev\). By the property (dMSD), \(wE=uE\) means \(w=u\), whence \(Eu\subseteq Ev\) as required.
Now to show the transitivity of \(E\), if \(uEv\) and \(vEw\) for some vertices \(u,v,w\in V\), then by the above claim, \(Eu\subseteq Ev\) and \(Ev\subseteq Ew\). Hence \(Eu\subseteq Ew\), which means \(u\in Ew\), whence \(uEw\) as required.
**Proposition 4.7**.: _If \(G=(V,E)\) is a TiRS digraph with transitive \(E\), then \(G\) is a poset._
Proof.: As in a TiRS digraph \(G=(V,E)\) the relation \(E\) is reflexive, it only remains to show the antisymmetry of \(E\).
Assume for \(x,y\in V\) that \(xEy\) and \(yEx\). We firstly show that \(xE\subseteq yE\): if \(z\in V\) and \(z\in xE\), then \(xEz\) and with \(yEx\) we get \(yEz\) by transitivity of \(E\), hence \(z\in yE\) as required. Now \(xE\subset yE\) by the condition (R) would give \((x,y)\notin E\), a contradiction. Hence \(xE=yE\).
Analogously one can show that \(Ey\subseteq Ex\) and since \(Ey\subset Ex\) would by the condition (R) give \((x,y)\notin E\), we have \(Ey=Ex\). Using that \(G\) satisfies the separation property (S), it follows that \(x=y\) as required.
The result below follows from Birkhoff's representation, Theorem 4.6 and Proposition 4.7.
**Corollary 4.8**.: _If \(L\) satisfies_ (MSD) _and_ (JM-LSM)_, then \(L\) is distributive._
We now return to focus on finite meet-distributive lattices, with the goal of describing a class of digraphs connected to finite convex geometries.
Using the TiRS conditions, our conditions for the dual digraphs of (JM-LSM) and (JSD), respectively, and Corollary 4.4, we get the following dual condition for meet-distributivity. Notice how (dJSD) is a strengthening of the (S) condition, and (LTi) is a strengthening of the (Ti) condition.
**Theorem 4.9**.: _A finite digraph \(G=(V,E)\) with a reflexive relation \(E\) is the dual digraph of some finite meet-distributive lattice if and only if \(G\) satisfies the following conditions:_
(dJSD) _If \(x,y\in V\) and \(x\neq y\) then \(Ex\neq Ey\)._

(R) _For all \(x,y\in V\), if \(xE\subset yE\) then \((x,y)\notin E\), and if \(Ey\subset Ex\) then \((x,y)\notin E\)._

(LTi) _For all \(x,y\in V\), if \(xEy\) then there exists \(z\in V\) such that \(zE=xE\) and \(Ez\subseteq Ey\)._
Proof.: Let \(G\) be the dual digraph of some finite meet-distributive lattice \(L\). Then by Theorem 2.6 the digraph \(G\) will satisfy (R). By Corollary 4.4, \(L\) satisfies (JSD) and (JM-LSM). Hence by Theorem 4.5(i), \(G\) satisfies (dJSD). Lastly, by Theorem 3.13, \(G\) will satisfy (LTi).
Conversely, assume \(G\) satisfies (dJSD), (R) and (LTi). Clearly \(G\) is a TiRS digraph, hence the dual of a finite lattice \(L\). Theorem 4.5(i) shows that \(L\) satisfies (JSD) and Theorem 3.13 implies that \(L\) satisfies (JM-LSM). Hence by Corollary 4.4, \(L\) is meet-distributive.
The theorem above establishes a one-to-one correspondence between finite meet-distributive lattices and finite digraphs satisfying the conditions (dJSD), (R) and (LTi). It is a restriction of Theorem 2.6, while still generalising Birkhoff's one-to-one correspondence between finite distributive lattices and finite posets.
**Definition 4.10** ([9, Definition 30]).: Let \(X\) be a set and \(\phi:\mathcal{O}(X)\to\mathcal{O}(X)\). Then \(\phi\) is a _closure operator_ on \(X\) if for all \(Y,Z\in\mathcal{O}(X)\)
1. \(Y\subseteq\phi(Y)\);
2. \(Y\subseteq Z\) implies \(\phi(Y)\subseteq\phi(Z)\);
3. \(\phi(\phi(Y))=\phi(Y)\).
If \(X\) is a set and \(\phi\) a closure operator on \(X\) then the pair \(\langle X,\phi\rangle\) is called a _closure system_. For \(Y\subseteq X\) we say that \(Y\) is _closed_ if \(\phi(Y)=Y\). The closed sets of a closure operator \(\phi\) on \(X\) form a complete lattice, denoted by \(\operatorname{Cld}(X,\phi)\). A _zero-closure_ system is a closure system \(\langle X,\phi\rangle\) such that \(\phi(\emptyset)=\emptyset\).
Now we turn our attention to convex geometries. The presentation here follows that of the book chapter by Adaricheva and Nation [2].
**Definition 4.11** ([2, Definition 5-1.1]).: A closure system \(\langle X,\phi\rangle\) satisfies the _anti-exchange property_ if for all \(x\neq y\) and all closed sets \(A\subseteq X\),
(AEP) \(x\in\phi(A\cup\{y\})\) and \(x\notin A\) imply that \(y\notin\phi(A\cup\{x\})\).
**Definition 4.12** ([1, Definition 1.6]).: A zero-closure system that satisfies the anti-exchange property is called a _convex geometry_.
We now combine Theorem 4.9 with known equivalences to obtain the following characterisation of finite convex geometries. There are other equivalent conditions [2, Theorem 5-2.1] that we have not included here.
**Theorem 4.13**.: _Let \(L\) be a finite lattice. Then the following are equivalent:_
1. \(L\) _is the closure lattice_ \(\operatorname{Cld}(X,\phi)\) _of a closure space_ \(\langle X,\phi\rangle\) _with the_ (AEP)_._
2. \(L\) _is a meet-distributive lattice._
3. \(L\) _satisfies_ (JSD) _and_ (LSM)_._
4. \(L\) _satisfies_ (JSD) _and_ (JM-LSM)_._
5. \(L\) _is the lattice_ \(\mathbb{C}(G)\) _of a reflexive digraph_ \(G\) _satisfying_ (dJSD)_,_ (R) _and_ (LTi)_._
Proof.: The equivalences of (i), (ii) and (iii) are known [2, Theorem 5-2.1]. The equivalence of (iii) and (iv) is the result of Corollary 4.4, and the equivalence of (iv) and (v) is Theorem 4.9.
## 5 Dual digraphs of finite modular lattices
In this section we provide two sufficient conditions for a finite TiRS digraph to be the dual digraph of a finite modular lattice.
For \(i=0,1,2\), let us denote by \(G_{i}=(V_{i},E_{i})\) an induced subgraph of \(G_{N_{5}}\) (see Figure 1) with \(V_{i}=\{x,y,z\}\) and with \(i\) of the arcs \(xEy\) and \(yEz\) missing compared to \(G_{N_{5}}\). (For \(i=1\) we can, w.l.o.g., consider the arc \(yEz\) missing.) Hence \(G_{0}=G_{N_{5}}\), \(G_{1}\) has one arc and an isolated vertex, and \(G_{2}\) has no arc and consists of two isolated vertices. All three digraphs are reflexive, hence they have loops at each vertex.
We introduce the following condition for the dual digraph \(G_{L}\) of a finite lattice \(L\) in terms of "Forbidden Induced Subgraphs":
* \(G_{L}\) has neither \(G_{0}=G_{N_{5}}\) nor \(G_{1}\) as an induced subgraph.
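Since \(G_{0}\) and \(G_{1}\) each live on three vertices, (FIS) can be tested by scanning all \(3\)-element vertex subsets: among the non-loop arcs of a reflexive digraph on three vertices, a single arc gives a copy of \(G_{1}\) and a directed \(2\)-path gives a copy of \(G_{0}\). A sketch (names ours); on the dual digraph of \(N_{5}\) the test fails, as it must since \(N_{5}\) is not modular.

```python
from itertools import combinations

def satisfies_fis(V, E):
    """(FIS): no 3-vertex induced subgraph is isomorphic to G_0 or G_1."""
    for T in combinations(V, 3):
        arcs = {(u, v) for u in T for v in T if u != v and (u, v) in E}
        if len(arcs) == 1:
            return False                      # induced copy of G_1
        if len(arcs) == 2:
            (p, q), (r, s) = arcs
            if (q == r and p != s) or (s == p and r != q):
                return False                  # induced copy of G_0: a directed 2-path
    return True

assert not satisfies_fis(set(verts), E)       # G_{N5} contains G_0, of course
```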
The next lemma and two propositions lead to showing that the condition (FIS) is sufficient for modularity of a finite lattice \(L\). Note that by Lemma 3.3, for \(a,b\in L\) with \(a\not\leqslant b\), there always exist elements \(\underline{a}\leqslant a\) and \(\overline{b}\geqslant b\) such that \(\langle\uparrow\underline{a},\downarrow\overline{b}\rangle\) is an MDFIP. Below we write \(a||b\) to indicate that \(a\not\leqslant b\) and \(b\not\leqslant a\).
**Lemma 5.1**.: _Let \(a,b,c,0,1\) be any elements of the lattice that form a sublattice isomorphic to \(N_{5}\)_(_where \(0<a,b,c<1\), \(b<c\) and \(a||b,a||c\)_). Let \(x=\langle\uparrow\underline{a},\downarrow\overline{c}\rangle\), \(y=\langle\uparrow\underline{c},\downarrow\overline{b}\rangle\) and \(z=\langle\uparrow\underline{b},\downarrow\overline{a}\rangle\) be any maximal disjoint extensions of \(\langle\uparrow a,\downarrow c\rangle\), \(\langle\uparrow c,\downarrow b\rangle\) and \(\langle\uparrow b,\downarrow a\rangle\), respectively. Then the induced subgraph \(\{x,y,z\}\) of \(G_{L}\) is isomorphic either to \(G_{0}=G_{N_{5}}\), \(G_{1}\), or \(G_{2}\)._
Proof.: First we must confirm that \(x,y,z\) are distinct MDFIPs. If \(x=y\) then \(\uparrow\underline{a}=\uparrow\underline{c}\) which implies \(\uparrow\underline{a}\cap\downarrow\overline{c}\neq\emptyset\), i.e. \(x\) would not be an MDFIP. If \(x=z\) then \(\uparrow\underline{a}=\uparrow\underline{b}\) which means \(z\) would not be an MDFIP. Lastly, if \(y=z\) then \(\downarrow\overline{b}=\downarrow\overline{a}\) and \(z\) would not be an MDFIP.
We claim that in the induced subgraph \(\{x,y,z\}\) of \(G_{L}\), the arcs \(xEy\) and \(yEz\) are possible, but the induced subgraph \(\{x,y,z\}\) has none of the other four possible arcs between distinct vertices: indeed, the arcs \(yEx\), \(zEy\), \(xEz\) and \(zEx\) are not present in \(G_{L}\) because clearly \(c\in\uparrow\underline{c}\cap\downarrow\overline{c}\), \(b\in\uparrow\underline{b}\cap\downarrow\overline{b}\), \(a\in\uparrow\underline{a}\cap\downarrow\overline{a}\) and \(b\in\uparrow\underline{b}\cap\downarrow\overline{c}\), respectively.
Hence \(\{x,y,z\}\) is isomorphic to \(G_{i}\) in case \(i\) of the arcs \(xEy\) and \(yEz\) are missing in the induced subgraph \(\{x,y,z\}\) for \(i=0,1,2\).
**Proposition 5.2**.: _Let \(L\) be a finite lattice and assume that its dual digraph \(G_{L}=(V,E)\) satisfies_ (FIS)_. Then \(L\) is lower semimodular._
Proof.: Suppose to the contrary that \(L\) does not satisfy (LSM). Then there exist elements \(a,b\in L\) such that \(a\prec a\lor b\) but \(a\wedge b\not\prec b\). Then there exists an element \(c\in L\) such that \(a\wedge b<c<b\). Hence \(a\lor c\leqslant a\lor b\). Since \(a\prec a\lor b\), and \(a\leqslant a\lor c\leqslant a\lor b\), we get \(a\lor c=a\) or \(a\lor c=a\lor b\). If \(a\lor c=a\), then \(c\leqslant a\), so \(c\leqslant a\wedge b\), which contradicts \(a\wedge b<c\). It follows that \(a\lor c=a\lor b\). From \(c<b\) we get \(a\wedge c\leqslant a\wedge b\). Further, since \(a\wedge b<c\) we get \(a\wedge(a\wedge b)=a\wedge b\leqslant a\wedge c\). Thus \(a\wedge c=a\wedge b\).
Hence \(a,c,b,a\wedge b,a\lor b\) forms a sublattice isomorphic to \(N_{5}\) (see Figure 4). Let \(x=\langle\uparrow\underline{a},\downarrow\overline{b}\rangle\), \(y=\langle\uparrow\underline{b},\downarrow\overline{c}\rangle\) and \(z=\langle\uparrow\underline{c},\downarrow\overline{a}\rangle\) be arbitrary maximal disjoint extensions of \(\langle\uparrow a,\downarrow b\rangle\), \(\langle\uparrow b,\downarrow c\rangle\) and \(\langle\uparrow c,\downarrow a\rangle\), respectively. By Lemma 5.1, the induced subgraph \(\{x,y,z\}\) of \(G_{L}\) is isomorphic either to \(G_{0}=G_{N_{5}}\), \(G_{1}\), or \(G_{2}\). Using (FIS), \(\{x,y,z\}\) is isomorphic to \(G_{2}\). In particular, \(G_{L}\) does not have the arc \(yEz\), so \(\uparrow\underline{b}\cap\downarrow\overline{a}\neq\emptyset\), that is, \(\underline{b}\leqslant\overline{a}\). We first show that \(a<\overline{a}\): if \(a=\overline{a}\), then \(\underline{b}\leqslant a\), and since \(\underline{b}\leqslant b\) this gives \(\underline{b}\leqslant a\wedge b<c\leqslant\overline{c}\), contradicting the disjointness of the pair \(y\). Next, \(\underline{b}\,||\,a\): we have just seen that \(\underline{b}\leqslant a\) is impossible, and \(a\leqslant\underline{b}\) would give \(a\leqslant b\), contradicting \(a\,||\,b\). Since \(a\leqslant\overline{a}\) and \(\underline{b}\leqslant\overline{a}\), we get \(a\leqslant a\vee\underline{b}\leqslant\overline{a}\). As \(\underline{b}\nleqslant a\), we have \(a<a\vee\underline{b}\). Moreover \(a\vee\underline{b}<a\lor b\): if \(a\vee\underline{b}=a\lor b\), then \(b\leqslant a\lor b\leqslant\overline{a}\), whence \(\underline{c}\leqslant c<b\leqslant\overline{a}\), contradicting the disjointness of the pair \(z\). Thus \(a<a\vee\underline{b}<a\lor b\), which contradicts \(a\prec a\lor b\). Hence \(L\) satisfies (LSM).

Figure 4: The sublattices isomorphic to \(N_{5}\) in the proofs of Propositions 5.2 (left) and 5.3 (right).

**Proposition 5.3**.: _Let \(L\) be a finite lattice and assume that its dual digraph \(G_{L}=(V,E)\) satisfies_ (FIS)_. Then \(L\) is upper semimodular._

Proof.: Suppose to the contrary that \(L\) does not satisfy (USM). Then there exist \(a,b\in L\) such that \(a\wedge b\prec b\) but \(a\not\prec a\lor b\), and hence there exists \(d\in L\) with \(a<d<a\lor b\).
Analogous to the proof of Proposition 5.2, it can be shown that the elements \(b,a,d,a\wedge b,a\lor b\) form a sublattice isomorphic to \(N_{5}\) (see Figure 4).
Then by Lemma 5.1, arbitrary maximal disjoint extensions of \(\langle\uparrow\!b,\downarrow\!d\rangle\), \(\langle\uparrow\!d,\downarrow\!a\rangle\) and \(\langle\uparrow\!a,\downarrow\!b\rangle\), denoted by \(x=\langle\uparrow\!\underline{b},\downarrow\!\overline{d}\rangle\), \(y=\langle\uparrow\!\underline{d},\downarrow\!\overline{a}\rangle\) and \(z=\langle\uparrow\!\underline{a},\downarrow\!\overline{b}\rangle\), respectively, form an induced subgraph \(\{x,y,z\}\) of \(G_{L}\) that is isomorphic either to \(G_{0}=G_{N_{5}}\), \(G_{1}\), or \(G_{2}\). Using (FIS), \(\{x,y,z\}\) is isomorphic to \(G_{2}\).
In particular, it follows that \(G_{L}\) does not have the arc \(xEy\). Hence, \(\underline{b}\leqslant\overline{a}\). We can then get \(\underline{b}<b\) (as we got \(a<\overline{a}\) in Proposition 5.2--see the left lattice in Figure 4). Now either \(a\wedge b<\underline{b}\) or \(a\wedge b||\underline{b}\).
If \(a\wedge b<\underline{b}<b\), this contradicts \(a\wedge b\prec b\), so \(\underline{b}||a\wedge b\). We can also show \(b||\overline{a}\) (as we showed \(\underline{b}||a\) in Proposition 5.2).
Since \(a\leqslant\overline{a}\), we get \(a\wedge b\leqslant\overline{a}\wedge b\). We can again establish that \(a\wedge b<\overline{a}\wedge b\) and \(\overline{a}\wedge b<b\) (since \(b||\overline{a}\)), which contradicts \(a\wedge b\prec b\). Hence, our assumption that \(L\) does not satisfy (USM) leads to a contradiction.
Now we can deduce that the condition (FIS) is a sufficient condition for modularity of a finite lattice.
**Theorem 5.4**.: (**Sufficient condition for modularity**) Let \(L\) be a finite lattice with dual TiRS digraph \(G_{L}\). If \(G_{L}\) satisfies the condition (FIS) then \(L\) is modular.
Proof.: It follows by Propositions 5.2 and 5.3 that \(L\) satisfies both (LSM) and (USM). Since \(L\) is finite, we have that \(L\) is modular.
We notice that the dual digraph of the modular lattice \(M_{3}\) has neither \(G_{0}=G_{N_{5}}\) nor \(G_{1}\) as an induced subgraph (see Figure 5), hence it satisfies (FIS). The following example shows that the digraphs \(G_{0}\) and \(G_{1}\) cannot be dropped as forbidden induced subgraphs in the condition (FIS) for the dual digraph \(G_{L}\), which guarantees the modularity of a finite lattice \(L\).
**Example 5.5**.: The dual digraph of \(L_{3}^{\partial}\) in Figure 3 contains \(G_{0}\) as an induced subgraph, but not \(G_{1}\). Hence the lattice \(L_{3}^{\partial}\) (in addition to \(N_{5}\)) witnesses that the digraph \(G_{0}\) cannot be dropped from the condition (FIS).
The dual digraphs of the lattices \(L_{4}\) and \(L_{4}^{\partial}\) in Figure 1 do not contain \(G_{0}\) as an induced subgraph but they both contain \(G_{1}\) as an induced subgraph.
Figure 5: \(M_{3}\) and its dual digraph.
Hence these two examples witness that the digraph \(G_{1}\) cannot be dropped from the condition (FIS).
Now we are going to show that the condition (FIS) is not necessary for modularity. Indeed, it is not the case that every lattice whose dual digraph has \(G_{0}=G_{N_{5}}\) as an induced subgraph is a non-modular lattice. The next example gives a modular lattice whose dual digraph has \(G_{0}\) as an induced subgraph (but does not have \(G_{1}\) as an induced subgraph).
**Example 5.6**.: (**Condition (FIS) not necessary for modularity**) Figure 6 shows a modular lattice \(K\) on the left, and its dual digraph on the right. The induced subgraph isomorphic to \(G_{0}\) is shown with the dotted arrows (\(dcEcb\) and \(cbEed\)).
The fact that the dual TiRS digraph \(G_{L}=(V,E)\) of a finite modular lattice \(L\) does not contain \(G_{0}=G_{N_{5}}\) as an induced subgraph can be understood as some form of a "weak transitivity" condition for \(G_{L}\). We cannot have the arcs \(xEy\) and \(yEz\) in \(G_{L}\) without having also the arc \(xEz\) or at least the arc \(zEx\) (provided there are no "opposite" arcs \(yEx\) and \(zEy\) in \(G_{L}\)):
* (wT0) for all vertices \(x,y,z\in V\), if \(xEy\) and \(yEz\), but \((y,x)\notin E\) and \((z,y)\notin E\), then \(xEz\) or \(zEx\).
Similarly, the fact that the dual TiRS digraph \(G_{L}=(V,E)\) of a finite modular lattice \(L\) does not contain the digraph \(G_{1}\) as an induced subgraph can be understood as some form of a "weak transitivity" condition for \(G_{L}\):
* (wT1) for all vertices \(x,y,z\in V\), if \(xEy\) but \((y,x)\notin E\) and \((y,z)\notin E\) and \((z,y)\notin E\), then \(xEz\) or \(zEx\).
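Both quasi-equations translate verbatim into checks on arcs (a sketch, names ours; thanks to reflexivity no distinctness assumptions on \(x,y,z\) are needed, since the hypotheses already exclude the degenerate cases):

```python
def wt0(V, E):
    """(wT0) on a reflexive digraph (V, E)."""
    return all((x, z) in E or (z, x) in E
               for x in V for y in V for z in V
               if (x, y) in E and (y, z) in E
               and (y, x) not in E and (z, y) not in E)

def wt1(V, E):
    """(wT1) on a reflexive digraph (V, E)."""
    return all((x, z) in E or (z, x) in E
               for x in V for y in V for z in V
               if (x, y) in E and (y, x) not in E
               and (y, z) not in E and (z, y) not in E)
```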
**Example 5.7**.: It is easy to see that the dual digraph of the lattice \(M_{3}\) (Figure 5) satisfies the weak transitivity conditions (wT0) and (wT1). The lattices \(L_{4}\) and \(L_{4}^{\partial}\) in Figure 1, and \(L_{3}^{\partial}\) in Figure 3 are non-modular lattices. The weak transitivity condition (wT0) is not satisfied in the dual digraph of \(L_{3}^{\partial}\). In the dual digraph of the lattices \(L_{4}\) and \(L_{4}^{\partial}\) we see the failures of (wT1).
Figure 6: A finite modular lattice \(K\) whose dual digraph contains \(G_{0}=G_{N_{5}}\) as an induced subgraph.
We notice that the weak transitivity conditions (wT0) and (wT1) are essentially expressing on the digraph side that the digraph \(G_{L}\) does not contain respectively the graphs \(G_{0}\) and \(G_{1}\) as induced subgraphs.
Hence the sufficiency of the quasi-equations (wT0) and (wT1) on the dual TiRS digraphs \(G_{L}\) for the modularity of \(L\) comes as no surprise:
**Corollary 5.8**.: **(Sufficient condition for modularity by "weak transitivity")** Let \(L\) be a finite lattice with dual TiRS digraph \(G_{L}=(V,E)\). If \(G_{L}\) satisfies the weak transitivity conditions (wT0) and (wT1), then \(L\) is modular.
Proof.: Let the weak transitivity conditions (wT0) and (wT1) be satisfied in \(G_{L}\). Suppose for contradiction that the lattice \(L\) is not modular. Then by Theorem 5.4, for some \(i\in\{0,1\}\) the digraph \(G_{L}\) contains the digraph \(G_{i}\) as an induced subgraph on certain vertices \(x,y,z\in V\). It follows that the weak transitivity condition (wTi) is not satisfied.
## 6 Conclusions and future work
In this paper, we firstly (in Section 3) defined two lattice conditions which generalise lower semimodularity and (upper) semimodularity respectively. We were motivated by Figure 2, taken from Ganter and Wille's book [8] (see also the PhD thesis of Reppe [11, Chapter 3.7]). There, weakenings of (LSM) and (USM) are given using complicated conditions on standard contexts. Our lattice-theoretic conditions on finite lattices that are weakenings of (LSM) and (USM), which we call (JM-LSM) and (JM-USM), seem to be simpler than the mentioned conditions in Figure 2 and they are easily seen to be natural generalisations of (LSM) and (USM). Our focus was the generalisation of lower semimodularity, and we characterised the dual of (JM-LSM) on the dual digraphs of finite lattices. We think the answer to the question below will be affirmative, yet investigating it is beyond the scope of this paper.
**Problem 1**.: _Are the top left and top right conditions in Figure 2, in terms of Formal Concept Analysis [8], equivalent to (JM-LSM) and (JM-USM) respectively?_
In Section 4 we used the results of Section 3 to obtain a new characterisation of meet-distributive lattices in Corollary 4.4. Combining this with previous results [6], we obtained a characterisation of the dual digraphs of finite meet-distributive lattices. Theorem 4.13 shows that we have identified a new class of structures that is in a one-to-one correspondence with finite convex geometries.
In Remark 4.3 we gave a condition, (W-JSD), which is a weakening of join-semidistributivity. The lattice \(M_{3}\) satisfies (LSM) but not (W-JSD) and hence shows that (LSM) is not equivalent to (JM-LSM) and (W-JSD). This leads us to ask the following question.
**Problem 2**.: _Is there another weakening of (JSD) such that when it is combined with (JM-LSM), this will be equivalent to (LSM)?_
Theorem 4.9 gave three conditions ((dJSD), (R) and (LTi)) on reflexive digraphs, which characterise the dual digraphs of finite meet-distributive lattices. This leads to the posing of the following open problem.
**Problem 3**.: _Can the conditions (dJSD), (R) and (LTi) be combined to give fewer, and possibly simpler, conditions?_
In Section 5 we introduced the condition (FIS) on dual digraphs and showed that it implies both lower and upper semimodularity of a finite lattice. Hence (FIS) was shown to be a sufficient condition for modularity of a finite lattice (Theorem 5.4). We also formulated a sufficient condition for modularity in different terms in Corollary 5.8. The condition (FIS) was shown not to be necessary for modularity of a finite lattice and hence we raise the following open question.
**Problem 4**.: _Is it possible to find forbidden induced subgraphs that characterise the dual digraphs of finite modular lattices in an analogous way to how \(N_{5}\) characterises modularity?_
The task of representing structures (in our case digraphs) dual to finite modular lattices has proved to be very challenging. We note that in the setting of formal contexts dual to finite lattices, a condition dual to semimodularity has been obtained (cf. item (4) of [8, Theorem 42]). We have attempted to translate this condition to TiRS digraphs and the result was a complicated and opaque condition. We do not believe that the translation of this condition and its dual will yield a useful characterisation of the TiRS digraphs dual to finite modular lattices.
#### Acknowledgements
The first author acknowledges the hospitality of Matej Bel University during a visit in August-September 2022, as well as the National Research Foundation (NRF) of South Africa (grant 127266). The second author acknowledges his appointment as a Visiting Professor at the University of Johannesburg from June 2020 and the hospitality shown during a visit in July-August 2023. He further acknowledges support by Slovak VEGA grant 1/0152/22. The authors would like to thank Jose Sao Joao for useful discussions on these topics.
# On Frobenius Structures in Symmetric Cones

Noemie C. Combe

http://arxiv.org/abs/2309.04334v1
###### Abstract
We prove that in any strictly convex symmetric cone \(\Omega\) there exists a non empty locus where the WDVV equation is satisfied (i.e. there exists a hyperplane being a Frobenius manifold). This result holds over any real division algebra (with a restriction to the rank 3 case if we consider the field \(\mathbb{O}\)) but also on their linear combinations. This theorem holds as well in the case of pseudo-Riemannian geometry, in particular for a Lorentz symmetric cone of Anti-de-Sitter type. Our statement can be considered as a generalisation of a result by Ferapontov-Kruglikov-Novikov and Mokhov. Our construction is achieved by merging two different approaches: an algebraic/geometric one and the analytic approach given by Calabi in his investigations on the Monge-Ampere equation for the case of affine hyperspheres.
###### Contents
* 1 Introduction
* 2 Strictly convex symmetric cones
* 3 Hessian structures
* 4 Algebraic structures
* 5 Frobenius structures
* 6 The WDVV equation and symmetric cones
* 7 Conclusion and future work
## 1. Introduction
### The main problem and result
Consider a strictly convex symmetric cone in the Euclidean space. Does there exist a relation between strictly convex symmetric cones and the highly nonlinear Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) partial differential equation (whose solutions define Frobenius manifolds)? The answer to this question is yes. In this paper, we prove that in a strictly convex symmetric cone there exists a non-empty locus (a hyperplane) where the WDVV equation is satisfied. This result holds over any division algebra (as well as their linear combinations). In addition, it holds also for symmetric Lorentz cones of Anti-de-Sitter type. We separate the latter from the first class of cones due to the difference in the geometries that they carry. The construction of the proof allows a detailed exposition of the geometrical and algebraic aspects occurring around those types of manifolds and answers questions raised by Yu. I. Manin, related to our joint works.
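For orientation, we recall the standard form of this equation (this display is added here for the reader's convenience, in the usual normalisation with flat coordinates \((t^{1},\dots,t^{n})\) and a constant non-degenerate metric \(\eta\)): a potential \(\Phi\) satisfies the WDVV (associativity) equations if

\[\sum_{\mu,\nu}\frac{\partial^{3}\Phi}{\partial t^{\alpha}\partial t^{\beta}\partial t^{\mu}}\,\eta^{\mu\nu}\,\frac{\partial^{3}\Phi}{\partial t^{\nu}\partial t^{\gamma}\partial t^{\delta}}=\sum_{\mu,\nu}\frac{\partial^{3}\Phi}{\partial t^{\delta}\partial t^{\beta}\partial t^{\mu}}\,\eta^{\mu\nu}\,\frac{\partial^{3}\Phi}{\partial t^{\nu}\partial t^{\gamma}\partial t^{\alpha}}\qquad\text{for all indices }\alpha,\beta,\gamma,\delta.\]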
Our statement can be considered as a generalisation of a construction by Ferapontov-Kruglikov-Novikov [12] and Mokhov [13, 14] proving the existence of connections between the symplectic Monge-Ampere equations of Hirota type and the WDVV equation. The construction is achieved by merging two different approaches: an algebraic/geometric one and the analytic approach given by Calabi in his investigations on the Monge-Ampere equation for the case of affine hyperspheres.
For documentation on the importance of the WDVV equation in algebraic geometry we refer to [1, 13, 15, 16, 17, 18, 19, 20, 21]. Note that a manifold satisfying the WDVV equation corresponds, for algebraic geometers, to a Frobenius manifold. For a more differential-geometric approach, relating the WDVV equation to integrable systems, see for instance [11, 14, 13].
### Motivation
The reason for our question takes root in a series of differential geometry problems. In particular, discussions with M. Kontsevich [15] at the IHES during my stay in November-December 2022, and discussions with Yu. I. Manin that followed from our joint works [1, 16, 17, 18], have inspired this paper. One of the differential geometry problems related to what we consider is the problem of classifying flat Lagrangian submanifolds in \(\mathbb{R}^{2n}\) with a given pseudo-Riemannian metric. From the Hessian geometry perspective, having a flat Hessian metric corresponds exactly to satisfying the WDVV equation (see [10]). Finally, this problem is also connected to problems gravitating around manifolds of constant curvature ([14]).
### The realm of strictly convex symmetric cones: 1935-now
Strictly convex symmetric cones play a central role in many different domains. They first appeared in the works of Cartan [13] and Koszul [14, 15, 16, 17, 18, 19, 20]. Through the works of Minkowski, Siegel [21, 22], Maass [23], Piatetski-Shapiro [24] (and many others), those cones are important in number theory. In algebraic geometry, symmetric convex cones appear as the "cone of Kahler classes" in the case of an \(n\)-dimensional complex torus [25, 14]. Through the works of Wishart [25], Constantine [15, 16], James [10], Muirhead [16, 17], symmetric cones shine in statistics and in harmonic analysis [12]. More recent developments show links towards information geometry [1, 16, 17, 18, 19, 20, 21, 22]. Their importance is also revealed on the more applied side of mathematics, such as in optimization via convex programming and in machine learning.
### The rich geometry and algebra of symmetric cones
Behind the notion of strictly convex symmetric cones hides a rich geometric and algebraic world. Our proof is the fruit of merging two different languages, which intersect at the notion of those strictly convex symmetric cones. The first is a differential-geometric aspect. This comes from the relations between those cones and the Monge-Ampere equation (shown in Lem. 2). The more algebraic approach is inherited from Cartan's classification of symmetric spaces.
Recall the bridge between strictly convex symmetric cones and a special case of the Monge-Ampere equation. For \(x\in\mathbb{R}^{n}\) and \(\phi(x)\) a smooth function, this equation is:
\[\det\operatorname{Hess}(\phi)=k, \tag{1}\]
where \(k\) is a constant and \(\operatorname{Hess}(\phi)\) is the Hessian of \(\phi\).
Following [10], Eq. 1 has at most one convex solution in a bounded strictly convex domain, if \(\phi\) has prescribed boundary values. Interest is given to tensors defined by the second and third derivatives of \(\phi\). Given \(\phi\) a smooth function on an open subset of a real vector space, it is possible to define an associated Hessian metric. This metric is obtained by taking the second derivatives of \(\phi\). Hessian metrics are a natural way to construct Riemannian or pseudo-Riemannian metrics.
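As a concrete illustration of this construction (a small symbolic sketch; the choice of the cone and of the potential is ours, made only to exhibit the mechanism), consider the cone of positive definite symmetric \(2\times 2\) matrices with diagonal entries \(x,y\) and off-diagonal entry \(z\), i.e. \(x>0\) and \(u=xy-z^{2}>0\). The Hessian of \(\phi=-\log u\) defines a Hessian metric on the cone, and its Monge-Ampere determinant is constant on each level set of \(u\); in particular, Eq. 1 holds with \(k=2\) on the hypersurface \(u=1\).

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
u = x*y - z**2                    # determinant of the symmetric matrix [[x, z], [z, y]]
phi = -sp.log(u)                  # a characteristic-type potential on the cone u > 0

H = sp.hessian(phi, (x, y, z))    # the Hessian metric g = Hess(phi)
detH = sp.simplify(H.det())

print(detH)                       # 2/(x*y - z**2)**3
print(sp.simplify(detH * u**3))   # 2: det Hess(phi) is constant on each level set of u
```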
Using Cartan's classification of symmetric spaces, those cones can be expressed in a more algebraic flavour. It turns out that those strictly convex cones fall into two main classes. The first class corresponds to so-called Lagrangian Grassmannians, having nonpositive sectional curvature. The second class is formed by the Lorentz cones (of anti-de-Sitter type).
### A proof giving a new perspective
By merging the geometric and algebraic viewpoints discussed above, we create a method which allows us to prove our result. This new method additionally gives a very detailed description of the geometric and algebraic landscape occurring in this topic. For instance, using the language of Lie groups, we can reformulate one part of our statement by saying that in a noncompact Lagrangian-Grassmannian symmetric space there exist hypersurfaces satisfying the axioms of a Frobenius manifold. Those spaces are defined over a division algebra \(\mathbb{R},\mathbb{C},\mathbb{H},\mathbb{O}\) (or a linear combination of those algebras). Reciprocally, there exist hypersurfaces satisfying the axioms of a Frobenius manifold within a Lorentz symmetric cone (of anti-de-Sitter type).
### Connections to other works
In the real case, [14, 15] show that, using a reformulation of the dispersionless Hirota type equation (in the context of the symplectic Monge-Ampere equation), one can prove the existence of a hyperplane on which the WDVV equation is satisfied. In the real case, this statement is linked to our result and confirms it. Since our result holds over any division algebra (and their linear combinations), as well as for Lorentz Anti-de-Sitter (AdS) symmetric cones, our result gives a more general statement. Finally, we point out that the method chosen to prove our result illustrates the rich geometric and algebraic patterns hidden within those symmetric cones, and bridges two different ways of seeing them.
### Organisation of the paper
This paper is organised as follows: in the first three sections we expose the geometric and algebraic state of the art concerning the strictly convex symmetric cones. It is necessary to recall some known results for the proof. The last two sections are devoted to the construction of the proof of the main statement.
**Acknowledgments** I would like to thank M. Kontsevich for many discussions and comments during my stays (spring and winter 2022) at the IHES. The Max Planck Institutes for Mathematics in Bonn (MPIM) and in Leipzig (MPIMIS) are gratefully acknowledged for having supported my research. I am also grateful to the grant Polonez-bis 3 for supporting my research.
## 2. Strictly convex symmetric cones
This section focuses first on what is known about the geometric aspects of those strictly convex symmetric cones. The second part will bring a (complementary) algebraic version.
### Strictly convex cones
In the following parts of this article we always consider _strictly convex cones_. Note that for brevity we simply refer to them as _convex cones_.
Let us recall some elementary notions on strictly convex cones (see [10] for further information).
**Definition 1**.: _Let \(V\) be a finite dimensional real vector space. Let \(\left\langle-,-\right\rangle\) be a non-singular symmetric bilinear form on \(V\). A subset \(\Omega\subset V\) is a convex cone if and only if \(x,y\in\Omega\) and \(\lambda,\mu>0\) imply \(\lambda x+\mu y\in\Omega\)._
### Homogeneous cones
The automorphism group \(G(\Omega)\) of an open convex cone \(\Omega\) is defined by
\[G(\Omega)=\{g\in GL(V)\,|\,g\Omega=\Omega\}\]
An element \(g\in GL(V)\) belongs to \(G(\Omega)\) iff \(g\overline{\Omega}=\overline{\Omega}\) [11]. So, \(G(\Omega)\) is a closed subgroup of \(GL(V)\) and forms a Lie group. The cone \(\Omega\) is said to be _homogeneous_ if \(G(\Omega)\) acts transitively upon \(\Omega\).
### Symmetric cones
From homogeneous cones one can construct symmetric convex cones. Let us introduce the definition of an open dual cone. An open dual cone \(\Omega^{*}\) of an open convex cone is defined by \(\Omega^{*}=\{y\in V\,|\,\langle x,y\rangle>0,\,\forall\,x\in\overline{\Omega} \setminus 0\}\). A homogeneous convex cone \(\Omega\) is symmetric if \(\Omega\) is self-dual i.e. \(\Omega^{*}=\Omega\). Note that if \(\Omega\) is homogeneous then so is \(\Omega^{*}\). A symmetric homogeneous cone is called a _Vinberg cone_.
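Two standard examples may help fix ideas. The open positive orthant \(\mathbb{R}^{n}_{+}=\{x\in\mathbb{R}^{n}\,|\,x_{i}>0\}\) is self-dual with respect to the standard inner product: if all \(y_{i}>0\) then \(\langle x,y\rangle>0\) for every nonzero \(x\in\overline{\mathbb{R}^{n}_{+}}\), while if some \(y_{i}\leq 0\) one may test against \(x=e_{i}\). Similarly, the cone \(\Pi_{n}(\mathbb{R})\) of positive definite symmetric matrices is self-dual with respect to the bilinear form \(\langle X,Y\rangle=Tr(XY)\).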
**Remark 1**.: _If \(\Omega\) is a symmetric open cone in \(V\), then \(\Omega\) is a symmetric Riemann space._
### Automorphism group
Let us go back to the automorphism group of \(\Omega\). This discussion relies on Prop I.1.8 and Prop. I.1.9 in [10].
Let \(\Omega\) be a symmetric cone in \(V\). For any point \(a\in\Omega\) the stabilizer of \(a\) in \(G(\Omega)\) is given by
\[G_{a}=\{g\in G(\Omega)\,|\,ga=a\}.\]
By [12], if \(\Omega\) is a proper open homogeneous convex cone then for any \(a\) in \(\Omega\), \(G_{a}\) is compact. Now, if \(H\) is a compact subgroup of \(G\) then \(H\subset G_{a}\) for some \(a\) in \(\Omega\). This means that the groups \(G_{a}\) are all maximal compact subgroups of \(G\) and that if \(\Omega\) is homogeneous then all these subgroups are isomorphic.
By [Prop. I.1.9, [10]], if \(\Omega\) is a symmetric cone, there exist points \(e\) in \(\Omega\) such that \(G(\Omega)\cap O(V)\subset G_{e}\), where \(O(V)\) is the orthogonal group of \(V\). For every such \(e\) one has \(G_{e}=G\cap O(V)\).
Suppose \(\Omega\) is a convex homogeneous domain in \(V\). Assume that
* \(G(\Omega)\) is the group of all automorphisms;
* \(G_{e}=K(\Omega)\) is the stability subgroup of a point \(e\in\Omega\);
* \(T(\Omega)\) is a maximal connected triangular subgroup of \(G(\Omega)\).
Following [11, Thm. 1] we have:
\[G(\Omega)=K(\Omega)\cdot T(\Omega),\]
where \(K(\Omega)\cap T(\Omega)=e\) and the group \(T\) acts simply transitively.
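For instance (a worked example of this decomposition), take \(\Omega=\Pi_{n}(\mathbb{R})\), on which \(GL_{n}(\mathbb{R})\) acts by \(x\mapsto gxg^{T}\). The stabilizer of the identity matrix is the orthogonal group, and \(G(\Omega)=K(\Omega)\cdot T(\Omega)\) is essentially the classical \(QR\) decomposition: every invertible matrix \(g\) factors uniquely as

\[g=k\,t,\qquad k\in O(n),\quad t\ \text{upper triangular with positive diagonal}.\]

The simply transitive action of \(T(\Omega)\) then corresponds to the Cholesky factorization \(x=tt^{T}\) of a positive definite symmetric matrix.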
This decomposition on the Lie group side leads naturally to its Lie algebra. Cartan's decomposition for the Lie algebra tells us that \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{t}\),
where:
* \(\mathfrak{t}\) can be identified with the tangent space of \(\Omega\) at \(e\).
* \(\mathfrak{h}\) is the Lie algebra associated to \(K(\Omega)\)
and
\[[\mathfrak{t},\mathfrak{t}]\subset\mathfrak{h},\]
\[[\mathfrak{h},\mathfrak{t}]\subset\mathfrak{t}.\]
From now on, assume that \(G\) is semi-simple. The Killing bilinear form is thus non-degenerate on \(\mathfrak{g}\) and the symmetric bilinear form is given by:
\[\langle X,Y\rangle=-Tr(adX\,adY)\]
where \(ad\,X(\xi)=[X,\xi]\) and \(ad\,Y(\xi)=[Y,\xi]\).
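For instance, for \(\mathfrak{g}=\mathfrak{sl}_{n}(\mathbb{R})\) one has \(\mathfrak{h}=\mathfrak{so}_{n}\) (skew-symmetric matrices) and \(\mathfrak{t}=\mathfrak{s}ym_{0}(n)\) (symmetric traceless matrices), anticipating Sec. 6. A direct computation with transposes verifies the two bracket relations above:

\[(AB-BA)^{T}=B^{T}A^{T}-A^{T}B^{T}=-(AB-BA)\quad\text{for }A,B\in\mathfrak{s}ym_{0}(n),\]

so \([\mathfrak{t},\mathfrak{t}]\subset\mathfrak{h}\), and similarly the commutator of a skew-symmetric matrix with a symmetric one is symmetric, giving \([\mathfrak{h},\mathfrak{t}]\subset\mathfrak{t}\).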
### \(\mathbb{K}-\)modules
Throughout the paper let \(\mathbb{K}\) be a finite dimensional real division algebra. By the well-known theorem of Kervaire and Milnor, such algebras exist only in dimensions 1, 2, 4 and 8; the normed ones are isomorphic to \(\mathbb{R},\mathbb{C},\mathbb{H}\), and \(\mathbb{O}\), of respective dimensions 1, 2, 4 and 8.
* If the finite dimensional real division algebra \(\mathbb{K}\) is unitary and commutative then it is isomorphic to \(\mathbb{R}\) or \(\mathbb{C}\).
* If the real division algebra is noncommutative but associative then \(\mathbb{K}\) is isomorphic to \(\mathbb{H}\).
* If \(\mathbb{K}\) is non associative but alternative then \(\mathbb{K}\) is isomorphic to \(\mathbb{O}\).
We discuss briefly the realisation of \(\mathbb{K}\)-modules in terms of linear spaces. If \(\mathbb{K}\) is an algebra of dimension \(n\), where \(n\in\{1,2,4,8\}\), the (real) realisation of a \(\mathbb{K}\)-module defines a (real) vector space of dimension \(nm\), where \(m\) is the dimension of the module [12, Sec. 2.1.1]. Note that, via the Cayley-Dickson process starting from \(\mathbb{R}\), all normed division algebras can be obtained: from \(\mathbb{R}\) we get \(\mathbb{C}\); from \(\mathbb{C}\) we get \(\mathbb{H}\); and finally from \(\mathbb{H}\) we get \(\mathbb{O}\). Taking the realisation of \(\mathbb{K}\)-modules as real vector spaces \(V\) then leads to:
1. the complexification: \(V\to(TV,J)\), where \(J^{2}=-Id\).
2. the symplectification: \(V\to(T^{*}V,\omega)\), where \(\omega\) is a non-degenerate closed form.
See [11, 12] for further developments concerning the geometry of the real realisation of \(\mathbb{K}\)-modules, namely concerning affinors.
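For the reader's convenience, we recall one common sign convention for the Cayley-Dickson doubling mentioned above (conventions vary across references): on pairs \((a,b)\) of elements of a normed algebra with conjugation \(a\mapsto\bar{a}\), one sets

\[(a,b)(c,d)=(ac-\bar{d}b,\;da+b\bar{c}),\qquad\overline{(a,b)}=(\bar{a},-b).\]

Starting from \(\mathbb{R}\) with trivial conjugation, this reproduces the multiplication of \(\mathbb{C}\), then of \(\mathbb{H}\), then of \(\mathbb{O}\).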
### Classification of cones
Any symmetric cone (i.e. homogeneous and self-dual) \(\Omega\) is in a unique way isomorphic to the direct product of irreducible symmetric cones \(\Omega_{i}\) (cf. Prop. III.4.5, [10]). According to Vinberg we have that:
**Proposition 1**.: _Each irreducible homogeneous self-dual cone belongs to one of the following classes:_

\begin{table}
\begin{tabular}{|c|c|c|} \hline _Nb_ & _Symbol_ & _Irreducible symmetric cones_ \\ \hline _1._ & \(\Pi_{n}(\mathbb{R})\) & _Cone of_ \(n\times n\) _positive definite symmetric real matrices._ \\ _2._ & \(\Pi_{n}(\mathbb{C})\) & _Cone of_ \(n\times n\) _positive definite self-adjoint complex matrices._ \\ _3._ & \(\Pi_{n}(\mathbb{H})\) & _Cone of_ \(n\times n\) _positive definite self-adjoint quaternionic matrices._ \\ _4._ & \(\Pi_{3}(\mathbb{O})\) & _Cone of_ \(3\times 3\) _positive definite self-adjoint octonionic matrices._ \\ _5._ & \(\Lambda_{n}\) & _Lorentz cone given by_ \(x_{0}>\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\) _(aka spherical cone)._ \\ \hline \end{tabular}
\end{table}
Table 1. Classification of irreducible symmetric cones

**Remark 2**.: _We have two remarks. The first is that \(\Pi_{3}(\mathbb{O})\) corresponds to the Cayley algebra. The second is that the spherical cone \(\Lambda_{n}\) corresponds to an \(n\)-dimensional Anti-de-Sitter (AdS) space._
### Jordan algebra structures
Recall the tight relations between those cones and algebraic objects, namely formally real simple Jordan algebras. We introduce some notations:
* \(Sym(n,\mathbb{K})\) denotes the space of symmetric matrices of dimension \(n\times n\) defined over the field \(\mathbb{K}\).
* \(Herm(n,\mathbb{K})\) denotes the space of hermitian matrices of dimension \(n\times n\) defined over the field \(\mathbb{K}\).
**Lemma 1**.: _Consider the cones given by \(\Pi_{n}(\mathbb{K})\), where the field \(\mathbb{K}\) is \(\mathbb{R},\mathbb{C},\mathbb{H}\) or \(\mathbb{O}\). Then, the tangent space \(T_{e}(\Pi_{n}(\mathbb{K}))\) to \(\Pi_{n}(\mathbb{K})\) is the space of self-adjoint matrices._
Proof.: Take the symmetric cone \(\Omega\) of \(n\times n\) symmetric positive definite matrices over \(\mathbb{R}\). The tangent space to \(\Omega\) at \(x\in V\) is the space of symmetric matrices with real entries. The matrix exponential and logarithm realize a one-to-one mapping between the space of symmetric matrices and the space of symmetric positive definite matrices. The same reasoning can be made for the cones over the other division algebras.
**Definition 2**.: _An algebra \((\mathscr{A}^{+},\circ)\) is a Jordan algebra if:_

* _it is commutative: \(x\circ y=y\circ x\), and_
* _it satisfies the Jordan identity: \((x\circ y)\circ(x\circ x)=x\circ(y\circ(x\circ x))\), for all \(x,y\in\mathscr{A}^{+}\)._
**Remark 3**.: _Note that one can consider the algebra \(JSpin_{n}^{+}\) as a Jordan subalgebra of the full Jordan algebra of \(2^{n}\times 2^{n}\) hermitian real matrices._

\begin{table}
\begin{tabular}{|c|c|} \hline _Irreducible symmetric cone_ & _Formally real simple Jordan algebras_ \\ \hline \(\Pi_{n}(\mathbb{R})\) & _Jordan algebra of \(n\times n\) self-adjoint real matrices._ \\ \(\Pi_{n}(\mathbb{C})\) & _Jordan algebra of \(n\times n\) self-adjoint complex matrices._ \\ \(\Pi_{n}(\mathbb{H})\) & _Jordan algebra of \(n\times n\) self-adjoint quaternionic matrices._ \\ \(\Pi_{3}(\mathbb{O})\) & _Jordan algebra of \(3\times 3\) self-adjoint octonionic matrices._ \\ \(\Lambda_{n}\) & _Spin factor algebra \(JSpin^{+}\) on the space \(\mathbb{R}1\oplus\mathbb{R}^{n}\) for \(n\geq 2\)._ \\ \hline \end{tabular}
\end{table}
Table 2. Classification of symmetric cones using their Jordan algebras
### Cartan's symmetric spaces
Vinberg cones are Cartan symmetric spaces. In Sec. 3 we study their properties as Riemannian symmetric spaces. Recall that a (pseudo-)Riemannian manifold is globally symmetric if one can assign to every point \(p\in\mathcal{M}\) an isometry \(s_{p}\) of \(\mathcal{M}\) such that \(s_{p}^{2}=id\) and \(p\) is an isolated fixed point of \(s_{p}.\)
A Riemannian symmetric space \(\mathcal{M}\) is diffeomorphic to a homogeneous space \(G/K,\) where \(G\) is a connected Lie group with an involutive automorphism whose fixed point set is essentially the compact subgroup \(K\subset G.\) Consider the pair \((G,K)\). We call \((G,K)\) a symmetric pair provided that there exists an involution \(s\in G,\) such that \((K_{s})_{0}\subset K\subset K_{s},\) where \(K_{s}\) is the set of fixed points of \(s\) and \((K_{s})_{0}\) is the identity component of \(K_{s}\). Riemannian symmetric spaces may be classified in terms of symmetric Lie algebras. Every noncompact symmetric space has a compact dual (and reciprocally).
Any simply connected Riemannian symmetric space is a Riemannian product of irreducible ones. Irreducible, simply connected Riemannian symmetric spaces are classified as follows:
* Euclidean type: the curvature is \(0\); the space is therefore isometric to a Euclidean space.
* Compact type: the sectional curvature is nonnegative (but not identically zero).
* Noncompact type: the sectional curvature is nonpositive (but not identically zero).
**Definition 3**.:
1. _The non-compact symmetric spaces listed below are called spacelike Lagrangian-Grassmannian symmetric spaces._
2. There exists a short exact sequence: \[1\to SL_{n}(\mathbb{K})\to GL_{n}(\mathbb{K})\xrightarrow{det}\mathbb{K}^{\times} \to 1,\]
implying that we have the following decomposition \(GL_{n}(\mathbb{K})=SL_{n}(\mathbb{K})\rtimes\mathbb{K}^{\times}\), where \(\mathbb{K}^{\times}\) is the multiplicative group. One may thus consider \(SL_{n}(\mathbb{K})/K\rtimes\mathbb{K}^{\times}\) and for simplicity we focus on the submanifold \(SL_{n}(\mathbb{K})/K\).
Those submanifolds are classified as follows:
* \(\Pi_{n}(\mathbb{R})\), with \(\det=1\) is associated to \(SL_{n}(\mathbb{R})/SO_{n}\);
* \(\Pi_{n}(\mathbb{C})\), with \(\det=1\) is associated to \(SL_{n}(\mathbb{C})/SU_{n}\);
* \(\Pi_{n}(\mathbb{H})\), with \(\det=1\) is associated to \(SL_{n}(\mathbb{H})/Sp_{n}\);
* \(\Pi_{3}(\mathbb{O})\), with \(\det=1\) is associated to \(SL_{3}(\mathbb{O})/F_{4}\).
Concerning the latter, \(\Pi_{3}(\mathbb{O})\), note that the automorphism group of \(Herm(3,\mathbb{O})\) is a closed subgroup of the orthogonal group \(O(27)\) and thus forms a compact Lie group. It is referred to as \(F_{4}\).
3. The spherical cone \(\Lambda_{n}\) is an anti-de-Sitter space. In terms of Lorentzian symmetric spaces it is associated to: \(O(1,n-1)/O(n-1)\oplus\mathbb{R}\) (see [10] p.7 for a detailed proof).
4. The full classification of Vinberg cones in terms of Lie groups and Jordan algebras is presented in the table below (see [10], Ch. V, p. 97, Sec. 3 for details). The Jordan algebras are listed in the leftmost column of the table.
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\mathcal{J}\) & \(\Omega\) & \(\mathfrak{g}\) & \(\mathfrak{k}\) & \(dim\mathcal{J}\) & rank \(\mathcal{J}\) & \(d\) \\ \hline \(Sym(n,\mathbb{R})\) & \(\Pi_{n}(\mathbb{R})\) & \(\mathfrak{sl}(n,\mathbb{R})\oplus\mathbb{R}\) & \(\mathfrak{o}(n)\) & \(\frac{1}{2}n(n+1)\) & \(n\) & 1 \\ \(Herm(n,\mathbb{C})\) & \(\Pi_{n}(\mathbb{C})\) & \(\mathfrak{sl}(n,\mathbb{C})\oplus\mathbb{R}\) & \(\mathfrak{su}(n)\) & \(n^{2}\) & \(n\) & 2 \\ \(Herm(n,\mathbb{H})\) & \(\Pi_{n}(\mathbb{H})\) & \(\mathfrak{sl}(n,\mathbb{H})\oplus\mathbb{R}\) & \(\mathfrak{su}(n,\mathbb{H})\) & \(n(2n-1)\) & \(n\) & 4 \\ \(\mathbb{R}\times\mathbb{R}^{n-1}\) & \(\Lambda_{n}\) & \(\mathfrak{o}(1,n-1)\oplus\mathbb{R}\) & \(\mathfrak{o}(n-1)\) & \(n\) & 2 & \(n-2\) \\ \(Herm(3,\mathbb{O})\) & \(\Pi_{3}(\mathbb{O})\) & \(\mathfrak{e}_{6(-26)}\oplus\mathbb{R}\) & \(\mathfrak{f}_{4}\) & 27 & 3 & 8 \\ \hline \end{tabular}
5. By Nomizu [11], an irreducible symmetric space \(G/H\) is either flat, compact or noncompact. It is easy to check that the symmetric spaces considered above have non-positive scalar curvature. To summarize:
* Cones given by \(GL_{n}(\mathbb{K})/K\) are noncompact symmetric spaces.
* The spherical cone \(\Lambda_{n}\) corresponds to \(O(1,n-1)/O(n-1)\oplus\mathbb{R}\) and also belongs to the class of noncompact symmetric spaces.
To conclude, Vinberg cones (1)-(4) correspond to noncompact symmetric spaces of spacelike Lagrangian-Grassmannian type; Vinberg cones of type (5) are Lorentz cones of Anti-de-Sitter type.
**Proposition 4**.: _Vinberg cones of space-like Lagrangian-Grassmannian type come equipped with a \(G\)-invariant metric and with a symmetric bilinear form given by_
\[\langle X,Y\rangle=\Re Tr(XY),\]
_where \(X,Y\in T_{e}(Gl_{n}(\mathbb{K})/K)\cong\mathfrak{t}\subset\mathfrak{g}\) and where \(Tr(\cdot)\) stands for the trace (linear) operator._
Proof.: For \(SL_{n}(\mathbb{R})/SO_{n}\), the subspace \(\mathfrak{t}\) is given by the set of symmetric matrices of trace \(0\), denoted \(\mathfrak{s}ym_{0}(n)\). The statement then follows from the expression of the Killing form; the other cases are analogous.
## 3. Hessian structures
We proceed with a differential-geometric approach to cones. This allows us to inherit a rich set of tools, important for our construction.
### The Monge-Ampere equation
Consider the following non-linear PDE, known as the Monge-Ampere equation:
\[\det\operatorname{Hess}(\phi)=f(x), \tag{2}\]
where \(\operatorname{Hess}(\phi)\) denotes the Hessian matrix attached to the smooth function \(\phi\) and \(f(x)\) is a given positive-valued function with \(x=(x_{1},\cdots,x_{n})\in\mathbb{R}^{n}\).
A special case of Eq. 2 is given by:
\[\det\operatorname{Hess}(\phi)=1. \tag{3}\]
Solutions to this nonlinear PDE (such that the Hessian matrix is definite at each point) are called _elliptic_. Note that \(\phi\) is then locally either convex or concave.
**Lemma 2**.: _In a strictly convex symmetric cone \(\Omega\), the equation Eq. 2 has at most one convex solution \(\phi\) with arbitrary smooth boundary values._
Proof.: It has been shown that Eq. 2 has at most one convex solution \(\phi\) (with arbitrary smooth boundary values) in any strictly convex bounded domain [2]. Applying this to the strictly convex symmetric cone concludes the statement.
Eq. 1 is invariant under (unimodular) linear transformations of the independent variables \(x=(x_{1},\cdots,x_{n})\in\mathbb{R}^{n}\). If we allow only linear transformations of \(x\), the array of all partial derivatives of \(\phi\) of any given order \(k\) can be interpreted as the components of a covariant tensor of valence \(k\) which is symmetric in all pairs of indices.
Our interest goes to tensors defined by the second and third derivatives. Since we assume that \(\phi\) is convex and satisfies Eq. 2, the symmetric tensor \(g_{ij}\) is positive definite. It thus defines a Riemannian, and in fact Hessian, metric in the domain of definition of \(\phi\).
### The potential function
We introduce a function \(\chi(x):\overline{\Omega}\to\mathbb{R}\) such that:
1. \(\chi(x)\) is real analytic and positive on \(\Omega\);
2. \(\chi(x)\) is continuous on \(\overline{\Omega}\) and vanishes on the boundary \(\partial\Omega\).
3. \(\chi(\lambda x)=\lambda^{n}\chi(x)\) for \(\lambda>0\), \(x\in\Omega\) and where \(n\) is the dimension of \(\Omega\).
4. the following bilinear form on \(\Omega\) is non-singular for every \(y\in\Omega\). Fixing a point \(c\in\Omega\) we have: \[\langle u,v\rangle=-\partial_{u}\partial_{v}\log\chi(y)|_{y=c}\]
**Definition 4** (Koszul-Vinberg (KV) characteristic function).: _Let \(\Omega\subset V\) be a strictly convex homogeneous cone. For any vector \(x\in\Omega\), define the KV-characteristic function:_
\[\chi(x)=\int_{\Omega^{*}}\exp{\{-\langle x,a^{*}\rangle\}}da^{*} \tag{4}\]
_where \(da^{*}\) is a volume form invariant under translations in \(\Omega^{*}\)._
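As a worked example, take \(\Omega=\mathbb{R}_{+}\subset\mathbb{R}\) with the standard pairing, so that \(\Omega^{*}=\mathbb{R}_{+}\). Then

\[\chi(x)=\int_{0}^{\infty}e^{-xa}\,da=\frac{1}{x},\qquad\Phi(x)=\ln\chi(x)=-\ln x,\qquad\operatorname{Hess}(\Phi)=\frac{1}{x^{2}},\]

so the associated Hessian metric is the standard hyperbolic metric on the half-line, and \(\chi(\lambda x)=\lambda^{-1}\chi(x)\). For \(\Omega=\Pi_{n}(\mathbb{R})\) one finds, up to a positive constant, \(\chi(x)=c\,\det(x)^{-(n+1)/2}\).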
Some data about the KV-function is expressed below.
* The KV-function tends to infinity on the boundary of the cone \(\overline{\Omega}\). By Holder's inequality, \(\Phi=\ln\chi\) is strictly convex.
* From the definition of \(\chi\), for any \(x\in\Omega\) and any \(g\in G(\Omega)\) one has \(\chi(gx)=|\det g|^{-1}\chi(x).\) The differential form \(\alpha=d\chi/\chi\) is invariant under \(G(\Omega)\).
* Let \(\Omega\) be a homogeneous cone. Then the complex tube \(T_{V}\)--whose real part is \(\Omega\)--has a transitive group of holomorphic mappings given by \(z\to gz+\imath t,\,g\in G(\Omega)\). The Bergman kernel of \(T_{V}\) is, up to a constant factor, \(\chi^{2}(z+\overline{w})\).
### Affine flat structures
Let \(\Omega\) be a strictly convex homogeneous cone. Let \(\mathcal{T}_{\Omega}\) be the tangent sheaf. Let \(\Omega^{1}_{\Omega}\) be the sheaf of holomorphic \(1\)-forms on \(\Omega\). Let us recall the general definition of an affine flat structure on \(\Omega\). It is given by any of the following equivalent data:
1. An atlas on \(\Omega\) whose transition functions are affine linear.
2. A torsionless flat connection \(\nabla_{0}:\mathcal{T}_{\Omega}\to\Omega^{1}_{\Omega}\otimes_{\mathcal{O}_{\Omega}}\mathcal{T}_{\Omega}\).
3. A local system \(\mathcal{T}^{f}_{\Omega}\subset\mathcal{T}_{\Omega}\) of flat vector fields, which forms a sheaf of commutative Lie algebras of rank \(n(=dim\Omega)\) such that \(\mathcal{T}_{\Omega}=\mathcal{O}_{\Omega}\otimes_{k}\mathcal{T}^{f}_{\Omega}\).
Affine flat structures on a homogeneous cone imply the existence of a pre-Lie algebra (aka Vinberg algebra, or Left Symmetric Algebra). Consider two vector fields \(X,Y\) in \(\mathcal{T}_{\Omega}\) (or \(\mathfrak{t}\)). If \(\Omega\) is provided with an affine structure then the connection is torsion-free:
\[\nabla_{X}(Y)-\nabla_{Y}(X)-[X,Y]=0, \tag{5}\]
Flatness, in turn, is written as:
\[\nabla_{X}\nabla_{Y}-\nabla_{Y}\nabla_{X}-\nabla_{[X,Y]}=0,\]
where \(X,Y\in\mathcal{T}_{\Omega}\).
A multiplication operation \(``\circ"\) given by \(X\circ Y=\nabla_{X}(Y)\) defines an algebra \((\mathfrak{t},\circ)\) satisfying the relation:
\[a\circ(b\circ c)-b\circ(a\circ c)=(a\circ b)\circ c-(b\circ a)\circ c,\]
for \(a,b,c\in\mathfrak{t}\).
This gives \(\mathfrak{t}\) the structure of a pre-Lie algebra, also known as a Lie-admissible algebra. Every Lie algebra with an affine structure is derived from such a Lie-admissible algebra.
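Note that the commutator of a pre-Lie product always satisfies the Jacobi identity; this is the sense in which the algebra is Lie-admissible:

\[[X,Y]:=X\circ Y-Y\circ X.\]

In the present situation, by Eq. 5 this commutator recovers precisely the Lie bracket of vector fields.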
**Lemma 3** ([Bu06]).: _There is a one-to-one correspondence of \(n\)-dimensional convex homogeneous cones \(\Omega\) and \(n\)-dimensional pre-Lie algebras._
We can thus now state that:
**Corollary 1**.: _Convex homogeneous cones and Vinberg cones are domains equipped with an affine flat structure._
Proof.: The proof follows directly from Lemma 3 and the discussion above.
### On Hessian structures
Suppose that \(\Omega\) is an affine flat manifold. Then it comes equipped with an affine flat connection \(\nabla_{0}\), and there exists a class of Riemannian metrics compatible with \(\nabla_{0}\). A Riemannian metric \(g\) on \(\Omega\) is said to be a Hessian metric if \(g\) is given by \(g=\nabla_{0}^{2}\Phi\), where \(\Phi\) is a local smooth function.
**Definition 5**.: _The pair \((\nabla_{0},g)\) is called a Hessian structure on \(\Omega\). The triple given by \((\Omega,\nabla_{0},g)\) is called a Hessian manifold._
Let \(\Omega\) be a differentiable manifold with a locally flat linear connection \(\nabla_{0}\). Let \(p\) be a point of \(\Omega\) and let \(U\) be an open neighborhood of \(p\). Then there exists a local coordinate system \((x_{1},...,x_{n})\) on \(U\), called an affine local coordinate system, such that \(\nabla_{0}(dx_{i})=0\). A Riemannian metric \(g\) on the differentiable manifold \(\Omega\) is said to be locally Hessian with respect to \(\nabla_{0}\) if for each point \(p\in\Omega\) there exists a real-valued function \(\Phi\) of class \(C^{\infty}\) on \(U\) such that
\[g=\frac{\partial^{2}\Phi}{\partial x_{i}\partial x_{j}}dx_{i}dx_{j},\]
where \((x_{1},...,x_{n})\) is an affine local coordinate system around \(p\). Then, \((\nabla_{0},g)\) is called a _locally Hessian_ structure on \(\Omega\).
**Remark 4**.: _A variant of Hessian metrics can be used to define a canonical Riemannian metric on any convex domain, using the solution of a Monge-Ampere equation by Loewner-Nirenberg and Cheng-Yau._
Therefore, applying the knowledge above, we can conclude that:
**Lemma 4**.: _The convex homogeneous cone \(\Omega\) has a Hessian structure._
Proof.: We use the KV-function to define a Hessian structure on \(\Omega\). Indeed, the canonical Riemannian metric attached to the cone \(\Omega\) is \(g_{V}=g=\operatorname{Hess}(\ln\chi(x))\). It is invariant under \(G(\Omega)\).
Let us continue the discussion in local coordinates. Assume \((x_{1},\cdots,x_{n})\) forms an affine coordinate system on \(\Omega\). The convex homogeneous domain \(\Omega\) admits an invariant volume element defined as \(\chi\,dx_{1}\wedge\cdots\wedge dx_{n}\).
The canonical bilinear form is:
\[g=\,\sum\frac{\partial^{2}\ln\chi}{\partial x_{i}\partial x_{j}}dx_{i}dx_{j}, \tag{6}\]
where \(\Phi=\ln\chi(x)\) is a potential function. The canonical bilinear form \(g\) is positive definite. This gives the Riemannian metric on \(\Omega\) and defines the Hessian structure.
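For example, on \(\Omega=\Pi_{n}(\mathbb{R})\), where \(\chi(x)\) is proportional to \(\det(x)^{-(n+1)/2}\) (as recalled above), one has \(\Phi=\ln\chi=-\tfrac{n+1}{2}\ln\det(x)+\text{const}\), and a standard computation gives the \(GL_{n}(\mathbb{R})\)-invariant Hessian metric

\[g=\tfrac{n+1}{2}\,Tr\!\left(x^{-1}dx\,x^{-1}dx\right).\]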
This marks the emergence of Frobenius manifold patterns. As we will see, \(\Phi=\ln\chi(x)\) is the potential function of the Frobenius manifold (see Sec. 5).
## 4. Algebraic structures
Let \(\Omega\) be a Riemannian homogeneous convex cone (cf. Sec.3). We investigate its relations to algebras.
### Frobenius algebras
Consider a finite-dimensional (\(n\)-dimensional) commutative associative algebra \(\mathscr{A}\) over \(\mathbb{K}\), possibly with unit \(\mathbf{1}_{\mathscr{A}}\), with basis \(\{e_{i}\}_{i=1}^{n}\) and structural constants \(C_{jk}^{i}\), the components of the (1,2)-tensor \(\circ:\mathscr{A}\times\mathscr{A}\to\mathscr{A}\) such that:
\[e_{i}\circ e_{j}=C_{ij}^{s}e_{s},\quad i,j,s\in\{1,\cdots,n\}.\]
Associativity can be written as follows:
\[C_{ab}^{c}C_{cd}^{f}=C_{ad}^{c}C_{cb}^{f}.\]
Commutativity implies that
\[C_{ab}^{s}=C_{ba}^{s}.\]
**Definition 6**.: _A unital, commutative, associative algebra \((\mathscr{A},\circ)\) equipped with a bilinear symmetric form \(\langle-,-\rangle\) satisfying_
\[\langle x\circ y,z\rangle=\langle x,y\circ z\rangle,\quad x,y,z\in\mathscr{A}\]
_is a Frobenius algebra._
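A minimal example, which will reappear for the maximal flats of Sec. 6, is \(\mathscr{A}=\mathbb{R}^{n}\) with the componentwise (diagonal) product \(e_{i}\circ e_{j}=\delta_{ij}e_{i}\), unit \(\mathbf{1}_{\mathscr{A}}=\sum_{i}e_{i}\), and the form \(\langle e_{i},e_{j}\rangle=\delta_{ij}\). Indeed,

\[\langle x\circ y,z\rangle=\sum_{i=1}^{n}x_{i}y_{i}z_{i}=\langle x,y\circ z\rangle,\]

so the Frobenius condition holds.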
### Connection algebras
**Definition 7** (Connection algebra).: _A connection algebra is a unital, commutative algebra \(\mathscr{A}^{+}\) with a basis \(e_{1},\cdots,e_{n}\) and multiplication \(\circ:V\times V\to V\) given by_
\[e_{i}\circ e_{j}=\Gamma_{ij}^{k}e_{k},\]
_where \(\Gamma_{ij}^{k}\) are the structure constants and being equipped with a symmetric bilinear form \(\langle-,-\rangle\), induced from the Riemannian metric \(g\)._
**Definition-Proposition 1**.: _Let \(\Omega\) be a convex (pseudo-)Riemannian homogeneous cone; let \(p\in\Omega\) be a point. Consider a system of linear coordinates \((x_{1},x_{2},\cdots,x_{n})\) on \(\Omega\)._
_Then, there exists a connection algebra \(\mathscr{A}^{+}\) on the tangent bundle of \(\Omega\) with multiplication operation \(\circ\) given for every pair \(X,Y\) in \(V\) by:_
\[(X\circ Y)^{i}=-\sum_{j,k}\Gamma_{jk}^{i}X^{j}Y^{k}\quad 1\leq i\leq n. \tag{7}\]
_Structure constants are given by \(\Gamma_{jk}^{i}=\frac{1}{2}\partial_{jkl}\Phi g^{li}(p)\) and \(\partial_{jkl}\Phi=C(\partial_{j},\partial_{k},\partial_{l})\) forms a rank three symmetric tensor._
The construction of this algebra follows from [20].
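For later use, note that in affine coordinates these structure constants are, up to the sign convention of Eq. 7, the Christoffel symbols of the Hessian metric: since \(g_{jk}=\partial_{j}\partial_{k}\Phi\), all three first derivatives of the metric entering the Levi-Civita formula coincide, and

\[\Gamma_{ijk}=\tfrac{1}{2}\left(\partial_{j}g_{ki}+\partial_{k}g_{ji}-\partial_{i}g_{jk}\right)=\tfrac{1}{2}\,\partial_{i}\partial_{j}\partial_{k}\Phi.\]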
**Remark 5**.: _Note that the symmetry of \(C\) comes from the fact that \(\chi\) is a real-analytic function._
### Towards Jordan algebras
The connection algebra \(\mathscr{A}^{+}\) forms a Jordan algebra if the Riemannian space is a symmetric space (i.e. is a Vinberg cone). We define the powers of an element \(X\) in \(\mathscr{A}^{+}\) as follows, which leads to the Jordan algebra structure:
\[X^{1}=X,\quad X^{m+1}:=X\circ X^{m},\quad m\geq 1.\]
**Corollary 2**.: _If \(\mathscr{A}^{+}\) is a Jordan algebra, then \(X^{r}\circ X^{s}=X^{r+s}\) holds for all \(X\in\mathscr{A}^{+}\), where \(r\geq 1\) and \(s\geq 1\) and it is then called power-associative._
The Jordan algebras associated to the Vinberg cones are finite-dimensional formally real Jordan algebras, each being a direct sum of a finite number of simple ideals. The five basic types of simple building blocks are listed in Sec. 2.7 (Table 2).
**Lemma 5**.: _The pre-Lie algebra \(\nabla_{X}(Y)=X\circ Y\) generates the connection algebra \(\mathscr{A}^{+}\)._
Proof.: The pre-Lie algebra defined above is given by a multiplication operation \(X\circ Y=\nabla_{X}(Y)\) where \(X,Y\) lie in the space of vector fields of \(\mathfrak{t}\). The algebra \(\mathscr{A}^{+}\) is defined on the vector space \(V\) which identifies with \(\mathfrak{t}\).
Let us construct a canonical linear isomorphism between \(\mathfrak{t}\) and \(V\), via Vinberg's \(T\)-algebras [20]. Every element \(a\in\mathfrak{t}\) corresponds to \(a+\tilde{a}\in V=T_{e}(\Omega)\), where \(a\mapsto\tilde{a}\) is the involutive anti-automorphism equipping the \(T\)-algebra. This determines the isomorphism:
\[\nu:\mathfrak{t} \to V=T_{e}(V)\] \[a \mapsto\,a+\tilde{a}.\]
The multiplication operation on \(\mathscr{A}^{+}\) coincides with \(X\circ Y=\nabla_{X}(Y)\). So, the pre-Lie algebra \(\nabla_{X}(Y)=X\circ Y\) generates the connection algebra \(\mathscr{A}^{+}\).
The algebra is defined through a non-singular symmetric bilinear form on \(V\). Consider a trilinear form \(C\) on \(V\). Fixing one argument, we get a bilinear form in the remaining two arguments. Therefore, for given \(X\), the form \(C(X,Y,Z)\) is a bilinear form in \(Y\) and \(Z\), and there exists a linear transformation \(\nabla_{X}\) such that
\[C(X,Y,Z)=g(\nabla_{X}(Y),Z).\]
If the trilinear form is symmetric in \(X,Y,Z\) then we have that the algebra is commutative and that
\[g(X\circ Y,Z)=g(X,Y\circ Z).\]
**Lemma 6**.: _Let \(\mathscr{A}^{+}\) be the connection algebra. The symmetric bilinear form satisfies:_
\[\langle x\circ y,z\rangle=\langle x,y\circ z\rangle,\]
_for all \(x,z,y\in\mathscr{A}^{+}\)._
Proof.: Since \(\langle x\circ y,z\rangle=C(x,y,z)\) and the rank 3 tensor \(C\) is totally symmetric, we have
\[\langle x\circ y,z\rangle=C(x,y,z)=C(y,z,x)=\langle y\circ z,x\rangle=\langle x,y\circ z\rangle.\]
**Corollary 3** ([12], Thm. 12, p.118.).: _Let \(\mathscr{A}^{+}\) be a real Jordan algebra associated to a Vinberg cone \(\Omega\). Then the following statements are equivalent:_
* \(\mathscr{A}^{+}\) _is formally real;_
* _there exists a positive definite bilinear form_ \(\langle-,-\rangle\) _satisfying_ \[\langle x,y\circ z\rangle=\langle x\circ y,z\rangle.\]
Proof.: This follows from [12], Thm. 12, p.118. In fact, in a simple Euclidean Jordan algebra, every associative symmetric bilinear form is a scalar multiple of \(Tr(xy)\).
**Lemma 7**.: _Consider an associative, commutative unital subalgebra \(\mathscr{A}\subset\mathscr{A}^{+}\). Then, this subalgebra forms a Frobenius algebra._
Proof.: A Frobenius algebra is a unital, commutative, associative (finite dimensional) algebra equipped with a symmetric bilinear form \(\langle-,-\rangle\) such that \(\langle x\circ y,z\rangle=\langle x,y\circ z\rangle\) for all \(x,y,z\) in the Frobenius algebra.
The subalgebra \(\mathscr{A}\) is a commutative, associative, unital subalgebra of \(\mathscr{A}^{+}\), equipped with a symmetric bilinear form satisfying the associativity condition (by Lem. 6). So, \(\mathscr{A}\) is a Frobenius algebra.
## 5. Frobenius structures
We set out some notions on Frobenius manifolds/structures and conclude by showing that carrying such a structure is equivalent to satisfying the WDVV equation.
### Compatible flat affine structures with a multiplication
Recall the notion of flat structures introduced in Sec. 3.3. For brevity, let \(\mathcal{M}\) be a smooth manifold. Assume \(\mathcal{T}_{\mathcal{M}}\) is endowed with \(\circ\) a \(\mathcal{O}_{\mathcal{M}}\)-bilinear commutative and associative multiplication operation, eventually with unit \(e\).
Then, according to [10][Def.2.2.1], a flat structure \(\mathcal{T}_{\mathcal{M}}^{f}\) on \(\mathcal{M}\) is called compatible with \(\circ\), if in a neighbourhood of any point there exists a vector field \(\mathscr{C}\) such that for any arbitrary local flat vector fields we have
\[X\circ Y=[X,[Y,\mathscr{C}]], \tag{8}\]
\(\mathscr{C}\) is a local vector potential for \(\circ\).
Moreover, \(\mathcal{T}_{\mathcal{M}}^{f}\) is called compatible with \((\circ,\mathbf{1}_{\mathscr{A}})\) if the equation (8) holds and if \(e\) is flat.
By [10] Prop 2.2.2: If \(\circ\) admits a compatible flat structure then it satisfies the identity: For all local vector fields \(X,Y,Z,W\):
\[P_{X\circ Y}(Z,W)=X\circ P_{Y}(Z,W)+(-1)^{XY}Y\circ P_{X}(Z,W), \tag{9}\]
where \(P_{X}(Z,W):=[X,Z\circ W]-[X,Z]\circ W-(-1)^{XZ}Z\circ[X,W]\).
A manifold \((\mathcal{M},\circ)\), where \(\circ\) is an \(\mathcal{O}_{\mathcal{M}}\)-bilinear commutative and associative multiplication operation satisfying identity 9, is called an \(F\)-manifold (see Def. 5.1 in [10]).
### Pencils of flat connections
Let us consider the following input data:
* a flat structure \(\nabla_{0}:\mathcal{T}_{\mathcal{M}}\to\Omega^{1}_{\mathcal{M}}\otimes_{\mathcal{O}_{\mathcal{M}}}\mathcal{T}_{\mathcal{M}}\) on \(\mathcal{M}\), where \(\Omega^{1}_{\mathcal{M}}\) is the sheaf of holomorphic 1-forms on \(\mathcal{M}\).
* An odd global section \(\mathcal{A}\in\Omega^{1}_{\mathcal{M}}\otimes_{\mathcal{O}_{\mathcal{M}}}End( \mathcal{T}_{\mathcal{M}})\).
Then, one can produce from it the following datum:
* A pencil of connections \(\nabla^{\mathcal{A}}_{\lambda}=\nabla_{0}+\lambda\mathcal{A}\).
* An \(\mathcal{O}_{\mathcal{M}}\)-bilinear composition law \(\circ\) on \(\mathcal{T}_{\mathcal{M}}\) given by:
\[X\circ Y:=i_{X}(\mathcal{A})(Y),\]
where for \(G\in End(\mathcal{T}_{\mathcal{M}})\) and \(df\in\Omega^{1}_{\mathcal{M}}\) (for \(f\in\mathcal{O}_{\mathcal{M}}\)) the following is defined: \(i_{X}(df\otimes G):=Xf\cdot G\).
**Proposition 5** ([20], Prop 2.3.1.).: \((\mathcal{M},\circ,\nabla^{\mathcal{A}}_{0})\) _is an \(F\)-manifold with compatible flat structure iff \(\nabla^{\mathcal{A}}_{\lambda}\) is a pencil of torsionless flat connections._
In that case, \((\mathcal{M},\circ,\nabla^{\mathcal{A}}_{\lambda})\) is an \(F\)-manifold with compatible flat structure for any \(\lambda\) as well.
### (pre-)Frobenius manifolds
Consider the following family of data:
\[(\mathcal{M};\,\circ:\mathcal{T}_{\mathcal{M}}\otimes\mathcal{T}_{\mathcal{M}} \to\mathcal{T}_{\mathcal{M}};\,\mathcal{T}^{f}_{\mathcal{M}}\subset\mathcal{T }_{\mathcal{M}};\,\,g:S^{2}(\mathcal{T}_{\mathcal{M}})\to\mathcal{O}_{ \mathcal{M}}), \tag{10}\]
where:
* \(\mathcal{M}\) is a manifold;
* \(\circ:\mathcal{T}_{\mathcal{M}}\otimes\mathcal{T}_{\mathcal{M}}\to\mathcal{T }_{\mathcal{M}}\) is a multiplication operation on the tangent sheaf;
* \(\mathcal{T}^{f}_{\mathcal{M}}\subset\mathcal{T}_{\mathcal{M}}\) is the subsheaf of flat vector fields;
* \(g:S^{2}(\mathcal{T}_{\mathcal{M}})\to\mathcal{O}_{\mathcal{M}}\) is the non-degenerate symmetric quadratic form with \(\mathcal{O}_{\mathcal{M}}\) a sheaf of holomorphic functions on \(\mathcal{M}\).
The main additional structure gluing together this data is given by a family of (local) potentials \(\Phi\), being sections of \(\mathcal{O}_{\mathcal{M}}\), such that for any local tangent fields \(X,Y,Z\):
\[g(X\circ Y,Z)=g(X,Y\circ Z)=(XYZ)\Phi. \tag{11}\]
If such a structure exists on \((\mathcal{M};\,\circ:\mathcal{T}_{\mathcal{M}}\otimes\mathcal{T}_{\mathcal{M} }\to\mathcal{T}_{\mathcal{M}};\,\mathcal{T}^{f}_{\mathcal{M}}\subset\mathcal{T }_{\mathcal{M}};\,g:S^{2}(\mathcal{T}_{\mathcal{M}})\to\mathcal{O}_{\mathcal{ M}})\) then it forms a pre-Frobenius manifold.
**Definition 8**.: _A pre-Frobenius manifold is called associative if the multiplication \(\circ\) is associative._
_A pre-Frobenius manifold is called potential if \(C\) everywhere locally admits a potential._
_A pre-Frobenius manifold is Frobenius if it is simultaneously potential and associative._
**Theorem 1** (Th.1.5, [20]).: _The pre-Frobenius manifold \((\mathcal{M},\circ,\nabla_{0})\) is a Frobenius manifold with compatible flat structure if and only if \(\nabla^{\mathcal{A}}_{\lambda}\) is a pencil of torsionless, flat connections._
### Associativity equations
Consider an \(F\)-manifold \((\mathcal{M},\circ,\nabla_{0})\) endowed with a compatible flat structure \(\nabla_{0}\).
Consider the algebra \((\mathscr{A},\circ)\) with basis \(\{e_{i}\}_{i=1}^{n}\) and its structural constants \(C^{s}_{ij}\) satisfying the relation
\[e_{i}\circ e_{j}=C^{s}_{ij}e_{s}\]
for indexes \(i,j,s\in\{1,\cdots,n\}\). This algebra is defined at each point \(p\in\mathcal{M}\) on the tangent space \(T_{p}\mathcal{M}\).
If we choose a local flat coordinate system \((x_{a})\) on \(\mathcal{M}\) and write the local vector potential \(C\) as \(C=\sum_{c}C^{c}\partial_{c}\), \(X=\partial_{a}\) and \(Y=\partial_{b}\) then
\[\partial_{a}\circ\partial_{b}=\sum C^{c}_{ab}\partial_{c},\quad C^{c}_{ab}= \partial_{a}\partial_{b}C^{c}.\]
The choice of a flat invariant metric \(g\) allows to define \(C^{a}\) as \(C^{a}=\sum_{b}g^{ba}\partial_{b}\Phi\).
Let us consider:
\[(\partial_{a}\circ\partial_{b})\circ\partial_{c}=\Bigg{(}\sum_{e}C^{e}_{ab} \partial_{e}\Bigg{)}\circ\partial_{c}=\sum_{ef}C^{e}_{ab}C^{f}_{ec}\partial_{ f}. \tag{12}\]
\[\partial_{a}\circ(\partial_{b}\circ\partial_{c})=\partial_{a}\circ\sum_{f}C^{f}_{bc}\partial_{f}=(-1)^{a(b+c+e)}\sum_{ef}C^{f}_{bc}C^{e}_{af}\partial_{e}. \tag{13}\]
If \(\mathscr{A}\) is associative then
\[C^{e}_{ab}C^{f}_{ec}=C^{e}_{bc}C^{f}_{ae},\]
which amounts to \(\partial_{a}\circ(\partial_{b}\circ\partial_{c})=(\partial_{a}\circ\partial_{ b})\circ\partial_{c}\) i.e. Eq.12=Eq.13.
Given that \(C^{c}_{ab}:=\sum_{e}C_{abe}g^{ec}=\sum_{e}\Phi_{abe}g^{ec}\), where \((g^{ec}):=(g_{ec})^{-1}\) and \(\Phi_{abe}=\partial_{a}\partial_{b}\partial_{e}\Phi\) we can rewrite the associativity relation for \(\Phi\) as
\[\forall a,b,c,d:\quad\sum_{ef}\Phi_{abe}g^{ef}\Phi_{fcd}=(-1)^{a(b+c)}\sum\Phi _{bce}g^{ef}\Phi_{fad},\]
which corresponds to the WDVV equation. Given a flat identity denoted \(e=\partial_{0}\), one has in addition \(\Phi_{0ab}=g_{ab}\).
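In the simplest nontrivial case \(n=3\), with flat coordinates \((t^{1},t^{2},t^{3})\), flat identity \(e=\partial_{1}\) and antidiagonal metric, one may write \(\Phi=\frac{1}{2}(t^{1})^{2}t^{3}+\frac{1}{2}t^{1}(t^{2})^{2}+f(t^{2},t^{3})\); the WDVV equation then reduces (following Dubrovin's classical computation) to the single third-order PDE

\[f_{333}=f_{223}^{2}-f_{222}\,f_{233},\]

where subscripts denote partial derivatives with respect to \(t^{2}\) and \(t^{3}\).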
## 6. The WDVV equation and symmetric cones
We proceed as follows.
1. We prove the existence of totally geodesic submanifolds in Vinberg cones.
2. We prove that those totally geodesic submanifolds carry Frobenius structures (i.e. satisfy the WDVV equation).
### A first insight
To give a first intuition to the reader of why totally geodesic submanifolds in Vinberg cones should exist, it is enough to go back to a theorem of Rothaus [10]. According to [10]: _if \(\Omega\) is a cone of rank \(k\) and of dimension \(n\), then \(\Omega\) is birationally biregularly equivalent to the direct product of \(k\) half lines and a Euclidean space._ The number of half lines appearing coincides with the rank of the Vinberg cone \(\Omega\).
### Totally geodesic submanifolds and Lie triple systems
We proceed to an algebraic investigation of the existence of totally geodesic submanifolds in \(\Omega\). Using our method, it is possible to compute explicitly all the properties of those manifolds.
**Definition 9**.: _Let \(\Omega\) be a noncompact symmetric space. A totally geodesic \(r\)-dimensional submanifold of \(\Omega\) isometric to \(\mathbb{R}^{r}\) is called an \(r\)-flat. If \(r\) is the maximal natural number for which an \(r\)-flat exists, then the \(r\)-flat is called a maximal flat._
Let \(F\) denote an \(r\)-flat submanifold in an \(n\)-dimensional Vinberg cone \(\Omega\). We first outline an algebraic description of \(F\).
**Theorem 2** (Thm IV.4.2. and Thm IV.7.2 of [1]).:
1. _The curvature tensor_ \(R\) _evaluated at_ \(T_{e}\Omega\) _is given by_ \[R(X,Y)Z=-[[X,Y],Z],\,\text{for}\quad X,Y,Z\,\in\,T_{e}\Omega.\]
2. _The totally geodesic space_ \(F\subset\Omega\) _has the form_ \[F=\exp\mathfrak{a}\cdot e,\] _where_ \(\mathfrak{a}\subseteq\mathfrak{t}\) _is a Lie triple system i.e._ \([[\mathfrak{a},\mathfrak{a}],\mathfrak{a}]\subseteq\mathfrak{a}\)_._
3. _Totally geodesic submanifolds through_ \(e\) _are of the form_ \(\exp\mathfrak{a}\cdot e\) _where_ \(\mathfrak{a}\subseteq\mathfrak{t}\) _is a Lie triple system._
By construction, we know that the sectional curvature restricted to \(F\) equals zero. If \(\mathfrak{a}\subseteq\mathfrak{t}\) is a maximal abelian subspace of dimension \(r\) then, \(F=\exp\mathfrak{a}\cdot e\) is a maximal flat in \(\Omega\).
**Remark 6**.: _Given that the \(\mathfrak{g}\) are semisimple Lie algebras, a possible class of Lie subalgebras \(\mathfrak{a}\) that satisfy the necessary requirements for producing a maximal flat are the Cartan subalgebras of \(\mathfrak{g}\). They are maximal abelian subalgebras of \(\mathfrak{g}\) and form a Lie triple system._
**Lemma 8**.: _Suppose that there exists a totally geodesic submanifold \(F\) immersed in \(\Omega\). Then, the Lie subalgebra \(\mathfrak{a}\) of \(\mathfrak{t}\) attached to the tangent space \(T_{e}F\) of \(F\) is Lie associative._
Proof.: By Thm IV 4.2 in [1], the curvature tensor \(R_{0}\) evaluated at \(T_{e}\Omega\) is given by
\[R_{0}(X,Y)Z=-[[X,Y],Z],\quad X,Y,Z\in T_{e}\Omega. \tag{14}\]
\(F\) being totally geodesic, it is flat. So, restricting attention to \(F\), the left hand side of Eq. 14 is \(0\). Thus, we have \(0=-[[X,Y],Z]\), where \(X,Y,Z\in T_{e}F\). Now, by Jacobi's identity:
\[[X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0.\]
Rewriting it one gets \([[Y,Z],X]+[[Z,X],Y]=-[[X,Y],Z]\). So, by Eq. 14, for \(X,Y,Z\in T_{e}F\) one has \([[Y,Z],X]=[Y,[Z,X]]\). Thus, Lie associativity is satisfied on \(F\).
### The case of space-like Lagrangian-Grassmannian cones
We prove algebraically that in \(n\)-dimensional space-like Lagrangian-Grassmannian cones there exist totally geodesic immersed submanifolds which are maximal flats in \(\Omega\).
According to Thm. 2, totally geodesic submanifolds are found whenever one has a Lie triple system. So, in particular, we can take the example of Cartan subalgebras, which form a Lie triple system. A generalisation is mentioned in Cor. 4.
**Proposition 6**.: _Consider the \(n\)-dimensional irreducible Vinberg cones of Lagrangian-Grassmannian type. Then, for each of those cones there exists an \((n-1)\)-dimensional totally geodesic submanifold given by_
\[F=\exp\tilde{\mathfrak{a}}\cdot e,\]
_such that \(\tilde{\mathfrak{a}}\) is a Cartan subalgebra of \(\mathfrak{gl}_{n}(\mathbb{K})\) given by \(\tilde{\mathfrak{a}}=\lambda I_{n}\oplus\mathfrak{a}\), \(\lambda\in\mathbb{K}\) where \(\mathfrak{a}\) is formed by diagonal matrices of null trace._
1. _If_ \(\mathbb{K}=\mathbb{R}\)_,_ \(\mathfrak{a}\) _is given by all diagonal matrices with real diagonal entries and such that the trace is 0._
2. _If_ \(\mathbb{K}=\mathbb{C}\)_,_ \(\mathfrak{a}\) _is given by all diagonal matrices with diagonal entries_ \(a+\imath b\) _and such that the trace is 0._
3. _If_ \(\mathbb{K}=\mathbb{H}\)_,_ \(\mathfrak{a}\) _is given by all diagonal matrices of_ \(\left\{\begin{pmatrix}X&-\overline{Y}\\ Y&\overline{X}\end{pmatrix}\,|\,\mathfrak{R}TrX=0.\right\}\)__
4. _If_ \(\mathbb{K}=\mathbb{O}\)_,_ \(\mathfrak{a}\) _is given by all diagonal_ \((3\times 3)\) _matrices with diagonal entries_ \(a+\imath b\) _and of the diagonal matrices of a Cartan subalgebra of_ \(\mathfrak{g}_{2}\)_._
Proof.: The noncompact symmetric spaces considered are of the type \(SL_{n}(\mathbb{K})/K\rtimes\mathbb{K}^{\times}\), where \(K\) is the maximal compact subgroup at \(e\). Thm IV.7.2 of [10] implies that totally geodesic submanifolds through \(e\) are of the form \(\exp\mathfrak{a}\cdot e\), where \(\mathfrak{a}\subset\mathfrak{t}\) is a Lie triple system. Given that we are working with semi-simple Lie algebras, we can investigate in particular the existence of their Cartan subalgebras, which here will be maximal abelian subalgebras. The Cartan involution is given by \(X\mapsto-X^{t}\). One can prove that for the real division algebras \(\mathbb{K}\), the Cartan subalgebras of \(\mathfrak{sl}_{n}(\mathbb{K})\) are diagonal matrices with null trace. We discuss this statement in detail, according to \(\mathbb{K}\).
1. If \(\mathbb{K}=\mathbb{R}\), then the Lie algebra attached to the symmetric space has the Cartan decomposition \(\mathfrak{sl}_{n}(\mathbb{R})=\mathfrak{so}_{n}\oplus\mathfrak{s}ym_{0}(n)\), where \(\mathfrak{s}ym_{0}(n)\) denotes the set of symmetric matrices of trace 0 with entries in \(\mathbb{R}\). One can check that the maximal abelian subspace \(\mathfrak{a}\) of \(\mathfrak{s}ym_{0}(n)\) is given by the set of diagonal matrices of null trace. The maximal flat is thus given by \[F=\exp\mathfrak{a}\cdot e=\{Diag(\lambda_{1},\cdots,\lambda_{n}):\,\lambda_{i}=\exp(t_{i})\in\mathbb{R},\,\prod_{i=1}^{n}\lambda_{i}=1\}.\]
2. If \(\mathbb{K}=\mathbb{C}\), \(\mathfrak{a}\) is given by the diagonal matrices with complex entries and null trace. The maximal flat is \[F=\exp\mathfrak{a}\cdot e=\{Diag(\lambda_{1},\cdots,\lambda_{n}):\,\lambda_{i}=\exp(a_{i}+\imath b_{i})\in\mathbb{C},\,\prod_{i=1}^{n}\lambda_{i}=1\}.\]
3. If \(\mathbb{K}=\mathbb{H}\), \(\mathfrak{sl}_{n}(\mathbb{H})\) is given by diagonal matrices in \(\bigg{\{}\begin{pmatrix}X&-\overline{Y}\\ Y&\overline{X}\end{pmatrix}\bigm{|}\Re TrX=0\bigg{\}}\). To define \(\mathfrak{a}\), one takes only the diagonal matrices in that set.
4. If \(\mathbb{K}=\mathbb{O}\), \(\mathfrak{a}\) is given by all diagonal \((3\times 3)\) matrices with diagonal entries \(a+\imath b\), together with the diagonal matrices of a Cartan subalgebra of \(\mathfrak{g}_{2}\).
**Proposition 7**.: _Consider a Vinberg cone of space-like Lagrangian-Grassmannian type. Then, at any point of the maximal flat, the tangent space carries the structure of a Frobenius algebra._
Proof.: Let us discuss the case of Vinberg cones of Lagrangian-Grassmannian type. Consider the maximal flat \(F\), given by \(F=\exp\tilde{\mathfrak{a}}\cdot e\), where \(\tilde{\mathfrak{a}}=\lambda I_{n}\oplus\mathfrak{a}\), \(\lambda\in\mathbb{K}\), is a Cartan subalgebra of \(\mathfrak{gl}_{n}(\mathbb{K})\) and where \(\mathfrak{a}\) consists of diagonal matrices of null trace. The set of diagonal matrices forms an associative, commutative and unital algebra. Therefore, we have a Frobenius algebra structure on the tangent space to the \((n-1)\)-flat. The tangent space to \(F\) is given by \(\tilde{\mathfrak{a}}\subseteq\mathfrak{t}\). On \(\mathfrak{t}\) there exists a bilinear symmetric form given by the Killing form. In particular, we have \(\langle X,Y\rangle=tr(XY)\), where \(X,Y\in T_{e}(G/K)\), which satisfies associativity by Lem. 6. The Cartan subalgebra inherits this bilinear form. Thus, we have a Frobenius algebra.
**Lemma 9**.: _The \(n\)-dimensional Vinberg cones (1)-(3) are associated with the Weyl chambers of type \(A_{n-1}\)._
Proof.: For \(SL_{n}(\mathbb{R})/SO_{n}\), consider \(H=Diag(t_{1},\cdots,t_{n})\in\mathfrak{a}\). One gets that
\[(adH)E_{ij}=[H,E_{ij}]=(t_{i}-t_{j})E_{ij}.\]
So, one has \(n(n-1)\) non-zero roots. In particular, we have the following splitting
\[\mathfrak{sl}_{n}(\mathbb{R})=\mathfrak{a}+\sum_{i\neq j}\mathbb{R}\cdot E_{ ij}.\]
For \(\mathbb{C}\), a similar situation occurs. More abstractly we can write:
\[\mathfrak{sl}_{n}(\mathbb{C})=\mathfrak{a}\oplus\Big(\bigoplus_{i\neq j}\mathfrak{g}_{\lambda_{i}-\lambda_{j}}\Big),\]
where \(\mathfrak{g}_{\lambda_{i}-\lambda_{j}}=Span_{\mathbb{C}}(e_{ij})\) and \(e_{ij}\) represents the basis vector in the \(i\)-th row and \(j\)-th column. For \(\mathbb{H}\) and \(\mathbb{O}\) a similar argument can be carried out.
**Lemma 10**.: _Let \(\Omega\) be a Vinberg cone of type (1)-(4). A Weyl chamber is isomorphic to \(\exp\mathfrak{a}^{+}\subset V\), where_
\[\mathfrak{a}^{+}:=\{Diag(t_{1},\cdots,t_{n}):\,\sum_{i=1}^{n}t_{i}=0,\,t_{1}< \cdots<t_{n}\}. \tag{15}\]
Proof.: A Weyl chamber is isomorphic to an open Euclidean cone in \(\mathfrak{a}\). This chamber is given by Eq. 15. So, in the Vinberg cone \(\Omega\), a Weyl chamber is isomorphic to \(\exp\mathfrak{a}^{+}\subset V\).
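As a small worked example, for \(n=2\) one has \(\mathfrak{a}^{+}=\{Diag(t_{1},t_{2}):\,t_{1}+t_{2}=0,\,t_{1}<t_{2}\}\), so that \(t_{1}<0<t_{2}\) and

\[\exp\mathfrak{a}^{+}=\{Diag(\lambda,\lambda^{-1}):\,0<\lambda<1\}.\]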
### The case of Lorentzian cones
**Proposition 8**.: _Consider the irreducible Lorentzian cone \(\Lambda_{n}\). Then, there exists a totally geodesic submanifold \(\mathscr{H}\) in \(\Lambda_{n}\). This totally geodesic space is given by the maximal flat \(\exp\mathfrak{a}\cdot e\), where \(\mathfrak{a}\) is a maximal abelian subalgebra of the Lie algebra \(\mathfrak{o}(1,n-1)\)._
Proof.: The Lorentzian cone is related to pseudo-orthogonal Lie algebras. It is described in terms of quotients of Lie groups by \(O(p,q)/O(p)\times O(q)\), with \(p\geq q\), where \(p=n-1\) and \(q=1\).
To the corresponding Lie algebra, there exists a non-empty maximal abelian subalgebra. Consider the block matrix corresponding to an element of \(\mathfrak{o}(p,q)\):
\[\begin{pmatrix}a&b\\ b^{*}&d\end{pmatrix},\]
where the blocks \(a\) and \(d\) are square of respective sizes \(p=n-1\) and \(q=1\), all entries are real, and \(a,d\) are skew symmetric. We take \(\mathfrak{h}\) to be the matrices with \(b=0\); \(\mathfrak{t}\) consists of the matrices with \(a=0\) and \(d=0\). It has been shown that for \(\mathfrak{o}(p,q)\) one has a class of maximal abelian subalgebras formed by orthogonally decomposable matrices (these are the Cartan subalgebras). All matrices in that set can be simultaneously represented by block diagonal matrices having the same decomposition patterns. A second type of maximal abelian subalgebra occurring has a matrix representation as follows:
\[\left(\begin{array}{cccc}0&\alpha&\cdots&0\\ \vdots&&&-\alpha^{T}\\ 0&&\ddots&\vdots\\ 0&0&\cdots&0\end{array}\right)_{n\times n}\quad\text{for some vector}\,\alpha. \tag{16}\]
We construct the maximal flats via Thm. 2. The flat is a non-empty subspace, given that the maximal abelian subalgebra is non-trivial. So, there exists a non-empty totally geodesic submanifold \(\mathscr{H}\) in \(\Lambda_{n}\).
The Lorentzian cone \(\Lambda_{n}\) is of the Anti-de-Sitter type. Anti-de-Sitter spaces are Lorentzian manifolds with negative constant sectional curvature.
**Lemma 11**.: _Consider \(\mathscr{H}\) the totally geodesic submanifold in \(\Lambda_{n}\). Then, the tangent space at any point of \(\mathscr{H}\) carries the structure of a (non unital) Frobenius algebra._
Proof.: We have a maximal abelian subalgebra \(\mathfrak{a}\subset\mathfrak{t}\) which is associated to \(\mathscr{H}\) via \(\exp\mathfrak{a}\cdot e\). This defines the "flat". Taking \(\mathfrak{a}\) to be represented by block diagonal matrices having the same decomposition pattern, one can prove that, for the standard matrix multiplication, this forms an associative, commutative algebra. Given that \(\mathfrak{a}\) inherits a symmetric bilinear form from \(\mathfrak{t}\) satisfying Lem. 6, it is a Frobenius algebra. Associativity and commutativity, as well as the symmetric bilinear form condition of Lem. 6, also hold for the class of matrices of the type given in formula 16.
### Main statements
#### New sources of \(F\)-manifolds
We prove first that on \(F\), the multiplication \(\circ\) on the tangent sheaf admits a compatible flat structure.
**Proposition 9**.: _Let \(\Omega\) be a Vinberg cone. Consider \(F\) a totally geodesic immersed submanifold of \(\Omega\). Then, for any vector fields \(X,Y\) in \(\mathcal{T}_{F}\) there exists a local vector potential \(\mathscr{C}\) such that we have the following compatibility relation_
\[X\circ Y=[X,[Y,\mathscr{C}]]. \tag{17}\]
Proof.: By [10], Thm IV 4.2 one has the following:
\[R_{0}(Y,Z)X=[X,[Y,Z]],\]
where \(X,Y,Z\in T_{e}\Omega\). The submanifold \(F\) being totally geodesic, the scalar curvature vanishes on \(F\). So, the curvature tensor is \(0\) and the right hand side of Eq.17 is
\[[X,[Y,Z]]=0.\]
Consider the left hand side of Eq. 17. The multiplication operation in the algebra is given by:
\[(X\circ Y)^{i}=\Gamma^{i}_{jk}X^{j}Y^{k},\]
where \(\Gamma^{i}_{jk}=g^{li}\partial_{jkl}\Phi\). The scalar curvature vanishes if and only if \(C_{ijk}=\partial_{ijk}\Phi\) vanishes. So, \(X\circ Y=0\) and thus \(X\circ Y=[X,[Y,\mathscr{C}]]\).
**Theorem 3**.: _Let \(\Omega\) be a Vinberg cone. Consider \(F\) a totally geodesic immersed submanifold of \(\Omega\) given by \(F=\exp\mathfrak{a}\cdot e\), where \(\mathfrak{a}\subset\mathfrak{t}\) is a Lie triple system. Then, the flat structure on \(F\) is compatible with \(\circ\) and \((F,\circ)\) is an \(F\)-manifold._
Proof.: One applies directly the statement from [10] (Thm. IV.7.2) ensuring that \(F\) is totally geodesic iff \(\mathfrak{a}\subset\mathfrak{t}\) is a Lie triple system. We have shown in Prop. 9 that one has a flat structure compatible with \(\circ\) on \(F\). Thus, by Manin [10], if the multiplication \(\circ\) on the tangent sheaf admits a compatible flat structure, it forms an \(F\)-manifold.
**Corollary 4**.: _Let \(\Omega\) be a Vinberg cone. Then, \(F=\exp\mathfrak{a}\cdot e\) in \(\Omega\) is an \(F\)-manifold iff \(\mathfrak{a}\) is a Lie triple system._
Proof.: Any Vinberg cone is a direct product of irreducible Vinberg cones. Each of those irreducible cones contains a non-empty totally geodesic submanifold (Prop. 8 and Prop. 6). The Cartesian product of those totally geodesic submanifolds gives a totally geodesic submanifold. So, our statement on the existence of a totally geodesic submanifold in a Vinberg cone holds for _any_ Vinberg cone defined over any algebra (for example a Clifford algebra) combining the real division algebras.
**Lemma 12**.: _Let \(F=\exp\mathfrak{a}\), where \(\mathfrak{a}\) is Lie triple system. Then, \((F,\circ,\nabla_{0})\) is an \(F\)-manifold with compatible flat structure iff \(\nabla^{\mathcal{A}}_{\lambda}\) is a pencil of torsionless flat connections._
Proof.: This follows from Thm. 3 and Prop.5.
#### New sources of Frobenius manifolds
**Proposition 10**.: _Vinberg cones are equipped with the structure of a potential pre-Frobenius manifold._
Proof.: Let \(\Omega\) be a Vinberg cone.
1. By Cor. 1, \(\Omega\) has an affine flat structure. Given that it is a Hessian manifold, the flat connection is a Levi-Civita connection \(\nabla_{0}\).
2. Vinberg cones come equipped with the data \((\Omega,g,C,\circ)\), where: * \(g\) is a compatible Riemannian (Hessian) metric. Locally we have \(g_{ij}=\partial_{i}\partial_{j}\Phi\), where \(\Phi\) is a potential function given by the Koszul-Vinberg function of Def. 4. * \(C\) is a rank three symmetric tensor. In the present case, given that \(\Omega\) is Hessian, \(C\) is given by \(C_{ijk}=\partial_{i}\partial_{j}\partial_{k}\Phi\). * "\(\circ\)" is a multiplication operation on the tangent bundle, defined in Def.-Prop. 1. The subalgebra associated to the locus of \((n-1)\)-flats is a Frobenius algebra, by Prop. 7.
By Prop.7 and Lem.6, we have \(C(X,Y,Z)=g(X\circ Y,Z)=g(X,Y\circ Z)\).
This gives a pre-Frobenius manifold. One can add that we have a potential pre-Frobenius manifold. Indeed, \(C(X,Y,Z)\) admits everywhere locally a potential.
**Theorem 4**.: _Consider a Vinberg cone \(\Omega\). Then, the manifold \(F=\exp\mathfrak{a}\cdot e\), where \(\mathfrak{a}\subset\mathfrak{t}\) is Lie triple system, forms a Frobenius manifold immersed in \(\Omega\)._
Proof.: Let \(F\) be maximal flat in \(\Omega\); we think of \(F\) as embedded in \(\Omega\) for the following paragraphs.
1) By Prop. 10 the Vinberg cone is pre-Frobenius. It is equipped with:
* \(\nabla_{0}\), an affine flat structure (cf. Cor. 1);
* \(g\), a compatible Riemannian metric;
* \(C\), a rank three symmetric tensor;
* "\(\circ\) ", a multiplication operation on the tangent bundle (see Prop. 1.)
Moreover, the compatibility condition \(g(X\circ Y,Z)=g(X,Y\circ Z)=C(X,Y,Z)\), where \(X,Y,Z\in\mathcal{T}_{\Omega}\), is satisfied.
Let us now discuss the proper embedding \(i:F\hookrightarrow V\). \(F\) is isometric to a Euclidean space.
2) By Cor. 1, \(\Omega\) has an affine flat structure. The embedding \(i:F\to V\) induces a Levi-Civita connection on \(F\). It is the Levi-Civita connection for the pullback metric \(g^{F}:=i^{*}g\).
3) One verifies that the potentiality axiom holds on \(F\). Indeed, as for \(\Omega\), the rank 3 symmetric tensor \(C\) admits everywhere locally a potential function. The local potential \(\Phi(=\ln\chi)\) is such that for any flat local tangent fields one has: \(\partial_{abc}\Phi=C_{abc}\).
4) By Prop. 7, the tangent space to \(F\) forms a Frobenius algebra. This is a strong statement since it proves that the associativity axiom for \(\circ\), i.e. \((X\circ Y)\circ Z=X\circ(Y\circ Z)\) for \(X,Y,Z\in\mathcal{T}_{V}\), holds.
At this stage of the proof, all axioms of a Frobenius manifold are satisfied. So, a maximal flat, i.e. a totally geodesic submanifold of maximal dimension, in a Vinberg cone forms a Frobenius manifold.
A different argument can be added. By virtue of Thm. 1.5 in [10], a pre-Frobenius manifold \(M\) is Frobenius iff the pencil of connections \(\nabla_{\lambda,X}(Y)=\nabla_{0,X}(Y)+\lambda(X\circ Y)\), for \(X,Y\in\mathcal{T}_{V}\) and \(\lambda\in\mathbb{R}\), is flat. The maximal flat is totally geodesic, so the pencil of connections \(\nabla_{\lambda}^{\mathcal{A}}\) is flat. This allows a direct conclusion.
**Lemma 13**.: _Every Frobenius submanifold of \(\Omega\) is necessarily a \(G\)-translate of the maximal flat \(F\), where \(G\) acts by isometries on \(\Omega\)._
## 7. Conclusion and future work
We have thus proved in this paper an interesting relation between the WDVV PDE and strictly convex symmetric cones, leading to intriguing questions relating 2D topological field theories (TFT) [11] and those cones. This is achieved using geometric and algebraic methods (via Cartan's classification of symmetric spaces) on the one hand, and Calabi's investigation of the Monge-Ampere equation on the other. This allows us, for instance, to generalise a result of [13, 14] on symplectic Monge-Ampere equations of Hirota type, and to express this idea in the realm of geometry and algebra.
Applications of this result can be found, for instance, in information geometry [1, 15], using the fact that to any point \(x\in\Omega\) one may attach a probability measure on the dual cone. This is a central object in information geometry and statistics, since the _KV characteristic function_ corresponds to the density of a probability measure:
\[x\mapsto p_{x}(x^{*})=\frac{\exp(-\langle x^{*},x\rangle)}{\chi(x)}.\]
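As a minimal illustration (our example, assuming the KV characteristic function of Def. 4 is the integral of \(e^{-\langle x^{*},x\rangle}\) over the dual cone): for the one-dimensional cone \(\Omega=\mathbb{R}_{>0}\), whose dual cone is again \(\mathbb{R}_{>0}\), one has

\[\chi(x)=\int_{0}^{\infty}e^{-x^{*}x}\,dx^{*}=\frac{1}{x},\qquad\text{so}\qquad p_{x}(x^{*})=x\,e^{-x\,x^{*}},\]

the exponential density with rate \(x\); it indeed integrates to 1 over the dual cone, as a probability density should.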
To conclude, the results of this paper reveal an enlargement of the classification of sources of Frobenius manifolds. In particular, it is shown that there exist Frobenius manifolds defined over linear combinations of real division algebras of finite dimension, and also that there exist Frobenius manifolds in the context of pseudo-Riemannian manifolds: the Lorentz manifolds of Anti-de-Sitter type.
|
2309.13648 | Don't Let MEV Slip: The Costs of Swapping on the Uniswap Protocol | We present the first in-depth empirical characterization of the costs of
trading on a decentralized exchange (DEX). Using quoted prices from the Uniswap
Labs interface for two pools -- USDC-ETH (5bps) and PEPE-ETH (30bps) -- we
evaluate the efficiency of trading on DEXs. Our main tool is slippage -- the
difference between the realized execution price of a trade, and its quoted
price -- which we break down into its benign and adversarial components. We also
present an alternative way to quantify and identify slippage due to adversarial
reordering of transactions, which we call reordering slippage, that does not
require quoted prices or mempool data to calculate. We find that the
composition of transaction costs varies tremendously with the trade's
characteristics. Specifically, while for small swaps, gas costs dominate costs,
for large swaps price-impact and slippage account for the majority of it.
Moreover, when trading PEPE, a popular 'memecoin', the probability of
adversarial slippage is about 80% higher than when trading a mature asset like
USDC.
Overall, our results provide preliminary evidence that DEXs offer a
compelling trust-less alternative to centralized exchanges for trading digital
assets. | Austin Adams, Benjamin Y Chan, Sarit Markovich, Xin Wan | 2023-09-24T14:22:15Z | http://arxiv.org/abs/2309.13648v2 | # The Costs of Swapping on the Uniswap Protocol+
###### Abstract
We present the first in-depth empirical characterization of the costs of trading on a decentralized exchange (DEX). Using quoted prices from the Uniswap Labs interface for two pools -- USDC-ETH (5bps) and PEPE-ETH (30bps) -- we evaluate the efficiency of trading on DEXs. Our main tool is slippage -- the difference between the realized execution price of a trade, and its quoted price -- which we break down into its benign and adversarial components. We also present an alternative way to quantify and identify slippage due to adversarial reordering of transactions, which we call _reordering slippage_, that does not require quoted prices or mempool data to calculate. We find that the composition of transaction costs varies tremendously with the trade's characteristics. Specifically, while for small swaps, gas costs dominate costs, for large swaps price-impact and slippage account for the majority of it. Moreover, when trading PEPE, a popular 'memecoin', the probability of adversarial slippage is about 80% higher than when trading a mature asset like USDC.
Overall, our results provide preliminary evidence that DEXs offer a compelling trust-less alternative to centralized exchanges for trading digital assets.
## 1 Introduction
Since its inception in 2018, the Uniswap Protocol [4, 3]--the largest _decentralized exchange_ by volume today--has handled nearly $1.65 trillion (USD) in total notional transaction volume, or $1.3 billion per day in 2023 alone. While decentralized exchanges (hereafter, DEXs) still see significantly less volume than traditional exchanges such as the NYSE and the CME, in part because trading is limited to assets that live on a blockchain, they point to a larger promise. It is a compelling promise: by using cryptography (such as digital signatures [20] or collision-resistant hash functions [46, 19], which power blockchains [42, 14]), we may design exchange protocols that allow us to transact without the need to trust a third party to offer a 'fair' price or to settle trades. Moreover, they are 'permissionless', allowing anyone to participate. In this way, decentralized exchanges promise to lower barriers to participation, boosting liquidity, whilst making trading fairer, auditable, and more efficient.
Whereas traditional exchanges typically route orders to a centralized matching engine--the opaqueness of which enables fraud [38, 25, 52]--DEXs take orders from a public mempool of pending transactions, before filling them on a blockchain using either a liquidity-pool/AMM [11, 35] or a decentralized order book (e.g., EtherDelta). Still, as [18] first explored, moving things onchain does not necessarily guarantee fair and efficient trading. For example, block proposers have a monopoly on transaction ordering, due to requirements of an underlying consensus mechanism such as that of Ethereum [13]. This, combined with the transparent nature of orders, allows adversaries to carry out _frontrunning_ attacks, where they reorder transactions and insert their own orders (with the help of block proposers) to extract one form of 'miner extractable value' (MEV) [18] from other users. Moreover, gas costs on Ethereum remain high, at approximately $5 to $25 per trade on Uniswap v3 in 2023. A user of a DEX will have to pay these costs (in addition to LP fees and price impact costs) on top of the market price whenever they transact. Quantifying these costs is important for evaluating existing exchanges and for designing new protocols.
To the best of our knowledge, few works (if any) have studied the overall transaction costs experienced by users of DEXs. The community has taken some initial steps [2, 37] towards evaluating overall costs, but there remains a need for an in-depth analysis. [9] estimate transaction costs of fixed-size trades over time by simulating a pool using on-chain liquidity and gas price data, but do not look at real trades or account for MEV or slippage. Some past work quantified the magnitude of sandwich attacks, arbitrages, and liquidations on DEXs such as Uniswap v2 and v3 [55, 44, 47]. However, they rely on heuristics to identify attacks, or on assumptions about the structure of MEV extraction [29, 1], and may not capture other hidden sources of transaction costs. It is also essential to understand how the cost of MEV compares with other transaction costs, such as gas costs, LP fees, and price impact.
Summary of our contributions. In this paper, we study the overall efficiency of transacting on Uniswap v3 on Ethereum mainnet:
* _Framework._ We present a framework for evaluating the efficiency of DEXs. Our main tool is slippage -- the difference between the realized execution price of a trade, and its quoted price -- which provides a direct measure of the additional cost users pay on top of the quoted price. We also present an alternative way to quantify and identify slippage due to adversarial reordering of transactions, which we call _reordering slippage_, that does not require heuristics, quoted prices, or mempool data to compute. (While in this work, we have the luxury of access to mempool data and quoted prices, such data may be difficult to obtain in general.)
* _Breakdown of transaction costs._ We analyze a dataset of 534,198 trades made through the Uniswap Labs interface for two pools that we believe are representative of trading on Uniswap v3 -- the USDC-ETH (5bps) and PEPE-ETH (30bps) pools -- between January and mid-August, 2023. We present a detailed breakdown of transaction costs, the first such baseline that we know of. We find that, while for small trades gas fees dominate the transaction cost, for large trades slippage is, on average, by far the dominant cost. The average transaction cost per dollar transacted on USDC-ETH (PEPE-ETH) is about 22 bps (140 bps). Transaction costs vary widely depending on the pool and on the size of the trade. We show that, controlling for a variety of market variables, the effect of order size on slippage is significant, and that most of the effect of order size is driven by adversarial behavior. In addition, higher gas prices and market momentum worsen slippage, potentially due to collisions (i.e., if many users are simultaneously trading in the same direction).
* _Other effects._ Slippage also strongly depends on the characteristics of the pool the user is trading against. We find that when trading PEPE, a popular 'memecoin', the probability of adversarial slippage is historically about 80% larger than when trading USDC, a mature asset. Finally, as an initial foray into evaluating the integrity of mev-boost participants, we find that private RPC services effectively eliminate adversarial slippage in our dataset (though this may change in the future).
Our work establishes the first baseline for comparing the design of DEXs. Moreover, our techniques may provide a way to audit trusted entities (such as builders, relays, and RPC providers) in the mev-boost ecosystem to ensure that they are behaving honestly, a task previously thought to be difficult due to the opaque and centralized nature of block building.
Looking forward, our results also show early evidence that DEXs provide a sound trading experience, and offer a compelling trust-less alternative to centralized exchanges for trading digital assets. (The slippage we see is arguably _lower_ than what one would expect.)
Unsurprisingly, our work also shows that there remains ample room for improvements, and for applying new techniques in cryptography and game theory to realize the vision of decentralized finance.
## 2 Background
### Decentralized Exchanges
The primary exchange that we analyze, Uniswap v3 [4, 3], is an example of an Automated Market Maker (AMM) [11], where liquidity providers amass liquidity reserves \((x,y)\in\mathbb{R}^{+}\times\mathbb{R}^{+}\) into a pool, where \(x\) denotes the amount of the first asset and \(y\) the amount of the second asset, and traders trade against the pool. In Uniswap v2, the pool reserves are constrained by the function
\[xy=k\]
for some constant \(k\). Thus, a trader who wishes to purchase \(y^{\prime}\) units of the second asset, must deposit \(x^{\prime}\) units of the first asset s.t.
\[(x+x^{\prime})\cdot(y-y^{\prime})=k\]
in addition to paying settlement fees and additional fees set by the pool (30 bps in Uniswap v2, or between 5 bps and 100 bps in Uniswap v3, distributed pro-rata to liquidity providers; also known as the 'LP fee'). Uniswap v3 additionally allows liquidity providers to specify a price range in which to deposit liquidity. For this paper, it suffices to note that the liquidity available to trade depends on the price; holding the available liquidity fixed, the protocol behaves like Uniswap v2. Automated market makers are typically implemented using a blockchain (in our case Ethereum), which manages the execution of the pool and keeps track of its state in a decentralized way. To trade with the pool, or deposit/withdraw liquidity, users send transactions to the underlying blockchain; their actions are realized only when the corresponding transaction is finalized on the blockchain.
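As a minimal sketch of the constant-product mechanics above (our illustration, not Uniswap contract code; the reserve sizes and the fee-on-input convention are assumptions for the example):

```python
def constant_product_output(x: float, y: float, dx: float, fee_bps: int = 30) -> float:
    """Units of the second asset received for depositing dx of the first.

    The LP fee is charged on the input, so only the fee-adjusted input
    moves the reserves along the curve x * y = k.
    """
    dx_after_fee = dx * (1 - fee_bps / 10_000)
    k = x * y
    return y - k / (x + dx_after_fee)

# Example: a pool holding 1,000 WETH and 1,800,000 USDC. Selling 10 WETH
# returns roughly 17,769 USDC, reflecting price impact plus the 30 bps fee.
print(constant_product_output(1_000.0, 1_800_000.0, 10.0))
```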
### Anatomy of a Swap
A trade is a tuple comprising the _input token_, or the ERC20 token that a user wishes to sell to the pool; the _output token_, or the ERC20 token that the user wishes to receive in exchange from the pool; the _fee tier_, identifying which pool to swap with; and one of either the _input amount_ or the _output amount_, specifying the amount of the input token/output token that the user wishes to exchange (the amount of the other asset is then determined by the pool).
A transaction for trading with a Uniswap pool is called a 'swap', and comprises a trade with two additional fields specified: one of the _minimum amount out_ or the _maximum amount in_, which is the worst case amount of the output/input asset that the user is
willing to receive/spend; and a _deadline_, specifying a deadline by which the swap must be completed, after which the swap is invalid. Even if a transaction is finalized on the blockchain, the underlying swap may fail due to a violation of the minimum amount out (resp. maximum amount in). For any given block \(B\) in a blockchain, the trades contained in \(B\) are defined to be the trades specified by the swaps in \(B\) that succeed.
When transacting on the Uniswap Labs interface, users are shown a quoted output amount (resp. input amount) for the input amount (resp. output amount) that they entered into the interface, in the form of a quoted average execution price. After seeing the quoted price, users can then decide whether to sign and broadcast the swap transaction. The _slippage tolerance_ of a swap is defined as the ratio of the quoted amount out over the minimum amount out (resp. the quoted amount in over the maximum amount in), minus 1, expressed in basis points (bps). The _price impact_ of a swap is defined as the ratio of the quoted price over the market mid price minus 1, expressed in bps.
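Both quantities reduce to simple ratios; a sketch with hypothetical helper names and example numbers (not interface code):

```python
def slippage_tolerance_bps(quoted_amount_out: float, min_amount_out: float) -> float:
    """Slippage tolerance of a swap, in bps."""
    return (quoted_amount_out / min_amount_out - 1) * 10_000

def price_impact_bps(quoted_price: float, market_mid_price: float) -> float:
    """Price impact of a swap, in bps."""
    return (quoted_price / market_mid_price - 1) * 10_000

# Conversely, a 50 bps tolerance pins down the minimum amount out:
quoted_out = 1_800.0                      # e.g., quoted USDC for selling 1 ETH
min_out = quoted_out / (1 + 50 / 10_000)  # ~1,791.04 USDC
```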
### The MEV Ecosystem
In order to make MEV extraction more transparent, Flashbots introduced the mev-boost library [24], which is open source software that block proposers can run to connect them to a market of 'block builders'. Block builders (which may eventually become enshrined in Ethereum through 'enshrined proposer-builder separation' [12]) find the most profitable ways to sequence blocks of transactions, and bid for proposers to propose their blocks. The market is run by trusted relays, who send the block headers of the best bids to the block proposer; the proposer then replies with a signature of the winning block header, after which the relay sends the proposer the block (with the payment contained within). If the proposer later proposes a different block header than the one it signed initially, then it must have signed two different block headers and will be slashed. Additionally, block builders accept _bundles_ of transactions from 'searchers' (users who want to avoid using the public mempool), and promise not to frontrun or unpack the transactions within bundles. Bundled transactions are also referred to as _private_ transactions.
Over 80% of Ethereum validators today run a version of mev-boost [1]. Note that mev-boost today requires many trust assumptions: proposers must trust relays to send them winning blocks, builders must trust relays to propagate their blocks/bids without publishing or frontrunning them, and searchers must trust builders not to publish or frontrun their private transactions.
## 3 Our Framework
We present a framework for evaluating the execution quality and efficiency of DEXs. In particular, any such framework should generalize even as markets evolve (as is bound to happen, evidenced by the dominance of mev-boost since its introduction by Flashbots in
2020). Combined with the empirical results in Section 4, this gives a baseline for evaluating DEXs as they become more sophisticated.
In this paper, we focus on the following costs of trading -- slippage, settlement costs, exchange fees, liquidity costs -- and the latency of trades:
* _Slippage.1_ Whereas many works have looked at dollar value lost specifically to sandwich attacks, backruns, etc. [55, 47, 44, 50], or dollar value earned by validators [29, 1], to the best of our knowledge, we are the first to characterize the efficiency of a DEX by quantifying the overall slippage experienced by trades. Following convention, we take positive slippage to denote a price improvement for the swapper (a minimal computation is sketched just after this list): Footnote 1: In the literature, slippage is sometimes referred to as implementation shortfall [43].
* _The slippage of swap \(i\) (in bps) is_ \[\mathsf{slippage}_{i}:=\left(\frac{\mathsf{realizedPrice}_{i}}{\mathsf{quotedPrice}_{i}}-1\right)\cdot(-10000)\] _where_ \(\mathsf{realizedPrice}_{i}\) _is the average realized execution price of the swap (the amount of the input asset spent, over the amount of the output asset received), and_ \(\mathsf{quotedPrice}_{i}\) _is the decision price shown to the user._2 Footnote 2: Here, slippage is the difference between the realized output amount and the quoted output amount, expressed as a percentage of the realized output amount. Alternatively, we may express slippage as a percentage of the quoted output amount, with no major changes in interpretation.
* _Settlement costs, exchange fees._ When trading, users must pay for the operation of the exchange, as well as for settlement fees. On Ethereum, this takes the form of 'gas' fees. Settlement and exchange fees are generally higher when volume is high and the underlying blockchain is congested; however, in Ethereum, gas costs scale sublinearly with the order size.
* _Liquidity fees and price impact._ For liquidity provision to be profitable, market makers take an explicit Liquidity Provider fee (LP fee) on AMMs or construct a bid-ask spread on order books. The _price impact_ of a swap is defined as the ratio of the quoted price over the market mid price minus 1, expressed in bps, and directly measures market depth. We assume that the quoted price incorporates LP fees and the expected liquidity consumption of the swap, as in the case with the Uniswap Labs interface.
* _Latency._ The latency of a trade is the time it takes the trade to be executed plus its settlement time, and should be minimized.
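A minimal sketch of the slippage definition above, with illustrative numbers (not from our dataset). Prices are measured as input spent per unit of output received, so a higher realized price is worse for the swapper and yields negative slippage:

```python
def slippage_bps(realized_price: float, quoted_price: float) -> float:
    """Slippage of a swap in bps; negative means worse than quoted."""
    return (realized_price / quoted_price - 1) * -10_000

# A swap executing 0.5% worse than its quote loses 50 bps to slippage.
print(slippage_bps(realized_price=1.005, quoted_price=1.0))  # -50.0
```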
We further break down slippage into its benign and adversarial components:
* _Adversarial slippage._ Adversarial slippage refers to slippage due to adversarial or reactive behavior. This includes frontruns, or more broadly, slippage due to MEV [18]. Usually, adversarial slippage is negative, but it may be positive, e.g., in the case of Just-in-Time liquidity provisioning (JIT) [49].
* _Collision slippage._ Collision slippage refers to slippage due to benign transactions sequenced between quote time and the final execution time. It may arise, for instance, if many traders are trying to trade the same assets at once on an exchange with non-zero latency.
To measure adversarial and collision slippage, prior works relied on heuristics for classifying sandwich attacks or arbitrages [44, 47]. In Section 4, we will use a different heuristic: we collect data from mev-boost to identify swaps that might be in the same MEV bundle and thus adversarial.
Other desiderata. The following desiderata should be kept in mind even though they do not directly impact the profit of market participants. We will discuss these trade-offs when appropriate:
* _Security._ Decentralized exchanges inherit their security from the underlying blockchain or consensus mechanism. As [18] first observed, DeFi protocols may even erode the security of underlying blockchains by providing new incentives to deviate from honest behavior at the consensus level [26].
* _Decentralization._ Protocols such as mev-boost are centralized in that players must trust individual block builders and relays to be honest and not front-run private transactions. In the context of DeFi, decentralized solutions are preferable to centralized ones (by principle), and also promise additional accountability when compared to centralized exchanges.
* _Fairness._ Exchanges should prioritize being _permissionless_ and avoid censoring any single user, preventing them from transacting or exchanging assets.
### Reordering Slippage
Although a heuristic approach for identifying adversarial slippage makes sense for our empirical study, the notions of adversarial and collision slippage remain rather informal. A heuristic approach requires us to specify the exact attacks that we hope to measure (e.g. sandwich attacks or those captured by mev-boost payments), and will not capture unknown adversarial strategies; it also requires sophisticated mempool or mev-boost data that may be difficult to obtain.
As an alternative, we present a more formal notion which we call "reordering slippage". Reordering slippage is easy to measure without mempool data, or even quoted prices, and
also sidesteps the need for heuristics. Thus, it generalizes even as the market evolves with new adversarial strategies. Later, in Section 4, in addition to analyzing our heuristic notion of adverse slippage, we will also empirically analyze reordering slippage, and show that it indeed captures both sandwich attacks and arbitrages.
Defining reordering slippage. Reordering slippage compares the realized price of a swap to its hypothetical price in a world where the trades in its block are randomly ordered. Intuitively, if the realized price is far from this 'randomly ordered' baseline, then the adversary must have explicitly 'reordered' the trades in that block in order to extract value:
**Definition 3.2** (Reordering Slippage).: _Fix any block \(B\), and denote by \(S:=\mathsf{td}_{1},\ldots,\mathsf{td}_{n}\) the sequence of trades contained within \(B\). For each trade \(\mathsf{td}_{i}\), the reordering slippage (in bps) of its corresponding swap is defined as_
\[\mathsf{reorderingSlippage}_{i}:=\left(\frac{\mathsf{realizedPrice}_{i}}{\mathbb{E}_{\pi}[\mathsf{hypotheticalPrice}_{i}(\pi(S))]}-1\right)\cdot(-10000)\]
_where the expectation is taken over sampling a uniformly random permutation \(\pi\) of the sequence of trades in \(B\). Here, \(\mathsf{realizedPrice}_{i}\) is the realized price for the \(i\)th trade, and \(\mathsf{hypotheticalPrice}_{i}(\pi(S))\) refers to the hypothetical execution price of trade \(\mathsf{td}_{i}\) if the trades in block \(B\) are ordered according to \(\pi(S)\)._
Note that when computing the hypothetical price, we consider a world where the actual realized trades (e.g., 'sell 1 ETH') in \(B\) are reordered, as opposed to the swaps or transactions (e.g., 'sell 1 ETH if we get at least 1800 USDC back').3 Recall from Section 2 that a trade is the realized interaction between a transaction and a pool. That is, only actual exchanges of assets factor into the calculation of reordering slippage, and, when reordered, they "execute" regardless of slippage tolerance settings or constraints set by a smart contract.4
Footnote 3: As an example, suppose a block contains three swaps, one of which fails due to a violation of its slippage tolerance setting, and two of which succeed. Suppose that the first successful trade \(\mathsf{td}_{1}\)—corresponding to some swap \(s\)—‘buys 1 ETH’ and the second trade \(\mathsf{td}_{2}\) ‘buys 2 ETH’. Then the reordering slippage of \(s\) would be its realized price, divided by the average of its realized price and the hypothetical price of \(\mathsf{td}_{1}\) in a world that first executes ‘buy 2 ETH’ (\(\mathsf{td}_{2}\)) and then ‘buy 1 ETH’ (\(\mathsf{td}_{1}\)). Note that even if \(s\) would have failed (e.g., due to slippage tolerance) had it been ordered 2nd, we can still compute a hypothetical price for executing \(\mathsf{td}_{1}\) after \(\mathsf{td}_{2}\).
Footnote 4: While the ‘hypothetical’ world may not be realizable in practice, this definition allows us to quantify the effects of a broader set of adversarial strategies. In particular, those whose execution is conditional on the current state of the blockchain. To capture liquidity slippage, \(S\) should include liquidity events in addition to trades.
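To make Definition 3.2 concrete, the following toy computation replaces the Uniswap v3 simulation we use in Section 4 with a single fee-less constant-product pool (a simplification for illustration; the block and reserve numbers are made up). For small blocks, the expectation over permutations can be enumerated exactly:

```python
from itertools import permutations

def execute(x: float, y: float, trade) -> tuple:
    """Execute one trade against a fee-less constant-product pool (x, y).

    ('buy', dx) spends dx of asset X for asset Y; ('sell', dx) spends dx
    of asset Y for asset X. Returns the new reserves and the trade's
    price, measured as input spent per unit of output received.
    """
    side, dx = trade
    if side == 'buy':
        dy = y - x * y / (x + dx)
        return (x + dx, y - dy), dx / dy
    dy = x - x * y / (y + dx)
    return (x - dy, y + dx), dx / dy

def reordering_slippage_bps(trades, i, x0, y0):
    # Realized price: run the block in its actual order.
    state = (x0, y0)
    for j, td in enumerate(trades):
        state, price = execute(*state, td)
        if j == i:
            realized = price
    # Hypothetical price: average trade i's price over all orderings
    # (for large blocks, sample random permutations instead).
    prices = []
    for perm in permutations(range(len(trades))):
        state = (x0, y0)
        for j in perm:
            state, price = execute(*state, trades[j])
            if j == i:
                prices.append(price)
    return (realized / (sum(prices) / len(prices)) - 1) * -10_000

# A toy sandwich: trade 1 is the victim; trades 0 and 2 are the attacker.
block = [('buy', 50.0), ('buy', 10.0), ('sell', 95_000.0)]
print(reordering_slippage_bps(block, i=1, x0=1_000.0, y0=1_800_000.0))  # negative
```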
Backruns and arbitrages. Note that reordering slippage also captures backruns and arbitrages in the same block as the swap in question. While not directly adversarial, it seems reasonable to classify arbitrages as adversarial because arbitrages are costs borne by liquidity providers [41, 40], which are then passed on to users in the form of a larger LP fee.
Achieving zero reordering slippage. Consensus-level order fairness [31, 28] does not immediately guarantee zero reordering slippage. If transactions are ordered by observation time, backruns are possible and result in non-zero reordering slippage, pointing to market inefficiencies and LVR [41].
A strawman approach to reduce reordering slippage is to randomize the order of transactions (or swaps) within each block (e.g., using an unpredictable randomness beacon). However, this also fails to achieve zero reordering slippage. The problem is that the execution of a transaction can depend on the current state of the blockchain. Thus, even if the order of transactions within a block is randomized and unpredictable, an adversarial transaction may simply refuse to carry through with a trade if executed in unfavorable conditions (via a smart contract, or simply by setting a tight slippage tolerance).5 As such, achieving zero reordering slippage remains an open question.
Footnote 5: Indeed, this exact issue caused backrunning bots to spam the Ethereum network in the past [34].
Concurrent work. In concurrent work, [6] present a notion that they call 'Cost of MEV' which is very similar to our notion of reordering slippage (Definition 3.2). Their work is theoretical and shows loose bounds on how their worst-case notion scales with the total volume of trades, when instantiated with basic models for frontrunning and sandwiching. To use their notion still requires instantiating the metric with an appropriate model. In contrast, we use reordering slippage to directly characterize the cost of trading on DEXs. Our choice of definition highlights the fact that even randomized transaction ordering does not eliminate adversarial slippage, since transactions can execute _adaptively_ based on their location in the block [34]. Thus, the definition of reordering slippage seems well-equipped to capture MEV. Moreover, we provide an empirical characterization of reordering slippage on Uniswap v3 in Section 4, quantifying realized execution costs (as opposed to the theoretical worst-case).
## 4 Empirical Findings
### Data
We obtain transaction hashes logged from the Uniswap Labs interface6, covering all swaps made through the interface (mobile and web) that were published onchain, during the period of January 21, 2023 to August 14, 2023, and for two specific target pools on Ethereum: WETH-USDC 5bps pool (284,031 swaps) and WETH-PEPE 30bps pool (230,236 swaps).
For each swap, the dataset includes a 'log index' (which, together with the transaction hash, uniquely identifies the swap in question), the quoted price, and a timestamp for when the swap was relayed by the user. The quoted price is computed using onchain pool data at the time of the quote, and represents the expected execution price had the swap been executed at that time. In other words, the quoted price incorporates the estimated price impact of the swap and the LP fee. Due to caching, the quote may be one or two blocks stale. Notably, our quoted prices comprise the actual prices shown to the user when they made the decision to swap on the interface.
We augment this dataset with additional onchain data to obtain the final average execution price, slippage tolerance setting, location in the blockchain, failure status, gas expenditure, and order size for each swap. We obtain onchain data for every swap in the target pools between January 21, 2023 to August 14, 2023, including but not limited to the interface swaps.7 Onchain data is also used to compute the liquidity distribution of each target pool at the beginning of each block. We use the publicly available mevboost.pics dataset [48] to obtain for each swap the builder that built the corresponding block.
Footnote 7: The data for every onchain swap will later be used to simulate Uniswap v3 pools to compute a slippage breakdown.
We further augment our dataset with mempool data for every (onchain) swap, including the time that each swap was seen in the mempool, and whether a swap was seen for the first time in the public mempool, or if it was first seen as part of a finalized block onchain. The latter indicates that the swap was likely sent as part of a private mev-boost bundle of transactions. The mempool data comes from two different datasets: one assembled by bloXroute, comprising mempool data from June 22, 2023 to August 14, 2023, and one by Blocknative, comprising mempool data from January 21, 2023 to August 8, 2023. When mempool data overlaps, we take the earliest mempool observation time, and consider a swap public iff it was seen by both bloXroute and Blocknative in the public mempool. Most of the initial work was done using the bloXroute dataset, and subsequently extended to incorporate new data from Blocknative for the longer timeframe when it became available.
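A sketch of the merge rule just described, with hypothetical file and column names, ignoring the providers' differing coverage windows:

```python
import pandas as pd

blx = pd.read_csv("bloxroute_mempool.csv", parse_dates=["seen_at"])
bn = pd.read_csv("blocknative_mempool.csv", parse_dates=["seen_at"])

merged = blx.merge(bn, on="tx_hash", how="outer", suffixes=("_blx", "_bn"))
# Take the earliest observation time across the two providers...
merged["seen_at"] = merged[["seen_at_blx", "seen_at_bn"]].min(axis=1)
# ...and call a swap public only if *both* providers saw it publicly.
merged["public"] = (
    merged["public_blx"].fillna(False) & merged["public_bn"].fillna(False)
)
```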
Pool choice. We focus on two pools that we believe are representative of trading on DEXs on Ethereum mainnet -- the Uniswap v3 WETH-USDC8 5bps9 and the Uniswap v3 WETH-PEPE 30 bps10 pools. The WETH-USDC 5 bps pool is generally the largest pool by volume and liquidity in DeFi, trading mature assets that have a large market cap and good price discovery on centralized exchanges. PEPE is an archetypical high-volatility 'memecoin' and representative of a less mature asset being traded on Uniswap v3; notably, the 30 bps pool has enough volume for a meaningful analysis. An exploration of a wider selection of pools is left to future work.
Footnote 8: WETH is an ERC20 wrapped version of ETH backed 1-1. It is used in place of ETH because most of DeFi requires that a token be an ERC20.
### Summary Statistics
We study transaction costs from various angles: dollar amounts, fractions of trade size, and the breakdown into the cost items laid out in Section 3.
Our full sample includes 534,198 transactions, roughly $6B of volume. Combined, the sample accounts for approximately 20% of all swaps done through the Uniswap Labs interface during the sample period, and roughly 12-15% of the USD volume. The average WETH-USDC (WETH-PEPE) transaction size is $18,301.7 ($2,680.6), and the average total transaction cost (relative to a gas-free swap executed at the pool price at the end of the quote block) is $40.7 ($41.4), which is 22 bps (140 bps) of the mean order size. Note that this latter number represents the average transaction cost _per dollar_ transacted in each respective pool. Swap sizes in our sample skew to the right, and the median swap size is only $1,650 ($407). For the median swap, gas cost is about $7.3 ($14.9), which is 44 bps (366 bps) of its order size.
### Cost Composition: High Level Patterns
We compare the magnitude of the different transaction cost components: gas costs, slippage, LP fees, and the price impact of swaps.
Summing over all swaps, each of the four cost items accounts for between 20% and 35% of the total transaction cost, with no item dominating costs (Figure 0(a) in the appendix). Yet, this breakdown varies with order size and across the different assets. Specifically, for the median-sized transaction, gas cost completely dominates the other costs, accounting for more than 90% of total cost. Excluding gas costs, which are specific to the underlying blockchain, the overall cost of transacting in a decentralized market is on average 23bps (compared to 36bps including gas costs). Below we detail how the composition of total costs varies significantly with the size of the swap, across different assets, and over time:
Order Size. Transaction costs vary significantly with the size of the swap: the larger the swap, the lower the gas cost as a percentage of the swap size. In our WETH-USDC pool sample, gas cost is relatively tightly distributed around a median of $7.3, with 25th and 75th percentiles of $5.0 and $11.0, and a low correlation with swap size at \(\sim\)0.21. As a result, for small swaps with size below $1,000, over 98% of total transaction cost is paid as gas cost; in contrast, for large swaps with size above $100,000, gas cost accounts for 2.4% of total transaction cost. That is, as swap size increases, other cost items start to dominate. In particular, for swaps larger than $100,000, price-impact and slippage together account for \(\sim\)77% of overall transaction cost on a per dollar basis. This makes sense because price impact and slippage usually increase faster than swap size. Specifically, since marginal price impact is inversely related to liquidity depth, and Uniswap v3 pools tend to have thinner liquidity farther away from the current pool price, larger swaps should see increasing price-impact as a fraction of total order size. LP fees scale linearly with swap size, so their
fraction of swap size remains fixed.
USDC vs PEPE. In our sample, the mean transaction cost for WETH-PEPE is about 140 bps, which is six times larger than the mean transaction cost for WETH-USDC swaps, at 22 bps. Several factors contribute to this large difference. PEPE is a relatively recently launched token, which means it has shallower liquidity, higher volatility, and smaller trade sizes. Specifically, the average and median transaction sizes for the WETH-USDC pair are $18,302 and $1,650, respectively, while those of WETH-PEPE are only $2,681 and $407. Larger swap sizes mean that the relatively fixed gas cost will have a larger volume base to be amortized over, but at the same time adversaries like sandwich attackers could also have a stronger motivation to attack. This is consistent with what we see in the data: for WETH-USDC swaps, summed slippage costs account for 30.1% of total transaction cost11, yet account for only 8.1% for WETH-PEPE swaps. In contrast, gas costs account for 24.5% of WETH-USDC cost, but 47.1% of WETH-PEPE cost. Shallower liquidity also translates into higher price-impact and higher slippage. In our sample, the amount of liquidity within a 5% price range (liquidity\({}_{i}\)) is on average 70-80 times deeper for the WETH-USDC pool. Finally, WETH-PEPE has higher price volatility, on average 10 times higher than WETH-USDC. As we will demonstrate below in our regression analysis, higher volatility, in general, leads to higher slippage cost.
| Token Pair | Size | Total Cost | Gas Cost | Slippage | LP Fee | Price Impact |
| --- | --- | --- | --- | --- | --- | --- |
| ETH<>USDC | All | $40.7 (22bps) | $10.1 (5bps) | $12.3 (7bps) | $9.3 (5bps) | $9.0 (5bps) |
| | Large | $698.2 (24bps) | $16.9 (0.6bps) | $308.4 (11bps) | $145.4 (5bps) | $227.5 (8bps) |
| | Medium | $18.9 (14bps) | $10.6 (8bps) | $1.0 (0.7bps) | $6.7 (5bps) | $0.6 (0.4bps) |
| | Small | $8.7 (250bps) | $8.5 (245bps) | $0.0 (0bps) | $0.2 (5bps) | $0.0 (0bps) |
| ETH<>PEPE | All | $41.4 (140bps) | $19.5 (66bps) | $3.0 (10bps) | $8.9 (30bps) | $10.1 (34bps) |
| | Large | $2847.9 (212bps) | $52.7 (4bps) | $190.8 (14bps) | $403.5 (30bps) | $2200.9 (164bps) |
| | Medium | $67.9 (98bps) | $22.6 (33bps) | $8.3 (12bps) | $20.7 (30bps) | $16.3 (24bps) |
| | Small | $19.2 (526bps) | $17.4 (478bps) | $0.5 (14bps) | $1.1 (30bps) | $0.2 (6bps) |

Table 1: Mean Transaction Cost Composition by Pair & Size. Each dollar item is computed as total item cost divided by the number of swaps in that bucket; each bps item as total item cost divided by the total dollar volume for swaps in that bucket. The Large group includes swaps larger than $100,000; Medium, swaps between $1,000 and $100,000; Small, swaps less than $1,000.
Time effects. Transaction costs vary significantly based on the time period. We observe a significant reduction in total transaction cost as a fraction of trade size in the initial months of trading for WETH-PEPE. MEV-related costs such as slippage and price-impact also spike during highly volatile periods. For the month of March, during the SVB incident, the mean slippage and price-impact total 22.5 bps for the WETH-USDC pair, over 71% of total transaction cost, more than five times the lowest point in the month of April, which saw a mean of 3.8 bps, accounting for only 25% of total transaction cost.
Latency and fill rate. We also present baseline data for the latency and fill-rate of transactions. Latency and fill-rate measure how quickly and reliably traders can get into their desired positions. Our dataset indicates that roughly 90% of transactions wait less than 12 seconds before their signed transactions are confirmed in a block. As we move out farther in the distribution, waiting time gets much longer--the 99.5th percentile waiting time is more than 20 blocks. Of the USDC-WETH swaps in our dataset, fewer than 0.5% fail on-chain. In PEPE-WETH, interface swaps saw an onchain fail rate of nearly 10% at launch in mid-April, dropping to 5% by the end of April, and approximately 3% in August.
Comparison with Traditional Markets. Many works have assessed transaction costs (slippage, commission, broker fees, bid-ask spreads, price impact) on traditional markets. Depending on the region and asset class, transaction costs can vary widely [5, 17, 22]. For example, [5] finds that transaction costs for equities range from 100 bps for emerging markets to 40 bps for US large cap companies. [17] finds that institutional investors pay in the range of 40-70 bps in total transaction costs. The average transaction costs that we observe in the PEPE-WETH and USDC-WETH pools are remarkably competitive in comparison.
### Factors Affecting Slippage
We now shift our focus to understanding slippage on Uniswap v3, for two main reasons. First, of the four cost items above, gas used by a swap and LP fees are relatively static and do not change much with the swap's characteristics. In contrast, price-impact and slippage directly affect execution price on a per-trade basis. While the factors required to minimize price-impact are well understood, this is not the case for slippage. Second, slippage attracts the most adversarial attention. Thus, for any given swap, slippage arguably matters the most to execution quality.
We start by analyzing how different transaction and market factors affect slippage. To this end, we run a set of linear regressions on a broad set of market variables, for various
measures of slippage, described by the equation
\[\begin{split}y_{i}=\beta_{0}&+\beta_{1}\cdot\mathsf{orderSize}_{i}+\beta_{2}\cdot\mathsf{gasPrice}_{i}+\beta_{3}\cdot\mathsf{logLatency}_{i}\\ &+\beta_{4}\cdot\mathsf{slippageTolerance}_{i}+\beta_{5}\cdot\mathsf{lastHourReturn}_{i}+\beta_{6}\cdot\mathsf{liquidity}_{i}\\ &+\beta_{7}\cdot\mathsf{volatility}_{i}+\mathsf{weekFE}+e_{i},\end{split}\tag{1}\]
where for each swap \(i\), \(y_{i}\) corresponds to one of the measures of slippage described above. In order to further control for market conditions, we control for weekly fixed effects. The results for the WETH-USDC 5 bps pool are presented in Table 2, along with definitions for the above variables.
Recall that for every notion of slippage that we consider, slippage is negative (by convention) if the realized execution price is worse than the quoted execution price, and positive if the realized price is better.
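A sketch of how Equation (1) can be estimated (hypothetical DataFrame and column names; the weekly fixed effects enter as a categorical, and we leave any standard-error adjustments unspecified here):

```python
import pandas as pd
import statsmodels.formula.api as smf

swaps = pd.read_csv("swaps.csv")  # hypothetical: one row per swap
model = smf.ols(
    "slippage_bps ~ order_size + gas_price + log_latency"
    " + slippage_tolerance + last_hour_return + liquidity"
    " + volatility + C(week)",
    data=swaps,
).fit()
print(model.params["order_size"])  # slippage (bps) per unit of order size
```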
As Table 2 shows, consistent with the high-level pattern observed in Section 4.3, we observe in Model (1) that larger swaps (in terms of USDC size) are associated with substantially worse slippage: for every additional 1 million dollars in order size, the swap costs an additional 14 bps on top of its quoted price. That is, larger swaps pay a large penalty on top of their price-impact. While caution is needed when interpreting these numbers, this suggests that adversaries find larger swaps -- controlling for market volatility -- more profitable to sandwich or otherwise exploit (in the form of MEV). To test this, in models (2) & (3) we run the same regression on the adverse and benign components of slippage. We break down slippage into adversarial, collision, and liquidity components (the methodology is presented in Table 2):
\[\mathsf{Slippage}_{i}\approx\mathsf{AdversarialSlippage}_{i}+\mathsf{CollisionSlippage}_{i}+\mathsf{LiquiditySlippage}_{i}\]
As expected, most of the effect of order size on slippage is driven by adversarial slippage. Moreover, increased liquidity reduces adversarial slippage. We also evaluate reordering slippage in model (4), and find that it behaves similarly to our heuristic notion of adverse slippage. Interestingly, the effect of order size on reordering slippage is larger than the effect in our original slippage and adversarial slippage models. This may be due to back-running transactions that are captured in reordering slippage, but not in adverse slippage.
One interesting question is whether collision slippage is due to benign swaps in the same block as the target swap, or swaps from previous blocks (in which case latency might matter more). As shown in model (5), putting the transaction at the top of the block does not completely eliminate slippage. While the coefficient is significantly smaller than the coefficient in models (1)-(4), it might take 2-3 blocks between the quote time and the time the transaction becomes onchain, so the price at the top of the block may be different than the quoted price (either due to collision or adversarial transactions), thereby creating slippage. Still, it seems like cross-block slippage is smaller than within-block slippage.
Another important factor affecting slippage is gas price. As the results show, higher gas prices are also negatively associated with slippage. Here, however, the association is mostly
driven by collision slippage (see models 2 & 3). The intuition is simple: gas price is higher when the network is more congested and there is more activity. Since, typically, during high-activity periods, users trade in the same direction, this results in higher negative slippage. This same intuition can explain the negative coefficient in models (4) & (5).
Slippage is also expected to be affected by the time transactions sit in the mempool: the longer it takes for the transaction to become onchain, the higher the expected slippage, due to a higher probability of collision and a higher probability of the transaction being identified as a profitable MEV opportunity. Our results show that indeed slippage increases with the time the transaction spends in the mempool. Interestingly, most of the effect comes from collision rather than adversarial slippage. This suggests that searchers respond quickly to MEV opportunities. As before, the negative effect in the case where the transaction is executed at the top of the block is due to inter-block slippage.
The hourly returns variable captures, in essence, the effect of market momentum. It is positive if the market moves in the same direction as the swap, and negative otherwise. As expected, market momentum negatively affects slippage and is mostly driven by collision. All other control variables - e.g., volatility and liquidity - are in the expected direction. Finally, note that the effect of the slippage tolerance set by the user is economically insignificant. This is driven by the fact that most users do not change the default tolerance of 50bps. Indeed, both the 25th percentile and the median slippage tolerance levels are 50bps.
### Comparison with the WETH-PEPE pool
As mentioned above, the WETH-USDC pool is a mature and highly liquid pool. One would expect it to perform much more efficiently than younger, mostly speculative pools with high volatility. To this end, we choose the WETH-PEPE pool, which is a highly active pool with 1/100 the liquidity of USDC (median of $0.24M vs $22.5M) and about 10 times the volatility of USDC. Table 3 presents the regression results.
In general, the direction of the effects of the different factors on slippage is similar to what we see in the WETH-USDC pool, with the notable exceptions that the effect of gas price on adversarial slippage is positive and significant, and the effect of slippage tolerance is significant and economically meaningful. The positive and significant coefficient on gas price for adversarial slippage is likely due to the fact that an increase in the cost of a transaction makes some adversarial strategies unprofitable. If, on average, the availability of profitable MEV opportunities does not change during periods of high gas prices, then we should expect to see less adversarial activity during times when gas price is high. Since, unlike the WETH-USDC pool, network congestion is likely not closely associated with profitable MEV opportunities in the PEPE pool, we see that adversarial slippage for PEPE is positively correlated with gas price. As for slippage tolerance, the 25th percentile and median slippage tolerance values for the PEPE pool are 100 and 300, respectively. This suggests that users (or the Uniswap Labs interface) are actively choosing risk tolerance levels with the
expectation that slippage would be quite high, likely due to the high price volatility.
Given the large differences in overall activity, transaction size, etc. between the two pools, the coefficients in the regressions in Table 2 and Table 3 cannot be directly compared. In order to examine whether there is more adversarial activity in the PEPE pool relative to the USDC pool, we run a logit regression on adversarial slippage. Specifically, we run the same regression as in Equation 1 where now \(y_{i}\) takes on the value of 1 if swap \(i\) experienced a negative adversarial slippage larger than $5, and 0 otherwise.12 The results are presented in Table 4. (Note that liquidity in range is quite correlated with week number for PEPE, and much less so for USDC).
Footnote 12: To avoid the case of swaps being mistakenly identified as adverse because of a small negative adversarial slippage (which may be spurious due to our heuristic), we classify transactions in the logit regression as being adversarial only if their adversarial slippage (in bps) multiplied by the order size is worse than negative $5.
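A sketch of this logit specification (hypothetical column names; the outcome flags swaps whose adversarial slippage cost is worse than negative $5):

```python
import pandas as pd
import statsmodels.formula.api as smf

swaps = pd.read_csv("swaps.csv")  # hypothetical: one row per swap
# Dollar cost of adversarial slippage; flag swaps worse than -$5.
adv_usd = swaps["adversarial_slippage_bps"] / 10_000 * swaps["order_size_usd"]
swaps["adverse_flag"] = (adv_usd < -5.0).astype(int)

logit = smf.logit(
    "adverse_flag ~ order_size + gas_price + log_latency"
    " + slippage_tolerance + last_hour_return + liquidity"
    " + volatility + C(week)",
    data=swaps,
).fit()
```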
As the table shows, the likelihood of adversarial slippage for PEPE is about 80% larger than for USDC. Furthermore, the increased likelihood is enhanced by the size of the transaction, i.e., for a certain increase in transaction size, the increase in the likelihood of adversarial slippage for PEPE is larger than the corresponding increase for USDC.
The results above suggest that slippage in mature markets that are highly active, liquid, and exhibit low volatility is minor. More generally, the results suggest that, despite the decentralization and transparency that characterize DeFi markets, mature markets behave efficiently and may even be considered close in efficiency to traditional financial markets. In order to further examine the impact of the fundamental characteristics of DeFi markets--decentralization and transparency--we next break down our analysis into public and private transactions. Specifically, while decentralization is core to the functioning of the Uniswap Protocol, nowadays many transactions (including MEV transactions) are sent as private transactions. This allows us to better study the effect of transparency on market efficiency in mature markets such as WETH-USDC as well as in younger markets like the WETH-PEPE pool.
## 5 The Impact of MEV Infrastructure on Slippage
We examine two important components of the MEV ecosystem: the usage of private RPCs and trust in builders' neutrality.
### Private RPCs
Sending a transaction to a private RPC (putting it in a mev-boost bundle) should prevent other searchers from sandwiching it, as the private transaction does not appear in the mempool before becoming onchain. Consequently, private transactions should have much smaller negative slippage, driven only by collision. Nevertheless, since these
transactions typically are sent to specific RPCs, they might suffer from higher latency and consequently wider collision slippage.
Table 5 presents the results for the same regression as in Equation 1 with the addition of a dummy variable that takes on the value of 1 if transaction \(i\) is public, and 0 otherwise. As expected, the coefficient on \(\mathsf{Public}\) is negative and significant for both USDC and PEPE. In fact, out of the 8294 (8629) private interface swaps in USDC (PEPE), only 3 (12) have negative adverse slippage worse than $5. That is, private RPCs seem to completely eliminate adverse slippage. Of the 3 (12) swaps with adverse slippage worse than $5, none are obviously sandwiched when manually checked on Etherscan, and the slippage appears accidental. Furthermore, the coefficient on collision slippage is not significant, meaning that the potential increase in latency has no effect on average collision slippage. We note that while this shows evidence that private RPCs are reliable in the present day, it does not guarantee that the trust assumptions that underpin private RPCs will continue to be valid in the future.13
Footnote 13: We further interact \(\mathsf{Public}\) with our other explanatory and control variables. We find that the effect of order size on adversarial slippage is stronger for public transactions, yet gas price interacted with \(\mathsf{Public}\) has no significant effect. For brevity, we do not present these results here.
### Builder Trust
When participating in the \(\mathsf{mev}\)-boost ecosystem, searchers and users of private RPCs must trust that builders do not frontrun or 'unpack' the \(\mathsf{mev}\)-boost bundles sent to them. While in traditional markets it is the regulator that audits the intermediaries, in DeFi this trust relies on incentives (or even on goodwill). It is, therefore, important to audit this trust assumption. Our reordering slippage metric provides a way for the public to monitor builders' behavior, without the need to acquire private data.
As Table 6 shows, we do not find conclusive evidence that any of the top 5 builders (by private transaction count) are misbehaving, at least from a cursory examination. This may suggest that the penalties associated with breaking users' trust are large enough to incentivize builders not to defect. Investigating the validity of trust assumptions required by the MEV ecosystem remains an important open question.
## 6 Related work
Quantifying MEV on Ethereum is an active area of research. Daian et al. [10, 18] introduced the notion of MEV and were the first to demonstrate its impact on decentralized markets, in turn giving rise to projects such as \(\mathsf{mev}\)-boost[24]. More recently, a series of works [47, 44] quantify frontrunning attacks and MEV extraction using historical data. [50] further extend this to analyze the impact of Flashbots on overall MEV extraction. Many of their techniques rely on identifying specific strategies for extracting value from users, and
then designing heuristics for identifying those strategies. Subsequently, [45, 7, 8] explore more automated approaches for identifying value-extraction mechanisms.
Few works have analyzed slippage or overall transaction costs of trading on decentralized exchanges. 0x [2] analyze the slippage of trades sent through the 0x Swap API, and show that setting an appropriate slippage tolerance is essential for bounding the cost of MEV experienced by any single swap. [37] show that transactions sent to private RPCs may have on average higher transaction costs than public transactions. [9] estimate the transaction costs for various fixed-size trades over time by simulating pools using onchain liquidity data, and gas price data. Their analysis does not incorporate slippage, MEV, or onchain execution prices, relying entirely on simulated execution.
A number of works measure the dynamics of liquidity provisioning on decentralized exchanges, sometimes through the lens of transaction costs. [16] show that lower gas fees increase liquidity repositioning and concentration, reducing price impact for small trades. They use a notion of slippage that incorporates price impact, comparing realized prices to the market mid price. [27] show that higher LP fees may reduce price impact. [33] show that the cost of price impact on AMMs may be lower than on centralized exchanges for highly liquid pools.
_Transaction costs on traditional markets._ In contrast, a number of works have quantified overall trading costs on traditional markets (such as equities markets). We point to [23, 22, 21, 17, 5] as examples, but this list is by no means comprehensive.
_Mitigating MEV._ Much work has focused on mitigating MEV extraction through protocol design. At the consensus layer, [31] propose an order-fair consensus algorithm, where transactions are ordered by the time they arrive in the view of validators. Order-fair consensus has seen substantial follow-up work [54, 32, 15, 30]. Time-based order-fairness may not eliminate backruns or arbitrage and may incentivize latency wars. An alternative approach, as described by [36], is to force block proposers to 'commit' to an ordering of a block, before they learn the contents of the transactions. Such an approach is reminiscent of those used by MPC protocols [53, 39] and asynchronous byzantine agreement algorithms; the downside is that it requires more sophisticated cryptography (e.g. threshold cryptography, secret sharing) that is harder to adapt to a permissionless setting.
At the block builder level, [51] propose a verifiable block sequencing rule that block builders can follow and that observers can audit. Such a rule may mitigate MEV whilst being accountable to the general public. |
2309.10341 | Theory of Nonequilibrium Coexistence with Coupled Conserved and
Nonconserved Order Parameters | Phase separation routinely occurs in both living and synthetic systems. These
phases are often complex and distinguished by features including crystallinity,
nematic order, and a host of other nonconserved order parameters. For systems
at equilibrium, the phase boundaries that characterize these transitions can be
straightforwardly determined through the framework of thermodynamics. The
prevalence of phase separation in active and driven systems motivates the need
for a genuinely nonequilibrium theory for the coexistence of complex phases.
Here, we develop a dynamical theory of coexistence when both conserved and
nonconserved order parameters are present, casting coexistence criteria into
the familiar form of equality of state functions. Our theory generalizes
thermodynamic notions such as the chemical potential and Gibbs-Duhem relation
to systems out of equilibrium. While these notions may not exist for all
nonequilibrium systems, we numerically verify their existence for a variety of
systems by introducing the phenomenological Active Model C+. We hope our work
aids in the development of a comprehensive theory of high-dimensional
nonequilibrium phase diagrams. | Daniel Evans, Ahmad K. Omar | 2023-09-19T05:51:44Z | http://arxiv.org/abs/2309.10341v3 | # Theory of Nonequilibrium Symmetry-Breaking Coexistence and Active Crystallization
###### Abstract
Crystallization is perhaps the most familiar example of a symmetry-breaking transition. In equilibrium, thermodynamic arguments result in a powerful and convenient set of criteria for determining the coexistence curves associated with these transitions. In recent years, nonequilibrium symmetry-breaking transitions have been routinely observed in a variety of natural and synthetic systems. The breaking of detailed balance, and the resulting absence of Boltzmann statistics, motivates the need for a symmetry-breaking coexistence theory that is independent of the underlying distribution of microstates. Here, we develop such a theory, relying only on mechanics, balance laws, and system symmetries. In doing so, we develop a generalized Gibbs-Duhem relation that results in nonequilibrium coexistence criteria solely in terms of bulk equations of state. We apply our framework to active crystallization, developing a complete description of the phase diagram of active Brownian hard spheres. Our predicted phase diagram quantitatively recapitulates the solid-fluid coexistence curve as well as other key features of active phase behavior, such as the liquid-gas coexistence binodal and solid-liquid-gas triple point. It is our hope that our findings offer a concrete path forward towards the development of a general theory for nonequilibrium coexistence.
## I Introduction
From motile bacteria [1] to starfish embryos exhibiting chiral motion [2], living systems composed of so-called active matter are routinely observed to crystallize. For over a century, thermodynamics has enabled the determination of phase diagrams that describe these transitions for systems in _equilibrium_. A number of approaches have been proposed to construct _nonequilibrium_ liquid-gas binodals (described by one conserved order parameter, i.e., the density) [3; 4; 5; 6; 7; 8; 9; 10; 11] and the spinodal (or stability limit) of driven systems with multiple coupled conserved [12; 13] or nonconserved [14] order parameters. However, nonequilibrium crystallization [15; 16; 17; 18; 19; 20; 21], representative of a broad class of out-of-equilibrium transitions that involve coupled conserved and nonconserved order parameters, has largely eluded theoretical description.
The criteria for equilibrium solid-fluid coexistence are unambiguous: the pressure and chemical potential of the solid phase are equal to those of the fluid phase, and both phases are locally stable with respect to the crystalline order parameter. These remarkably simple and convenient criteria afforded by thermodynamics allow equilibrium phase diagrams to be readily determined from _bulk_ equations of state. While pressure and local stability are notions that can be extended to systems arbitrarily far from equilibrium, chemical potential is ill-defined in active systems. The question arises: is there a set of nonequilibrium (i.e., derived without appealing to equilibrium concepts) symmetry-breaking coexistence criteria which solely contain bulk equations of state?
Resolution of the theoretical question posed above will aid in our physical understanding of a number of recent observations of symmetry-breaking coexistence in driven systems. For example, it was recently shown that the addition of activity profoundly alters the solid-fluid coexistence curve [17] in systems of monodisperse hard spheres [22; 23; 24; 25; 26; 27; 28; 29; 30]. Finite activity was shown to rapidly increase the solid phase density to maximal fcc packing \(\left(\phi^{\mathrm{solid}}\approx\phi^{\mathrm{CP}}\equiv 0.74\right)\) from its equilibrium value \(\left(\phi^{\mathrm{solid}}\approx 0.545\right)\) which is recovered at low activities. While thermodynamics elucidates the origins of equilibrium crystallization transitions, the absence of an analogous nonequilibrium framework has prevented a detailed understanding of the physical origins of out-of-equilibrium solid-fluid coexistence.
In this Article, we develop a theory for constructing symmetry-breaking coexistence curves without appealing to thermodynamic notions and apply this theory to active crystallization [16; 17]. We generalize the mechanical and dynamical theory developed to construct out-of-equilibrium fluid-fluid coexistence curves [8; 11; 31] that has successfully [11] described the motility-induced phase separation (MIPS) [3; 4; 17; 32; 33; 34; 35; 36; 37; 38; 39] of active hard spheres. Beginning with the spatial and temporal evolution equations of our order parameters, we derive the criteria for symmetry-breaking coexistence solely in terms of bulk mechanical and structural equations of state. Our theory can hence predict the coexistence curves of symmetry-breaking transitions both in and out of equilibrium and, moreover, allows us to identify a generalized Gibbs-Duhem relation. We apply our perspective to active hard spheres and quantitatively capture all aspects of the reported phase diagram [17], including recovering the equilibrium hard sphere transition in the limit of vanishing activity, the nearly close-packed density of the solid phase at finite activity, and the location of the triple point. Finally, the violation of the equilibrium Gibbs-Duhem relation is shown to be directly related to the uniquely nonequilibrium structure of the active interface. Our work thus makes clear that understanding phase coexistence of driven systems requires the use of a genuinely nonequilibrium coexistence framework.
## II Theory of symmetry-breaking coexistence
Our aim in this Section is to derive bulk criteria for symmetry-breaking two-phase coexistence that are applicable to both equilibrium and nonequilibrium systems. Here, we focus on a system described by two coupled order parameters - a conserved density field and a nonconserved field. In Section II.1, we briefly discuss the expected criteria in equilibrium as determined through bulk thermodynamics. There, no description of the interface separating the two coexisting phases is required, and the coexistence criteria solely contain bulk equations of state. We subsequently generalize these criteria in Section II.2 by considering the complete spatial and temporal dynamics of the order parameter fields and examining their stationary state. In this approach, knowledge of interfacial forces becomes crucial in establishing a _generalized Gibbs-Duhem relation_ that allows the _nonequilibrium_ coexistence criteria to be simply expressed with bulk equations of state.
### Equilibrium Coexistence Criteria from Bulk Thermodynamics
Consider a macroscopic system with a fixed overall number density \(\rho\) and volume \(V\). We characterize the degree of order in the system with a scalar phenomenological intensive order parameter \(\psi\). The system is described by the vector of order parameter densities \(\mathbf{X}\equiv\begin{bmatrix}\rho&\rho\psi\end{bmatrix}^{\mathrm{T}}\) with the bulk (mean-field) free energy density of the system denoted as \(f_{0}\left(\mathbf{X}\right)\). In contrast to \(\rho\), \(\psi\) is a nonconserved and unconstrained variable that the system may adjust to reduce its total free energy \(F_{0}=Vf_{0}\left(\mathbf{X}\right)\). In the absence of a coupling between \(\psi\) and a conserved quantity, each phase will have an identical value of \(\psi\): the value that minimizes \(f_{0}\). Symmetry-breaking coexistence emerges from the coupling of \(\psi\) with the constrained \(\rho\). This coupling is reflected in non-additive contributions of \(\rho\) and \(\psi\) to the free energy density, i.e., \(f_{0}\left(\mathbf{X}\right)\neq\sum_{i}f_{0}^{(i)}\left(X_{i}\right)\) (and thus the mean-field probability cannot be factorized, i.e., \(P_{0}\left(\mathbf{X}\right)\propto\exp[-Vf_{0}\left(\mathbf{X}\right)/k_{B}T ]\neq\Pi_{i}\,P_{0}^{(i)}\left(X_{i}\right)\), where \(k_{B}T\) is the thermal energy). A necessary criterion for equilibrium symmetry-breaking coexistence is hence a non-vanishing mixed derivative, \(\partial^{2}f_{0}/\partial\rho\partial\psi\).
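For concreteness, a minimal model with this structure (an illustrative example of our own, not the specific free energy analyzed in this work) is a Landau-type expansion whose quadratic coefficient depends on density:

\[f_{0}\left(\rho,\psi\right)=f_{\rho}\left(\rho\right)+\frac{a\left(\rho\right)}{2}\psi^{2}+\frac{u}{4}\psi^{4},\qquad\frac{\partial^{2}f_{0}}{\partial\rho\,\partial\psi}=a^{\prime}\left(\rho\right)\psi,\]

with \(u>0\). If \(a\left(\rho\right)\) changes sign at an ordering density, the value of \(\psi\) minimizing \(f_{0}\) depends on \(\rho\), and symmetry-breaking coexistence becomes possible.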
In the scenario of coexisting \(\alpha\) and \(\beta\) phases [e.g., coexisting fluid (\(\alpha\)) and solid (\(\beta\))], the total free energy can be expressed as \(F_{0}=V^{\alpha}f_{0}\big{(}\mathbf{X}^{\alpha}\big{)}+V^{\beta}f_{0}\big{(} \mathbf{X}^{\beta}\big{)}\) where \(V^{\alpha/\beta}\) and \(\mathbf{X}^{\alpha/\beta}\) are the respective volume and order parameter densities of the \(\alpha/\beta\) phases and we have neglected the interfacial free energy (the ratio of the interfacial area to the system volume is negligibly small for macroscopic systems). Notably, while the phase volumes and number densities are constrained, there are no constraints on \(\psi^{\beta}\) and \(\psi^{\alpha}\) (i.e., systems prepared at a given density and total volume can take any value of \(\psi\)). Minimizing the free energy with respect to each phase's volume and \(\mathbf{X}\), subject to the above constraints, results in the equilibrium coexistence criteria. Defining \(\mathbf{\mu}_{0}\equiv\partial f_{0}/\partial\mathbf{X}\equiv\begin{bmatrix}\mu _{0}^{\rho}&\mu_{0}^{\psi}\end{bmatrix}^{\mathrm{T}}\) [where \(\mu_{0}^{\rho}\equiv\partial f_{0}/\partial\rho\) is the familiar chemical potential and \(\mu_{0}^{\psi}\equiv\partial f_{0}/\partial(\rho\psi)\)], we arrive at our first criteria: \(\mathbf{\mu}_{0}\big{(}\mathbf{X}^{\alpha}\big{)}=\mathbf{\mu}_{0}\big{(}\mathbf{X}^{ \beta}\big{)}=\mathbf{\mu}^{\mathrm{coexist}}\), where \(\mathbf{\mu}^{\mathrm{coexist}}=\begin{bmatrix}\mu^{\rho,\mathrm{coexist}}&\mu^{ \psi,\mathrm{coexist}}\end{bmatrix}^{\mathrm{T}}\). Here, \(\mu^{\rho,\mathrm{coexist}}\) is the coexistence chemical potential which must be determined and \(\mu^{\psi,\mathrm{coexist}}=0\). The constrained minimization with respect to the phase volumes leads to our final criterion: equality of pressures, \(p_{0}\big{(}\mathbf{X}^{\alpha}\big{)}=p_{0}\big{(}\mathbf{X}^{\beta}\big{)}=p^ {\mathrm{coexist}}\), where \(p_{0}\equiv\mathbf{\mu}_{0}\cdot\mathbf{X}-f_{0}\). The four criteria for equilibrium \(\alpha-\beta\) coexistence are thus:
\[\mu_{0}^{\rho}\big{(}\mathbf{X}^{\alpha}\big{)}=\mu_{0}^{\rho} \big{(}\mathbf{X}^{\beta}\big{)}=\mu^{\rho,\mathrm{coexist}}, \tag{1a}\] \[\mu_{0}^{\psi}\big{(}\mathbf{X}^{\alpha}\big{)}=0,\] (1b) \[\mu_{0}^{\psi}\big{(}\mathbf{X}^{\beta}\big{)}=0,\] (1c) \[p_{0}\big{(}\mathbf{X}^{\alpha}\big{)}=p_{0}\big{(}\mathbf{X}^{ \beta}\big{)}=p^{\mathrm{coexist}}. \tag{1d}\]
While the criteria following from the constrained minimization with respect to \(\rho\) [Eq. (1a)] and \(V\) [Eq. (1d)] are familiar for any state of equilibrium two-phase coexistence, the remaining two criteria ensure that within each phase a stationary value of \(\psi\) is selected for the corresponding \(\rho\).
The four criteria in Eq. (1) allow for the determination of the four unknown variables that characterize states of \(\alpha-\beta\) coexistence: \(\rho^{\alpha}\), \(\rho^{\beta}\), \(\psi^{\alpha}\), and \(\psi^{\beta}\). While derived for equilibrium systems, we can immediately appreciate that Eqs. (1b), (1c), and (1d) are likely applicable to nonequilibrium systems as well. Pressure is a mechanical concept and can thus be defined out of equilibrium [38], and the selection of a stationary \(\psi\) in each phase is a well-defined notion for nonequilibrium systems. Chemical potential (\(\mu_{0}^{\rho}\)), however, is strictly an equilibrium concept. Importantly, equality of chemical potentials can be recast into a _path-independent_ integral condition on the pressure by introducing the Gibbs-Duhem relation. As detailed in the Supplemental Material (SM) [40], the equilibrium Gibbs-Duhem relation [41] is simply \(dp_{0}=\rho d\mu_{0}^{\rho}+\rho\psi d\mu_{0}^{\psi}\), which can be expressed as:
\[d\mu_{0}^{\rho}=\mathcal{E}_{n}^{\mathrm{eqm}}d\mathcal{F}_{n}^{0}, \tag{2}\]
where we have begun using indicial notation. We have introduced the generalized force vector, \(\mathcal{F}_{n}^{0}\left(\left\{X_{i}\right\}\right)\equiv\begin{bmatrix}p_{0}& \mu_{0}^{\psi}\end{bmatrix}^{\mathrm{T}}\), and defined its conjugate, \(\mathcal{E}_{n}^{\mathrm{eqm}}\left(\left\{X_{i}\right\}\right)\equiv \begin{bmatrix}\upsilon&-\psi\end{bmatrix}^{\mathrm{T}}\) (where \(\upsilon\equiv 1/\rho\) is the inverse density). Equality of chemical potentials [Eq. (1a)] can now be equivalently expressed by integrating Eq. (2) between the two phases. A straightforward integration by parts results in:
\[\int_{\mathcal{E}_{n}^{\mathrm{eqm},\alpha}}^{\mathcal{E}_{n}^{\mathrm{eqm},\beta}}\left[\mathcal{F}_{n}^{0}\left(\left\{X_{i}\right\}\right)-\mathcal{F}_{n}^{\mathrm{coexist}}\right]d\mathcal{E}_{n}^{\mathrm{eqm}}\left(\left\{X_{i}\right\}\right)=0,\] (3a) where \[\mathcal{F}_{n}^{0}\big(\big\{X_{i}^{\alpha}\big\}\big)=\mathcal{F}_{n}^{0}\big(\big\{X_{i}^{\beta}\big\}\big)=\mathcal{F}_{n}^{\mathrm{coexist}},\] (3b) and \(\mathcal{F}_{n}^{\mathrm{coexist}}\equiv\begin{bmatrix}p^{\mathrm{coexist}}&0\end{bmatrix}^{\mathrm{T}}\).
The criteria presented in Eq. (3) no longer explicitly contain the chemical potential but are entirely equivalent to those shown in Eq. (1). Notably, Eq. (3a) is a multivariate equal-area construction. Its evaluation thus requires the selection of a path between the two phases, characterized by \(\mathcal{E}_{n}^{\mathrm{eqm},\alpha}\) and \(\mathcal{E}_{n}^{\mathrm{eqm},\beta}\). However, this integral is _path-independent_: a fact that is made clear by the Gibbs-Duhem relation. It proves convenient to select an integration path (as shown in the SM [40])
between the two phases such that the value of \(\psi\) is always stationary (i.e., \(\mu_{0}^{\psi}\left(\{X_{i}^{*}\}\right)=0\)), where \(\psi^{*}(\rho)\) is the stationary value of the nonconserved order parameter for a given density. Introducing \(\psi^{*}(\rho)\)_by definition_ ensures that \(\psi^{\alpha}\) and \(\psi^{\beta}\) are stationary, \(\mu_{0}^{\psi}\!\left(\{X_{i}^{\alpha*}\}\right)=\mu_{0}^{\psi}\!\left(\{X_{i }^{\beta*}\}\right)=0\), reducing the four criteria in Eq. (3) to:
\[\int_{\upsilon^{\alpha}}^{\upsilon^{\beta}}\left[p_{0}\left(\{X_{i}^{*}\}\right)-p^{\rm coexist}\right]d\upsilon=0, \tag{4a}\] \[p_{0}\left(\{X_{i}^{\alpha*}\}\right)=p_{0}\left(\{X_{i}^{\beta*}\}\right)=p^{\rm coexist}. \tag{4b}\]
We note that while we have derived these conditions for symmetry-breaking \(\alpha-\beta\) coexistence, these criteria also naturally recover the criteria for coexistence when no symmetry is broken (e.g., liquid-gas coexistence) with \(\psi^{\alpha}=\psi^{\beta}\) and \(\rho^{\alpha}\neq\rho^{\beta}\).
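To make the equal-area construction in Eq. (4) concrete, the following minimal numerical sketch (our own illustration, not code from this work) solves Eqs. (4a) and (4b) simultaneously for a van der Waals-like toy pressure written in the inverse density \(\upsilon=1/\rho\); all parameter values are placeholders chosen only so that a two-phase region exists.

```python
from scipy.integrate import quad
from scipy.optimize import fsolve

def p0(v, a=2.0, b=0.3, T=1.8):
    """Toy bulk pressure along the stationary path psi*(rho), written in
    v = 1/rho (a van der Waals-like placeholder, not the active EOS)."""
    return T / (v - b) - a / v**2

def residuals(x):
    v_alpha, v_beta, p_coex = x
    area, _ = quad(lambda v: p0(v) - p_coex, v_alpha, v_beta)  # Eq. (4a)
    return [p0(v_alpha) - p_coex,   # Eq. (4b), alpha phase
            p0(v_beta) - p_coex,    # Eq. (4b), beta phase
            area]                   # equal-area (Maxwell) condition

v_alpha, v_beta, p_coex = fsolve(residuals, x0=[0.55, 2.0, 0.5])
print(f"v^alpha = {v_alpha:.3f}, v^beta = {v_beta:.3f}, p^coexist = {p_coex:.3f}")
```

The same three-residual structure reappears below for the nonequilibrium construction; only the integrand is reweighted.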
### Nonequilibrium Symmetry-Breaking Coexistence Criteria from Stationary Conditions
We now look to derive general criteria for symmetry-breaking two-phase coexistence through purely mechanical and dynamical considerations, recovering the equilibrium result described above when the underlying dynamics are passive. The variational principle provided by equilibrium thermodynamics allowed us to formulate the coexistence criteria solely in terms of \(p_{0}\left(\{X_{i}^{*}\}\right)\) and \(\psi^{*}\left(\rho\right)\). The absence of this variational principle out of equilibrium makes it unclear _a priori_ if a set of nonequilibrium coexistence criteria with the simple form of Eqs. (3) or (4) (i.e., containing only bulk equations of state) can be obtained. We thus begin by considering the full spatial coexistence profile, now explicitly considering the interface separating the two phases. We then seek a procedure that circumvents the determination of the complete spatial profile and casts the nonequilibrium coexistence criteria in terms of bulk equations of state.
The spatial and temporal dynamics of the density field \(\rho(\mathbf{x};t)\) (subject to the constraint \(\int_{V}d\mathbf{x}\,\rho(\mathbf{x};t)=V\rho_{0}\)) and the unconstrained order parameter field \(\psi(\mathbf{x};t)\) satisfy the general balance laws \(\partial\rho/\partial t=-\nabla\cdot\mathbf{j}^{\rho}\) and \(\partial\left(\rho\psi\right)/\partial t=-\nabla\cdot\mathbf{j}^{\psi}+s^{\psi}\), where bold variables indicate quantities that are spatially tensorial. Here, \(\mathbf{j}^{\rho}\) and \(\mathbf{j}^{\psi}\) are the absolute fluxes of \(\rho\) and \(\psi\), respectively, and \(s^{\psi}\) is the generation term of \(\psi\). The dynamics of \(\mathbf{j}^{\rho}\equiv\rho\mathbf{u}\) (where \(\mathbf{u}\) is the average velocity) are governed by linear momentum conservation [11]. The conditions for stationary \(\left(\partial\rho/\partial t=\partial\left(\rho\psi\right)/\partial t=0\right)\) coexistence between \(\alpha\) and \(\beta\) phases with flux-free boundary conditions (\(\mathbf{j}^{\rho}=\mathbf{j}^{\psi}=\mathbf{0}\)) are then:
\[\nabla\cdot\mathbf{\sigma}+\mathbf{b}=\mathbf{0}, \tag{5a}\] \[s^{\psi}=0. \tag{5b}\]
Equation (5a) follows from a linear momentum balance, where \(\mathbf{\sigma}\) is the stress tensor and \(\mathbf{b}\) is the sum of all body forces (e.g., external and active forces) acting on the system. Without loss of generality, we consider a planar solid-fluid interface with a surface normal in the \(z\)-direction with translational invariance in the tangential directions. We define a dynamic pressure \(-\mathcal{P}\equiv\sigma_{zz}+\sigma_{zz}^{b}\) following [11; 38], where \(\sigma_{zz}\) is the true stress and \(\sigma_{zz}^{b}\) is the effective stress arising from body forces \(b_{z}=d\sigma_{zz}^{b}/dz\). Equation (5a) thus implies the dynamic pressure is spatially constant, \(\mathcal{P}=\mathcal{P}^{\rm coexist}\). The generation term can be expressed as \(s_{0}^{\psi}=-L^{\psi}\mu_{0}^{\psi}\), where \(L^{\psi}\) is a positive linear transport coefficient [42; 43]. Here, \(N\mu_{0}^{\psi}\propto-\partial\ln P_{0}\left(\{X_{i}\}\right)/\partial\psi\) (where \(P_{0}\) is the probability of the system having the spatially homogeneous vector of order parameter densities \(X_{i}\)) _both_ in and out of equilibrium, with \(\beta f_{0}V=-\ln P_{0}\) in equilibrium. We then see that Eq. (5b) implies \(\mu^{\psi}=0\) at all points in space. The generalized force vector now takes the form \(\mathcal{F}_{n}\equiv\left[\mathcal{P}\ \ \mu^{\psi}\right]^{\rm T}\), where we note that the dynamic pressure reduces to the static pressure in the absence of body forces. The solution to Eq. (5) is then, generally, \(\mathcal{F}_{n}=\mathcal{F}_{n}^{\rm coexist}=\left[\mathcal{P}^{\rm coexist}\ \ 0\right]^{\rm T}\), where \(\mathcal{P}^{\rm coexist}\) is the to-be-determined coexistence pressure.
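Spelled out in one dimension, the chain from Eq. (5a) to a spatially constant dynamic pressure is a single line:

\[\frac{d\sigma_{zz}}{dz}+b_{z}=\frac{d}{dz}\left(\sigma_{zz}+\sigma_{zz}^{b}\right)=-\frac{d\mathcal{P}}{dz}=0\quad\Longrightarrow\quad\mathcal{P}\left(z\right)=\mathcal{P}^{\rm coexist}\ \forall\ z.\]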
Exact microscopic expressions for the force vector, \(\mathcal{F}_{n}\), can be obtained from first principles through an Irving-Kirkwood procedure [44] or, for equilibrium systems, variationally from a free energy functional. In general, each component of \(\mathcal{F}_{n}\) depends on the full spatial order parameter profiles. To distinguish the bulk and interfacial contributions to \(\mathcal{F}_{n}\), we expand \(\mathcal{F}_{n}\) with respect to spatial gradients of the order parameters, discarding odd gradients (due to spatial inversion symmetry) and retaining second-order gradients (the minimum required to obtain spatially varying order parameters):
\[\mathcal{F}_{n}\approx\mathcal{F}_{n}^{0}-B_{n\ell m}\frac{dX_{\ell}}{dz}\frac{ dX_{m}}{dz}-A_{n\ell}\frac{d^{2}X_{\ell}}{dz^{2}}, \tag{6}\]
where we continue to use indicial notation. Each component of the bulk force vector, \(\mathcal{F}_{n}^{0}=\left[\mathcal{P}_{0}\ \ \mu_{0}^{\psi}\right]^{\rm T}\), and the interfacial coefficients, \(B_{n\ell m}\) and \(A_{n\ell}\), are all equations of state that generally depend on both \(\rho\) and \(\psi\). Macroscopic coexistence requires that there is at least one pair of distinct \(X_{i}\) vectors, \(\{X_{i}^{(1)}\}\) and \(\{X_{i}^{(2)}\}\), satisfying \(\mathcal{F}_{n}^{0}\!\left(\{X_{i}^{(1)}\}\right)=\mathcal{F}_{n}^{0}\!\left(\{X_{i}^{(2)}\}\right)\) in order for two distinguishable phases to coexist. Additionally, the eigenvalues of \(A_{n\ell}\) must be greater than or equal to zero to ensure that small wavelength spatial fluctuations in \(X_{i}\) are disfavored [45].
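As a point of reference (a standard equilibrium square-gradient illustration of ours, not the general active result), when the dynamics follow variationally from a functional \(F=\int dz\left[f_{0}\left(\{X_{i}\}\right)+\frac{1}{2}\kappa_{\ell m}\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz}\right]\) with constant coefficients \(\kappa_{\ell m}\), functional differentiation gives

\[\frac{\delta F}{\delta X_{n}}=\frac{\partial f_{0}}{\partial X_{n}}-\kappa_{n\ell}\frac{d^{2}X_{\ell}}{dz^{2}},\]

i.e., \(A_{n\ell}=\kappa_{n\ell}\) and \(B_{n\ell m}=0\) in this simplest case; order-parameter-dependent \(\kappa_{\ell m}\) generate nonzero \(B_{n\ell m}\), while the \(n=\rho\) component instead acquires its gradient terms through the pressure (e.g., a Korteweg-type stress).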
Equating the right-hand side of Eq. (6) to \(\mathcal{F}_{n}^{\rm coexist}\) yields two coupled differential equations which can be solved simultaneously to find the full spatial coexistence profiles of \(\rho\) and \(\psi\). Here, our aim is to circumvent solving for these profiles and to simply determine the coexistence values of the density and nonconserved order parameter, \(X_{i}^{\alpha/\beta}\), in terms of bulk equations of state, as is possible with the equilibrium criteria [Eq. (3)]. We do this by converting the two stationary conditions \(\left(\mathcal{F}_{n}=\mathcal{F}_{n}^{\rm coexist}\right)\) into four criteria. The first three criteria can be identified immediately by noting \(dX_{\ell}/dz=d^{2}X_{\ell}/dz^{2}=0\) in the spatially uniform \(\alpha\) and \(\beta\) phases [and hence do not involve the interfacial terms in Eq. (6)], resulting in \(\mathcal{F}_{n}^{0}\!\left(\{X_{i}^{\alpha}\}\right)=\mathcal{F}_{n}^{0}\! \left(\{X_{i}^{\beta}\}\right)=\mathcal{F}_{n}^{\rm coexist}\). These three criteria are identical to those found in equilibrium and are thus universal for all symmetry-breaking two-phase coexistence scenarios.
We now aim to find the fourth criterion, noting that in equilibrium, this criterion is the multivariate equal-area Maxwell construction [Eq. (3a)]. Importantly, the equilibrium Gibbs-Duhem relation [Eq. (2)] allows us to identify two equivalent forms of the fourth criterion: the equal-area construction or, alternatively, equality of chemical potentials. In the absence of a well-defined chemical potential out of equilibrium, we introduce the ansatz that the fourth criterion has a similar form to the equal-area Maxwell construction:
\[\int_{\mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\left[ \mathcal{F}_{n}^{0}\left(\{X_{i}\}\right)-\mathcal{F}_{n}^{\rm coexist}\right]d \mathcal{E}_{n}=0,\] (7a) where \[\mathcal{E}_{n}\] is a generalized vector of variables that must be determined. In equilibrium, \[\mathcal{E}_{n}=\mathcal{E}_{n}^{\rm eqm}\] and Eq. ( 7a ) reduces to the equilibrium equal-area Maxwell construction [Eq. ( 3a )]. Equivalently, Eq. ( 7a ) may be written as: \[\int_{\mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\mathcal{F}_{n}^{0}d \mathcal{E}_{n}=\mathcal{F}_{n}^{\rm coexist}\left(\mathcal{E}_{n}^{\beta}- \mathcal{E}_{n}^{\alpha}\right),\] (7b) indicating that the integral in Eq. ( 7a ) is _path-independent_ by construction.
Comparison of Eqs. (7a) and (6) (and noting \(\mathcal{F}_{n}=\mathcal{F}_{n}^{\rm coexist}\) at all points in space) reveals that our ansatz implies that an integral over the interfacial terms (\(\mathcal{F}_{n}^{\rm int}\)) in \(\mathcal{F}_{n}\) vanishes:
\[\int_{\mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\mathcal{ F}_{n}^{\rm int}d\mathcal{E}_{n}\\ =\int_{\mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\left(B_ {n\ell m}\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz}+A_{n\ell}\frac{d^{2}X_{\ell}}{ dz^{2}}\right)d\mathcal{E}_{n}=0. \tag{8}\]
For this integral to vanish, and thus for our proposed fourth criterion to hold, derivatives of \(\mathcal{E}_{n}\), \(E_{nj}\equiv\partial\mathcal{E}_{n}/\partial X_{j}\), must be determined through the following system of equations (as detailed in Appendix A):
\[A_{n\ell}E_{nj}=A_{nj}E_{n\ell}, \tag{9a}\] \[B_{n\ell m}E_{nj}=B_{njm}E_{n\ell}, \tag{9b}\] \[B_{n\ell m}=B_{nm\ell}, \tag{9c}\] \[2B_{nm\ell}E_{nj}=\frac{\partial}{\partial X_{m}}\left(A_{n\ell}E_{nj}\right), \tag{9d}\]
where we emphasize that the number of unique equations in Eq. (9) is precisely the same as the number of elements in \(E_{nj}\) (as detailed in Appendix A).
The four bulk criteria for nonequilibrium \(\alpha-\beta\) coexistence can now be summarized as:
\[\mathcal{F}_{n}^{0}\left(\{X_{i}^{\alpha}\}\right)=\mathcal{F}_{n }^{0}\left(\{X_{i}^{\beta}\}\right)=\mathcal{F}_{n}^{\rm coexist}, \tag{10a}\] \[\int_{X_{j}^{\alpha}}^{X_{j}^{\beta}}\left[\mathcal{F}_{n}^{0} \left(\{X_{i}\}\right)-\mathcal{F}_{n}^{\rm coexist}\right]E_{nj}\left(\{X_{i} \}\right)dX_{j}=0, \tag{10b}\]
where Eq. (10a) contains the first three criteria and Eq. (10b) is the fourth. In Eq. (10b), we have opted to replace \(d\mathcal{E}_{n}\) that appeared in Eq. (7a) with \(E_{nj}dX_{j}\), as the components of \(\mathcal{E}_{n}\) may not be bijective with respect to the components of \(X_{n}\), in which case integrals between the phases with respect to \(\mathcal{E}_{n}\) cannot be evaluated. Equation (10b) is then a _weighted_-area construction (with weighting tensor \(E_{nj}\)) with respect to \(X_{j}\), rather than an equal-area construction with respect to \(\mathcal{E}_{n}\). When the dynamics of \(\rho\) and \(\psi\) are obtained variationally, the solution to Eq. (9) is the equilibrium weighting tensor, \(E_{nj}\sim E_{nj}^{\rm eqm}=-\upsilon^{2}\left(\delta_{\rho n}\delta_{\rho j}+\delta_{\psi n}\epsilon_{ij}X_{i}\right)\) [40] (where \(\delta_{ij}\) is the identity tensor and \(\epsilon_{ij}\) is the two-dimensional Levi-Civita tensor), and the criteria reduce to their equilibrium form [Eq. (1)], as expected.
In equilibrium, the equal-area construction is a direct consequence of the Gibbs-Duhem relation. As detailed in Appendix A, the generalized equal-area construction provided here is consistent with, and can be derived from, a _generalized Gibbs-Duhem relation_:
\[dg=\mathcal{E}_{n}d\mathcal{F}_{n}, \tag{11}\]
where \(g\) acts as a generalized chemical potential [8], although it does not have a clear physical interpretation out of equilibrium.
We find that the conditions for the generalized construction to hold [Eq. (9)] are precisely the same conditions for the generalized Gibbs-Duhem relation to hold. As shown in Appendix A, we identify the functional form of \(g\) and decompose it into bulk (\(g_{0}\)) and interfacial (\(g^{\rm int}\)) contributions with:
\[g_{0}=\mathcal{E}_{n}\mathcal{F}_{n}^{0}-\Phi_{0}, \tag{12a}\] \[g^{\rm int}=\left(B_{n\ell m}\mathcal{E}_{n}-\frac{1}{2}A_{n \ell}E_{nm}\right)\!\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz}\\ -A_{n\ell}\mathcal{E}_{n}\frac{d^{2}X_{\ell}}{dz^{2}}, \tag{12b}\]
where we have introduced a pseudopotential [8], \(\Phi_{0}\equiv\int\mathcal{F}_{n}^{0}d\mathcal{E}_{n}\), defined by \(\mathcal{F}_{n}^{0}=\partial\Phi_{0}/\partial\mathcal{E}_{n}\). We note that our generalized Gibbs-Duhem relation allows us to equivalently express Eq. (10b) as \(g_{0}(\{X_{i}^{\alpha}\})=g_{0}(\{X_{i}^{\beta}\})\). The latter approach will also require an integral of the form \(\int\mathcal{F}_{n}^{0}d\mathcal{E}_{n}\) and is thus no more or less convenient than Eq. (10b). Only when the dynamics of \(\rho\) and \(\psi\) are obtained variationally does Eq. (11) identically reduce to its equilibrium form [Eq. (2)], with \(\mathcal{E}_{n}=\mathcal{E}_{n}^{\rm eqm}\) and \(g=\mu^{\rho}\).
With general expressions for all four coexistence criteria [Eq. (10)] and a system of equations to solve for the weighting tensor, \(E_{nj}\) [Eq. (9)], we now simply need to select an integration path in order to evaluate Eq. (10b). The choice of path is purely a matter of convenience as the weighted-area construction is path-independent. We again select a path where \(\psi\) is always stationary (\(\mu_{0}^{\psi}\left(\{X_{i}^{*}\}\right)=0\)), where \(\psi^{*}(\rho)\) is the stationary value of \(\psi\) for a given density. This corresponds to a path where the interfacial terms in the \(n=\psi\) component of Eq. (6) can be neglected, i.e., \(A_{\psi i}=B_{\psi ij}=0\ \forall\ i,j\). Consequently, we need not determine the \(E_{\psi j}\) row of the weighting tensor, as the integrals they weigh are identically zero along the selected path. This greatly simplifies the system of equations in Eq. (9). The problem then reduces to that of one order parameter, \(\rho\), with an additional measurable density-dependent property \(\psi^{*}\left(\rho\right)\):
\[\int_{X_{j}^{\alpha}}^{X_{j}^{\beta}}\left[\mathcal{P}_{0}\left(\left\{X_{i}^{*}\right\}\right)-\mathcal{P}^{\rm coexist}\right]E_{\rho j}dX_{j}=0, \tag{13a}\] \[\mathcal{P}_{0}\left(\left\{X_{i}^{\alpha*}\right\}\right)=\mathcal{P}_{0}\left(\left\{X_{i}^{\beta*}\right\}\right)=\mathcal{P}^{\rm coexist}, \tag{13b}\] \[E_{\rho\rho}\propto\prod_{j}\exp\Bigg[\int dX_{j}\left(\frac{2B_{\rho jj}}{A_{\rho j}}-\frac{\partial A_{\rho j}/\partial X_{j}}{A_{\rho j}}\right)\Bigg], \tag{13c}\] \[E_{\rho\psi}=E_{\rho\rho}\frac{A_{\rho\psi}}{A_{\rho\rho}}, \tag{13d}\]
where we are not using the summation convention in Eq. (13c). We now have all four nonequilibrium coexistence criteria in terms of four bulk equations of state, \(\mathcal{P}_{0}\left(\rho,\psi\right)\), \(\psi^{*}\left(\rho\right)\), \(E_{\rho\rho}\left(\rho,\psi\right)\), and \(E_{\rho\psi}\left(\rho,\psi\right)\).
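Operationally, Eq. (13) can be solved just like the equilibrium construction once these four equations of state are available. The sketch below is our own illustration (not code from this work): all four callables are assumed user-supplied inputs, e.g. fits to simulation data, and the path is parameterized by \(\rho\) so that \(dX_{\rho}=d\rho\) and \(dX_{\psi}=d\left(\rho\psi^{*}\right)\).

```python
from scipy.integrate import quad
from scipy.optimize import fsolve

def solve_coexistence(P0, psi_star, E_rr, E_rp, x0, eps=1e-6):
    """Solve Eq. (13) for (rho_alpha, rho_beta, P_coexist).

    P0(rho, psi)  -- bulk dynamic pressure equation of state
    psi_star(rho) -- stationary crystallinity along the path
    E_rr, E_rp    -- weighting-tensor components E_rho_rho, E_rho_psi
    """
    def d_rho_psi(rho):
        # d(rho * psi*)/d(rho) by central difference
        return ((rho + eps) * psi_star(rho + eps)
                - (rho - eps) * psi_star(rho - eps)) / (2 * eps)

    def residuals(x):
        ra, rb, Pc = x
        P_path = lambda r: P0(r, psi_star(r))
        # Weighted-area construction, Eq. (13a): E_rr weighs dX_rho,
        # E_rp weighs dX_psi = d(rho psi*)
        integrand = lambda r: (P_path(r) - Pc) * (
            E_rr(r, psi_star(r)) + E_rp(r, psi_star(r)) * d_rho_psi(r))
        area, _ = quad(integrand, ra, rb)
        return [P_path(ra) - Pc, P_path(rb) - Pc, area]  # Eqs. (13a,b)

    return fsolve(residuals, x0)
```

Setting \(E_{\rho\rho}=-\upsilon^{2}\) and \(E_{\rho\psi}=0\) recovers the equilibrium equal-area construction of Eq. (4), which is a useful consistency check.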
## III Phase diagram of active Brownian spheres
We now look to derive the nonequilibrium coexistence criteria of active crystallization and develop a theory for the complete active phase diagram. To apply the theory developed in the previous section to solid-fluid coexistence, the nonconserved order parameter \(\psi\) represents the crystallinity of the system (its precise definition will, of course, depend on the details of the nature of the broken symmetry [46]). We require an expression for the bulk and gradient terms of the dynamic pressure, \(\mathcal{P}=p^{C}+p^{\rm act}\), where \(p^{C}\) and \(p^{\rm act}\) are the conservative interaction and active pressures [47; 48; 49; 50; 51; 52], respectively. In contrast to MIPS, we require the dynamic pressure as a function of not only \(\rho\) and the dimensionless "run length", \(\ell_{0}/D\) (where \(D\) is the hard-sphere diameter and \(\ell_{0}\) is the run length of an ideal active Brownian particle), but also the crystallinity, \(\psi\). The gradient terms of \(p^{\rm act}\) are derived in Appendix B and are found to scale more strongly with activity than those of \(p^{C}\). As a result, the \(p^{C}\) gradient terms will only be comparable to those of \(p^{\rm act}\) in the equilibrium limit (i.e., \(\ell_{0}/D\to 0\)) and can thus be approximated by the reversible Korteweg stress [53; 11; 54]. This approximation results in recovering the equilibrium coexistence criteria, \(E_{\rho j}\sim E_{\rho j}^{\rm eqm}=-\upsilon^{2}\delta_{\rho j}\), with vanishing activity. Additionally, the generalized Gibbs-Duhem relation in Eq. (11) reduces to the equilibrium relation [Eq. (2)] _only_ in this limit. This is expected, as the dynamics of active hard spheres satisfy the fluctuation-dissipation theorem in this limit [17] and are thus indistinguishable from passive Brownian particles.
With the form of the gradient coefficients established, we find that in the limit of high activity, \(E_{\rho j}\sim\partial p_{0}^{C}/\partial X_{j}\) (see Appendix B), where \(p_{0}^{C}\) is the conservative interaction contribution to the bulk dynamic pressure \(\mathcal{P}_{0}=p_{0}^{C}+p_{0}^{\rm act}\). This criterion is identical to that recently obtained for the MIPS binodal [11] with the crucial distinction that \(p_{0}^{C}\left(\{X_{i}\}\right)\) is now a multivariate function. While we can analytically obtain \(E_{\rho j}\) in the limits of low and high activity (and motivate an interpolation scheme as detailed in the SM [40]), its full activity dependence must be evaluated numerically.
With the criteria established, we now simply require equations of state for \(p_{0}^{C}\left(\phi,\psi;\ell_{0}/D\right)\), \(p_{0}^{\rm act}\left(\phi,\psi;\ell_{0}/D\right)\), and \(\psi^{*}\left(\phi;\ell_{0}/D\right)\). We first look to determine \(\psi^{*}\left(\phi;\ell_{0}/D\right)\) by computing the most probable crystallinity from Brownian dynamics simulations [55] of homogeneous systems (see the SM [40] for simulation details). Here, we define \(\psi\equiv\left(q_{12}-q_{12}^{\rm CI}\right)/\left(q_{12}^{\rm CP}-q_{12}^{\rm CI}\right)\), where \(q_{12}\) is the per-particle Steinhardt-Nelson-Ronchetti order parameter [56] that quantifies twelve-fold bond-orientational symmetry. \(q_{12}^{\rm CI}\) and \(q_{12}^{\rm CP}\) are the values of \(q_{12}\) in an ideal gas and close-packed fcc solid, respectively. Figure 1 displays \(\psi^{*}\) obtained from simulation along with our fit. For all activities, a disordered fluid (\(\psi^{*}=0\)) and a perfectly ordered fcc crystal (\(\psi^{*}=1\)) are found in the limits of \(\phi\to 0\) and \(\phi\to\phi^{\rm CP}\), respectively. Furthermore, at each activity there is a volume fraction at which there is a discontinuity in \(\psi^{*}\): this is the _order-disorder volume fraction_, \(\phi^{\rm ODT}\). The activity dependence of \(\phi^{\rm ODT}\) is thus crucial in determining \(\psi^{*}\). The order-disorder volume fraction must be less than or equal to random-close packing, \(\phi^{\rm RCP}\approx 0.645\) (a fluid must begin to order when \(\phi>\phi^{\rm RCP}\) [57]), and will ultimately lie within the solid-fluid binodal. At low activities, \(\phi^{\rm ODT}\) approaches the equilibrium hard sphere value of \(0.515\). With increasing activity, \(\phi^{\rm ODT}\) monotonically increases before saturating at \(\phi^{\rm RCP}\) at a remarkably low activity of \(\ell_{0}/D\approx 1\). This activity-induced delay in the ordering transition is, as we will demonstrate, consistent with the reported dramatic shift of the solid-fluid binodal [17] upon departing from the reversible limit.
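As a small illustration of this normalization (our own sketch; the reference values below are placeholders, not numbers from this work, and the per-particle \(q_{12}\) values are assumed to be precomputed with a Steinhardt-type order-parameter routine), \(\psi^{*}\) can be estimated as the mode of the per-particle crystallinity distribution at each state point:

```python
import numpy as np

Q12_IG, Q12_CP = 0.15, 0.60  # hypothetical reference q12 values (ideal
                             # gas and close-packed fcc); placeholders

def crystallinity(q12):
    """Map per-particle q12 onto psi in [0, 1], per the definition in the text."""
    return (q12 - Q12_IG) / (Q12_CP - Q12_IG)

def most_probable_psi(q12_samples, bins=200):
    """Estimate psi* as the mode of the per-particle psi distribution."""
    psi = crystallinity(np.asarray(q12_samples))
    hist, edges = np.histogram(psi, bins=bins, range=(-0.2, 1.2), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(hist)]
```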
Equations of state for \(p_{0}^{C}\) and \(p_{0}^{\rm act}\) in a fluid of active Brownian spheres at activities \(\ell_{0}/D\geq 1\) and \(\psi=0\) were recently developed [11]. We extend these to nonzero \(\psi\) and all \(\ell_{0}/D\) as detailed in the SM [40]. For a fixed density and activity, increasing \(\psi\) results in additional free volume that _increases_ the active pressure while reducing the hard-sphere interaction pressure. We ensure that in the limit of low activity, \(\mathcal{P}_{0}\) recovers the equilibrium pressure of hard spheres [58]. Figure 2 shows the resulting equation of state,
Figure 1: Accessible crystallinity, \(\psi^{*}\left(\phi;\ell_{0}/D\right)\), of active hard spheres from Brownian dynamics simulation data (Sim.) and our equation of state (EOS). Here, \(\phi\) is the volume fraction and \(\ell_{0}/D\) is the run length nondimensionalized by the hard sphere diameter. The inset displays the accessible simulation data (symbols) and our equation of state for \(\phi^{\rm ODT}\left(\ell_{0}/D\right)\) (lines).
nondimensionalized by \(\zeta U_{0}/\pi D^{2}\) (where \(\zeta U_{0}\) is the magnitude of the active force), and the weighted-area construction (using the numerically determined \(E_{\rho j}\)) in three distinct activity regimes. At an activity below the MIPS critical point (\(\ell_{0}^{\rm c}\approx 16.9\;D\)), solid-fluid coexistence is the only coexistence scenario, as shown in Fig. 2(a). The dashed line indicates the non-monotonic unstable region of the pressure, which occurs over an infinitesimally narrow region of volume fraction coinciding with \(\phi^{\rm ODT}\). We emphasize that this "spinodal" does not imply that crystallization of a disordered fluid (\(\psi^{*}=0\)) is a spontaneous process, but simply that homogeneous states at _these values_ of \(\phi\) and \(\psi^{*}\) are unstable.
Above the critical point but below the triple point (\(\ell_{0}^{\rm tp}\approx 18.3\;D\)), there are two distinct regions of coexistence [see Fig. 2(b)]. In this regime, the coexisting solid and liquid densities have shifted towards much higher volume fractions and the dynamic pressure continues to exhibit a narrow unstable region at \(\phi^{\rm ODT}\). At lower volume fractions (below \(\phi^{\rm ODT}\)), a broader unstable region emerges in the disordered fluid pressure, resulting in MIPS. The two coexistence scenarios are separated by an appreciable gap in volume fractions. As the activity is increased towards the triple point, the high density branch of the liquid-gas coexistence curve and the low density branch of the solid-fluid coexistence curve will approach each other and coincide at the triple point. Above the triple point, the low density branch of the solid-fluid coexistence curve is now _below_ that of MIPS, with the former coexistence scenario engulfing the latter [see Fig. 2(c)]. Using simple arguments from large deviation theory [59], it was recently shown that solid-gas coexistence is stable over liquid-gas coexistence in this regime [17].
Figure 3 shows the complete activity dependence of our predicted phase diagram in comparison to that obtained from computer simulations [17]. In addition to naturally recovering the MIPS binodal, our theory nearly quantitatively (especially with increasing activity) captures the solid-fluid binodal at all values of activity. The predicted solid-fluid coexistence curve recovers the equilibrium hard sphere limit at vanishing run lengths and captures the rapid approach of the solid phase density towards close-packing at activities as low as \(\ell_{0}/D\approx 1\). The theory correctly predicts the location of the solid-liquid-gas triple point and the high-activity solid-gas coexistence densities are quantitatively recapitulated. To the best of our knowledge, our theory is the first to capture both the coexistence curves associated with MIPS and a symmetry-breaking transition while making _no appeals_ to equilibrium thermodynamics.
Figure 3: Phase diagram of active hard spheres including both solid-fluid and liquid-gas coexistence. Markers correspond to data obtained from simulations while solid lines correspond to the mechanical theory developed. Open circles are solid-fluid coexistence data from Ref. [17] while filled circles are data obtained in this study.
Figure 2: Generalized weighted-area construction applied to the equation of state of active Brownian spheres at three representative run lengths: (a) \(\ell_{0}/D=0.9\), below the MIPS critical point \(\ell_{0}^{\rm c}\), (b) \(\ell_{0}/D=17.4\), above \(\ell_{0}^{\rm c}\) but below the triple point \(\ell_{0}^{\rm tp}\), and (c) \(\ell_{0}/D=22.3\), above \(\ell_{0}^{\rm tp}\). The dashed lines correspond to unstable densities while dotted lines represent the diverging pressure when the density of a solid is increased beyond close-packing. Blue, gray, green, and red regions within the plot represent densities where a homogeneous fluid, solid-fluid coexistence, homogeneous solid, and liquid-gas coexistence are present, respectively. The red region in (c) is shaded as this liquid-gas coexistence is metastable with respect to the globally stable solid-fluid coexistence, whereas it is not shaded in (b) as liquid-gas coexistence is stable below \(\ell_{0}^{\rm tp}\).
Our nonequilibrium coexistence criteria predict the same coexistence densities as those resulting from the equilibrium criteria in the limit of low activity (\(\ell_{0}/D\to 0\)). With increasing activity, continuing to erroneously use the equilibrium coexistence criteria is found to result in significant error, as detailed in Appendix C. The equilibrium Gibbs-Duhem relation [Eq. (2)], and consequently the Maxwell equal-area construction in Eq. (4a), is thus violated at finite activity. Our work thus makes clear that use of the equilibrium Gibbs-Duhem relation to obtain active phase diagrams [6; 9; 20] (or define the surface tension of coexisting active phases [10]) is formally incorrect and can result in significant error.
The degree to which the equilibrium Gibbs-Duhem relation is violated can provide direct insight into the nature of the interface dividing two coexisting phases. We _define_ the work required to move a particle from the dilute phase (gas/fluid), across the interface, and into the dense phase (liquid/solid) as [11]:
\[\mathcal{W}_{\mathrm{interf}}^{\mathrm{dil.}\rightarrow\mathrm{dens.}}\equiv\int_{\upsilon^{\mathrm{dil.}}}^{\upsilon^{\mathrm{dens.}}}\left[\mathcal{P}_{0}\left(\phi,\psi^{*}\right)-\mathcal{P}^{\mathrm{coexist}}\right]d\upsilon, \tag{14}\]
where this work is identically zero when the equilibrium Gibbs-Duhem relation [Eq. (2)] is recovered. We compute this insertion work for both liquid-gas and solid-fluid coexistence, as shown in Fig. 4. For all activities, work is required to move a particle from the liquid phase into the gas phase (\(\mathcal{W}_{\mathrm{interf}}^{\mathrm{gas}\rightarrow\mathrm{liquid}}\leq 0\)), as reported in Ref. [11]. It is only at the critical point, where the "phases" are indistinguishable, that the work is identically zero.
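Numerically, Eq. (14) is a one-dimensional quadrature once the bulk dynamic pressure along \(\psi^{*}\) and the coexistence pressure are known; a minimal sketch (with user-supplied inputs, assumed here) reads:

```python
from scipy.integrate import quad

def insertion_work(P0_path, v_dil, v_dens, P_coexist):
    """Evaluate Eq. (14): the work to move a particle across the
    interface, from the dilute to the dense phase, given the bulk
    dynamic pressure P0_path(v) along the stationary path psi*(rho),
    with v = 1/rho the inverse density."""
    work, _ = quad(lambda v: P0_path(v) - P_coexist, v_dil, v_dens)
    return work
```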
The physical origin of this required non-zero insertion work is the polarization of active particles within the interface: active particles within the liquid-gas interface are oriented towards the liquid phase, generating an active force density [see schematic in Fig. 4]. The presence of this force density is _required_ for the two phases to mechanically coexist with one another. The direction of this force density is towards the phase with the lower active pressure which, in the case of disordered active hard sphere fluids, is _always_ the denser phase (i.e., the liquid). This interfacial force density - which is only possible for driven systems - must be overcome when a particle is moved out of the liquid phase. We note that Fig. 4 reports the insertion work scaled by \(k_{B}T^{\mathrm{act}}\sim\zeta U_{0}\ell_{0}\) and that the unscaled value of this work monotonically decreases with activity.
In the case of solid-fluid coexistence, the insertion work vanishes in the reversible limit (\(\ell_{0}/D\to 0\)) [see Fig. 4], consistent with the recovery of the equilibrium crystallization transition. Departing from the equilibrium limit, we observe that the work required to move a particle from the solid phase into the liquid phase is _negative_ despite the solid having the higher density of the two phases. At low activities (below the triple point), the density contrast between solid and fluid is relatively small [see Fig. 3]. Despite the slightly higher density, the crystalline solid results in more free volume available to the particles in comparison to the dense disordered fluid, resulting in the solid exhibiting a _higher_ active pressure than the fluid. This causes the force density to point towards the less dense fluid and makes the insertion work negative, shown schematically in Fig. 4. Above the triple point activity, the fluid density markedly decreases, reversing the sign of the insertion work. Interestingly, the sign change is indicative that at the _triple point_ the _equilibrium_ equal-area construction (and thus, the equilibrium Gibbs-Duhem relation) is satisfied in the solid-fluid coexistence scenario.
## IV Discussion and Conclusions
We have derived a set of nonequilibrium coexistence criteria that allow for the determination of phase diagrams of symmetry-breaking coexistence scenarios from bulk equations of state. Our theory does not rely on any thermodynamic notions, instead using only system symmetries, mechanical balances, and stability arguments to describe stationary symmetry-breaking two-phase coexistence. We apply our theory to active crystallization (i.e., solid-fluid coexistence), first developing a series of physically and empirically motivated equations of state that capture the effect of activity on the order-disorder transition and the dependence of the dynamic pressure on crystalline order. We then combine these equations of state with our coexistence criteria to quantitatively recapitulate the phase diagram of active Brownian hard spheres, demonstrating significant improvement over the binodals computed under the naive use of the equilibrium Maxwell construction. Just as in equilibrium, the accuracy of the predicted phase diagram can be increased by developing improved equations of state either phenomenologically or from first principles.
Our theory identifies that the quantitative description of the coexistence curves of symmetry-breaking transitions requires both accurate equations of state (just as in equilibrium) _and_ knowledge of interfacial structure and forces in order to determine the weighting tensor, \(\mathbf{E}\) [see Eq. (9)], needed to perform the weighted-area construction [Eq. (10b)]. While \(\mathbf{E}\) is the same
Figure 4: Dimensionless work (nondimensionalized by the 3d active energy scale \(k_{B}T^{\mathrm{act}}\equiv\zeta U_{0}\ell_{0}/6\)) to move a particle across the interface during coexistence, from the dilute phase to the dense phase [11]. Schematics depict the transition from the force density within the interface pointing into the fluid phase at low activity (top) to pointing into the solid phase at high activity (bottom).
for _all_ systems in equilibrium, it will generally vary for systems out of equilibrium depending on the details of the interfacial contributions to \(\mathbf{\mathcal{F}}\). The truncation of our gradient expansion [see Eq. (6)] has no bearing on \(\mathbf{E}\) in equilibrium due to the variational origins of \(\mathbf{\mathcal{F}}\); this is not guaranteed to be the case for driven systems. However, the quantitative accuracy (in comparison to simulation data) of the phase diagram resulting from our approach suggests that the retained leading order terms are sufficient for the active system under consideration. The generalized Gibbs-Duhem relation developed in this work [Eq. (11)] thus appears to be remarkably successful in describing the phase behavior of active systems.
Our theory broadly describes nonequilibrium coexistence scenarios with a conserved order parameter coupled to a nonconserved order parameter. For example, nonequilibrium scenarios of isotropic-nematic coexistence and fluid-fluid coexistence in chemically reactive multicomponent systems are anticipated to be described by our theory. Generally, each nonconserved quantity must be locally stable in each phase, each conserved quantity has a mechanical equation of state (e.g., pressure) that must be equal in each phase, and, independent of the number of order parameters, there is a single (assuming at least one conserved order parameter) system-specific weighted-area construction that must be satisfied between each pair of phases. A nonequilibrium theory describing the stability and phase diagram of multiphase systems with any number of coupled conserved and nonconserved order parameters would greatly enhance our understanding of complex driven coexistence scenarios and phase transformations. Moreover, developing similar coexistence criteria for systems with tensorial order parameters would further aid in this effort. We hope the theory presented here will assist in laying the groundwork to describe the phase behavior of these complex nonequilibrium systems.
###### Acknowledgements.
We thank Yizhi Shen, Dimitrios Fraggedakis, Yu-Jen Chiu, and Luke Langford for helpful discussions and feedback on this manuscript. We acknowledge support from the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory under U.S. Department of Energy Contract No. DE-AC02-05CH11231 and the UC Berkeley College of Engineering. This research used the Savio computational cluster resource provided by the Berkeley Research Computing program.
## Appendix A Derivation of Criteria for Symmetry-Breaking Coexistence
### Weighted-Area Construction and Generalized Gibbs-Duhem Relation
We look to derive the criteria for stationary symmetry-breaking coexistence between an \(\alpha\) phase and a \(\beta\) phase with flux-free boundary conditions, where the system is described by the vector of order parameters \(\mathbf{X}\equiv\begin{bmatrix}\rho&\rho\psi\end{bmatrix}^{\mathrm{T}}\). A stationary state is achieved when the force vector, \(\mathbf{\mathcal{F}}\equiv\begin{bmatrix}\mathcal{P}&\mu^{\psi}\end{bmatrix}^{\mathrm{T}}\), is equal to its coexistence value (see Eq. (5) in the main text and the following discussion), \(\mathbf{\mathcal{F}}=\mathbf{\mathcal{F}}^{\mathrm{coexist}}=\begin{bmatrix}\mathcal{P}^{\mathrm{coexist}}&0\end{bmatrix}^{\mathrm{T}}\), where \(\mathcal{P}^{\mathrm{coexist}}\) is the coexistence pressure that must be determined. We expand \(\mathbf{\mathcal{F}}\) to second order in gradients of \(\mathbf{X}\):
\[\mathcal{F}_{n}=\mathcal{F}_{n}^{0}-B_{n\ell m}\frac{dX_{\ell}}{dz}\frac{dX_{ m}}{dz}-A_{n\ell}\frac{d^{2}X_{\ell}}{dz^{2}}, \tag{10}\]
where we now use indicial notation. As the order parameters are spatially homogeneous in the bulk phases (i.e., \(dX_{i}/dz=0\)\(\forall\)\(i\)), we immediately identify the first three coexistence criteria: \(\mathcal{F}_{n}^{0}\left(\{X_{i}^{\alpha}\}\right)=\mathcal{F}_{n}^{0}\left( \{X_{i}^{\beta}\}\right)=\mathcal{F}_{n}^{\mathrm{coexist}}\).
To obtain the fourth coexistence criterion, we introduce an ansatz of a generalized Gibbs-Duhem relation:
\[dg=\mathcal{E}_{n}d\mathcal{F}_{n}, \tag{11}\]
where \(\mathcal{E}_{n}\) is a generalized vector of variables conjugate to \(\mathcal{F}_{n}\). We now show that this implies the fourth criterion is equality of \(g\) across phases. Integrating Eq. (11) by parts between any two arbitrary stationary states \((1)\) and \((2)\) we have:
\[\int_{g^{(1)}}^{g^{(2)}}dg=\Delta g^{(1)\rightarrow(2)}=[\mathcal{F}_{n} \mathcal{E}_{n}]_{(1)}^{(2)}-\int_{\mathcal{E}_{n}^{(1)}}^{\mathcal{E}_{n}^{( 2)}}\mathcal{F}_{n}d\mathcal{E}_{n}, \tag{12}\]
where \(\Delta g^{(1)\rightarrow(2)}\equiv g\big{(}\{X_{i}^{(2)}\}\big{)}-g\big{(}\{X_ {i}^{(1)}\}\big{)}\). We now split \(g=g_{0}+g^{\mathrm{int}}\) into bulk (\(g_{0}\)) and interfacial (\(g^{\mathrm{int}}\)) contributions and set the states \((1)\) and \((2)\) to coexisting \(\alpha\) and \(\beta\) phases:
\[\Delta g_{0}^{\alpha\rightarrow\beta}+\Delta g^{\mathrm{int}, \alpha\rightarrow\beta}\\ =\int_{\mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\mathcal{ F}_{n}^{\mathrm{coexist}}d\mathcal{E}_{n}-\int_{\mathcal{E}_{n}^{\alpha}}^{ \mathcal{E}_{n}^{\beta}}\mathcal{F}_{n}^{\mathrm{coexist}}d\mathcal{E}_{n}=0, \tag{13}\]
where we have used the boundary condition \([\mathcal{F}_{n}\mathcal{E}_{n}]_{\alpha}^{\beta}=\int_{\mathcal{E}_{n}^{ \alpha}}^{\mathcal{E}_{n}^{\beta}}\mathcal{F}_{n}^{\mathrm{coexist}}d\mathcal{ E}_{n}\) and the fact \(\mathcal{F}_{n}=\mathcal{F}_{n}^{\mathrm{coexist}}\) during coexistence. Noting interfacial terms are identically zero in the bulk phases (i.e., \(\Delta g^{\mathrm{int},\alpha\rightarrow\beta}=0\)), Eq. (13) implies \(\Delta g_{0}^{\alpha\rightarrow\beta}=0\) and hence equality of \(g_{0}\) across phases is our fourth coexistence criterion.
We now look to use the generalized Gibbs-Duhem relation [Eq. (11)] to recast equality of \(g_{0}\) as a path-independent integral condition. Noting \(-dg_{0}=\mathcal{F}_{n}^{0}d\mathcal{E}_{n}-d\left(\mathcal{F}_{n}^{0} \mathcal{E}_{n}\right)\) from Eq. (11), we integrate this between the coexisting \(\alpha\) and \(\beta\) phases and set it equal to zero:
\[-\Delta g_{0}^{\alpha\rightarrow\beta}=\int_{\mathcal{E}_{n}^{\alpha}}^{ \mathcal{E}_{n}^{\beta}}\mathcal{F}_{n}^{0}d\mathcal{E}_{n}-\left[\mathcal{F}_{ n}^{0}\mathcal{E}_{n}\right]_{\alpha}^{\beta}=0. \tag{14}\]
Again using \(\left[\mathcal{F}_{n}^{0}\mathcal{E}_{n}\right]_{\alpha}^{\beta}=\int_{ \mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\mathcal{F}_{n}^{\mathrm{ coexist}}d\mathcal{E}_{n}\) as the boundary condition, we have the generalized Maxwell construction:
\[\int_{\mathcal{E}_{n}^{\alpha}}^{\mathcal{E}_{n}^{\beta}}\left(\mathcal{F}_{n}^{0} -\mathcal{F}_{n}^{\mathrm{coexist}}\right)d\mathcal{E}_{n}=0, \tag{15a}\]
which is an equal-area construction with respect to \(\mathcal{E}_{n}\). We instead choose to write this as a weighted-area construction, with a weighting tensor \(E_{nj}\equiv\partial\mathcal{E}_{n}/\partial X_{j}\):
\[\int_{X_{j}^{\alpha}}^{X_{j}^{\beta}}\left(\mathcal{F}_{n}^{0}-\mathcal{F}_{n}^{\rm coexist}\right)E_{nj}dX_{j}=0, \tag{10}\]
as the components of \(\mathcal{E}_{n}\) are not necessarily bijective functions of \(X_{n}\). When the components of \(\mathcal{E}_{n}\) are not bijective, the integrals in the equal-area construction cannot be evaluated. Importantly, while Eq. (10) is a multivariate integral and hence requires the selection of an integration path, the value of the integral is path-independent. We choose the parameterization \(\psi^{*}\left(\rho\right)\) satisfying \(\mu_{0}^{\psi}\left(\rho,\psi^{*}\right)=0\) as our integration path, reducing Eq. (10) to:
\[\int_{X_{j}^{\alpha}}^{X_{j}^{\beta}}\left(\mathcal{P}_{0}\left(\rho,\psi^{*}\right)-\mathcal{P}^{\rm coexist}\right)E_{\rho j}\left(\rho,\psi^{*}\right)dX_{j}=0. \tag{11}\]
We then have the final form of our fourth coexistence criterion.
We now aim to find an expression for \(g\) and determine the conditions under which our ansatz, and consequently our fourth coexistence criterion, holds. Recognizing \(dg=d\left(\mathcal{F}_{n}\mathcal{E}_{n}\right)-\mathcal{F}_{n}d\mathcal{E}_{n}\) and splitting \(g\) and \(\mathcal{F}_{n}\) into bulk (\(g_{0}\) and \(\mathcal{F}_{n}^{0}\)) and interfacial (\(g_{1}^{\rm int}\), \(g_{2}^{\rm int}\), and \(\mathcal{F}_{n}^{\rm int}\)) contributions we have:
\[dg_{0} +dg_{1}^{\rm int}+dg_{2}^{\rm int}\] \[=d\left(\mathcal{F}_{n}^{0}\mathcal{E}_{n}\right)-\mathcal{F}_{n} ^{0}d\mathcal{E}_{n}+d\left(\mathcal{F}_{n}^{\rm int}\mathcal{E}_{n}\right)- \mathcal{F}_{n}^{\rm int}d\mathcal{E}_{n}. \tag{12}\]
Defining a pseudopotential [8], \(\Phi_{0}\equiv\int\mathcal{F}_{n}^{0}d\mathcal{E}_{n}\) (and hence \(\mathcal{F}_{n}^{0}=\partial\Phi_{0}/\partial\mathcal{E}_{n}\)), we identify \(g_{0}\):
\[g_{0}=\mathcal{E}_{n}\mathcal{F}_{n}^{0}-\Phi_{0}. \tag{13}\]
The first interfacial component of \(g\) can be identified from Eq. (12) as:
\[g_{1}^{\rm int}=\mathcal{F}_{n}^{\rm int}\mathcal{E}_{n}=-B_{n\ell m}\mathcal{ E}_{n}\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz}-A_{n\ell}\mathcal{E}_{n}\frac{d^{2}X _{\ell}}{dz^{2}}. \tag{14}\]
We now look to identify the second interfacial component of \(g\):
\[dg_{2}^{\rm int}=\mathcal{F}_{n}^{\rm int}d\mathcal{E}_{n}\] \[=-\left(B_{n\ell m}E_{nj}\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz}+A_{ n\ell}E_{nj}\frac{d^{2}X_{\ell}}{dz^{2}}\right)dX_{j}. \tag{15}\]
In our theory, \(\mathcal{F}_{n}\) only contains gradients of \(X_{i}\) up to second order and hence \(g_{2}^{\rm int}\) can only contain square gradient and Laplacian terms. Consequently, \(g_{2}^{\rm int}\) can generally be expressed as:
\[g_{2}^{\rm int}=G_{\ell j}\left(\left\{X_{i}\right\}\right)\frac{dX_{\ell}}{dz }\frac{dX_{j}}{dz}+h_{\ell}\left(\left\{X_{i}\right\}\right)\frac{d^{2}X_{ \ell}}{dz^{2}}, \tag{16}\]
where we have introduced a symmetric second-rank tensor of state functions, \(G_{\ell j}\left(\left\{X_{i}\right\}\right)\) [antisymmetric contributions to \(G_{\ell j}\) have no consequence as it is double contracted into a symmetric tensor \(\left(dX_{\ell}/dz\right)\left(dX_{j}/dz\right)\)], and an additional vector of state functions, \(h_{\ell}\left(\left\{X_{i}\right\}\right)\). We now equate the differential of Eq. (16) to the right-hand side of Eq. (15):
\[-\left(B_{n\ell m}E_{nj}\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz}\frac {dX_{j}}{dz}+A_{n\ell}E_{nj}\frac{d^{2}X_{\ell}}{dz^{2}}\frac{dX_{j}}{dz} \right)dz\\ =-\frac{d}{dz}\left(G_{\ell j}\frac{dX_{\ell}}{dz}\frac{dX_{j}}{ dz}+h_{\ell}\frac{d^{2}X_{\ell}}{dz^{2}}\right)dz. \tag{17}\]
Expanding the right-hand side of Eq. (17) we have:
\[\frac{d}{dz}\left(G_{\ell j}\left(\left\{X_{i}\right\}\right) \frac{dX_{\ell}}{dz}\frac{dX_{j}}{dz}+h_{\ell}\left(\left\{X_{i}\right\} \right)\frac{d^{2}X_{\ell}}{dz^{2}}\right)\\ =\frac{\partial G_{\ell j}}{\partial X_{m}}\frac{dX_{m}}{dz}\frac {dX_{\ell}}{dz}\frac{dX_{j}}{dz}+G_{\ell j}\frac{d^{2}X_{\ell}}{dz^{2}}\frac {dX_{j}}{dz}+G_{\ell j}\frac{dX_{\ell}}{dz}\frac{d^{2}X_{j}}{dz^{2}}\\ +\frac{\partial h_{\ell}}{\partial X_{j}}\frac{d^{2}X_{\ell}}{dz^{2 }}\frac{dX_{j}}{dz}+h_{\ell}\frac{d^{3}X_{\ell}}{dz^{3}}\\ =\frac{\partial G_{\ell j}}{\partial X_{m}}\frac{dX_{m}}{dz}\frac {dX_{\ell}}{dz}\frac{dX_{j}}{dz}+\left(2G_{\ell j}+\frac{\partial h_{\ell}}{ \partial X_{j}}\right)\frac{d^{2}X_{\ell}}{dz^{2}}\frac{dX_{j}}{dz}\\ +h_{\ell}\frac{d^{3}X_{\ell}}{dz^{3}}. \tag{18}\]
where we have made use of the symmetry of \(G_{\ell j}\) in the second equality. Substituting this expanded form into Eq. (17) we find:
\[\left(B_{n\ell m}E_{nj}\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz} \frac{dX_{j}}{dz}+A_{n\ell}E_{nj}\frac{d^{2}X_{\ell}}{dz^{2}}\frac{dX_{j}}{dz} \right)dz\\ =\left(\frac{\partial G_{\ell j}}{\partial X_{m}}\frac{dX_{m}}{dz} \frac{dX_{\ell}}{dz}\frac{dX_{j}}{dz}+\left(2G_{\ell j}+\frac{\partial h_{\ell}}{ \partial X_{j}}\right)\frac{d^{2}X_{\ell}}{dz^{2}}\frac{dX_{j}}{dz}\\ +h_{\ell}\frac{d^{3}X_{\ell}}{dz^{3}}\right)dz, \tag{19}\]
and immediately recognize:
\[h_{\ell}=0\ \forall\ \ell, \tag{20}\] \[2G_{\ell j}=A_{n\ell}E_{nj}, \tag{21}\] \[\frac{\partial G_{\ell j}}{\partial X_{m}}=B_{n\ell m}E_{nj}, \tag{22}\]
where we have made use of Eq. (20) when eliminating \(\partial h_{\ell}/\partial X_{j}\) from Eq. (21). We then identify \(g_{2}^{\rm int}\):
\[g_{2}^{\rm int}=\frac{1}{2}A_{n\ell}E_{nj}\frac{dX_{\ell}}{dz}\frac{dX_{j}}{dz} \tag{23}\]
and now have the full expression for \(g\):
\[g=\mathcal{E}_{n}\mathcal{F}_{n}^{0}-\Phi_{0}-\left(B_{n\ell m} \mathcal{E}_{n}-\frac{1}{2}A_{n\ell}E_{nm}\right)\frac{dX_{\ell}}{dz}\frac{dX_{m}}{dz} \\ -A_{n\ell}\mathcal{E}_{n}\frac{d^{2}X_{\ell}}{dz^{2}}. \tag{24}\]
We now seek the conditions under which our ansatz in Eq. (16) holds. Equations (20)-(22) imply a series of relationships between \(A_{n\ell}\), \(B_{n\ell m}\), and \(E_{nj}\) that must be met for \(g_{2}^{\rm int}\), and consequently \(g\), to exist and hence for our ansatz to hold. The first relationship follows from the symmetry of \(G_{\ell j}\) and Eq. (21), resulting in:
\[A_{n\ell}E_{nj}=A_{nj}E_{n\ell}, \tag{19a}\] providing \(\left(n_{O}^{2}-n_{O}\right)/2\) relationships, as the diagonal terms (\(\ell=j\)) provide no information and \(A_{n\ell}E_{nj}\) is symmetric. Using Eq. (22), a similar set of relationships can be obtained between the components of \(B_{n\ell m}\) and \(E_{n\ell}\) (again by recognizing the symmetry of \(G_{\ell j}\)): \[B_{n\ell m}E_{nj}=B_{njm}E_{n\ell}, \tag{19b}\]
providing \(n_{O}\left(n_{O}-1\right)^{2}\) relationships as we do not gain information when \(\ell\neq j\) and \(m\neq j\). This is because \(B_{n\ell m}\) is symmetric with respect to exchanging \(\ell\) and \(m\):
\[B_{n\ell m}=B_{nm\ell}, \tag{19c}\]
yielding another \(\left(n_{O}^{2}-n_{O}\right)/2\) relationships, again recognizing that the diagonal terms (\(\ell=m\)) provide no information. Our final relationship follows from differentiating Eq. (21) with respect to \(X_{m}\) and using Eq. (22):
\[2B_{nm\ell}E_{nj}= \frac{\partial}{\partial X_{m}}\left(A_{n\ell}E_{nj}\right), \tag{19d}\]
where we gain \(n_{O}^{3}\) differential equations that can be used to solve for the \(n_{O}^{2}\) components of \(E_{nj}\). Importantly, the number of _unique_ differential equations is the difference between the total number of differential equations in Eq. (19d) \(\left[n_{O}^{3}\right]\) and the sum of the number of relationships found in Eqs. (19a) \(\left[\left(n_{O}^{2}-n_{O}\right)/2\right]\), (19b) \(\left[n_{O}\left(n_{O}-1\right)^{2}\right]\), and (19c) \(\left[\left(n_{O}^{2}-n_{O}\right)/2\right]\). This results in \(n_{O}^{2}\) unique differential equations, the same as the number of components of \(E_{nj}\). While \(B_{n\ell m}\) has certain symmetries, there are no guaranteed symmetries in \(E_{nj}\), even in equilibrium (see SM [40]). Determining \(E_{nj}\) through Eq. (19) is the _condition_ for our ansatz [Eq. (16)] to hold. Importantly, these conditions [Eq. (19)] are _exactly the same_ as the conditions for the generalized equal-area construction to hold [Eq. (9)] presented in the main text.
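For concreteness, in the two-order-parameter case relevant to solid-fluid coexistence (\(n_{O}=2\)), Eq. (19d) provides \(n_{O}^{3}=8\) differential equations, while Eqs. (19a), (19b), and (19c) provide \(1\), \(2\), and \(1\) relationships, respectively, leaving \(8-(1+2+1)=4=n_{O}^{2}\) unique differential equations for the four components of \(E_{nj}\).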
### Integration Weight Tensor Along Selected Path
Equation (7) is a multivariate integral and hence an integration path must be specified. The value of the integral is path-independent, however, and consequently we are free to choose any relationship between \(\rho\) and \(\psi\) when evaluating it. A particularly convenient relationship is one satisfying \(\mu_{0}^{\psi}\left(\rho,\psi^{*}\right)=0\), where \(\psi^{*}\left(\rho\right)\) is the stationary \(\psi\) for a given \(\rho\). This path selection results in \(B_{\psi ij}=A_{\psi i}=0\ \forall\ i,j\), simplifying the system of equations in Eq. (19). Now, we only require \(E_{\rho\rho}\) and \(E_{\rho\psi}\), as the integrals weighted by \(E_{\psi\psi}\) and \(E_{\psi\rho}\) are identically zero due to the selected relationship between \(\rho\) and \(\psi\).
Equation (19a) yields a relationship between the two components of the weight tensor, \(E_{\rho\psi}=E_{\rho\rho}A_{\rho\psi}/A_{\rho\rho}\). Applying the relationships in Eqs. (19b) and (19c) to the eight initial differential equations in Eq. (19d) results in two unique differential equations:
\[2B_{\rho\rho\rho}E_{\rho\rho}= \frac{\partial}{\partial\rho}\left(A_{\rho\rho}E_{\rho\rho}\right), \tag{20a}\] \[2\rho B_{\rho\psi\psi}E_{\rho\rho}= \frac{\partial}{\partial\psi}\left(A_{\rho\psi}E_{\rho\rho}\right). \tag{20b}\]
An integral solution of \(E_{\rho\rho}\) can be straightforwardly obtained:
\[E_{\rho\rho}\propto\exp\Bigg{[}\int d\rho\frac{2B_{\rho\rho\rho}-\partial A_{\rho\rho}/\partial\rho}{A_{\rho\rho}}+\int d\left(\rho\psi\right)\frac{2B_{\rho\psi\psi}-\partial A_{\rho\psi}/\partial\left(\rho\psi\right)}{A_{\rho\psi}}\Bigg{]}, \tag{21a}\] \[E_{\rho\psi}= E_{\rho\rho}\frac{A_{\rho\psi}}{A_{\rho\rho}}. \tag{21b}\]
Equation (21), under certain conditions, admits an analytical solution for \(E_{\rho j}\) and, more generally, can be solved numerically. All that is required are expressions for the interfacial coefficients (\(A_{\rho i}\) and \(B_{\rho ij}\)) appearing in \(\mathcal{P}\).
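In practice, Eq. (21) reduces to quadrature once the interfacial coefficients are tabulated along the selected path. The following is a minimal numerical sketch (ours, for illustration, not code from this work): it assumes the coefficients are sampled on a common density grid with \(\psi=\psi^{*}(\rho)\) already imposed and \(A_{\rho\psi}\neq 0\) on that grid, and the finite-difference and quadrature choices are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def weight_tensor(rho, psi_star, A_rr, A_rp, B_rrr, B_rpp):
    """Evaluate Eq. (21) for E_{rho rho} and E_{rho psi} on a density grid.

    All arguments are 1d arrays tabulated along the path psi = psi*(rho);
    E_{rho rho} is only defined up to a multiplicative constant.
    """
    x = rho * psi_star              # second order parameter, X_psi = rho*psi
    dA_rr = np.gradient(A_rr, rho)  # dA_{rho rho}/drho along the path
    dA_rp = np.gradient(A_rp, x)    # dA_{rho psi}/d(rho*psi) along the path
    # exponent of Eq. (21a): one integral over rho, one over rho*psi
    I1 = cumulative_trapezoid((2.0 * B_rrr - dA_rr) / A_rr, rho, initial=0.0)
    I2 = cumulative_trapezoid((2.0 * B_rpp - dA_rp) / A_rp, x, initial=0.0)
    E_rr = np.exp(I1 + I2)
    E_rp = E_rr * A_rp / A_rr       # Eq. (21b)
    return E_rr, E_rp
```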
## Appendix B Active Solid-Fluid Coexistence Criteria
We now look to develop expressions for \(\mathcal{F}_{n}\) in systems of active hard spheres in terms of our order parameter vector \(X_{n}\), so that we may determine the relevant components of the appropriate weight tensor \(E_{\rho j}\) using the system of equations in Eq. (21). Importantly, we only require an expression for
\(\mathcal{P}\), as \(\mu^{\psi}=\mu_{0}^{\psi}\) due to our selected parameterization along the solid-fluid interface, i.e., \(\mu_{0}^{\psi}=0\). An expression for \(\mathcal{P}\) was recently found [11] for a collection of \(N\) interacting active Brownian particles (ABPs) from first principles through an Irving-Kirkwood procedure [44]. In the overdamped limit, the dynamics of the position \(\mathbf{r}_{i}\) and orientation \(\mathbf{q}_{i}\) of the \(i^{th}\) particle follow equations-of-motion given by:
\[\dot{\mathbf{r}}_{i}=U_{0}\mathbf{q}_{i}+\frac{1}{\zeta}\sum_{j\neq i}^{N}\mathbf{F}_{ij}, \tag{B1a}\] \[\dot{\mathbf{q}}_{i}=\mathbf{q}_{i}\times\mathbf{\Omega}_{i}, \tag{B1b}\]
where \(U_{0}\) is the active speed of an isolated particle, \(\zeta\) is the translational drag coefficient, \(\mathbf{F}_{ij}\) are conservative pairwise interparticle forces, and \(\mathbf{\Omega}_{i}\) is a stochastic angular velocity with mean \(\left\langle\mathbf{\Omega}_{i}\right\rangle=\mathbf{0}\) and variance \(\left\langle\mathbf{\Omega}_{i}(t)\mathbf{\Omega}_{j}(t^{\prime})\right\rangle=2 \delta_{ij}\delta(t-t^{\prime})\mathbf{I}/\tau_{R}\), where \(\mathbf{I}\) is the identity tensor, \(\delta_{ij}\) is the Kronecker delta, and \(\delta\left(x\right)\) is the Dirac delta function. The orientational relaxation time \(\tau_{R}\) can be used to define the run length \(\ell_{0}\equiv U_{0}\tau_{R}\), the average distance a particle in free space travels before reorienting.
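For illustration, a single Euler-Maruyama step of Eq. (B1) can be sketched as follows. This is not the HOOMD-blue implementation used in the SM; the conservative force law is left as a user-supplied callable, and the explicit renormalization compensates for the Euler update not preserving \(|\mathbf{q}_{i}|=1\).

```python
import numpy as np

def step_abp(r, q, forces, U0, zeta, tau_R, dt, rng):
    """One Euler-Maruyama step of the overdamped ABP dynamics, Eq. (B1).

    r, q   : (N, 3) arrays of positions and unit orientations
    forces : callable returning the (N, 3) net conservative force on each particle
    """
    # translational update: self-propulsion plus interparticle forces, Eq. (B1a)
    r = r + dt * (U0 * q + forces(r) / zeta)
    # stochastic angular velocity with variance 2 I / tau_R, discretized over dt
    omega = rng.normal(0.0, np.sqrt(2.0 / (tau_R * dt)), size=q.shape)
    q = q + dt * np.cross(q, omega)  # Eq. (B1b)
    # renormalize orientations, since the Euler step does not preserve |q| = 1
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    return r, q
```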
The dynamic pressure generally consists of two contributions, \(\mathcal{P}=p^{C}+p^{\mathrm{act}}\), where \(p^{C}\) is the conservative interaction pressure and \(p^{\mathrm{act}}\) is the active pressure. Both pressure contributions contain bulk terms, denoted as \(p_{0}^{C}\) and \(p_{0}^{\mathrm{act}}\). Interfacial terms in \(p^{C}\) can be well-approximated by the Korteweg stress of hard spheres. While the forms of these interfacial stresses are only strictly valid in equilibrium, we utilize them here as these terms will _only_ be relevant at low activities, which is precisely the reversible limit of active hard spheres. At finite activity, where the reversible approximation is invalid, the interfacial terms of \(p^{\mathrm{act}}\) will dominate over those of \(p^{C}\). Consequently, in the limit of high activity the interfacial terms in \(p^{C}\) can be ignored, i.e., \(p^{C}\approx p_{0}^{C}\). The active stresses in \(p^{\mathrm{act}}\) depend on an infinite hierarchy of one-body orientational moments, and consequently a closure is needed. By truncating the infinite hierarchy of orientational moments at the third moment, and approximating \(p^{C}\approx p_{0}^{C}\), Ref. [11] derived \(\mathcal{P}\) in the high activity limit. Generally, we include the Korteweg-like stresses derived in the SM [40]:
\[p^{C}=p_{0}^{C}-\frac{1}{2}\!\left(X_{n}\frac{\partial K_{ij}}{\partial X_{n}}-K_{ij}\right)\!\frac{dX_{i}}{dz}\frac{dX_{j}}{dz}-X_{n}K_{ni}\frac{d^{2}X_{i}}{dz^{2}}, \tag{B2a}\]

where \(K_{ij}\) is the interfacial free energy coefficient [60; 61], as well as the active stresses derived in Ref. [11]:

\[p^{\mathrm{act}}=p_{0}^{\mathrm{act}}-c_{d}\ell_{0}^{2}\overline{U}\frac{d}{dz}\left(\overline{U}\frac{dp_{0}^{C}}{dz}\right), \tag{B2b}\] \[c_{d}\equiv \frac{3}{d\left(d-1\right)\left(d+2\right)}, \tag{B2c}\]
where \(d\) is the dimensionality. We have introduced the dimensionless effective active speed, \(\overline{U}\equiv p_{0}^{\mathrm{act}}d(d-1)/\big{(}\rho\ell_{0}\zeta U_{0} \big{)}\). The required equations of state are thus \(p_{0}^{C}\), \(p_{0}^{\mathrm{act}}\) (or \(\overline{U}\)), and \(K_{ij}\), all of which generally depend on \(X_{n}\).
We now allow \(p_{0}^{C}\) and \(p_{0}^{\mathrm{act}}\) to depend on both the density _and_ the crystallinity. This contrasts with Ref. [11] where \(p_{0}^{C}\) and \(p_{0}^{\mathrm{act}}\) were taken to depend only on the density, as the aim was to describe liquid-gas coexistence. Expanding Eq. (B2b) and adding it to Eq. (B2a), we obtain our complete expression for the dynamic pressure:
\[\mathcal{P}=p_{0}^{C}+p_{0}^{\mathrm{act}}-\big{(}B_{\rho ij}^{C}+B_{\rho ij}^{\mathrm{act}}\big{)}\frac{dX_{i}}{dz}\frac{dX_{j}}{dz}-\big{(}A_{\rho i}^{C}+A_{\rho i}^{\mathrm{act}}\big{)}\frac{d^{2}X_{i}}{dz^{2}}, \tag{B3a}\]

where

\[A_{\rho i}^{C}= X_{n}K_{ni}, \tag{B3b}\] \[A_{\rho i}^{\mathrm{act}}= c_{d}\ell_{0}^{2}\overline{U}^{2}\frac{\partial p_{0}^{C}}{\partial X_{i}}, \tag{B3c}\] \[B_{\rho ij}^{C}= \frac{1}{2}\!\left(X_{n}\frac{\partial K_{ij}}{\partial X_{n}}-K_{ij}\right), \tag{B3d}\] \[B_{\rho ij}^{\mathrm{act}}= c_{d}\ell_{0}^{2}\overline{U}\frac{\partial}{\partial X_{i}}\left(\overline{U}\frac{\partial p_{0}^{C}}{\partial X_{j}}\right), \tag{B3e}\]
where we have decomposed \(B_{\rho ij}\) and \(A_{\rho i}\) into conservative interaction and active contributions, denoted with superscripts \(C\) and \(\mathrm{act}\), respectively. In the limit of low activity, the active interfacial stresses become irrelevant and we have:
\[\lim_{\ell_{0}/D\to 0}A_{\rho i}= A_{\rho i}^{C}, \tag{B4a}\] \[\lim_{\ell_{0}/D\to 0}B_{\rho ij}= B_{\rho ij}^{C}. \tag{B4b}\]
Importantly, the selected integration path mandates \(A_{\rho\psi}^{C}=B_{\rho\psi\psi}^{C}=0\) as \(A_{\psi i}^{C}=0\) implies \(K_{\rho\psi}=K_{\psi\psi}=0\). In this limit, we recover the equilibrium weight tensor, \(E_{\rho j}\sim E_{\rho j}^{\mathrm{eqm}}=-\upsilon^{2}\delta_{\rho j}\), upon substitution of Eq. (B4) into Eq. (21). Conversely, in the high activity limit, the passive interfacial stresses become irrelevant such that:
\[\lim_{\ell_{0}/D\to\infty}A_{\rho i}= A_{\rho i}^{\mathrm{act}}, \tag{B5a}\] \[\lim_{\ell_{0}/D\to\infty}B_{\rho ij}= B_{\rho ij}^{\mathrm{act}}. \tag{B5b}\]
Substituting the above expressions for \(A_{\rho i}\) and \(B_{\rho ij}\) along the path \(\mu_{0}^{\psi}=0\) into Eq. (21), we find \(E_{\rho j}=\partial p_{0}^{C}/\partial X_{j}\) in the high activity limit. This has the same form as the weighting function found for MIPS, with the distinction that \(p_{0}^{C}\) now depends on _both_ \(\rho\) and \(\psi\). The weight tensor cannot be determined analytically when both the conservative and active interfacial contributions are relevant; however, we determine it numerically by integrating Eq. (21) using the full expressions for \(A_{\rho i}\) and \(B_{\rho ij}\). The phase diagram in the main text was constructed with this numerically determined \(E_{\rho j}\), using the \(K_{ij}\) of a passive hard sphere fluid [62].
## Appendix C Active Phase Diagram Using Equilibrium Coexistence Criteria
While the phase diagram in the main text was found by numerically determining \(E_{\rho j}\), as detailed in Appendix B, the equilibrium Maxwell construction (i.e., \(E_{\rho j}\sim E_{\rho j}^{\mathrm{eqm}}=-\upsilon^{2}\delta_{\rho j}\)) can still be naively applied to construct phase diagrams of active hard spheres. Importantly, doing so will allow us to isolate the role of the nonequilibrium coexistence criteria in shaping the active phase diagram. Figure 5 shows the comparison of the resulting solid-fluid phase diagrams when using the equilibrium and numerically determined (combined active and passive) weight tensors. As anticipated, at low activity the two constructions yield similar phase boundaries. At finite activity, differences between the predicted boundaries begin to emerge, with the equilibrium construction favoring solids and fluids of lower density. Above the triple point, the equilibrium construction begins to significantly underpredict the fluid density while the exact construction continues to provide quantitatively close predictions. Both constructions predict the solid density will approach close-packing (\(\phi^{\rm{solid}}\to 0.74\)) at high activity; however, the equilibrium construction does not approach close-packing until above the triple point, while the exact construction accurately begins to approach close-packing at activities as low as \(\ell_{0}/D\approx 1\). This demonstrates that while the equilibrium construction can still be used at low activities (as this is precisely the reversible limit), its erroneous use quickly causes significant quantitative inaccuracies at finite activities.
## References
* Petroff _et al._ [2015]A. P. Petroff, X.-L. Wu, and A. Libchaber, Fast-moving bacteria self-organize into active two-dimensional crystals of rotating cells, Phys. Rev. Lett. **114**, 158102 (2015).
* Tan _et al._ [2022]T. H. Tan, A. Mietke, J. Li, Y. Chen, H. Higinbotham, P. J. Foster, S. Gokhale, J. Dunkel, and N. Fakhri, Odd dynamics of living chiral crystals, Nature **607**, 287 (2022).
* Fily and Marchetti [2012]Y. Fily and M. C. Marchetti, Athermal phase separation of self-propelled particles with no alignment, Phys. Rev. Lett. **108**, 235702 (2012).
* Redner _et al._ [2013]G. S. Redner, M. F. Hagan, and A. Baskaran, Structure and dynamics of a phase-separating active colloidal fluid, Phys. Rev. Lett. **110**, 055701 (2013).
* Wittkowski _et al._ [2014]R. Wittkowski, A. Tiribocchi, J. Stenhammar, R. J. Allen, D. Marenduzzo, and M. E. Cates, Scalar \(\phi^{4}\) field theory for active-particle phase separation, Nat. Commun. **5**, 4351 (2014).
* Takatori and Brady [2015]S. C. Takatori and J. F. Brady, Towards a thermodynamics of active matter, Phys. Rev. E **91**, 032117 (2015).
* Speck [2016]T. Speck, Stochastic thermodynamics for active matter, Europhys. Lett. **114**, 30006 (2016).
* Solon _et al._ [2018]A. P. Solon, J. Stenhammar, M. E. Cates, Y. Kafri, and J. Tailleur, Generalized thermodynamics of phase equilibria in scalar active matter, Phys. Rev. E **97**, 1 (2018).
* Hermann _et al._ [2019]S. Hermann, P. Krinninger, D. de Las Heras, and M. Schmidt, Phase coexistence of active Brownian particles, Phys. Rev. E **100**, 052604 (2019).
* Hermann _et al._ [2021]S. Hermann, D. de las Heras, and M. Schmidt, Phase separation of active Brownian particles in two dimensions: anything for a quiet life, Mol. Phys. **119**, e1902585 (2021).
* Omar _et al._ [2023]A. K. Omar, H. Row, S. A. Mallory, and J. F. Brady, Mechanical theory of nonequilibrium coexistence and motility-induced phase separation, Proc. Natl. Acad. Sci. U.S.A. **120** (2023).
* You _et al._ [2020]Z. You, A. Baskaran, and M. C. Marchetti, Nonreciprocity as a generic route to traveling states, Proc. Natl. Acad. Sci. U.S.A. **117**, 19767 (2020).
* Saha _et al._ [2020]S. Saha, J. Agudo-Canalejo, and R. Golestanian, Scalar active mixtures: the nonreciprocal Cahn-Hilliard model, Phys. Rev. X **10**, 041009 (2020).
* Fruchart _et al._ [2021]M. Fruchart, R. Hanai, P. B. Littlewood, and V. Vitelli, Nonreciprocal phase transitions, Nature **592**, 363 (2021).
* Bialke _et al._ [2012]J. Bialke, T. Speck, and H. Lowen, Crystallization in a dense suspension of self-propelled particles, Phys. Rev. Lett. **108**, 168301 (2012).
* Turci and Wilding [2021]F. Turci and N. B. Wilding, Phase separation and multibody effects in three-dimensional active Brownian particles, Phys. Rev. Lett. **126**, 038002 (2021).
* Omar _et al._ [2021]A. K. Omar, K. Klymko, T. GrandPre, and P. L. Geissler, Phase diagram of active brownian spheres: Crystallization and the metastability of motility-induced phase separation, Phys. Rev. Lett. **126**, 188002 (2021).
* Caprini _et al._ [2023]L. Caprini, U. Marini Bettolo Marconi, A. Puglisi, and H. Lowen, Entropons as collective excitations in active solids, J. Chem. Phys. **159** (2023).
* Galliano _et al._ [2023]L. Galliano, M. E. Cates, and L. Berthier, Two-Dimensional Crystals far from Equilibrium, Phys. Rev. Lett. **131**, 47101 (2023).
* Hermann and Schmidt [2023]S. Hermann and M. Schmidt, Active crystallization from power functional theory, arXiv preprint arXiv:2308.10614 (2023).
* Shi _et al._ [2023]X.-q. Shi, F. Cheng, and H. Chate, Extreme Spontaneous Deformations of Active Crystals, Phys. Rev. Lett. **131**, 108301 (2023).
* Alder and Wainwright [1957]B. J. Alder and T. E. Wainwright, Phase transition for a hard sphere system, J. Chem. Phys. **27**, 1208 (1957).
* Hoover and Ree [1968]W. G. Hoover and F. H. Ree, Melting transition and communal entropy for hard spheres, J. Chem. Phys. **49**, 3609 (1968).
* Pusey and Van Megen [1986]P. N. Pusey and W. Van Megen, Phase behaviour of concentrated suspensions of nearly hard colloidal spheres, Nature **320**, 340 (1986).

Figure 5: Solid-fluid phase diagram of 3d active hard spheres. The result using the equilibrium construction is shown in dashed lines while the numerically determined exact construction is shown in solid lines. See Ref. [11] for an analogous comparison of the predictions for the liquid-gas binodal.
* Pusey _et al._ [1989]P. N. Pusey, W. Van Megen, P. Bartlett, B. J. Ackerson, J. G. Rarity, and S. M. Underwood, Structure of crystals of hard colloidal spheres, Phys. Rev. Lett. **63**, 2753 (1989).
* Auer and Frenkel [2001]S. Auer and D. Frenkel, Prediction of absolute crystal-nucleation rate in hard-sphere colloids, Nature **409**, 1020 (2001).
* Torquato and Haslach Jr [2002]S. Torquato and H. W. Haslach Jr, Random heterogeneous materials: microstructure and macroscopic properties, Appl. Mech. Rev. **55**, B62 (2002).
* Pusey _et al._ [2009]P. N. Pusey, E. Zaccarelli, C. Valeriani, E. Sanz, W. C. K. Poon, and M. E. Cates, Hard spheres: crystallization and glass formation, Philos. Trans. Royal Soc. **367**, 4993 (2009).
* Richard and Speck [2018]D. Richard and T. Speck, Crystallization of hard spheres revisited. I. Extracting kinetics and free energy landscape from forward flux sampling, J. Chem. Phys. **148**, 124110 (2018).
* Richard and Speck [2018]D. Richard and T. Speck, Crystallization of hard spheres revisited. II. Thermodynamic modeling, nucleation work, and the surface of tension, J. Chem. Phys. **148**, 224102 (2018).
* Aifantis and Serrin [1983]E. C. Aifantis and J. B. Serrin, The mechanical theory of fluid interfaces and Maxwell's rule, J. Colloid Interf. Sci. **96**, 517 (1983).
* Cates and Tailleur [2015]M. E. Cates and J. Tailleur, Motility-induced phase separation, Annu. Rev. Condens. Matter Phys. **6**, 219 (2015).
* Bechinger _et al._ [2016]C. Bechinger, R. Di Leonardo, H. Lowen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. **88**, 045006 (2016).
* Buttinoni _et al._ [2013]I. Buttinoni, J. Bialke, F. Kummel, H. Lowen, C. Bechinger, and T. Speck, Dynamical clustering and phase separation in suspensions of self-propelled colloidal particles, Phys. Rev. Lett. **110**, 238301 (2013).
* Wysocki _et al._ [2014]A. Wysocki, R. G. Winkler, and G. Gompper, Cooperative motion of active Brownian spheres in three-dimensional dense suspensions, Europhys. Lett. **105**, 48004 (2014).
* Stenhammar _et al._ [2014]J. Stenhammar, D. Marenduzzo, R. J. Allen, and M. E. Cates, Phase behaviour of active Brownian particles: the role of dimensionality, Soft matter **10**, 1489 (2014).
* Nie _et al._ [2020]P. Nie, J. Chattoraj, A. Piscitelli, P. Doyle, R. Ni, and M. P. Ciamarra, Stability phase diagram of active Brownian particles, Phys. Rev. Res. **2**, 023010 (2020).
* Omar _et al._ [2020]A. K. Omar, Z.-G. Wang, and J. F. Brady, Microscopic origins of the swim pressure and the anomalous surface tension of active matter, Phys. Rev. E **101**, 012604 (2020).
* Speck [2021]T. Speck, Coexistence of active Brownian disks: Van der Waals theory and analytical results, Phys. Rev. E **103**, 012607 (2021).
* [40]See Supplemental Material at [URL] for supporting equilibrium derivations as well as equations of state and simulation details.
* Plischke and Bergersen [1994]M. Plischke and B. Bergersen, _Equilibrium statistical physics_ (World scientific, 1994).
* De Groot and Mazur [2013]S. R. De Groot and P. Mazur, _Non-equilibrium thermodynamics_ (Courier Corporation, 2013).
* Kondepudi and Prigogine [2014]D. Kondepudi and I. Prigogine, _Modern thermodynamics: from heat engines to dissipative structures_ (John Wiley & Sons, 2014).
* Irving and Kirkwood [1950]J. H. Irving and J. G. Kirkwood, The statistical mechanical theory of transport processes. IV. The equations of hydrodynamics, J. Chem. Phys. **18**, 817 (1950).
* Aifantis and Serrin [1983]E. C. Aifantis and J. B. Serrin, Equilibrium solutions in the mechanical theory of fluid microstructures, J. Colloid Interf. Sci. **96**, 530 (1983).
* [46]For a cubic solid and considering only elementary lattice vectors \(\mathbf{q}_{i}\), the order parameter is the crystallinity \(\psi\), which is the amplitude of density modulations along \(\mathbf{q}_{i}\): \(\rho_{\mathbf{q}_{i}}=\psi e^{i\varphi_{i}}\), where \(\rho_{\mathbf{q}_{i}}\) is the Fourier transformed density field along \(\mathbf{q}_{i}\).
* Takatori _et al._ [2014]S. C. Takatori, W. Yan, and J. F. Brady, Swim pressure: stress generation in active matter, Phys. Rev. Lett. **113**, 028103 (2014).
* Fily _et al._ [2014]Y. Fily, S. Henkes, and M. C. Marchetti, Freezing and phase separation of self-propelled disks, Soft Matter **10**, 2132 (2014).
* Mallory _et al._ [2014]S. A. Mallory, A. Saric, C. Valeriani, and A. Cacciuto, Anomalous thermomechanical properties of a self-propelled colloidal fluid, Phys. Rev. E **89**, 052303 (2014).
* Solon _et al._ [2015]A. P. Solon, J. Stenhammar, R. Wittkowski, M. Kardar, Y. Kafri, M. E. Cates, and J. Tailleur, Pressure and phase equilibria in interacting active Brownian spheres, Phys. Rev. Lett. **114**, 198301 (2015).
* Solon _et al._ [2015]A. P. Solon, Y. Fily, A. Baskaran, M. E. Cates, Y. Kafri, M. Kardar, and J. Tailleur, Pressure is not a state function for generic active fluids, Nat. Phys. **11**, 673 (2015).
* Epstein _et al._ [2019]J. M. Epstein, K. Klymko, and K. K. Mandadapu, Statistical mechanics of transport processes in active fluids. II. Equations of hydrodynamics for active Brownian particles, J. Chem. Phys. **150** (2019).
* Korteweg [1904]D. J. Korteweg, Arch. Néerl. Sci. Exactes Nat. **6** (1904).
* Yang _et al._ [1976]A. J. M. Yang, P. D. Fleming III, and J. H. Gibbs, Molecular theory of surface tension, J. Chem. Phys. **64**, 3732 (1976).
* Anderson _et al._ [2020]J. A. Anderson, J. Glaser, and S. C. Glotzer, HOOMD-blue: A Python package for high-performance molecular dynamics and hard particle Monte Carlo simulations, Comput. Mater. Sci. **173**, 109363 (2020).
* Steinhardt _et al._ [1983]P. J. Steinhardt, D. R. Nelson, and M. Ronchetti, Bond-orientational order in liquids and glasses, Phys. Rev. B **28**, 784 (1983).
* Torquato _et al._ [2000]S. Torquato, T. M. Truskett, and P. G. Debenedetti, Is random close packing of spheres well defined?, Phys. Rev. Lett. **84**, 2064 (2000).
* Song _et al._ [1988]Y. Song, R. M. Stratt, and E. A. Mason, The equation of state of hard spheres and the approach to random closest packing, J. Chem. Phys. **88**, 1126 (1988).
* Touchette [2009]H. Touchette, The large deviation approach to statistical mechanics, Phys. Rep. **478**, 1 (2009).
* Lowen _et al._ [1990]H. Lowen, T. Beier, and H. Wagner, Multiple order parameter theory of surface melting: a van der Waals approach, Z. Phys. B Con. Mat. **79**, 109 (1990).
* Hansen and McDonald [2013]J.-P. Hansen and I. R. McDonald, _Theory of simple liquids: with applications to soft matter_ (Academic press, 2013).
* Kierlik and Rosinberg [1990]E. Kierlik and M. L. Rosinberg, Free-energy density functional for the inhomogeneous hard-sphere fluid: Application to interfacial adsorption, Phys. Rev. A **42**, 3382 (1990).
Supplemental Material - Theory of Nonequilibrium Symmetry-Breaking Coexistence and Active Crystallization
Daniel Evans
Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
Ahmad K. Omar
[email protected] Department of Materials Science and Engineering, University of California, Berkeley, California 94720, USA
## I Equilibrium coexistence criteria
We consider the _bulk_ thermodynamics of a one-component system with internal energy \(U\left(S,V,N,\Psi\right)\). Here, the natural variables of the energy are the system entropy \(S\), volume \(V\), particle number \(N\), and a _phenomenological, extensive, and scalar_ order parameter, \(\Psi\). We define the intensive (on a per-particle basis) order parameter as \(\psi\equiv\Psi/N\), as defined in the main text.
The total differential of \(U\) for a reversible process is:
\[dU=TdS-p_{0}dV+\mu_{0}^{\rho}dN+\mu_{0}^{\psi}d\Psi,\] (S1)
where the first term on the right-hand-side represents the reversible heat exchange and the last three terms represent changes in the system energy resulting from the reversible work performed on (by) the system. Euler's homogeneous function theorem allows us to express the absolute energy as [1]:
\[U=TS-p_{0}V+\mu_{0}^{\rho}N+\mu_{0}^{\psi}\Psi.\] (S2)
A Gibbs-Duhem equation relating the _intensive_ variables (i.e., \(T,p_{0},\mu_{0}^{\rho},\mu_{0}^{\psi}\)) can be obtained by taking the total differential of Eq. (S2) and comparing the result to Eq. (S1):
\[0=-Vdp_{0}+Nd\mu_{0}^{\rho}+\Psi d\mu_{0}^{\psi}+SdT.\] (S3)
For isothermal processes, we can simplify this relation to that provided in the main text. Dividing Eq. (S3) by the system volume and defining the order parameter, \(\mathbf{X}\equiv\left[\rho\ \ \rho\psi\right]^{\mathrm{T}}\), and chemical potential, \(\mathbf{\mu}_{0}\equiv\left[\mu_{0}^{\rho}\ \ \mu_{0}^{\psi}\right]^{\mathrm{T}}\), vectors we arrive at:
\[dp_{0}=\mathbf{X}\cdot d\mathbf{\mu}_{0}.\] (S4a) It is sometimes more convenient to rearrange this Gibbs-Duhem relation as: \[d\mu_{0}^{\rho}=\mathbf{\mathcal{E}}^{\mathrm{eqm}}\cdot d\mathbf{\mathcal{F}}^{0}.\] (S4b)
We have introduced a force vector, \(\mathbf{\mathcal{F}}^{0}\equiv\left[p_{0}\ \mu_{0}^{\psi}\right]^{\mathrm{T}}\), and its conjugate, \(\mathbf{\mathcal{E}}^{\mathrm{eqm}}\equiv\left[\upsilon\ -\psi\right]^{\mathrm{T}}\) (where \(\upsilon=1/\rho\) is the specific volume). We note that if the order parameter represents the per-particle magnetization within the Ising model (i.e., \(\psi=m\)), the Gibbs-Duhem relation would read \(dp_{0}=\rho d\mu_{0}^{\rho}+\rho mdh_{0}\), where \(\mu_{0}^{\psi}=h_{0}\) is the magnetic field [1].
The free energy density is defined on a per-volume basis \(f_{0}\equiv F_{0}/V=\left(U-TS\right)/V\) and can be expressed as:
\[f_{0}=\mathbf{\mu}_{0}\cdot\mathbf{X}-p_{0},\] (S5)
where we now recognize that the pressure is \(p_{0}=\mathbf{\mu}_{0}\cdot\mathbf{X}-f_{0}\). We now express the equilibrium criteria for two macroscopic phases (\(\alpha\) and \(\beta\)) with differing densities and/or order parameter values. The coexistence criteria can be compactly expressed as \(\mathbf{\mu}_{0}\left(\mathbf{X}^{\alpha}\right)=\mathbf{\mu}_{0}\left(\mathbf{X}^{ \beta}\right)=\mathbf{\mu}^{\mathrm{coexist}}\) and
\(p_{0}\left(\mathbf{X}^{\alpha}\right)=p_{0}\left(\mathbf{X}^{\beta}\right)=p^{ \text{coexist}}\), where \(\boldsymbol{\mu}^{\text{coexist}}=\left[\mu^{\rho,\text{coexist}}\ \ 0\right]^{ \text{T}}\), as stated in the main text. Explicitly, the four thermodynamic criteria for equilibrium \(\alpha\)-\(\beta\) coexistence are:
\[\mu_{0}^{\rho}\left(\rho^{\alpha},\psi^{\alpha}\right)= \mu_{0}^{\rho}\left(\rho^{\beta},\psi^{\beta}\right)=\mu^{\rho,\text{coexist}},\] (S6a) \[\mu_{0}^{\psi}\bigg{(}\rho^{\alpha},\psi^{\alpha}\bigg{)}=0,\] (S6b) \[\mu_{0}^{\psi}\bigg{(}\rho^{\beta},\psi^{\beta}\bigg{)}=0,\] (S6c) \[p_{0}\left(\rho^{\alpha},\psi^{\alpha}\right)= p_{0}\left(\rho^{\beta},\psi^{\beta}\right)=p^{\text{coexist}}.\] (S6d)
We now look to use the Gibbs-Duhem relation [Eq. (S4)] to re-frame equality of chemical potentials in Eq. (S6a) into an integral expression of bulk equations-of-state. Equilibrium equations of state are, by definition, state functions. As a result, \(\mu_{0}^{\rho}\left(\mathbf{X}^{\beta}\right)-\mu_{0}^{\rho}\left(\mathbf{X}^ {\alpha}\right)=\int_{\alpha}^{\beta}d\mu_{0}^{\rho}=0\). Applying the Gibbs-Duhem relation we arrive at:
\[\int_{\mu_{0}^{\rho,\alpha}}^{\mu_{0}^{\rho,\beta}}d\mu_{0}^{\rho}=0=\int_{p_{0}^{\alpha}}^{p_{0}^{\beta}}\upsilon dp_{0}-\int_{\mu_{0}^{\psi,\alpha}}^{\mu_{0}^{\psi,\beta}}\psi d\mu_{0}^{\psi}.\] (S7)
After integrating by parts we have the initial form of the \(\alpha\)-\(\beta\) Maxwell construction:
\[\int_{\upsilon^{\alpha}}^{\upsilon^{\beta}}\left[p_{0}\left(\left\{X_{i}\right\}\right)-p^{\text{coexist}}\right]d\upsilon-\int_{\psi^{\alpha}}^{\psi^{\beta}}\mu_{0}^{\psi}d\psi=\int_{\mathcal{E}_{n}^{\mathrm{eqm},\alpha}}^{\mathcal{E}_{n}^{\mathrm{eqm},\beta}}\left[\mathcal{F}_{n}^{0}\left(\left\{X_{i}\right\}\right)-\mathcal{F}_{n}^{\text{coexist}}\right]d\mathcal{E}_{n}^{\mathrm{eqm}}=0,\] (S8)
where we have begun using indicial notation and invoked \(\mathcal{F}_{n}^{\text{coexist}}=\left[p^{\text{coexist}}\ \ 0\right]^{\text{T}}\). While each integral is one-dimensional, the integrand on the left-hand side of Eq. (S7) is a multivariable state function. Consequently, an integration path (i.e., a relationship between \(\upsilon\) and \(\psi\)) between the \(\alpha\) and \(\beta\) phase properties must be specified. While the path details will impact the individual integrals on the right-hand side of Eq. (S8), their sum is guaranteed to vanish as the chemical potential is a state function. It is convenient to select a path defined by \(\mu_{0}^{\psi}\left(\left\{X_{i}^{*}\right\}\right)=0\), which entirely eliminates the second integral in Eq. (S8). This condition implies the parametric relationship \(\psi=\psi^{*}\left(\rho\right)\), where \(\psi^{*}\left(\rho\right)\) is the stable value of \(\psi\) at each density, and _automatically satisfies_ Eqs. (S6b) and (S6c). We then have the final criteria for \(\alpha\)-\(\beta\) coexistence presented in the main text:
\[\int_{\upsilon^{\alpha}}^{\upsilon^{\beta}}\left[p_{0}\left(\left\{X_{i}^{*} \right\}\right)-p^{\text{coexist}}\right]d\upsilon=0,\] (S9a) \[p_{0}\big{(}\{X_{i}^{\alpha*}\}\big{)}=p_{0}\big{(}\{X_{i}^{ \beta*}\}\big{)}=p^{\text{coexist}},\] (S9b)
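Numerically, Eq. (S9) is a root-finding problem for the three unknowns \((\upsilon^{\alpha},\upsilon^{\beta},p^{\text{coexist}})\). A minimal sketch (ours, for illustration), assuming \(p_{0}\) has already been reduced to a smooth function of the specific volume along \(\psi=\psi^{*}(\rho)\):

```python
from scipy.integrate import quad
from scipy.optimize import fsolve

def maxwell_construction(p0_of_v, v_alpha0, v_beta0, p0_guess):
    """Solve Eq. (S9) for the coexistence pressure and phase specific volumes."""
    def residuals(z):
        va, vb, pc = z
        area, _ = quad(lambda v: p0_of_v(v) - pc, va, vb)
        return [p0_of_v(va) - pc,  # equal pressure in the alpha phase, Eq. (S9b)
                p0_of_v(vb) - pc,  # equal pressure in the beta phase, Eq. (S9b)
                area]              # equal-area condition, Eq. (S9a)
    return fsolve(residuals, [v_alpha0, v_beta0, p0_guess])
```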
### Recovery of Equilibrium Criteria from Mechanical Theory
We now look to recover the equilibrium Maxwell construction through the mechanical approach described in the main text. The mechanical conditions that emerge from taking the stationary limit
of the dynamics of our order parameter vector, \(X_{n}\equiv\left[\rho\ \ \rho\psi\right]^{\mathrm{T}}\), are \(\mathcal{F}_{n}=\mathcal{F}_{n}^{\mathrm{coexist}}=\mathcal{P}^{\mathrm{coexist}} \delta_{n\rho}\), where \(\mathcal{F}_{n}=\left[\mathcal{P}\ \ \mu^{\psi}\right]^{\mathrm{T}}\). The absence of body forces in equilibrium reduces the dynamic pressure \(\mathcal{P}\) to the static (or "true") pressure \(p\), which, in the quasi-1d planar interface under consideration, is related to the \(zz\) component of the stress tensor as \(p=-\sigma_{zz}\). In equilibrium, \(\mu^{\psi}\) and \(p\) can be related to functional derivatives of the free energy functional \(F\). To second order in spatial gradients of \(\left\{X_{n}\right\}\), the free energy functional can be expressed as:
\[F\left[\left\{X_{\ell}\right\}\right]=\int_{V}d\mathbf{x}\left[f_{0}\left( \left\{X_{\ell}\right\}\right)+\frac{1}{2}K_{ij}\left(\left\{X_{\ell}\right\} \right)\mathbf{\nabla}X_{i}\cdot\mathbf{\nabla}X_{j}\right],\] (S10)
where \(f_{0}\) is the bulk (mean-field) free energy density and \(K_{ij}\) is a symmetric tensor of state functions capturing the increase in free energy due to spatial gradients in the order parameters. These interfacial coefficients can be related to the second moment of the direct correlation function \(c\left(\mathbf{r};\left\{X_{n}\right\}\right)\)[2]:
\[K_{ij}=\frac{k_{B}T}{6}\int d\mathbf{r}r^{2}c\left(\mathbf{r};\left\{X_{\ell} \right\}\right)\frac{\partial\hat{\rho}\left(\mathbf{r};\left\{X_{\ell}\right\} \right)}{\partial X_{i}}\frac{\partial\hat{\rho}\left(\mathbf{r};\left\{X_{ \ell}\right\}\right)}{\partial X_{j}},\] (S11)
where \(\hat{\rho}\left(\mathbf{r};\left\{X_{\ell}\right\}\right)\) is the density field within classical density functional theory. We emphasize that while \(\hat{\rho}\) is the true one-body density, it is parameterized by our density and phenomenological order parameter: \(\hat{\rho}\left(\mathbf{r};\rho,\psi\right)\). Here, bold variables indicate quantities that are tensorial in Cartesian space while we continue to use indicial notation to describe quantities that are tensorial in the space of our order parameters. \(\mu^{\psi}\) is the functional derivative of \(F\) with respect to \(\rho\psi\)\(\left[\mu^{\psi}=\delta F/\delta\left(\rho\psi\right)\right]\) and \(\sigma_{zz}\) (and hence \(p\)) is related to \(F\) through the Gibbs-Duhem relation in Eq. (S4a) [\(-\mathbf{\nabla}\cdot\mathbf{\sigma}=X_{n}\mathbf{\nabla}\delta F/\delta X_{n}\)]. Evaluating \(\delta F/\delta X_{n}\) we have:
\[\frac{\delta F\left[\left\{X_{\ell}\right\}\right]}{\delta X_{n}}=\frac{ \partial f_{0}}{\partial X_{n}}-\frac{1}{2}\frac{\partial}{\partial X_{n}}K_{ ij}\mathbf{\nabla}X_{i}\cdot\mathbf{\nabla}X_{j}-K_{ni}\nabla^{2}X_{i},\] (S12)
where we have recognized \(\partial K_{jn}/\partial X_{i}=\partial K_{ij}/\partial X_{n}\) if \(\partial^{2}\hat{\rho}/\partial\mathbf{X}\partial\mathbf{X}=\mathbf{0}\) in Eq. (S11). We now look to identify \(\mathcal{F}_{n}^{0}\), \(A_{n\ell}\), and \(B_{n\ell m}\) so we may determine \(E_{nj}\) through Eq. (9) and recover the equilibrium Maxwell construction in Eq. (S8). Equation (S12) immediately yields \(\mathcal{F}_{0}^{\psi}=\mu_{0}^{\psi}=\partial f_{0}/\partial\left(\rho\psi\right)\), \(B_{\psi ij}=1/2\partial K_{ij}/\partial\psi\), and \(A_{\psi i}=K_{\psi i}\), leaving \(\mathcal{F}_{0}^{\rho}\), \(B_{\rho ij}\), and \(A_{\rho i}\) to be determined. Expressing the divergence of the stress in terms of \(\delta F/\delta X_{n}\) we have:
\[-\mathbf{\nabla}\cdot\mathbf{\sigma}=\mathbf{\nabla}\left(X_{n}\frac{\partial }{\partial X_{n}}f_{0}\right)-\frac{\partial}{\partial X_{n}}f_{0}\mathbf{\nabla} X_{n}-\mathbf{\nabla}\left(\frac{1}{2}X_{n}\frac{\partial}{\partial X_{n}}K_{ ij}\mathbf{\nabla}X_{i}\cdot\mathbf{\nabla}X_{j}+X_{n}K_{ni}\nabla^{2}X_{i}\right)\\ +\left(\frac{1}{2}\frac{\partial}{\partial X_{n}}K_{ij}\mathbf{\nabla} X_{i}\cdot\mathbf{\nabla}X_{j}+K_{ni}\nabla^{2}X_{i}\right)\mathbf{\nabla}X_{n}.\] (S13)
Noting that \(\partial f_{0}/\partial X_{n}\,\mathbf{\nabla}X_{n}=\mathbf{\nabla}f_{0}\) from the chain rule, we make use of the following identity:
\[\left(\frac{1}{2}\frac{\partial}{\partial X_{n}}K_{ij}\mathbf{\nabla}X_{i}\cdot\mathbf{ \nabla}X_{j}+K_{ni}\nabla^{2}X_{i}\right)\mathbf{\nabla}X_{n}=\mathbf{\nabla}\cdot \left(\frac{K_{ni}}{2}\mathbf{\nabla}X_{n}\mathbf{\nabla}X_{i}\right).\] (S14)
We identify \(\mathbf{\sigma}\):
\[-\mathbf{\nabla}\cdot\mathbf{\sigma}=\mathbf{\nabla}\cdot\bigg{[}\left(X_{n }\frac{\partial}{\partial X_{n}}f_{0}-f_{0}-\frac{1}{2}X_{n}\frac{\partial}{ \partial X_{n}}K_{ij}\mathbf{\nabla}X_{i}\cdot\mathbf{\nabla}X_{j}-X_{n}K_{ni}\nabla^ {2}X_{i}\right)\mathbf{I}\\ +K_{ni}/2\mathbf{\nabla}X_{n}\mathbf{\nabla}X_{i}\bigg{]},\] (S15)
where \(\mathbf{I}\) is the identity tensor. Moving to a quasi-1D planar coexistence scenario to extract the \(\sigma_{zz}\) component of Eq. (S15), we identify \(\mathcal{F}_{0}^{\rho}=p_{0}=X_{n}\partial f_{0}/\partial X_{n}-f_{0}\), \(2B_{\rho ij}=X_{n}\partial K_{ij}/\partial X_{n}-K_{ij}\), and \(A_{\rho i}=X_{n}K_{ni}\). We now have the full expressions for \(B_{n\ell m}\) and \(A_{n\ell}\), and can substitute them into the system of equations in Eq. (9) to determine the weight tensor \(E_{nj}\).
From Eq. (S8), the expected weight tensor is:
\[E_{nj}^{\text{eqm}}=\begin{bmatrix}-1/\rho^{2}&0\\ \psi/\rho&-1/\rho\end{bmatrix},\] (S16)
where \(n\) in \(E_{nj}\) corresponds to rows and \(j\) to columns. Using the equilibrium expressions for \(B_{n\ell m}\) and \(A_{n\ell}\), it is straightforward to show that the above \(E_{nj}^{\text{eqm}}\) indeed satisfies Eq. (9).
## II Approximate analytical weight tensor at intermediate activities
While Appendix B analytically determined the weight tensor \(E_{nj}\) in the high and low activity limits, the combined case, where both active and passive interfacial stresses are relevant, cannot be determined analytically. Instead, we may numerically obtain the weight tensor, as was done to construct the phase diagram in the main text. We may also gain physical intuition for \(E_{nj}\) when both active and passive stresses are relevant, and motivate a scheme to interpolate between the high and low activity limits, by considering the limit of equal active and passive contributions to \(\mathcal{P}\). For simplicity, we will now perform this analysis for liquid-gas coexistence where the one-component density is the only order parameter. We expect the result to extend to the two order parameter solid-fluid case with the distinction that \(p_{0}^{C}\) depends on both \(\rho\) and \(\psi\), as was the case in both the high and low activity limits.
The dynamic pressure with combined active and passive stresses [see Eq. (B3)] can be expressed
as (in our quasi-1d coexistence scenario):
\[\mathcal{P}=\mathcal{P}_{0}- B\left(\frac{d\rho}{dz}\right)^{2}-A\frac{d^{2}\rho}{dz^{2}},\] (S17a) \[B=\frac{1}{2}\rho\frac{d}{d\rho}K- \frac{K}{2}+c_{d}\ell_{0}^{2}\overline{U}\frac{d}{d\rho}\left( \overline{U}\frac{dp_{0}^{C}}{d\rho}\right),\] (S17b) \[A=\rho K+c_{d}\ell_{0}^{2}\overline{U}^{2}\frac{dp_{0}^{C}}{d\rho},\] (S17c)
where \(\mathcal{P}_{0}=p_{0}^{C}+p_{0}^{\text{act}}\), \(K=K_{\rho\rho}\), \(B=B_{\rho\rho\rho}\), and \(A=A_{\rho\rho}\). With only one order parameter present, Eq. (9d) can be expressed as \(E=E_{\rho\rho}=\exp\left(2\int d\rho B/A\right)/A\)[3; 4; 5]. Splitting \(B=B^{C}+B^{\text{act}}\) and \(A=A^{C}+A^{\text{act}}\) into equilibrium and active contributions, we set \(A^{C}=A^{\text{act}}\) to take the limit of equal active and passive contributions to \(E\). Noting \(A=2A^{C}=2A^{\text{act}}=\sqrt{A^{C}}\sqrt{A^{\text{act}}}\) we can rewrite the differential equation for \(E\) as:
\[E=\sqrt{\frac{\exp\left(2\int d\rho B^{C}/A^{C}\right)}{A^{C}}}\sqrt{\frac{ \exp\left(2\int d\rho B^{\text{act}}/A^{\text{act}}\right)}{A^{\text{act}}}}.\] (S18)
It is now clear that in the limit \(A^{C}=A^{\text{act}}\), \(E\) is the geometric mean of the isolated equilibrium and active results (low and high activity limits, respectively). We then see that when interpolating between the low and high activity limits, a geometric weighting between the limits is more appropriate than an arithmetic one. We forgo using the multi-order parameter equivalent of Eq. (S18) and instead solve for the exact weighting tensor [Eq. (A21)] numerically.
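The low-activity limit of this one-order-parameter case can be checked symbolically: with the passive coefficients \(A=\rho K\) and \(B=\left(\rho\,dK/d\rho-K\right)/2\) of Eq. (S17), the weight \(E\propto 1/\rho^{2}=\upsilon^{2}\) (up to the sign and normalization left free by the proportionality in the solution for \(E\)) satisfies the defining differential equation \(2BE=d\left(AE\right)/d\rho\) for _any_ interfacial coefficient \(K(\rho)\). A short verification sketch:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
K = sp.Function('K')(rho)            # arbitrary interfacial coefficient

A = rho * K                          # passive A in the low-activity limit
B = (rho * sp.diff(K, rho) - K) / 2  # passive B in the low-activity limit
E = 1 / rho**2                       # candidate equilibrium weight, ~ v**2

# the defining ODE 2*B*E = d(A*E)/drho must hold identically in K(rho)
assert sp.simplify(2 * B * E - sp.diff(A * E, rho)) == 0
```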
## III Equations of state of active brownian spheres
Ultimately, the application of the coexistence criteria derived in Appendix B to active hard spheres will require equations of state for the bulk and interfacial pressure coefficients as _continuous_ functions of \(\rho\) and \(\psi\) for each activity. By selecting an integration path in Eq. (7) such that \(\mu_{0}^{\psi}\left(\mathbf{X}^{*}\right)=0\) everywhere, the coexistence criteria reduce to the following (with the criteria \(\mu_{0}^{\psi}\left(\mathbf{X}^{*}\right)=0\) implicitly satisfied):
\[\mathcal{P}_{0}\left(\rho^{\text{fluid}},\psi^{*}\right)= \mathcal{P}_{0}\left(\rho^{\text{solid}},\psi^{*}\right)=\mathcal{ P}^{\text{coexist}}\] (S19a) \[E_{\rho\rho}\propto\exp\bigg{[}\int d\rho\frac{2B_{\rho\rho\rho}- \partial A_{\rho\rho}/\partial\rho}{A_{\rho\rho}}+\int d\left(\rho\psi^{*} \right)\frac{2B_{\rho\psi\psi}-\partial A_{\rho\psi}/\partial\left(\rho\psi^{ *}\right)}{A_{\rho\psi}}\bigg{]},\] (S19c) \[E_{\rho\psi}= E_{\rho\rho}\frac{A_{\rho\psi}}{A_{\rho\rho}}.\] (S19d)
where we have made the dependencies of \(\psi^{*}\), \(A_{\rho i}\), and \(B_{\rho ij}\) implicit (e.g. \(\psi^{*}\left(\rho\right)\rightarrow\psi^{*}\)). Simulation data can _only_ be obtained for systems in which a state of homogeneous \(\rho\) is at least locally stable. Consequently, it is not possible to obtain the complete relevant functional dependence of the required state functions directly from simulation. However, application of our coexistence criteria in Eq. (S19) only requires knowledge of the equations of state at \(\psi=\psi^{*}(\rho)\) for each density \(\rho\). We therefore proceed by devising a simple simulation protocol, outlined in Section III.1, to obtain as much of this limited data as possible. We subsequently use this data, along with the known physical limits we require our equations of state to capture, in order to develop physical and semi-empirical bulk equations of state in Section III.2. Finally, we approximate the interfacial equations of state in Section III.4.
### Simulation Details
Brownian dynamics simulations [see Eq. (B1)] of active hard spheres were performed following Ref. [6]. The hard-sphere diameter, \(D\), is the only natural length scale in addition to the run length. As a result, the system state is entirely characterized by two dimensionless, intensive, geometric parameters: the volume fraction of spheres \(\phi\equiv\rho\pi D^{3}/6\) and the dimensionless run length \(\ell_{0}/D\)[6].
All simulations were performed using HOOMD-Blue and consisted of at least 55296 particles [7]. The primary purpose of our simulations was to inform the development of our bulk equations of state, \(\psi^{*}\), \(p_{0}^{C}\), and \(p_{0}^{\text{act}}\), by measuring these properties in regions of the phase diagram where the _system is spatially homogeneous_. To determine these equations of state at high volume fractions (where a homogeneous solid is the stable configuration), simulations were initialized in a perfect fcc lattice (\(\phi=\phi^{\text{CP}}=0.74\)). The simulation box was periodically (and isotropically) expanded to reduce the volume fraction in increments of \(\Delta\phi=0.0025\). At each volume fraction, the interaction and active contributions to the dynamic pressure along with the average crystallinity order parameter (taken to be \(\psi^{*}\)) were measured after the system was determined to have relaxed to a steady state. Below an activity-dependent volume fraction, homogeneous states are no longer stable and a fluid nucleates. This volume fraction can be quite high and, above an activity of \(\ell_{0}/D\sim 1\)[6], the _only_ observable stable solid phase is a nearly close-packed fcc crystal (see phase diagram in the main text), severely restricting the amount of high volume fraction data that can be obtained. Figure S1 displays the contributions to the dynamic pressure obtained from this protocol.
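For reference, the decompression schedule is fully specified by the increment \(\Delta\phi\) and the geometric relation \(\phi=\rho\pi D^{3}/6\); e.g., the periodic (cubic) box edge at each target volume fraction follows directly (the lower cutoff below is illustrative, as the stability limit is activity dependent):

```python
import numpy as np

N, D = 55296, 1.0                      # particle number, hard-sphere diameter
phis = np.arange(0.74, 0.50, -0.0025)  # volume fractions visited on expansion
L = (N * np.pi * D**3 / (6.0 * phis)) ** (1.0 / 3.0)  # cubic box edge lengths
```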
We also measure equations of state by initializing the system at a dilute volume fraction (\(\phi=0.05\)) and periodically compressing the simulation box (isotropically) to increase the volume fraction in increments of \(\Delta\phi=0.025\). The locally stable configurations from this protocol corresponded to both globally stable and metastable fluids (\(\psi^{*}\approx 0\)) with the measured pressures (not shown here) consistent with those of Ref. [5]. However, by determining the volume fraction at which these fluids develop a finite \(\psi^{*}\), this protocol provides direct insight into the location of the order-disorder transition, \(\phi^{\rm{ODT}}\).
Our simulations also allow us to extend the solid-fluid boundary reported in Ref. [6] to activities of \(\ell_{0}/D<0.9\). These additional points are reported in the phase diagram displayed in the main text.
### Physical and Semi-Empirical Bulk Equations of State
To construct the ABP solid-fluid phase diagram by applying our derived coexistence criteria, we need equations of state for the preferred crystallinity, \(\psi^{*}\left(\phi;\ell_{0}/D\right)\), and pressures, \(p_{0}^{C}\left(\phi,\psi;\ell_{0}/D\right)\) and \(p_{0}^{\rm{act}}\left(\phi,\psi;\ell_{0}/D\right)\), that accurately describe both fluid (\(\psi\approx 0\)) and solid (\(\psi>0\)) phases at all activities. We combine existing equations of state for an ABP fluid [5] (developed for moderate activities \(\ell_{0}/D>1\)) and an equilibrium hard sphere fluid [8] to develop accurate equations of state for ABP fluids at all activities. To extend these equations of state to describe crystalline systems, we develop auxiliary equations of state [e.g., an equation of state for the maximum possible packing fraction, \(\phi^{\rm{max}}(\psi;\ell_{0}/D)\)] to capture the effects of nonzero \(\psi\).
The active pressure of ABP fluids developed in Ref. [5] (\(p_{0}^{\rm{act}}\)) correctly recovers the ideal gas pressure in the reversible limit (\(\ell_{0}/D\to 0\)), i.e., \(p_{0}^{\rm{act}}=\rho k_{B}T^{\rm{act}}\) where the active energy scale is \(k_{B}T^{\rm{act}}\equiv\zeta U_{0}\ell_{0}/6\). We extend \(p_{0}^{\rm{act}}\) to nonzero \(\psi\) by introducing an equation of state \(\phi^{\rm{max}}\left(\psi;\ell_{0}/D\right)\) capturing the crystallinity-dependent maximum volume fraction:
\[p_{0}^{\rm{act}}\left(\phi,\psi;\ell_{0}/D\right)=\frac{\zeta U_{0}}{\pi D^{2} }\phi\left(\frac{\ell_{0}}{D}\right)\bigg{[}1+\left(1-\exp\left[-2^{7/6}\left( \frac{\ell_{0}}{D}\right)\right]\right)\frac{\phi}{1-\phi/\phi^{\rm{max}} \left(\psi;\ell_{0}/D\right)}\bigg{]}^{-1},\] (S20)
where \(\phi^{\max}\left(\psi=0;\ell_{0}/D\right)=\phi^{\rm RCP}=0.645\) to recover the fluid pressure in Ref. [5] and \(\phi^{\max}\left(\psi=1;\ell_{0}/D\right)=\phi^{\rm CP}=0.74\) when the system has perfect crystalline order. The conservative interaction pressure in Ref. [5] \(\left(p_{0}^{C,{\rm ABP}}\right)\) _does not_ recover the equilibrium hard sphere pressure \(\left(p_{0}^{C,{\rm HS}}\right)\) [8] in the low activity limit. We remedy this by including an interpolation [through an equation of state \(x\left(\ell_{0}/D\right)\)] between the conservative interaction pressures of an ABP fluid and an equilibrium hard sphere fluid. Extending \(p_{0}^{C,{\rm ABP}}\) to nonzero \(\psi\) requires an equation of state capturing an empirical crystallinity-induced slowing of its divergence [\(\beta\left(\psi;\ell_{0}/D\right)\)] in addition to using \(\phi^{\max}\left(\psi;\ell_{0}/D\right)\) as the maximum volume fraction:
\[p_{0}^{C}=x\big{(}\ell_{0}/D\big{)}p_{0}^{C,{\rm ABP}}+\left[1-x \left(\ell_{0}/D\right)\right]p_{0}^{C,{\rm HS}},\] (S21a) \[p_{0}^{C,{\rm ABP}}\left(\phi,\psi;\ell_{0}/D\right)=6\times 2^{-7 /6}\frac{\phi^{2}}{\left[1-\phi/\phi^{\max}\left(\psi;\ell_{0}/D\right) \right]^{\beta\left(\psi;\ell_{0}/D\right)}}\] (S21b) \[p_{0}^{C,{\rm HS}}\left(\phi,\psi;k_{B}T\right)=\frac{k_{B}T}{2 }\phi^{2}\sum_{n=1}^{9}\frac{c_{n}\phi^{n-1}}{\left[1-\phi/\phi^{\max}\left( \psi;\ell_{0}/D\right)\right]^{0.76}},\] (S21c)
where \(\beta\left(\psi=0;\ell_{0}/D\right)=1/2\) to recover the pressure in Ref. [5], \(c_{n}\) are a series of coefficients from Ref. [8] found in Table 1, and \(m_{x}=0.18\) and \(c_{x}=0.63\) are fitted constants. We have introduced the thermal energy \(k_{B}T\), which, in systems of active hard spheres, is generally density (and crystallinity) dependent and can be defined as \(k_{B}T\equiv p_{0}^{\rm act}/\rho\). We find no appreciable differences in the resulting phase diagram, however, when approximating this active temperature with that of ideal ABPs in 3d, \(k_{B}T=k_{B}T^{\rm act}\) [9]. We then use the simpler density-independent effective temperature, \(k_{B}T^{\rm act}\), when constructing phase diagrams, but note that the density dependence of the effective temperature may be more important for other systems.
The equations of state \(x\left(\ell_{0}/D\right)\), \(\phi^{\rm max}\left(\psi;\ell_{0}/D\right)\), and \(\beta\left(\psi;\ell_{0}/D\right)\) were empirically fit:
\[x\big{(}\ell_{0}/D\big{)}=\min\big{(}1,\ \max\big{[}0,\ m_{x}\ln \left(\ell_{0}/D\right)+c_{x}\big{]}\big{)},\] (S22a) \[\phi^{\rm max}\left(\psi;\ell_{0}/D\right)=\phi^{\rm RCP}+\left( \phi^{\rm CP}-\phi^{\rm RCP}\right)\tanh\left(A_{\rm max}\psi\right)\tanh\left( \psi\left[\Delta_{\rm max}+\ln\left(1+\ell_{0}/D\right)\right]\right),\] (S22b) \[\beta\left(\psi;\ell_{0}/D\right)= \beta_{0}-\Theta\left(\psi\right)\tanh\left[\Delta_{\beta}^{(1)}+ A_{\beta}\left(\Delta_{\beta}^{(2)}+\tanh\left(\frac{\ell_{0}-\ell_{0}^{*}}{D} \right)\right)\right],\] (S22c)
where \(\Theta\) is the Heaviside step function and \(m_{x}=0.18\), \(c_{x}=0.63\), \(A_{\rm max}=10\), \(\Delta_{\rm max}=5\), \(\Delta_{\beta}^{(1)}=0.1\), \(A_{\beta}=0.6\), \(\Delta_{\beta}^{(2)}=1\), and \(\ell_{0}^{*}=17.6\)\(D\) are fitted constants; generally, \(\ell_{0}^{*}\) lies between the critical point (\(\ell_{0}^{c}\approx 17.37\)\(D\)) and the triple point (\(\ell_{0}^{\rm tp}\approx 18.26\)\(D\)). The forms of these fits were motivated by the previously discussed physical limits that we require to be met.
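For convenience, Eqs. (S20) and (S22) transcribe directly into code. In the sketch below (ours), the activity is passed in units of \(D\) (i.e., \(\ell_{0}/D\)), \(\beta_{0}=1/2\) is inferred from the stated \(\psi=0\) limit, and the \(\zeta=U_{0}=D=1\) defaults are purely illustrative:

```python
import numpy as np

phi_RCP, phi_CP = 0.645, 0.74

def x_interp(l0):      # Eq. (S22a)
    return np.clip(0.18 * np.log(l0) + 0.63, 0.0, 1.0)

def phi_max(psi, l0):  # Eq. (S22b)
    return phi_RCP + (phi_CP - phi_RCP) * np.tanh(10.0 * psi) \
        * np.tanh(psi * (5.0 + np.log(1.0 + l0)))

def beta(psi, l0):     # Eq. (S22c); beta0 = 1/2 recovers beta(psi=0) = 1/2
    theta = np.where(psi > 0.0, 1.0, 0.0)
    return 0.5 - theta * np.tanh(0.1 + 0.6 * (1.0 + np.tanh(l0 - 17.6)))

def p0_act(phi, psi, l0, zeta=1.0, U0=1.0, D=1.0):  # Eq. (S20)
    denom = 1.0 + (1.0 - np.exp(-2.0**(7.0 / 6.0) * l0)) \
        * phi / (1.0 - phi / phi_max(psi, l0))
    return zeta * U0 / (np.pi * D**2) * phi * l0 / denom
```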
In order to use the equations of state in Eqs. (S20) and (S21) we require an equation of state for \(\psi^{*}\). We fit an expression for the preferred crystallinity \(\psi^{*}\left(\phi;\ell_{0}/D\right)\) [see Fig. 1 in the main text]:
\[\psi^{*}\left(\phi;\ell_{0}/D\right)=\Theta\left(\phi-\phi^{\rm ODT }\right)\tanh\biggl{[}\exp\left(m^{\psi}\phi+c^{\psi}+A^{\psi}\frac{\phi}{ \sqrt{1-\phi/\phi^{\rm CP}}}\right)\\ \times\left(\frac{\Delta_{2}^{\psi}+\ln\left[\Delta_{3}^{\psi}+ \left(\ell_{0}/D\right)^{r_{1}^{\psi}}\right]}{\Delta_{1}^{\psi}+\ell_{0}/D} \right)^{r_{2}^{\psi}\left(1-\phi/\phi^{\rm CP}\right)}\biggr{]},\] (S23)
where \(m^{\psi}=18.8\), \(c^{\psi}=-13.1\), \(A^{\psi}=0.05\), \(\Delta_{1}^{\psi}=0.01\), \(\Delta_{2}^{\psi}=\Delta_{3}^{\psi}=1\), and \(r_{1}^{\psi}=r_{2}^{\psi}=2\), are again constants that have been fit. The equation of state for the order-disorder volume fraction, \(\phi^{\rm ODT}\left(\ell_{0}/D\right)\), [see the inset of Fig. 1 in the main text] was determined to be:
\[\phi^{\rm ODT}\left(\ell_{0}/D\right)=\phi^{\rm ODT}_{\rm eqm}+\frac{\phi^{ \rm RCP}-\phi^{\rm ODT}_{\rm eqm}}{2}\tanh\left[A_{\rm ODT}\ln\left(m_{\rm ODT }\ell_{0}/D+c_{\rm ODT}\right)\right],\] (S24)
where \(\phi^{\rm ODT}_{\rm eqm}=0.515\) is the equilibrium hard sphere \(\phi^{\rm ODT}\) and \(m_{\rm ODT}=3.3\), \(c_{\rm ODT}=0.3\), and \(A_{\rm ODT}=2\) are fitted constants.
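Equations (S23) and (S24) transcribe similarly (again with the activity in units of \(D\); note that `np.where` evaluates both branches, so inputs should satisfy \(\phi<\phi^{\rm CP}\)):

```python
import numpy as np

phi_CP, phi_RCP, phi_ODT_eqm = 0.74, 0.645, 0.515

def phi_ODT(l0):  # Eq. (S24)
    return phi_ODT_eqm + 0.5 * (phi_RCP - phi_ODT_eqm) \
        * np.tanh(2.0 * np.log(3.3 * l0 + 0.3))

def psi_star(phi, l0):  # Eq. (S23)
    amp = np.exp(18.8 * phi - 13.1
                 + 0.05 * phi / np.sqrt(1.0 - phi / phi_CP))
    act = ((1.0 + np.log(1.0 + l0**2)) / (0.01 + l0)) \
        ** (2.0 * (1.0 - phi / phi_CP))
    return np.where(phi > phi_ODT(l0), np.tanh(amp * act), 0.0)
```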
We see that since our equation for \(\psi^{*}\) in Eq. (S23) experiences a discontinuity at \(\phi^{\rm ODT}\), our equation for \(p_{0}^{C}\) in Eq. (S21) does as well. This discontinuity is necessary for passive solid-fluid coexistence, as the pressure (evaluated at \(\psi^{*}\)) must be non-monotonic with increasing \(\rho\) in order to find binodal densities. Importantly, this prevents Eq. (S19b) from being an equal-area construction with respect to \(p_{0}^{C}\) in the high activity limit as \(\mathcal{E}^{\rho}=p_{0}^{C}\) is not a bijection.
Figure S1 shows the fits for \(p_{0}^{C}\) and \(p_{0}^{\rm act}\) at low activities after inserting the expressions for \(x\), \(\phi^{\rm max}\), \(\beta\), \(\phi^{\rm ODT}\), and \(\psi^{*}\) into Eqs. (S20) and (S21). While the fit for \(p_{0}^{C}\) is an overestimate, the qualitative \(\ell_{0}/D\) and \(\phi\) dependent trends are captured. Since \(p_{0}^{\rm act}\ll p_{0}^{C}\) at low activity, \(\mathcal{P}_{0}\) is dominated by \(p_{0}^{C}\) and the underestimation of \(p_{0}^{\rm act}\) is unimportant at these activities.
### Characterization of the "Pseudo" Spinodal
There are two spinodals, or regions of instability, in our dynamic pressure \(\left(\mathcal{P}_{0}=p_{0}^{C}+p_{0}^{\text{act}}\right)\) of active hard spheres described in Section III.2. The first is a true spinodal, indicating that the fluid phase (\(\psi\approx 0\)) at these densities is unstable. The fluid spinodal, which occurs above the critical activity, arises from a non-monotonic active pressure and results in MIPS. The second is a "pseudo"-spinodal which drives crystallization, even in the reversible limit. We distinguish this spinodal as it indicates that states of intermediate density and finite \(\psi\) (which cannot generally be prepared) are unstable.
For a solid-fluid transition to occur for passive hard spheres, \(p_{0}^{C}\) must contain a discontinuity at the order-disorder volume fraction, \(\phi^{\text{ODT}}\). This discontinuity represents a region of instability that occurs over an infinitely narrow range of \(\phi\) where \(\psi^{*}\) adopts a nonzero value, representing a pseudo-spinodal. The pseudo-spinodal widens at finite activity due to the non-monotonicity of \(p_{0}^{\text{act}}\), encompassing a finite range of volume fractions above \(\phi^{\text{ODT}}\). Figure S2 shows the widening of this pseudo-spinodal, showing the active and conservative interaction contributions to \(\mathcal{P}_{0}\) at low, intermediate, and high activity (the same activities as Fig. 2 in the main text).
### Interfacial Equations of State
We look to determine the integral weighting functions \(E_{\rho\rho}\left(\rho,\psi^{*}\right)\) and \(E_{\rho\psi}\left(\rho,\psi^{*}\right)\) of active Brownian spheres through Eqs. (S19c) and (S19d). To do so, we need expressions for the interfacial coefficients \(B_{\rho ij}\) and \(A_{\rho i}\) evaluated at \(\psi^{*}\) at all activities. Equation B3 contains general expressions for these coefficients at finite activity. While \(B_{\rho ij}^{\mathrm{act}}\) and \(A_{\rho i}^{\mathrm{act}}\) can be expressed in terms of the bulk equations of state \(p_{0}^{C}\) and \(p_{0}^{\mathrm{act}}\), the passive terms, \(B_{\rho ij}^{C}\) and \(A_{\rho i}^{C}\), require knowledge of the interfacial coefficient tensor \(K_{ij}\). Once a relationship \(\psi^{*}\left(\rho\right)\) has been established, Eq. (S11) indicates \(K_{\rho\psi}=K_{\psi\rho}=K_{\rho\rho}\left(\partial\left(\rho\psi^{*}\right)/ \partial\rho\right)^{-1}\) and \(K_{\psi\psi}=K_{\rho\rho}\left(\partial\left(\rho\psi^{*}\right)/\partial \rho\right)^{-2}\). Generally, \(K\equiv K_{\rho\rho}\) can be computed from the direct correlation function \(c\left(\mathbf{r};\rho,\psi^{*}\right)\)[2; 10]:
\[K=\frac{k_{B}T}{6}\int d\mathbf{r}r^{2}c\left(\mathbf{r};\rho,\psi^{*}\right),\] (S25)
where we use the active temperature of ideal ABPs as the effective temperature (\(k_{B}T=k_{B}T^{\mathrm{act}}\)) as in Section III.2. Equation (S25) requires knowledge of the direct correlation function. While \(c\left(\mathbf{r};\rho,\psi^{*}\right)\) is generally \(\psi^{*}\) dependent (and may be measured through simulations), we analytically approximate it to be that of a hard sphere fluid in the scaled particle theory [11]:
\[-c^{2}\left(r;\rho\right)=\frac{1}{1-\phi}\left[-\left(\left(D/2- r\right)^{2}+\frac{4\left(D/2-r\right)^{3}}{3r}\right)\delta^{\prime}\left(D/2-r \right)+\frac{\left(D/2-r\right)^{3}}{3}\delta^{\prime\prime}\left(D/2-r \right)\right]\\ +\frac{\rho\pi D^{2}}{\left(1-\phi\right)^{2}}\left[-\left(D/2- r\right)^{2}\delta\left(D/2-r\right)+\frac{\left(D/2-r\right)^{3}}{3}\delta^{ \prime}\left(D/2-r\right)\right]\\ +\left(\frac{\rho D/2}{\left(1-\phi\right)^{2}}+\frac{\left(\rho \pi D^{2}\right)^{2}}{4\pi\left(1-\phi\right)^{3}}\right)\frac{8\pi\left(D/2- r\right)^{3}\delta\left(D/2-r\right)}{3}\\ +\left(\frac{\rho}{\left(1-\phi\right)^{2}}+\frac{2\rho^{2}\pi D ^{3}/2}{\left(1-\phi\right)^{3}}+\frac{\left(\rho\pi D^{2}\right)^{3}}{4\pi \left(1-\phi\right)^{4}}\right)\Theta\left(D/2-r\right)\frac{4\pi\left(D/2-r \right)^{3}}{3}\] (S26)
where \(r\equiv|\mathbf{r}|\) and the prime indicates a derivative. We then numerically determine \(K\) by integrating Eq. (S25) using the direct correlation function in Eq. (S26). This, combined with the bulk equations of state developed in Section III.2, allows us to numerically determine the integration weight functions \(E_{\rho\rho}\) and \(E_{\rho\psi}\) through Eqs. (S19c) and (S19d). We now have all of the equations of state necessary to construct active solid-fluid phase diagrams using Eqs. (S19a) and (S19b). The phase diagram of active Brownian spheres resulting from these equations of state and our nonequilibrium criteria is displayed in the main text [see Fig. 3].
# Directed Sets of Topology - Tukey Representation and Rejection

Ziqin Feng, Paul Gartside
###### Abstract
Every directed set is Tukey equivalent to (a) the family of all compact subsets, ordered by inclusion, of a (locally compact) space, to (b) a neighborhood filter, ordered by reverse inclusion, of a point (of a compact space, and of a topological group), and to (c) the universal uniformity, ordered by reverse inclusion, of a space. Two directed sets are Tukey equivalent if they are cofinally equivalent in the sense that they can both be order embedded _cofinally_ in a third directed set.
In contrast, any totally bounded uniformity is Tukey equivalent to \([\kappa]^{<\omega}\), the collection of all finite subsets of \(\kappa\), where \(\kappa\) is the cofinality of the uniformity. All other Tukey types are 'rejected' by totally bounded uniformities. Equivalently, a compact space \(X\) has weight (minimal size of a base) equal to \(\kappa\) if and only if the neighborhood filter of the diagonal is Tukey equivalent to \([\kappa]^{<\omega}\).
A number of questions from the literature are answered with the aid of the above results.
Keywords: Directed set, Tukey order, compact set, neighborhood filter, universal uniformity, totally bounded uniformity.
MSC Classification: 03E04, 06A07, 54D30, 54E15, 54E35, 54F05.
## 1 Introduction
Two directed sets are _Tukey equivalent_ if they are cofinally equivalent in the sense that they can both be order embedded _cofinally_ in a third directed set. Tukey equivalence [15] was originally introduced, early in the 20th century, as a tool to understand convergence in general topological spaces; however, it was quickly seen to have broad applicability in comparing partial orders. Key to the utility of Tukey equivalence (and more generally, the Tukey order) is that, because it focuses on what happens cofinally, it is sufficiently coarse to allow comparison of very different directed sets, but nevertheless preserves many order invariants. Fremlin [8] was the first to realize the relevance of the Tukey order in analysis, showing that a fundamental result of Bartoszynski and Raisonnier & Stern on additivity of the measure and category ideals was due to the Tukey order relation between the relevant ideals.
Directed sets arise naturally in topology in a variety of contexts. (In this paper all topological spaces are Tychonoff.) Here we show that every directed set can be represented, up to Tukey equivalence, by such a topological directed set. Specifically, Theorem 2.2 states that for every directed set \(P\) there is a locally compact space \(X_{P}\), a compact space \(K_{P}\) and point \(x_{P}\), and a space \(Y_{P}\) such that \(P\) is Tukey equivalent to \(\mathcal{K}(X_{P})\) (all compact subsets of \(X_{P}\) ordered by inclusion), \(\mathcal{N}_{x_{P}}^{K_{P}}\) (all open neighborhoods of \(x_{P}\) in \(K_{P}\) ordered by reverse inclusion), and \(\mathcal{U}_{Y_{P}}\) (the universal uniformity of \(Y_{P}\) ordered by reverse inclusion). In principle, then, the study of arbitrary directed sets up to Tukey equivalence can be restricted simply to that of \(\mathcal{K}(X)\) (or neighborhood filters of points in a compact space, or universal uniformities). Fortunately the Tukey types of the directed set \(\mathcal{K}(X)\) have been studied intensively [7, 10, 11, 12, 5].
Strikingly, in the opposite direction we show, see Theorem 3.1, that any totally bounded uniformity is Tukey equivalent to \([\kappa]^{<\omega}\), the collection of all finite subsets of \(\kappa\), the cofinality of the uniformity - all other Tukey types are 'rejected'. A more topological interpretation of this result, given in Theorem 3.5, is that a compact space \(X\) has weight (minimal size of a base) equal to \(\kappa\) if and only if the neighborhood filter of the diagonal is Tukey equivalent to \([\kappa]^{<\omega}\). Some variations and open problems on this theme are discussed in Section 3.2.
A space \(X\) is said to have a \(P\)_-base_, where \(P\) is a directed set, if every point \(x\) has a neighborhood base, \(\{U_{p}:p\in P\}\), where \(U_{p}\subseteq U_{p^{\prime}}\) if \(p\geq p^{\prime}\). It is easy to see that a space with a compatible uniformity Tukey equivalent to some \(P\) has a \(P\)-base, and it is immediate that if a space has a \(P\)-base then it contains a point whose neighborhood filter is Tukey equivalent to \(P\). It follows from the results outlined above that although in general (even in topological groups) there are spaces with a \(P\)-base for every \(P\) (up to Tukey equivalence), for _compact_ spaces having a \(P\)-base for some \(P\) is delicately balanced between 'everything goes' and 'everything rejected except \([\kappa]^{<\omega}\)'. This helps explain the recent surge of interest [1, 2, 3, 6] in spaces with \(P\)-bases, especially compacta, for certain nice \(P\). One particular outstanding question is whether compact scattered spaces of countable scattered height with a \(P\)-base, where \(P\) has calibre \((\omega_{1},\omega)\), are necessarily countable. Theorem 3.7 gives a positive answer under a mild additional condition.
In the final section we apply the results above to answer a number (eleven) of questions from the literature.
## 2 Representing Directed Sets
### Directed Sets from Topology
For any space \(X\), \(\mathcal{K}(X)\) will denote the directed set of all compact subsets of \(X\), ordered by inclusion. If \(A\) is a subset of \(X\), then \(\mathcal{N}_{A}^{X}\) will denote the neighborhood filter of \(A\) in \(X\), ordered by reverse inclusion. We abbreviate \(\mathcal{N}_{\{x\}}^{X}\), the neighborhood filter of a point \(x\) in \(X\), to \(\mathcal{N}_{x}^{X}\). Along with neighborhood filters of points, we pay particular attention to the neighborhood filter, \(\mathcal{N}_{\Delta}^{X^{2}}\), of the diagonal, and certain subfilters, specifically
compatible uniformities. Recall that a (compatible) uniformity on \(X\) is a subfilter, say \(\mathcal{U}\), of \(\mathcal{N}_{\Delta}^{X^{2}}\) (such that for every \(x\) in \(X\) the family of sets \(U[x]=\{y:(x,y)\in U\}\), where \(U\) is in \(\mathcal{U}\), is a neighborhood base at \(x\) in \(X\)) where, for any \(U\) we have that \(U^{-1}=\{(y,x):(x,y)\in U\}\) is in \(\mathcal{U}\) and there is a \(V\) in \(\mathcal{U}\) such that \(V\circ V\subseteq U\), where \(V\circ V=\{(x,z):(x,y),(y,z)\in V\}\). The universal uniformity, \(\mathcal{U}_{X}\), is the finest compatible uniformity on \(X\). We note that if \(X\) is paracompact then the universal uniformity and the neighborhood filter of the diagonal coincide. A uniformity, \(\mathcal{U}\), on \(X\) is totally bounded if for every \(U\) in \(\mathcal{U}\) there is a finite subset \(F\) of \(X\) such that \(\{U[x]:x\in F\}\) covers \(X\). Every space has at least one compatible totally bounded uniformity. Every uniformity, \(\mathcal{U}\), has a completion, \(\widehat{\mathcal{U}}\), defined on a superset, \(\widehat{X}\), of \(X\); an alternative description of the totally bounded uniformities is that they are precisely those whose completion is compact. (See [4] for a general reference on all topics mentioned above.)
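(For instance, for a metric space \((X,d)\) the sets \(U_{\epsilon}=\{(x,y):d(x,y)<\epsilon\}\), for \(\epsilon>0\), generate a compatible uniformity, and it is totally bounded exactly when \((X,d)\) is totally bounded in the usual metric sense.)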
Let \(P\) and \(Q\) be directed sets. Then \(Q\) is a _Tukey quotient_ of \(P\), denoted \(P\geq_{T}Q\), if there is a _cofinal map_ \(\phi\) from \(P\) to \(Q\), in other words: for every cofinal subset \(C\), say, of \(P\) we have that \(\phi(C)\) is cofinal in \(Q\). Any map \(\phi:P\to Q\) which is order-preserving and has cofinal image is a cofinal map. A map \(\psi:Q\to P\) is called a _Tukey map_ if for every unbounded subset, \(U\) say, of \(Q\) we have that \(\psi(U)\) is unbounded in \(P\). It turns out that there is a Tukey quotient from \(P\) to \(Q\) if and only if there is a Tukey map from \(Q\) to \(P\). Two directed sets \(P\) and \(Q\) are _Tukey equivalent_, denoted \(P=_{T}Q\), if and only if \(P\geq_{T}Q\) and \(Q\geq_{T}P\). (This is equivalent to the definition of Tukey equivalence stated in the introduction, namely \(P\) and \(Q\) embed cofinally in a third directed set.)
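For instance, \(\omega=_{T}\omega\times\omega\): the diagonal map \(\phi:\omega\to\omega\times\omega\), \(\phi(n)=(n,n)\), is order-preserving with cofinal image (any \((m,n)\) lies below \(\phi(\max(m,n))\)), so \(\omega\geq_{T}\omega\times\omega\), while projection onto the first coordinate is an order-preserving map of \(\omega\times\omega\) onto \(\omega\), giving \(\omega\times\omega\geq_{T}\omega\).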
**Lemma 2.1**.: _Let \(\mathcal{U}\) be a uniformity on \(X\). Then the uniformity and its completion are Tukey equivalent, \(\mathcal{U}=_{T}\widehat{\mathcal{U}}\)._
To see this, define \(\phi:\mathcal{U}\to\widehat{\mathcal{U}}\) by \(\phi(U)=\overline{U}\) where the closure is taken in \(\widehat{X}\), and \(\psi:\widehat{\mathcal{U}}\to\mathcal{U}\) by \(\psi(\widehat{U})=\widehat{U}\cap X^{2}\). Then \(\phi\) and \(\psi\) are order-preserving and have cofinal image.
Note that in a topological group \(G\), with identity element \(e\), the left translates (say) of neighborhoods of the identity give a compatible uniformity, \(\mathcal{U}_{L}\), which is order isomorphic to \(\mathcal{N}_{e}^{G}\). Also, recall that in a compact space all compatible uniformities are equal, and coincide with the family of all neighborhoods of the diagonal. Now we show that _any_ directed set can be represented, up to Tukey equivalence, by each of the types of directed sets arising in topology introduced above.
**Theorem 2.2**.: _Let \(P\) be a directed set._
_(1) \(P=_{T}\mathcal{K}(X_{P})\) for some locally compact Hausdorff space \(X_{P}\)._
_(2) \(P=_{T}\mathcal{N}_{x_{P}}^{K_{P}}\) for some compact Hausdorff space \(K_{P}\) and point \(x_{P}\) in \(K_{P}\)._
_(3) \(P=_{T}\mathcal{N}_{e}^{G_{P}}\) for some topological group \(G_{P}\) with identity \(e\)._
_(4) \(P\times\omega=_{T}\mathcal{N}_{0}^{L_{P}}\) for some locally convex topological vector space \(L_{P}\)._
_(5) \(P=_{T}\mathcal{U}_{Y_{P}}\), the fine uniformity, for some space \(Y_{P}\)._
_(6) \(P=_{T}\mathcal{N}_{\Delta}^{Y_{P}^{2}}\), for some space \(Y_{P}\)._
Proof.: First we establish (1). Let \(D(P)\) be \(P\) with the discrete topology. For a \(p\) in \(P\) let \(K_{p}=\overline{\downarrow p}\) where \(\,\downarrow p\,=\{p^{\prime}\in P:p^{\prime}\leq p\}\) and the closure is
in \(\beta D(P)\). Note that \(K_{p}\) is compact and open. Let \(X_{P}=\bigcup\{K_{p}:p\in P\}\) considered as a subspace of \(\beta D(P)\). Then \(X_{P}\) is locally compact and Hausdorff. The map \(\phi(p)=K_{p}\) is an order-embedding of \(P\) into \(\mathcal{K}(X_{P})\) whose image is cofinal. To see cofinality note that \(\{K_{p}:p\in P\}\) is an open cover of \(X_{P}\) so any compact subset, \(K\) say, is contained in finitely many \(K_{p_{1}},\ldots,K_{p_{n}}\), and now if \(p\) is an upper bound of \(p_{1},\ldots,p_{n}\) then \(K\subseteq\phi(p)\). Hence \(P=_{T}\mathcal{K}(X_{P})\).
Now for (2). By (1) we know there is a locally compact Hausdorff space \(X_{P}\) such that \(P\) and \(\mathcal{K}(X_{P})\) are Tukey equivalent. Let \(K_{P}\) be the one-point compactification of \(X_{P}\), and let \(x_{P}\) be the point at infinity. Then clearly \((\mathcal{K}(X_{P}),\subseteq)\) and \((\mathcal{N}_{x_{P}}^{K_{P}},\supseteq)\) are order isomorphic, and so Tukey equivalent.
For (3), let \(X_{P}\) be the space in part (1), so \(\mathcal{K}(X_{P})=_{T}P\), and note it is zero dimensional. Set \(G_{P}=C_{k}(X_{P},\mathbb{Z}_{2})\), the group under co-ordinatewise addition modulo 2 of all continuous maps of \(X_{P}\) into the discrete two point space, with the compact-open topology. The identity of \(G_{P}\) is \(\mathbf{0}\), the constant zero function. Standard basic open neighborhoods of \(\mathbf{0}\) have the form \(B(K)=\{f\in C(X,\mathbb{Z}_{2}):f(K)=0\}\), for compact subsets \(K\) of \(X_{P}\). Clearly, if \(K\subseteq K^{\prime}\) then \(B(K)\supseteq B(K^{\prime})\), and if \(x\in K\setminus K^{\prime}\) then the characteristic function of a clopen neighborhood of \(x\) disjoint from \(K^{\prime}\) is in \(B(K^{\prime})\) but not \(B(K)\). Thus \(\phi:\mathcal{K}(X_{P})\to\mathcal{N}_{\mathbf{0}}^{G_{P}}\) is an order isomorphism of \((\mathcal{K}(X_{P}),\subseteq)\) with a cofinal subset of \((\mathcal{N}_{\mathbf{0}}^{G_{P}},\supseteq)\), and thus this latter directed set is Tukey equivalent to \(P\).
For (4), let \(X_{P}\) be the space of part (1), and set \(L_{P}=C_{k}(X_{P})\). As shown in [13]\(\mathcal{N}_{0}^{L_{P}}=_{T}\mathcal{K}(X_{P})\times\omega\), and the claim follows.
Finally, we establish (5) and (6). We can suppose that \(P\) is the neighborhood filter, \(\mathcal{N}_{x_{P}}^{X_{P}}\), of part (2). Rename the point \(x_{P}\) to be \(y_{P}\). Let \(Y_{P}\) have underlying set \(X_{P}\), and topology obtained from \(X_{P}\) by isolating all points except \(y_{P}\). Then \(\mathcal{N}_{y_{P}}^{Y_{P}}=_{T}P\). Now \(Y_{P}\) is paracompact, so \(\mathcal{U}_{Y_{P}}=\mathcal{N}_{\Delta}^{Y_{P}^{2}}\). The collection of all \(\Delta\cup U^{2}\), where \(U\) is in \(\mathcal{N}_{y_{P}}^{Y_{P}}\), is cofinal in \(\mathcal{N}_{\Delta}^{Y_{P}^{2}}\) and order isomorphic to \(\mathcal{N}_{y_{P}}^{Y_{P}}\), which is Tukey equivalent to \(P\). Claims (5) and (6) follow.
Regarding part (4) above we know (see [12]) which directed sets \(Q\) are Tukey equivalent to some \(P\times\omega\).
**Lemma 2.3**.: _Let \(Q\) be a directed set. Then the following are equivalent: (i) \(Q=_{T}P\times\omega\) for some directed set \(P\), (ii) \(Q\geq_{T}\omega\), and (iii) \(Q\) is not countably directed._
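For example, \(\omega^{\omega}=_{T}\omega^{\omega}\times\omega\), since projecting onto any fixed coordinate shows \(\omega^{\omega}\geq_{T}\omega\).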
### Cofinality, Calibres, and Other Properties of \(\mathcal{K}(X)\)
The _cofinality_ of a directed set \(P\), denoted \(\operatorname{cof}(P)\), is the minimal size of a cofinal subset of \(P\). If \(P\geq_{T}Q\) then \(\operatorname{cof}(P)\geq\operatorname{cof}(Q)\). Indeed, \(\operatorname{cof}(P)\leq\kappa\) if and only if \([\kappa]^{<\omega}\geq_{T}P\). A directed set \(P\) has _calibre_\((\mu,\lambda)\) where \(\mu\) and \(\lambda\) are cardinals, \(\mu\) regular, if every \(\mu\)-sized subset of \(P\) contains a \(\lambda\)-sized subset which is bounded. If \(P\geq_{T}Q\) and \(P\) has calibre \((\mu,\lambda)\) then so does \(Q\). Indeed, \(P\not\geq_{T}[\mu]^{<\omega}\) if and only if \(P\) has calibre \((\mu,\omega)\).
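For example, \([\omega_{1}]^{<\omega}\) fails calibre \((\omega_{1},\omega)\): the singletons \(\{\alpha\}\), for \(\alpha<\omega_{1}\), form an \(\omega_{1}\)-sized subset with no infinite bounded subfamily, since an upper bound would be a finite set containing infinitely many ordinals.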
A _topological directed set_ is a directed set with a topology. For example, if \(X\) is any space, then \(\mathcal{K}(X)\) is directed as usual by inclusion, and is
naturally equipped with the Vietoris topology. A topological directed set \(P\) is said to be _KSB_ (compact sets bounded) if every compact subset of \(P\) is bounded (above), and _DK_ (down sets compact) if every down set, \(\downarrow p=\{p^{\prime}\in P:p^{\prime}\leq p\}\), of \(P\) is compact. Observe that \(\mathcal{K}(X)\) is KSB and DK.
**Lemma 2.4**.: _Let \(Q\) be a topological directed set. If \(Q\) is locally compact, \(e(Q)\leq\kappa\), and \(Q\) is KSB, then \(Q\) has calibre \((\kappa^{+},\omega)\)._
Proof.: Let \(S\) be a subset of \(Q\) of size \(\kappa^{+}\). We show \(S\) has an infinite subset with an upper bound. As \(e(Q)\leq\kappa\), \(S\) is not closed and discrete, so there is a \(q\) in \(Q\) such that \(q\) is in the closure of \(S\setminus\{q\}\). Since \(Q\) is locally compact there is a compact neighborhood \(C\) of \(q\). Then \(C\) contains an infinite subset \(S_{0}\) of \(S\) (otherwise some smaller neighborhood of \(q\) would miss \(S\setminus\{q\}\)). Then \(\overline{S_{0}}\) is compact, so by KSB, \(S_{0}\) has an upper bound.
A space \(X\) is said to have a \(P\)-ordered compact cover, where \(P\) is a directed set, if it has a compact cover \(\{K_{p}:p\in P\}\) where \(p\leq p^{\prime}\) implies \(K_{p}\subseteq K_{p^{\prime}}\). Clearly if \(P\geq_{T}\mathcal{K}(X)\) then \(X\) has a \(P\)-ordered compact cover. Let us say that a space \(X\) has _relative calibre \((\mu,\omega)\)_ (in \(\mathcal{K}(X)\)) if every subset \(S\) of \(X\) of size \(\mu\) has an infinite subset \(S_{0}\) with compact closure. Clearly, if \(\mathcal{K}(X)\) has calibre \((\mu,\omega)\) then \(X\) has relative calibre \((\mu,\omega)\). But also easy to check:
**Lemma 2.5**.: _If a space \(X\) has a \(P\)-ordered compact cover, where \(P\) has calibre \((\mu,\omega)\) then \(X\) is relative calibre \((\mu,\omega)\)._
Define the _extent_ of a space \(Y\) to be \(e(Y)=\sup\{|E|:E\text{ is closed and discrete in }Y\}\). Observe \(Y\) has countable extent if and only if \(e(Y)\leq\aleph_{0}\).
**Lemma 2.6**.: _If \(X\) has relative calibre \((\kappa^{+},\omega)\) then \(e(X)\leq\kappa\)._
Proof.: To show that \(e(X)\leq\kappa\) we need to show that no subset of \(X\) of size \(>\kappa\) is closed discrete. Take any subset \(S\) of \(X\) of size \(>\kappa\). Then by relative calibre \((\kappa^{+},\omega)\), there is an infinite subset \(S_{0}\) of \(S\) with compact closure. Then \(S_{0}\), and \(S\), cannot be closed and discrete.
**Lemma 2.7**.: _For any space \(X\) we have \(\mathcal{K}(X)=_{T}\mathcal{K}(\mathcal{K}(X))\)._
Proof.: Since \(\mathcal{K}(X)\) embeds as a closed subspace of \(\mathcal{K}(\mathcal{K}(X))\), certainly \(\mathcal{K}(X)\leq_{T}\mathcal{K}(\mathcal{K}(X))\). For the converse, define \(\phi:\mathcal{K}(X)\to\mathcal{K}(\mathcal{K}(X))\) by \(\phi(K)=\downarrow K\,=\{L\in\mathcal{K}(X):L\subseteq K\}\). Recalling that \(\mathcal{K}(\mathcal{K}(X))\) is DK, we see \(\phi\) does map into \(\mathcal{K}(\mathcal{K}(X))\). Clearly \(\phi\) is order-preserving. For any \(\mathcal{K}\) in \(\mathcal{K}(\mathcal{K}(X))\), we know \(\bigcup\mathcal{K}\) is a compact subset of \(X\), then \(\phi(\bigcup\mathcal{K})\supseteq\mathcal{K}\), and so \(\phi\) has cofinal image.
**Lemma 2.8**.: _Let \(X\) be locally compact. Then \(\mathcal{K}(X)\) has calibre \((\kappa^{+},\omega)\) if and only if \(e(\mathcal{K}(X))\leq\kappa\)._
Proof.: Suppose, first, that \(\mathcal{K}(X)\) has calibre \((\kappa^{+},\omega)\). Then \(\mathcal{K}(\mathcal{K}(X))\) has calibre \((\kappa^{+},\omega)\), and \(\mathcal{K}(X)\) has relative calibre \((\kappa^{+},\omega)\) in \(\mathcal{K}(\mathcal{K}(X))\). Hence \(e(\mathcal{K}(X))\leq\kappa\).
Now suppose, \(e(\mathcal{K}(X))\leq\kappa\). Apply Lemma 2.4.
## 3 'Rejecting' Directed Sets in the Compact Case
### Totally Bounded Uniformities
**Theorem 3.1**.: _Let \(X\) be a set. For any totally bounded uniformity, \(\mathcal{U}\), on \(X\) we have \(\mathcal{U}=_{T}[\kappa]^{<\omega}\) where \(\kappa=\operatorname{cof}(\mathcal{U})\). More generally, for any uniformity \(\mathcal{U}\) on \(X\) and totally bounded subset \(S\) we have \(\mathcal{U}\upharpoonright S=_{T}[\kappa]^{<\omega}\) where \(\kappa=\operatorname{cof}(\mathcal{U}\upharpoonright S)\)._
Proof.: Evidently, the 'more generally' claim follows from the first part, which we now prove.
For any directed set \(P\) we have \([\operatorname{cof}(P)]^{<\omega}\geq_{T}P\), so certainly \([\kappa]^{<\omega}\geq_{T}\mathcal{U}\). We show \([\kappa]^{<\omega}\leq_{T}\mathcal{U}\). To do this it suffices to show that there is a \(\kappa\)-sized subcollection, \(\{U_{\alpha}:\alpha<\kappa\}\), of \(\mathcal{U}\) such that no infinite subset has an upper bound, for then \(\psi:[\kappa]^{<\omega}\to\mathcal{U}\) defined by \(\psi(F)=\bigcap_{\alpha\in F}U_{\alpha}\) is a Tukey map (carries unbounded sets to unbounded sets).
We may assume \(X\) is compact in the topology induced by \(\mathcal{U}\), and so \(\mathcal{U}=\mathcal{N}_{\Delta}^{X^{2}}\). Indeed we can replace \(X\) with the space completion, \(\widehat{X}\), which is compact as \(\mathcal{U}\) is totally bounded, and \(\mathcal{U}\) with its completion \(\widehat{\mathcal{U}}\), which is totally bounded. To see this, recall, see Lemma 2.1, that \(\mathcal{U}=_{T}\widehat{\mathcal{U}}\), so, in particular, \(\kappa=\operatorname{cof}(\mathcal{U})=\operatorname{cof}(\widehat{\mathcal{U}})\).
Construct by recursion a family of pairs \((x_{\alpha,1},x_{\alpha,2})\) in \(X^{2}\) and \(U_{\alpha}\) in \(\mathcal{U}\), for \(\alpha<\kappa\), such that \((\bigstar)\): \(x_{\alpha,i}\neq x_{\beta,j}\) if \(i\neq j\) or \(\alpha\neq\beta\) and, if \(x_{\beta,i}\in U_{\alpha}[x_{\alpha,i}]\) and \(x_{\alpha,i}\in U_{\beta}[x_{\beta,i}]\) for \(i=1,2\) then \(\alpha=\beta\). To facilitate the construction we additionally ensure that for every \(\alpha\) we have \(\overline{U_{\alpha}[x_{\alpha,1}]}\cap\overline{U_{\alpha}[x_{\alpha,2}]}=\emptyset\).
At stage \(\alpha\) of the recursive construction first observe that \(\{U_{\beta}[x_{\beta,1}]\times U_{\beta}[x_{\beta,2}]:\beta<\alpha\}\) is not a cover of \(X^{2}\setminus\Delta\). Otherwise every compact (equivalently, closed) subset of \(X^{2}\setminus\Delta\) is contained in some finite union of these open rectangles, and so the collection of all complements in \(X^{2}\) of finite unions of closures of rectangles \(U_{\beta}[x_{\beta,1}]\times U_{\beta}[x_{\beta,2}]\), where \(\beta<\alpha\), would be a base for \(\mathcal{N}_{\Delta}^{X^{2}}=\mathcal{U}\) of size strictly less than \(\kappa\), which is the cofinality of \(\mathcal{U}\) - contradiction. Pick \((x_{\alpha,1},x_{\alpha,2})\) in \(X^{2}\setminus\Delta\) but not in \(\bigcup_{\beta<\alpha}(U_{\beta}[x_{\beta,1}]\times U_{\beta}[x_{\beta,2}])\). Pick \(U_{\alpha}\) in \(\mathcal{U}\) so that \(\overline{U_{\alpha}[x_{\alpha,1}]}\cap\overline{U_{\alpha}[x_{\alpha,2}]}=\emptyset\). By construction, \(x_{\alpha,1}\neq x_{\alpha,2}\), and for no \(\beta<\alpha\) do we have \(x_{\alpha,i}\in U_{\beta}[x_{\beta,i}]\) for \(i=1,2\). Hence \((\bigstar)\) holds.
To complete the proof it remains to show that \(\{U_{\alpha}:\alpha<\kappa\}\) contains no infinite bounded subcollection. For a contradiction, suppose \(A\) is an infinite subset of \(\kappa\) such that \(U_{A}=\bigcap_{\alpha\in A}U_{\alpha}\) is in \(\mathcal{U}\). Pick \(U_{\infty}\) in the uniformity \(\mathcal{U}\) so that \(U_{\infty}^{-1}\circ U_{\infty}\subseteq U_{A}\). As \(\mathcal{U}\) is totally bounded, fix finite \(F\) such that \(U_{\infty}[F]=X\). Then \(\{U_{\infty}[a]\times U_{\infty}[b]:a,b\in F\}\) is a finite cover of \(X^{2}\), so there is an infinite subset \(A^{\prime}\) of \(A\) and \((x_{\infty,1},x_{\infty,2})\) so that for all \(\alpha\) in \(A^{\prime}\) we have \((x_{\alpha,1},x_{\alpha,2})\in U_{\infty}[x_{\infty,1}]\times U_{\infty}[x_{ \infty,2}]\).
Take any two distinct \(\alpha,\beta\) from \(A^{\prime}\). Then, for \(i=1,2\), we have \(x_{\alpha,i}\in U_{\infty}[x_{\infty,i}]\) so (a) \((x_{\alpha,i},x_{\infty,i})\in U_{\infty}^{-1}\), and \(x_{\beta,i}\in U_{\infty}[x_{\infty,i}]\) so (b) \((x_{\infty,i},x_{\beta,i})\in U_{\infty}\). From (a) and (b), for \(i=1,2\), we have \((x_{\alpha,i},x_{\beta,i})\in U_{\infty}^{-1}\circ U_{\infty}\subseteq U_{\alpha}\), so \(x_{\beta,i}\) is in \(U_{\alpha}[x_{\alpha,i}]\). Interchanging \(\alpha\) and \(\beta\) in the
argument above, for \(i=1,2\), along with \(x_{\beta,i}\in U_{\alpha}[x_{\alpha,i}]\), we also have \(x_{\alpha,i}\in U_{\beta}[x_{\beta,i}]\), and this contradicts \((\bigstar)\), as desired.
We now re-interpret this theorem, a combinatorial statement about the Tukey type of certain uniformities (namely the totally bounded ones), topologically and in terms of calibres. Recalling that \(P\not\geq_{T}[\mu]^{<\omega}\) if and only if \(P\) has calibre \((\mu,\omega)\), and applying this to \(\mu=\kappa^{+}\), the following is immediate.
**Theorem 3.2**.: _Suppose \(\mathcal{U}\) is a compatible uniformity for a space \(X\). If \(A\) is a totally bounded subset and \(\mathcal{U}\) is calibre \((\kappa^{+},\omega)\) then \(w(A)\leq\kappa\)._
Totally bounded subsets of uniform spaces are also called _precompact_ subsets (because they are precisely those subsets whose closure in the completion is compact). Pseudocompact, and, a fortiori, countably compact and compact subsets are totally bounded with respect to any compatible uniformity.
**Corollary 3.3**.: _Suppose \(\mathcal{U}\) is a compatible uniformity for a space \(X\). If \(A\) is a pseudocompact subspace and \(\mathcal{U}\) is calibre \((\kappa^{+},\omega)\) then \(w(A)\leq\kappa\)._
_In particular, if \(\mathcal{U}\) is calibre \((\omega_{1},\omega)\) then every pseudocompact (and, countably compact or compact) subspace of \(X\) is (compact, second countable and) metrizable._
**Corollary 3.4**.: _Let \(G\) be a topological group with identity \(e\). If \(A\) is a precompact subset of \(G\), and \(\mathcal{N}_{e}^{G}\) is calibre \((\kappa^{+},\omega)\) then \(w(A)\leq\kappa\)._
Compact spaces have a unique compatible uniformity, which coincides with all neighborhoods of the diagonal, and so with complements of compact subsets of the square disjoint from the diagonal. Consequently we can rephrase again for compact spaces, or subspaces, without (direct) reference to a uniformity. Further recall (Lemma 2.8), if \(Y\) is locally compact then \(\mathcal{K}(Y)\) has calibre \((\kappa^{+},\omega)\) if and only if the extent of \(\mathcal{K}(Y)\), \(e(\mathcal{K}(Y))\), is no more than \(\kappa\).
**Theorem 3.5**.: _Let \(X\) be a space._
_(1) Suppose \(X\) is compact. Then \(w(X)=\kappa\) if and only if \(\mathcal{N}_{\Delta}^{X^{2}}=_{T}[\kappa]^{<\omega}\)._
_Alternatively phrased, \(w(X)\leq\kappa\) if and only if \(\mathcal{N}_{\Delta}^{X^{2}}\) has calibre \((\kappa^{+},\omega)\). And a second alternative phrasing, \(w(X)=e(\mathcal{K}(X^{2}\setminus\Delta))\)._
_(2) More generally, if \(A\) is a compact subset of a space \(X\), and \(\mathcal{N}_{\Delta}^{X^{2}}\) is calibre \((\kappa^{+},\omega)\) then \(w(A)\leq\kappa\)._
_Both (1) and (2) hold for any directed set Tukey equivalent to \(\mathcal{N}_{\Delta}^{X^{2}}\), including \(\mathcal{U}_{X}\), or any other compatible uniformity, and \(\mathcal{K}(X^{2}\setminus\Delta)\)._
### Variations and Problems
Comparing what we can prove about pseudocompact and countably compact subsets of uniform spaces with calibre \((\omega_{1},\omega)\), with what we can show for compact subsets of a space, \(X\), with \(\mathcal{N}_{\Delta}^{X^{2}}\) or \(\mathcal{K}(X^{2}\setminus\Delta)\) having calibre \((\omega_{1},\omega)\), the following questions naturally arise.
**Question 1**.: _Let \(X\) be a space. Suppose one of the directed sets specified has calibre \((\omega_{1},\omega)\): (A) \(\mathcal{N}_{\Delta}^{X^{2}}\) or (B) \(\mathcal{K}(X^{2}\setminus\Delta)\)._
_(i) If \(X\) is countably compact then is \(X\) (compact, second countable and) metrizable?_
_(ii) If \(X\) is pseudocompact then is \(X\) (compact, second countable and) metrizable?_
_(iii) If \(A\) is a countably compact (closed) subset of \(X\) then is \(A\) (compact, second countable and) metrizable?_
_(iv) If \(A\) is a pseudocompact (closed) subset of \(X\) then is \(A\) (compact, second countable and) metrizable?_
Next we answer (A)(i) positively. The remaining questions are wide open.
**Proposition 3.6**.: _Let \(X\) be a space such that \(\mathcal{N}_{\Delta}^{X^{2}}\) has calibre \((\omega_{1},\omega)\). If \(X\) is countably compact then \(X\) is compact (and metrizable)._
Proof.: We show the contrapositive. So suppose \(X\) is not compact. Then either \(X\) is not countably compact, and we are done, or \(X\) is not Lindelöf, so we can find a strictly increasing sequence of open sets, \(\{U_{\alpha}:\alpha<\kappa\}\), covering \(X\) with no countable subcover (so \(\kappa\) is uncountable). Fix \(y_{\alpha}\in U_{\alpha}\setminus\bigcup_{\beta<\alpha}U_{\beta}\). For each \(\alpha<\kappa\), let \(V_{\alpha}=X\setminus\{y_{\alpha}\}\) and \(N_{\alpha}=U_{\alpha}^{2}\cup V_{\alpha}^{2}\), which is a neighborhood of the diagonal. As \(\mathcal{N}_{\Delta}^{X^{2}}\) has calibre \((\omega_{1},\omega)\), we know there is an infinite \(A\subseteq\kappa\) and a neighborhood \(N\) of the diagonal such that \(N_{\alpha}\supseteq N\) for all \(\alpha\in A\). Without loss of generality, we may assume \(N=\bigcup_{W\in\mathcal{W}}W^{2}\) for some open cover \(\mathcal{W}\) of \(X\). We will show that each member of \(\mathcal{W}\) contains at most one point of the infinite set \(\{y_{\alpha}:\alpha\in A\}\). Indeed, if \(y_{\alpha},y_{\beta}\in W\) for some \(W\) from \(\mathcal{W}\) and distinct \(\alpha,\beta\in A\), then we see that \((y_{\alpha},y_{\beta})\in W^{2}\subseteq N\subseteq N_{\alpha}=U_{\alpha}^{2}\cup V_{\alpha}^{2}\). Now since \(y_{\alpha}\notin V_{\alpha}\), we must have \(y_{\beta}\in U_{\alpha}\), which gives \(\beta\leq\alpha\). But symmetrically, \((y_{\alpha},y_{\beta})\in U_{\beta}^{2}\cup V_{\beta}^{2}\), which implies that \(\alpha\leq\beta\). Thus \(\alpha=\beta\). Hence, \(\mathcal{W}\) witnesses that \(\{y_{\alpha}:\alpha\in A\}\) is an infinite closed discrete subset of \(X\), and thus \(X\) is not countably compact.
### \(P\)-Bases
Recall from the Introduction that a space \(X\) is said to have a \(P\)-base, where \(P\) is a directed set, if every point \(x\) has a neighborhood base, \(\{U_{p}:p\in P\}\), where \(U_{p}\subseteq U_{p^{\prime}}\) if \(p\geq p^{\prime}\). Also recall that a topological space \(X\) is scattered if each non-empty subspace of \(X\) has an isolated point. It is known that any compact scattered space is zero-dimensional. Scattered spaces can be stratified by the scattered height, as follows. For any subspace \(A\) of a space \(X\), let \(A^{\prime}\) be the set of all non-isolated points of \(A\). It is straightforward to see that \(A^{\prime}\) is a closed subset of \(A\). Let \(X^{(0)}=X\) and define \(X^{(\alpha)}=\bigcap_{\beta<\alpha}(X^{(\beta)})^{\prime}\) for each \(\alpha>0\). Then a space \(X\) is scattered if \(X^{(\alpha)}=\emptyset\) for some ordinal \(\alpha\). If \(X\) is scattered then for each of its points, \(x\), there exists a unique ordinal \(h(x)\) such that \(x\in X^{(h(x))}\setminus X^{(h(x)+1)}\). The ordinal \(h(X)=\sup\{h(x):x\in X\}\) is called the scattered height of \(X\). Also, it is straightforward to show that for any compact scattered space \(X\), \(X^{(h(X))}\) is a non-empty finite subset.
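For example, the convergent sequence \(\omega+1\) satisfies \((\omega+1)^{(1)}=\{\omega\}\) and \((\omega+1)^{(2)}=\emptyset\), so \(h(\omega+1)=1\), while the ordinal space \(\omega^{2}+1\) has scattered height \(2\).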
The ordinal space \(\omega_{1}+1\) is compact, scattered, of scattered height \(\omega_{1}\), with a \(P\)-base for \(P=\mathcal{K}(\mathbb{Q})\) (and, consistently, \(P=\omega^{\omega}\)), and so for \(P\) with calibre \((\omega_{1},\omega)\). But it is an interesting open question, raised in [6, Question 3.3], whether every compact scattered space, with countable scattered height and a \(P\)-base where \(P\) has calibre \((\omega_{1},\omega)\), is countable. We show the answer is positive if the space is additionally hereditarily meta-Lindelöf (every open cover of any subspace has a point-countable open refinement).
**Theorem 3.7**.: _Let \(X\) be a hereditarily meta-Lindelöf, compact, scattered space with countable scattered height. If \(X\) has a \(P\)-base with \(P\) having calibre \((\omega_{1},\omega)\), then \(X\) is countable, hence metrizable._
This follows immediately from the next technical result.
**Theorem 3.8**.: _Let \(X\) be a compact scattered space with countable scattered height. Suppose that, to each \(x\) in \(X\), we can assign a clopen neighborhood \(U_{x}\) such that \(U_{x}\cap X^{(h(x))}=\{x\}\) and \(\{U_{x}:x\in X\}\) is point-countable. If \(X\) has a \(P\)-base with \(P\) having calibre \((\omega_{1},\omega)\), then \(X\) is countable, hence metrizable._
Proof.: The proof is by induction on the scattered height. In [3] it is shown that the result holds when the scattered height is finite. Assume, then, that the scattered height of \(X\) is \(\alpha\) with \(\alpha<\omega_{1}\) and the result holds for any compact space with scattered height \(<\alpha\). We show that \(X\) is countable.
Suppose, first, \(\alpha\) is a successor, \(\alpha=\alpha^{-}+1\). Let \(Y=X^{(\geq\alpha^{-})}\). Clearly \(Y\) is a compact subspace of \(X\) with scattered height \(2\), hence it is countable. Then it is straightforward to verify that \(X\) is countable.
Now suppose that \(\alpha\) is a limit. Without loss of generality, we assume that \(X^{(\alpha)}\) is a singleton, denoted by \(x_{*}\). As \(X\) has a \(P\)-base we can fix a clopen base \(\{B_{p}:p\in P\}\) at \(x_{*}\), where \(B_{p}\subseteq B_{p^{\prime}}\) if \(p\geq p^{\prime}\). For each \(p\in P\), \(K_{p}=X\setminus B_{p}\) is compact with scattered height \(\beta_{p}\), which is clearly \(<\alpha\); hence, by the induction hypothesis, it is countable, and \(K_{p}^{(\beta_{p})}\) is finite. Note that \(\{K_{p}:p\in P\}\) is a \(P\)-ordered compact cover of \(X\setminus\{x_{*}\}\).
Assume, for a contradiction, that \(X\) is uncountable. Let \(\{U_{x}:x\in X\}\) be the collection of clopen sets satisfying the conditions in the statement of this theorem. Clearly for each \(x\in X\setminus\{x_{*}\}\), \(U_{x}\) is compact with scattered height \(<\alpha\), hence is countable. Also, without loss of generality, we assume that \(X\setminus X^{(1)}\) is uncountable. For each \(y\in X\setminus X^{(1)}\), let \(C_{y}=\{x\in X\setminus\{x_{*}\}:y\in U_{x}\}\), which is clearly countable, and \(D_{y}=\bigcup\{U_{x}:x\in C_{y}\}\), which is countable too since each \(U_{x}\) for \(x\in C_{y}\) is countable. By a transfinite induction, we pick a set \(\{y_{\lambda}:\lambda<\omega_{1}\}\subseteq X\setminus X^{(1)}\) such that \(y_{\lambda}\notin\bigcup\{D_{y_{\gamma}}:\gamma<\lambda\}\). Such a set exists because \(X\setminus X^{(1)}\) is uncountable. Also, it is clear that the cardinality of \(U_{x}\cap\{y_{\lambda}:\lambda<\omega_{1}\}\) is at most \(1\). For each \(\lambda<\omega_{1}\), pick \(p_{\lambda}\in P\) such that \(y_{\lambda}\in K_{p_{\lambda}}\). Since \(P\) has calibre \((\omega_{1},\omega)\), there is a countable subset \(\{\lambda_{n}:n\in\omega\}\) of \(\omega_{1}\) such that \(\{p_{\lambda_{n}}:n\in\omega\}\) is bounded above. Let \(p_{*}\) be an upper bound of \(\{p_{\lambda_{n}}:n\in\omega\}\) in \(P\). Then \(\{y_{\lambda_{n}}:n\in\omega\}\subseteq K_{p_{*}}\). Since \(K_{p_{*}}\) is compact, there is a finite subset \(\{x_{i}:i\leq m\}\) of \(K_{p_{*}}\) such that \(K_{p_{*}}\subseteq\bigcup\{U_{x_{i}}:i\leq m\}\). Then \(\{y_{\lambda_{n}}:n\in\omega\}\subseteq\bigcup\{U_{x_{i}}:i\leq m\}\), which is a contradiction because each \(U_{x_{i}}\) contains at most one of the \(y_{\lambda_{n}}\). This finishes the proof.
## 4 Examples and Applications
### Strong Diagonals
We present as an application of Theorem 3.5 the following partial solution to [14, Problem 4.1]. Partial because Sanchez asked about compact spaces, \(X\), with an '\(M\)-diagonal', which means that \(X^{2}\setminus\Delta\) has a \(\mathcal{K}(M)\)-ordered compact cover, and we require a 'strong \(M\)-diagonal'. A space \(X\) has a _strong \(M\)-diagonal_ if \(\mathcal{K}(M)\geq_{T}\mathcal{K}(X^{2}\setminus\Delta)\).
**Theorem 4.1**.: _Suppose \(M\) is a metric space and \(K\) is a compact space with a strong \(M\)-diagonal. Then \(w(K)\leq w(M)\)._
Proof.: First note that because every convergent sequence (say, \((L_{n})_{n}\) converging to \(L_{\infty}\)) in \(\mathcal{K}(M)\) is bounded (by \(L_{\infty}\cup\bigcup_{n}L_{n}\)) and every subset of \(\mathcal{K}(M)\) of size strictly bigger than \(w(M)\) has a proper limit point, we have that \(\mathcal{K}(M)\) has calibre \((w(M)^{+},\omega)\). Then, because \(\mathcal{K}(M)\geq_{T}\mathcal{K}(K^{2}\setminus\Delta)\), we see \(\mathcal{N}_{\Delta}^{K^{2}}\) has calibre \((w(M)^{+},\omega)\). Now apply Theorem 3.5.
Sanchez's [14, Problem 4.2] remains an interesting open problem; however, the next example gives a counter-example, at least consistently, to his remaining Problems 4.3-4.10.
**Example 4.2**.: _There are spaces \(X\) and \(Y\) with \(\mathcal{K}(Y)\geq_{T}\mathcal{K}(X^{2}\setminus\Delta)\) (i.e. '\(X\) has a strong \(Y\)-diagonal') such that:_
_(1) \(X\) is a first countable compact space, with \(w(X)=\mathfrak{c}\), and \(Y\) is \(\sigma\)-compact, first countable and cosmic;_
_(2) assuming \((\neg CH)\), taking \(\kappa=\aleph_{1}\), \(X\) is first countable and compact, \(Y\) is the union of \(\kappa\)-many compact sets, and \(nw(Y)\leq\kappa<w(X)\)._
Proof.: Let \(Y\) be the bowtie space. Let \(X\) be any first countable, compact space of weight \(\mathfrak{c}\) (the double arrow space, for example, or the Alexandrov duplicate of \([0,1]\)). Then \(X\) is compact, first countable and, assuming \((\neg CH)\), \(w(X)=\mathfrak{c}>\aleph_{1}=\kappa\). Meanwhile, \(Y\) is \(\sigma\)-compact (hence the union of \(\aleph_{1}\)-many compact subsets) and \(nw(Y)=\aleph_{0}\leq\aleph_{1}\). Since \(X\) has weight \(\mathfrak{c}\), by Theorem 3.5 we know \(\mathcal{K}(X^{2}\setminus\Delta)=_{T}[\mathfrak{c}]^{<\omega}\). But from [3] we know, as \(Y\) is the bowtie space, \(\mathcal{K}(Y)=_{T}[\mathfrak{c}]^{<\omega}\). Hence, \(\mathcal{K}(Y)\geq_{T}\mathcal{K}(X^{2}\setminus\Delta)\), as claimed.
### General Topological Groups
Solving, negatively, Question 6.5 from [9] and Question 5.9 from [6], we offer the following example.
**Example 4.3**.: _There is an Abelian topological group \(G\), with identity \(0\), such that all precompact subsets of \(G\) are metrizable and \(\chi(G)=\operatorname{cof}(\mathcal{N}_{0}^{G})=\mathfrak{d}\), but for no separable metrizable space \(M\) do we have \(\mathcal{K}(M)\geq_{T}\mathcal{N}_{0}^{G}\); in particular, \(\omega^{\omega}\not\geq_{T}\mathcal{N}_{0}^{G}\) (\(G\) does not have an \(\omega^{\omega}\)-base)._
Proof.: Let \(P=\sum\omega^{\omega_{1}}\) (the \(\Sigma\)-product of \(\omega_{1}\)-many copies of \(\omega\)). Then \(P\) has calibre \((\omega_{1},\omega)\) [11]. And the cofinality of \(P\) is \(\mathfrak{d}\) (for each \(\alpha<\omega_{1}\) take a cofinal family, \(C_{\alpha}\), of size \(\leq\mathfrak{d}\) of \(\omega^{\alpha}\), extend each element of \(C_{\alpha}\) to have value \(0\) for all \(\beta\geq\alpha\), giving a \(C_{\alpha}^{\prime}\) contained in \(\sum\omega^{\omega_{1}}\), and union together the \(C_{\alpha}^{\prime}\)'s). For no separable metrizable \(M\) [11] do we have \(\mathcal{K}(M)\geq_{T}\sum\omega^{\omega_{1}}\), in particular not \(\omega^{\omega}\).
Now construct the topological group, \(G=G_{P}\), as in Theorem 2.2(3), so that \(\mathcal{N}_{0}^{G}=_{T}P\); the claims about \(\chi(G)\) and about \(\mathcal{K}(M)\) then follow from the properties of \(P\) established above. Corollary 3.4 guarantees that precompact subsets of \(G\) are metrizable.
|
2303.00121 | Laser calibration of the ATLAS Tile Calorimeter during LHC Run 2 | This article reports the laser calibration of the hadronic Tile Calorimeter
of the ATLAS experiment in the LHC Run 2 data campaign. The upgraded Laser II
calibration system is described. The system was commissioned during the first
LHC Long Shutdown, exhibiting a stability better than 0.8% for the laser light
monitoring. The methods employed to derive the detector calibration factors
with data from the laser calibration runs are also detailed. These allowed to
correct for the response fluctuations of the 9852 photomultiplier tubes of the
Tile Calorimeter with a total uncertainty of 0.5% plus a luminosity-dependent
sub-dominant term. Finally, we report the regular monitoring and performance
studies using laser events in both standalone runs and during proton
collisions. These studies include channel timing and quality inspection, and
photomultiplier linearity and response dependence on anode current. | M. N. Agaras, A. Ahmad, A. Blanco, D. Boumediene, R. Bonnefoy, D. Calvet, M. Calvetti, R. Chadelas, P. Conde Muino, A. Cortes Gonzalez, M. Crouau, C. Crozatier, F. Daudon, T. Davidek, G. Di Gregorio, L. Fiorini, B. Galhardo, Ph. Gris, P. Klimek, P. Lafarguette, D. Lambert, S. Leone, A. Maio, M. Marjanovic, F. Martins, M. Mlynarikova, B. Pereira, R. Pedro, K. Petukhova, S. Polacek, R. Rosten, C. Santoni, F. Scuri, D. Simon, Y. Smirnov, A. Solodkov, O. Solovyanov, M. Van Woerden, F. Veloso, H. Wilkens | 2023-02-28T22:48:08Z | http://arxiv.org/abs/2303.00121v2 | # Laser calibration of the ATLAS Tile Calorimeter during LHC Run 2
###### Abstract
This article reports the laser calibration of the hadronic Tile Calorimeter of the ATLAS experiment in the LHC Run 2 data campaign. The upgraded Laser II calibration system is described. The system was commissioned during the first LHC Long Shutdown, exhibiting a stability better than 0.8% for the laser light monitoring. The methods employed to derive the detector calibration factors with data from the laser calibration runs are also detailed. These allowed the response fluctuations of the 9852 photomultiplier tubes of the Tile Calorimeter to be corrected with a total uncertainty of 0.5% plus a luminosity-dependent sub-dominant term. Finally, we report the regular monitoring and performance studies using laser events in both standalone runs and during proton collisions. These studies include channel timing and quality inspection, and photomultiplier linearity and response dependence on anode current.
Keywords: Calorimeter, Detector alignment and calibration methods
## 1 Introduction
The ATLAS Tile Calorimeter (TileCal) [1] is the central hadronic calorimeter of the ATLAS experiment [2] at CERN's Large Hadron Collider (LHC). It is a scintillator-based calorimeter employing photomultiplier tubes (PMTs) to measure the scintillation light. The TileCal is crucial for identifying and measuring the energy and direction of hadronic jets, provides information for the online trigger system, and participates in the reconstruction of the missing transverse momentum associated with weakly-interacting particles. Thus, the TileCal plays a central role in the reconstruction of collision events for subsequent physics analyses.
The stability and resolution of the calorimeter response are parameters with direct impact on the precision of the reconstruction of jets and missing energy by the ATLAS experiment. To control these parameters, the TileCal is equipped with dedicated systems that allow the different components of the detector to be monitored and its energy measurements to be calibrated. These procedures were conducted during the LHC Run 1 and Run 2 data taking campaigns, contributing to the good operation and performance of the TileCal [3]. An important aspect was correcting for the response variation of the PMTs, which is achieved with a laser system. This article describes the laser calibration of the calorimeter in the LHC Run 2 data taking campaign. A previous report about the laser calibration in Run 1 can be found in Ref. [4].
The calorimeter is briefly described in Section 2 and the Laser II system operating during Run 2 is detailed in Section 3. A major upgrade of the system employed in Run 1 was performed in 2014 and this is reported here. The description of the calibration procedure is presented in Section 4. Section 5 describes the monitoring of the calorimeter timing and PMT dependence on anode current with laser pulses fired during physics runs. Channel quality monitoring and studies of PMT linearity based on laser calibration data are reported in Section 6. Finally, conclusions are drawn in Section 7.
## 2 The ATLAS Tile Calorimeter
### Detector overview
The TileCal is a non-compensating sampling calorimeter that employs steel as the absorber material and scintillating tiles, placed perpendicular to the beam axis, as the active medium. The scintillation light produced by the ionising particles crossing the detector is collected from each tile edge by a wavelength-shifting (WLS) optical fibre and guided to a photomultiplier tube, see Figure 1.
The calorimeter covers a pseudorapidity range of \(|\eta|<1.7\) and is divided into three segments along the beam axis: one central long barrel (LB) section that is 5.8 m in length (\(|\eta|<1.0\)), and two extended barrel (EB) sections (\(0.8<|\eta|<1.7\)) on either side of the LB that are each 2.6 m long. Full azimuthal coverage around the beam axis is achieved with 64 wedge-shaped modules, each covering \(\Delta\phi=0.1\) radians. Moreover, these are radially segmented into three layers: A, B/BC and D. The readout cell units in each module are defined by the common readout of bundles of WLS fibres through a single PMT, as shown in Figure 2. The great majority of the cells are read out independently by two PMTs, one for each cell side, providing redundancy for the cell energy measurement. Additionally, single scintillator plates are placed in the gap region between the barrels (E1 and E2
cells) and in the crack in front of the ATLAS electromagnetic calorimeter End-Cap (E3 and E4 cells).
The data acquisition system of the TileCal is split into four partitions, the ATLAS A-side (\(\eta>0\)) and C-side (\(\eta<0\)) for both the LB and EB, yielding four logical partitions: LBA, LBC, EBA, and EBC. In total, the TileCal has 5182 cells and 9852 PMTs. PMT model Hamamatsu R7877 is used, which is a special customised 8-stage fine-mesh version of Hamamatsu R5900 [5]. The front-end electronics [6] receive the electrical signals from the PMTs, which are shaped, amplified with two different gains in a 1:64 proportion, and then digitised at 40 MHz sampling frequency [7]. The bi-gain system is used in order to achieve a 16-bit dynamic range using 10-bit ADCs. The digital samples are stored in a pipeline memory. Upon ATLAS Level 1 [8] trigger decision, seven signal samples are sent to the detector back-end electronics for the reconstruction of the signal amplitude. Complementarily, the PMT signals are integrated over a long period of time (10-20 ms) with analog integrator electronics to measure the energy deposited during caesium calibration scans and the charge induced by proton-proton (\(pp\)) collisions.
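To illustrate the bi-gain arithmetic, the sketch below (hypothetical logic and values, not the actual front-end firmware) shows how a 1:64 gain pair and 10-bit ADCs cover roughly a 16-bit range: since \(64\approx 2^{6}\), the high-gain path adds about 6 bits of precision below its saturation point, above which the low-gain path takes over.

```python
ADC_MAX = 1023      # 10-bit ADC full scale
GAIN_RATIO = 64     # high-gain/low-gain amplification (1:64)

def digitize_bigain(amplitude_lg_counts):
    """Return (gain, ADC counts) for a pulse amplitude expressed in
    low-gain ADC counts; hypothetical saturation logic for illustration."""
    high = amplitude_lg_counts * GAIN_RATIO
    if high <= ADC_MAX:
        return "HG", int(round(high))
    return "LG", int(round(min(amplitude_lg_counts, ADC_MAX)))

# Offline, a low-gain reading is rescaled by GAIN_RATIO so both paths share
# one effective scale: 10 bits + log2(64) = 16 bits of dynamic range.
for amp in (3.0, 500.0):
    print(amp, digitize_bigain(amp))
```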
### Signal reconstruction
Figure 1: Sketch of a TileCal module, showing the scintillation light readout from the tiles by wavelength-shifting optical fibres and photomultiplier tubes (PMTs).

In each TileCal channel, an analog electrical signal is sampled with seven samples at 25 ns spacing synchronised with the LHC master clock. These samples are referred to as \(S_{i}\), where \(1\leq i\leq 7\), and are in units of ADC counts. Depending on the amplitude of the pulse, either High or Low Gain is used to maximise the signal to noise ratio while avoiding saturation. To reconstruct the sampled signal produced during physics runs, the Optimal Filtering (OF) method is used in the Tile Calorimeter [9; 10]. The method linearly combines the samples \(S_{i}\) to calculate the amplitude \(A\), phase \(\tau\) with respect to the 40 MHz clock and pedestal \(p\) of the pulse:
\[A=\sum_{i=1}^{7}a_{i}S_{i},\qquad\qquad A\tau=\sum_{i=1}^{7}b_{i}S_{i},\qquad\qquad p=\sum_{i=1}^{7}c_{i}S_{i} \tag{1}\]
where \(a_{i}\), \(b_{i}\) and \(c_{i}\) are linear coefficients optimised to minimise the bias on the reconstructed quantities introduced by the electronic noise. The normalised pulse shape function, taken as the average pulse shape from test beam data, is used to determine the coefficients. Separate functions are defined for high and low gain. The pulse shape and coefficients are stored in a dedicated database for calibration constants.
The system clock in each digitiser [7] is tuned so that the signal pulses, originating from collisions at the interaction point, peak at the central (fourth) sample, synchronous with the LHC clock. The reconstructed value of \(\tau\) represents the small time phase in ns between the expected pulse peak and the time of the actual reconstructed signal peak, arising from fluctuations in particle travel time and uncertainties in the electronics readout.
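As a concrete sketch of Eq. (1), assuming the coefficients have already been retrieved from the conditions database; the numerical values below are placeholders for illustration, not real TileCal constants:

```python
import numpy as np

def optimal_filter(S, a, b, c):
    """Apply Eq. (1) to seven digitised samples S (ADC counts):
    amplitude A, phase tau = (sum_i b_i S_i) / A, pedestal p."""
    A = float(np.dot(a, S))
    tau = float(np.dot(b, S)) / A
    p = float(np.dot(c, S))
    return A, tau, p

# Placeholder coefficients and samples (hypothetical); the real a_i, b_i,
# c_i are derived from the pulse shape and noise properties and stored in
# the conditions database. Here a sums to zero so the pedestal cancels in A,
# and b is antisymmetric about the central sample.
a = np.array([-0.6, 0.0, 0.3, 0.6, 0.3, 0.0, -0.6])
b = np.array([0.0, -0.2, -0.4, 0.0, 0.4, 0.2, 0.0])
c = np.array([1.0 / 7.0] * 7)
S = np.array([50.0, 52.0, 120.0, 260.0, 130.0, 55.0, 50.0])
print(optimal_filter(S, a, b, c))
```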
To reconstruct the signals produced in each TileCal channel by the laser calibration system, the same OF method is used as during the physics runs. In this case, the pulse shape function corresponding to the signal produced by the laser is used to calculate the linear coefficients \(a_{i}\), \(b_{i}\) and \(c_{i}\) from Equation (1).
### Energy reconstruction and calibration
At each level of the TileCal signal reconstruction, there is a dedicated calibration system to monitor the behaviour of the different detector components. Three calibration systems are used to maintain a time-independent electromagnetic (EM) energy scale, and account for variations in the hardware and electronics. A movable caesium radioactive \(\gamma\)-source calibrates the optical components and
Figure 2: Scheme of the TileCal cell layout in the plane parallel to the beam axis, on the positive \(\eta\) side of the detector. The single scintillators E1 and E2 (gap cells), and E3 and E4 (crack cells) located between the barrel and the end-cap are also displayed.
the PMTs but not the front-end electronics [11]. The laser system monitors the PMTs and front-end electronic components used for collision data. The charge injection system (CIS) calibrates the front-end electronics [1]. Figure 3 shows a flow diagram summarising the different calibration systems along with the paths followed by the signals from different sources. These three complementary calibration systems also aid in identifying the source of problematic channels. Moreover, the minimum-bias currents ("Particles" in Figure 3) are used to validate response changes observed by the caesium calibration system.
In each TileCal channel, the signal amplitude \(A\) is reconstructed in units of ADC counts using the OF algorithm defined in Equation (1). The reconstructed energy \(E\) in units of GeV is derived from the signal amplitude as follows:
\[E\ [\text{GeV}]=\frac{A\ [\text{ADC}]}{f_{\text{pC}\rightarrow\text{GeV}}\cdot f_{\text{Cs}}\cdot f_{\text{Las}}\cdot f_{\text{ADC}\rightarrow\text{pC}}} \tag{2}\]
where each \(f_{i}\) represents a calibration constant or correction factor. The factors can evolve in time because of variations in PMT high voltage, stress induced on the PMTs by high light flux or ageing of scintillators due to radiation damage. The calibration systems are used to monitor the stability of these factors and provide corrections for each channel.
The \(f_{\text{pC}\rightarrow\text{GeV}}\) conversion factor is the absolute EM energy scale constant measured in test beam campaigns [12]. \(f_{\text{ADC}\rightarrow\text{pC}}\) is the ADC counts to charge conversion factor, determined regularly by charge injection, and the remaining factors, \(f_{\text{Cs}}\) and \(f_{\text{Las}}\), are calibration factors measured frequently with the TileCal calibration systems. These are updated frequently in the database according to an _interval of validity_ (IOV) and used by the data preparation software to keep the cell energy response stable over time. The IOV has a start and end run identifier, between which the stored conditions are valid and applicable to data, and is also stored in the database.
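A minimal sketch of Equation (2) follows; the constants used here are made up for illustration, since the real per-channel factors live in the conditions database and depend on the IOV.

```python
def channel_energy_gev(amplitude_adc, f_pc_to_gev, f_cs, f_las, f_adc_to_pc):
    """Apply Equation (2): convert an OF amplitude in ADC counts to GeV,
    dividing by the product of the per-channel calibration factors."""
    return amplitude_adc / (f_pc_to_gev * f_cs * f_las * f_adc_to_pc)

# Illustrative (made-up) constants for a single channel:
print(channel_energy_gev(820.0, f_pc_to_gev=1.05, f_cs=0.99,
                         f_las=1.01, f_adc_to_pc=1.28))
```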
Figure 3: The signal paths for each of the three calibration systems used by the TileCal. The physics signal is denoted by the thick solid line and the path taken by each of the calibration systems is shown with dashed lines.
## 3 The Laser II calibration system
### Laser calibration system
The TileCal laser system is installed in the ATLAS main service cavern, USA15, located about 100 m away from the detector. In summary, it consists of a laser source, light guides and beam expanders, an optical filter wheel to adjust the light intensity, and beam splitters to dispatch the light to the Tile Calorimeter PMTs through 400 clear optical fibres, 100 to 120 m long. In addition, the system is equipped with a calibration setup designed to monitor the light in various points of the dispatching chain, and with dedicated control and acquisition electronics boards.
The original Laser I system used during the LHC Run 1 operation [4] was upgraded during the LHC Long Shutdown 1 to a newer version, referred to as Laser II, which has been used since the beginning of the Run 2 data taking. The main purpose of the laser upgrade was to overcome the shortcomings observed in the previous system while maintaining its precision and stability. The essential aspects to improve were the stability of the beam expander distributing light to the 400 clear fibres, the grounding of the photodiodes, the reproducibility of the filter wheel position, the electronics, and the overall estimate of the laser light injected into the PMTs of the calorimeter.
### Upgraded Laser II system
The Laser II system can be described by six main functional blocks: the optics box, the optical filter patch panel, the photodiode box, the PHOCAL (PHOtodiode CALibration) module, a PMT box and a VME crate featuring the LASCAR (LASer CALibration Rod) electronics card. These blocks are briefly described below, highlighting the upgrades with respect to the previous Laser I version.
Figure 4: Scheme of the Laser II optics box, depicting the internal elements and optical paths.
#### Optics box
The light source of the laser system, installed in the so-called optics box, is a commercial Q-switched diode-pumped solid-state laser manufactured by SPECTRA-PHYSICS [13], kept from the predecessor system. A frequency doubler allows the infrared laser to emit 532 nm green light, close to the wavelength of the light coming from the detector WLS fibres, which peaks at 480 nm. The time width of the individual pulses generated by the laser is 10 ns. Besides the laser source, the optics box also houses the main optical components acting on the laser beam along the light path, as depicted in Figure 4. A picture of the optics box is shown in Figure 5.
A beam splitter is located at the output of the laser cavity. It divides the primary laser beam into two parts: a small fraction of the light is sent back to a light mixer, while the major part is transmitted through a beam expander and a 45\({}^{\circ}\) dielectric mirror to a filter wheel. The light exiting the mixer is collected by five clear optical fibres: two are routed to the PMT box, to the two PMTs responsible for generating the trigger signal for the Laser II data acquisition (DAQ), and three are connected to the monitoring photodiodes (D0 to D2) located in the photodiode box.
Figure 5: Picture of the optics box (cover removed) placed on the anti-vibration rails and coupled to the fibre bundle.
In the expander, the beam spot is enlarged from 700 \(\mu\)m to 2 mm, reducing the light power density on the downstream optical elements. The light reflected by the following mirror passes through a motorised filter wheel hosting eight neutral density filters of varied optical densities, with transmissions ranging between 100% (no filter) and 0.3%. The combination of this transmission range and the range of intensities over which the laser operation is stable allows the TileCal PMTs to be calibrated over an equivalent cell energy range of 500 MeV to 1 TeV.
The light transmitted by the selected filter is fed into a light mixer by a beam splitter placed downstream of the wheel. Three clear fibres routed to the photodiode box collect the light for monitoring (diodes D3 to D5). A second 45\({}^{\circ}\) dielectric mirror reflects the light through a shutter into the final beam expander, from where the laser light is dispatched to the detector by 400 clear fibres. Four fibres route the light output of the expander to the photodiode box to monitor its transmission (photodiodes D6 to D9).
The 400 long clear fibres are bundled together and transfer the light coming out of the optics box to the TileCal modules (one fibre for two half modules of the central Long barrel, one fibre for each half module per extended barrel and 16 spare fibres). The association between TileCal PMTs and the clear fibre in the bundle is as follows:
* Long Barrel: one fibre per full LB module, serving the even-numbered PMTs on the A side and the odd-numbered PMTs on the C side; conversely, another fibre for the same LB module serves the odd-numbered PMTs on the A side and the even-numbered PMTs on the C side.
* Extended Barrel: one fibre per module per EB side for the even-numbered PMTs and another one for the odd-numbered PMTs.
Inside each detector module, an optical system composed of a light mixer in air dispatches the light to each PMT through individual clear fibres.
This optics box comprises major upgrades with respect to the previous system. A compact design of the optical layout now includes all the optical elements in one single box, whereas in the Run 1 system the optical elements were located in two different boxes optically connected with a liquid fibre. The optics box is set in a horizontal position to minimise dust accretion on the optical parts and to ease interventions, and is mounted on an anti-vibration system, improving beam stability. The final beam expander is new: it was re-designed to improve the uniformity of the distribution of the 2 mm beam spot across the bundle of 400 fibres, which has a circular surface of 30 mm diameter. Finally, the system now permits a better estimate of the laser light injected into the calorimeter through a redundant monitoring of the light transmitted at different points of the optical line with 10 photodiodes.
#### Optical filters
A patch panel with ten optical filters is used to adjust the intensity of the light read by each of the monitoring photodiodes in the photodiode box. In this setup, each of the ten optical fibres reading out the light at the various points of the beam path in the optics box (after the laser head, after the filter wheel, and at the output of the beam expander) is coupled to a given optical filter in the patch panel. The optical densities of the filters range from 0.5 to 2.5 and are chosen such that, for each light point probed, there are always at least two filters of equal density, providing a redundant light intensity probe for the monitoring photodiodes.
#### Photodiode box
The photodiode box is a rack containing a set of ten modules, each composed of a Si PIN photodiode (Hamamatsu S3590-08 [14]) coupled to a pre-amplifier, a control card, and a charge injection card to inject an electrical charge into the ten pre-amplifiers. A set of two fibres is connected to the rear end of the photodiode box, in front of each photodiode. One fibre conveys the laser light for monitoring and comes from the patch panel with the optical filters. The other one comes from the PHOCAL module, where LED light is injected to assess the stability of the photodiodes. In order to minimise the dependence of the photodiodes' response on temperature, the temperature in the rack is controlled by a water and fan cooling system. The temperature of each photodiode is monitored and kept constant at approximately 30 \({}^{\circ}\)C with a long-term stability below 1 \({}^{\circ}\)C.
#### PHOCAL module
This module implements a redundant internal calibration scheme using a blue LED (Nichia blue NSPB520S, \(\lambda\)=470 nm [15]) to monitor the ten photodiodes of the Laser II system. The calibration light is simultaneously transmitted to a reference photodiode (Hamamatsu S2744, active area: 10\(\times\)20 mm\({}^{2}\), spectral response range from 320 to 1100 nm [16]) providing the signal for the normalisation of the photodiodes' response. PHOCAL also contains a radioactive \({}^{241}\)Am source, releasing mostly \(\alpha\) particles of 5.6 MeV with an activity of 3.7 kBq. This source ensures the monitoring of the reference photodiode. This module is an addition with respect to the Laser I system installed in Run 1, where the existing photodiodes were all monitored with a movable scheme of the \({}^{241}\)Am source.
#### PMT box
The PMT box contains two PMTs (Hamamatsu R5900 [17]) reading out two optical fibres from the optics box. These provide the trigger signal for the Laser II acquisition system when the laser is flashing. The PMT box also includes a control module used to drive the shutter and the filter wheel in the optics box.
#### LASCAR electronics
LASCAR, shown in Figure 6, is the electronics board for the acquisition and control of the Laser II system. It digitises the analog signals (from the eleven photodiodes, the two PMTs and the charge injection system), contains a chip to retrieve the LHC clock signal, provides the interface to drive the laser, and contains the module for charge injection and for monitoring of the pre-amplifier and digitisation chain of the photodiodes.
The LASCAR board is housed in a VME crate. Its central brain, a Field Programmable Gate Array (FPGA) Cyclone V manufactured by ALTERA [18], controls and provides the interface to the main components:
* **Charge ADC (QDC):** 32-channel 14-bit QDC that performs a 500 ns integration and digitisation of the analog input charge signals coming from the eleven photodiodes, the two PMTs and the charge injection system. Prior to the QDC, the analog signals pass through a charge amplifier circuit with two possible gains (\(\times\)1 and \(\times\)4).
* **LILAS (LInearity LASer) card:** This module is responsible for injecting a known charge into the readout electronics (photodiode pre-amplifiers and digitisation) to monitor its linearity and stability over time. A digital signal from the FPGA is converted to an electric charge by the LILAS 16-bit DAC. The charge is then injected directly into a QDC channel and distributed to the PHOCAL and photodiode box through LEMO cables.
* **Time-to-Digital Converter (TDC)**: A TDC is used to measure the laser time response as a function of its intensity. The device has two channels and a time resolution of 280 ps. LASCAR is equipped with a delay system to ensure adequate laser pulse timing irrespective of the laser amplitude.
* **Timing, Trigger and Control Receiver (TTCrx)**: The TTCrx is an ASIC chip that receives the LHC signals relative to bunch crossing, event counter reset and trigger.
* **HOLA**: The High-speed Optical link for ATLAS was conceived to send data fragments via optical fibre to the Read Out System of ATLAS upon receiving a Level 1 Accept trigger from the ATLAS central DAQ.
* **LASER Interface:** This mixed analog and digital board is used to control the laser head. The laser intensity is set through an analog signal (0 to 4 V) and the trigger is set with a TTL signal.
### Operating modes
The Laser II system can be operated independently as a stand-alone system or integrated in the ATLAS detector data acquisition framework.
Figure 6: Views of the LASCAR card.
#### Stand-alone operating mode
The stand-alone operation of Laser II makes it possible to verify that the system is responding as expected, to monitor its stability and to perform its internal calibration. In this internal calibration mode, LASCAR controls the Laser II components, keeping the shutter closed by default so that no laser pulse is sent to the TileCal PMTs. The following running modes are possible:
* **Pedestal mode:** This mode is used to measure a large number of events when no input signal is injected (from the laser, the LED or the radioactive source).
* **Alpha source mode:** This mode is used to measure the response of the reference photodiode in the PHOCAL module to the \(\alpha\) particles emitted by the \({}^{241}\)Am source.
* **LED mode:** In this mode, the LED signal is transmitted to all the photodiodes, including the reference photodiode, via optical fibres. It probes the stability of the photodiodes used to monitor the laser light.
* **Linearity mode:** This mode uses the LILAS card to inject a known electrical charge into the preamplifiers of the photodiodes in order to assess the stability of the electronics. It also allows the injected charge to be varied to evaluate the linearity of the readout electronics.
* **Laser mode:** In this mode, laser signals of adjustable intensity are sent through the system. The light can be transmitted to the TileCal PMTs, depending on the status of the shutter located inside the optics box.
A standard internal calibration run combines all the above running modes, starting with the pedestal mode. Once enough pedestal events have been recorded, LASCAR is switched to the next calibration mode, going through the alpha mode, then the LED mode, and then the linearity mode, in which the injected charge is increased from 0 to 60000 DAC counts (\(\sim\)1.9 pC) in steps of 10000 (\(\sim\)0.3 pC). Finally, the internal calibration ends with the laser mode.
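The sequencing can be summarised with the following schematic sketch; it is not the actual DAQ code, and only the mode order and the linearity-scan steps are taken from the text.

```python
PC_PER_DAC = 1.9 / 60000.0  # ~0.3 pC per 10000 DAC counts

def internal_calibration_sequence():
    """Yield the (mode, setting) steps of a stand-alone internal
    calibration run in the order described above."""
    yield ("pedestal", None)
    yield ("alpha source", None)
    yield ("LED", None)
    for dac in range(0, 60001, 10000):  # linearity scan
        yield ("linearity", f"{dac} DAC counts (~{dac * PC_PER_DAC:.2f} pC)")
    yield ("laser", "shutter closed by default")

for mode, setting in internal_calibration_sequence():
    print(mode, "" if setting is None else f"[{setting}]")
```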
#### ATLAS DAQ operating mode
The ATLAS DAQ mode is the main operating mode of Laser II. Its role is to calibrate the TileCal PMTs with laser light. To do so, the Laser II DAQ is integrated within the global ATLAS DAQ infrastructure, which handles the readout of the PMT signal induced by the laser and the Laser II run control. This mode is used in two ways:
* **Laser mode:** This is the main mode used to perform the dedicated calibration runs, when the TileCal is operated independently of the rest of the ATLAS detector. Laser pulses are sent to the calorimeter upon request from the SHAre Few Trigger board (SHAFT) to LASCAR. The amplitudes of the signals produced by the photodiodes and the PMTs of the Laser II system are sent back to the ATLAS DAQ by LASCAR. At the start of the run, the filter wheel position and the laser intensity are configured and the shutter is opened.
* **Laser-in-gap mode:** Laser pulses are emitted in empty bunch-crossings during standard physics runs of the LHC. The TileCal is synchronised with the other ATLAS sub-detectors.
The light is fired only in exclusive periods of the beam orbit, where no collisions can occur. The SHAFT board sends a request to LASCAR at a fixed time with respect to the beginning of the LHC orbit to synchronise the laser pulses with the pre-defined orbits and ensure no overlap between laser and physics events. Upon pulse emission, a laser calibration request is sent to the ATLAS central trigger processor by the SHAFT interface. This arrangement synchronises the pulse emission and the TileCal readout with ATLAS DAQ in physics runs.
### Stability of the laser system
The installation of Laser II involved a commissioning phase where the performance and the stability of the system were evaluated in the course of the first three months of Run 2. The main parameters to monitor are the ones obtained with the operation of the laser internal calibration mode. The measurements included the pedestal of the photodiodes, the response of the electronics to a known injected charge, the signal of the monitoring photodiodes in response to the PHOCAL LED pulses, and the response of the PHOCAL photodiode to the \({}^{241}\)Am \(\alpha\)-source. The results are shown in Figure 7.
Figure 7(a) shows the average pedestal value recorded in the Laser II stand-alone acquisition mode in high gain for the monitoring photodiodes (D0 to D9) and for the PHOCAL photodiode. The data are normalised relative to the first data point. The pedestals of the D0-D9 photodiodes are stable within 0.8% during the commissioning period, whereas for the PHOCAL diode a maximum fluctuation of 1.8% is observed.
The stability of the readout electronics response is obtained by injecting a constant charge of 171 pC, 256 pC or 342 pC in several runs across the considered time period. For each injected charge, the signal is acquired in low gain and in high gain. Figure 7(b) shows the results obtained for a 256 pC injected charge in low-gain readout. The data, normalised to the first day of data taking, are shown for the electronics channels corresponding to each photodiode, with the pedestals subtracted. All channels exhibit a consistent up-drift, reaching at most 0.8% by the end of the data-taking period.
In Figure 7(c), the outcome of the photodiode monitoring with the PHOCAL LED in stand-alone high-gain runs of Laser II is presented. The values are normalised to the first data point and the pedestals are subtracted. The response of the photodiodes to the calibration light is very stable in time. The maximum fluctuations do not exceed 0.4% and do not exhibit any particular trend with time.
Finally, the PHOCAL response to the \({}^{241}\)Am internal \(\alpha\)-source is displayed in Figure 7(d) for the low- and high-gain signal acquisition modes. The response is normalised to the first data point and the pedestals are subtracted. Figure 7(d) shows a consistent down-drift for the two gain modes, reaching \(-0.8\%\) at the end of the period under analysis. Given the larger pedestal variation observed for the PHOCAL photodiode, seen in Figure 7(a), the pedestals are further corrected for this fluctuation. The correction has a substantial effect on the obtained photodiode response since the signal induced by the radioactive source (around 600 and 2500 ADC counts in low and high gain, respectively) is only three to six times larger than the pedestal values (around 100 and 750 ADC counts in low and high gain, respectively). Figure 7(d) also shows the corrected responses, exhibiting a maximum relative variation of about 0.4%.
The effects of fluctuations of the light monitoring system are taken into account in the calibration of the TileCal PMTs with a run-by-run correction factor. This will be described in Section 4.
## 4 Calibration of the calorimeter with Laser II
### Calibration procedure
As can be seen in Equation 2, the reconstruction of the energy in TileCal depends on several constants, some of them being updated regularly. The main calibration of the TileCal energy scale is obtained using the caesium system [11]. However, since a caesium scan needs a pause in the \(pp\) collisions of at least six hours, this calibration cannot be performed very often. Therefore, regular relative calibrations are accomplished between two caesium scans using the laser system. Moreover, during the LHC technical stop at the beginning of the 2016 data-taking period, a few traces of liquid
Figure 7: Relative (a) pedestal and (b) charge injection signal for each photodiode (D0 to D9 and reference photodiode in PHOCAL) as a function of time in the course of the first three months of Run 2. The mean signal values are normalised to the mean signal value of the first measurement. (c) Relative response to the PHOCAL LED for each photodiode (D0 to D9) as a function of time. Signal values are normalised to the PHOCAL photodiode signal and to the mean signal value of the first measurement. (d) Relative response of the PHOCAL photodiode to the \({}^{241}\)Am \(\alpha\)-source subtracting a constant pedestal or correcting the pedestal for the observed fluctuation in time. The mean signal values are normalised to the mean signal value of the first measurement.
coming from the caesium hydraulic system were found in the detector cavern. From then until the end of Run 2, caesium scans were restricted to the end-of-year technical stops, due to the risk of the leak. In the absence of the caesium calibration, the laser became the main calibration system, calibrating the PMTs and readout electronics. In order to follow the fast drift of the PMT response caused by the large instantaneous luminosity, the laser calibration constants were updated every 1-2 weeks from July 2016 onwards. These constants were used in the so-called prompt data processing, performed during the data-taking period.
Each year, the data recorded by the ATLAS detector are reprocessed. Data reprocessing consists of updating the physics dataset (proton-proton and heavy-ion collision runs) with updated conditions and calibration constants. Moreover, a reprocessing of the full Run 2 dataset was performed during LHC Long Shutdown 2 at the end of Run 2. This step is necessary to apply new reconstruction and calibration algorithms as well as corrections that could not be applied, or were missed, during prompt data processing. The IOVs are readjusted and chosen to coincide with the data-taking periods. For the laser calibration, they occur every 1-2 weeks in order to smoothly follow the evolution of the PMT response during the data-taking period.
The method to compute the laser constant \(f_{\text{Las}}\) introduced in Equation 2 is based on the analysis of specific laser calibration runs, taken daily during the data taking period, for which both the laser system photodiodes and the TileCal PMTs are read out. The laser calibration employs two types of successive laser runs:
* Low Gain run (labeled as LG) consists of \(\sim\)10,000 pulses with a constant amplitude and the filter attenuation factor equal to 3,
* High Gain run (labeled as HG) consists of \(\sim\)20,000 pulses with a constant amplitude and the filter attenuation factor equal to 330.
The laser system is employed to perform the PMT response calibration relative to the previous global calibration of the TileCal detector with the caesium scan. Thus, to determine the laser calibration constants, a laser run taken close to the caesium scan is used to set the reference signals for each PMT. By definition, if the response of a channel to a given laser intensity is stable (the response of the PMT and of the associated readout electronics are stable), the laser constant \(f_{\text{Las}}\) is 1. The references were set close to the start of each year's \(pp\) collision runs. The laser references and laser constants are stored in the conditions database.
The laser calibration procedure evolved during Run 2. Due to increasing instantaneous luminosity and response variation observed in all PMTs, the methods to derive laser constants were adapted. The applied methods are described in detail in Section 4.2.
### Determination of the calibration constants
A laser run consists of a set of laser pulses with the corresponding signal readout from the individual PMTs, from which the pedestal is subtracted. For each pulse, the normalised response of a PMT channel, the ratio \(R_{i,p}\), is defined as:
\[R_{i,p}=\frac{A_{i,p}}{A_{\text{D6},p}} \tag{4.1}\]
where \(p\) denotes the pulse, \(A_{i,p}\) is the reconstructed signal amplitude of the PMT readout channel \(i\) and \(A_{\text{D6},p}\) is the signal amplitude measured by photodiode 6 (D6) in the laser box. D6 measures the laser light after the beam expander, probing the beam closest to the TileCal PMTs, and has the best dynamic range among the available photodiodes D6-D9. The average of the ratio \(R_{i,p}\) over all pulses of the laser run, denoted as \(R_{i}\equiv\langle R_{i,p}\rangle\), is analysed for each PMT.
The laser calibration factors employed to reconstruct the cell energy, in Equation 2, are simply the relative response of the channel:
\[f_{\text{Las}}^{i}=\frac{R_{i}}{R_{i}^{\text{ref}}} \tag{4.2}\]
where \(R_{i}^{\text{ref}}\) is the normalised response of the PMT channel \(i\) during the laser reference run. For monitoring purposes, these factors are usually presented as a relative response variation, in percent:
\[(f_{\text{Las}}^{i}-1)\times 100\ [\%] \tag{4.3}\]
The measurement of \(f_{\text{Las}}^{i}\) may be influenced by instabilities originating in the laser system itself, either at a global level, i.e. affecting all the detector PMTs equally, or at the fibre level, i.e. affecting the set of PMTs associated with each clear fibre. To take these effects into account, global and fibre corrections are determined (a numerical sketch combining Equations 4.1-4.4 follows the list below), such that the corrected laser constant reads:
\[f_{\text{Las}}^{i}\to f_{\text{Las}}^{i}\times\frac{1}{\alpha_{\text{G}}\times\alpha_{\text{f}(i)}} \tag{4.4}\]
* The global correction \(\alpha_{\text{G}}\) is associated with a coherent drift of all channels. The effect can be related to an instability of the reference diode, to a variation of the light received by the TileCal PMTs, or to a common ageing of the long fibres.
* The fibre correction \(\alpha_{\text{f}(i)}\), computed per fibre f(\(i\)), is associated with a time variation of the light transmission from fibre to fibre.
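As anticipated above, the chain from per-pulse amplitudes to a corrected laser constant can be sketched as follows; the toy inputs, the reference response and the unit corrections are placeholders.

```python
import numpy as np

def laser_constant(amp_pmt, amp_d6, r_ref, alpha_g=1.0, alpha_f=1.0):
    """Sketch of Equations 4.1-4.4 for one PMT channel.

    amp_pmt, amp_d6 : per-pulse amplitudes of the channel and of the
                      monitoring photodiode D6 in the same laser run
    r_ref           : normalised response R_i^ref in the reference run
    alpha_g, alpha_f: global and fibre optics corrections
    """
    r_i = np.mean(np.asarray(amp_pmt) / np.asarray(amp_d6))  # Eq. 4.1, averaged
    f_las = (r_i / r_ref) / (alpha_g * alpha_f)              # Eqs. 4.2 and 4.4
    return f_las, (f_las - 1.0) * 100.0                      # Eq. 4.3, in %

rng = np.random.default_rng(0)
d6 = 500.0 * (1.0 + 0.01 * rng.standard_normal(10000))
pmt = 0.97 * d6 * (1.0 + 0.005 * rng.standard_normal(10000))  # 3% down-drift
print(laser_constant(pmt, d6, r_ref=1.0))  # ~ (0.97, -3.0)
```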
During Run 2, two methods were used to evaluate these optics corrections: the so-called Direct and Combined methods. In the Direct method, the global and fibre corrections are simply determined from the average response variations of a set of stable reference PMTs, reading the outermost and least irradiated cells in the D layer, and of the sub-set of D-layer PMTs associated with the fibre, respectively. This method was used to calibrate and monitor the detector response during the 2015-2017 data taking, but proved inadequate for calibration when the response of the reference PMTs started to fluctuate, due to larger integrated currents, in the middle of the 2017 run. The Combined method was then developed and employed in the 2018 TileCal calibration, as well as for the reprocessing of the 2017 data. Instead of relying on the stability of a set of reference PMTs, the Combined method explicitly evaluates the gain of each PMT to determine the optics corrections.
#### Direct method
In the Direct method, the global correction is evaluated from the relative response of all PMTs reading cells in the D-layer:
\[\alpha_{\text{G}}=\left\langle\frac{R_{i}}{R_{i}^{\text{ref}}}\right\rangle^{\text{D-cells}} \tag{4.5}\]
The fibre corrections are evaluated using information from PMTs of the D layer for the fibres associated with the LB, and from PMTs reading the D, B13, B14 and B15 cells for the EB (these cells are less exposed to particle fluence, so their readout PMTs experience smaller integrated currents and a more stable response), corrected for global effects. This quantity is evaluated for each long clear fibre f(\(i\)) as

\[\alpha_{\text{f}(i)}=\frac{1}{\alpha_{\text{G}}}\left\langle\frac{R_{i}}{R_{i}^{\text{ref}}}\right\rangle_{\text{f}(i)}^{\text{D,B-cells}} \tag{4.6}\]
In Equations 4.5 and 4.6, \(\left\langle\ \right\rangle\) represents a weighted geometric average, where the weight is proportional to the number of laser pulses in the run and to the average RMS of the PMT signals.
Saturated channels, channels with a bad status in the TileCal conditions database, and channels for which the absolute difference between the applied and requested HV is above 10 V (\(\Delta\text{HV}>\)10 V) are excluded from the computation of the optics corrections. Moreover, an iterative procedure rejects outlier channels, deviating by more than \(3\sigma\) from the average of the \(R_{i}/R_{i}^{\text{ref}}\) distribution.
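The averaging with iterative outlier rejection can be sketched as below; the weighting is a generic stand-in for the pulse-count and RMS weights described above.

```python
import numpy as np

def robust_geometric_mean(ratios, weights, n_sigma=3.0, max_iter=10):
    """Weighted geometric mean of R_i/R_i^ref values with iterative
    3-sigma outlier rejection, sketching the Direct-method averages."""
    logs = np.log(np.asarray(ratios, dtype=float))
    weights = np.asarray(weights, dtype=float)
    keep = np.ones(logs.size, dtype=bool)
    mean = np.average(logs, weights=weights)
    for _ in range(max_iter):
        mean = np.average(logs[keep], weights=weights[keep])
        sigma = np.sqrt(np.average((logs[keep] - mean) ** 2,
                                   weights=weights[keep]))
        new_keep = np.abs(logs - mean) <= n_sigma * sigma
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return float(np.exp(mean))

rng = np.random.default_rng(1)
r = np.exp(0.002 * rng.standard_normal(200))
r[:3] = [1.2, 0.8, 1.3]  # outlier channels, rejected by the iteration
print(robust_geometric_mean(r, np.ones_like(r)))  # ~ 1.0
```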
#### Combined method
In the Combined method, the actual PMT gain is measured based on the statistical nature of photoelectron production and multiplication inside the PMT. It assumes that the noise is negligible with respect to the laser-induced PMT signals and that the laser light is coherent. Under these conditions, the two main contributions to the PMT signal fluctuations in laser runs are the Poissonian fluctuations in photoelectron emission and multiplication, and the variation of the intensity of the light source [19]. The PMT gain \(G\) can be written as:
\[G=\frac{1}{f\cdot e}\cdot\left(\frac{\text{Var}[q]}{\left\langle q\right\rangle}-k\cdot\left\langle q\right\rangle\right) \tag{4.7}\]
where \(e=1.6\times 10^{-19}\) C is the electron charge and \(f\) stands for the excess noise factor, extracted from the known gains of the individual PMT dynodes [20]; for the eight-dynode TileCal PMTs, \(f=1.3\) at the nominal gain of \(G=10^{5}\). \(\left\langle q\right\rangle\) is the average PMT anode charge associated with each laser pulse, and \(\text{Var}[q]\) is the variance of the anode charge distribution. The coherence factor \(k=\text{Var}[I]/\left\langle I\right\rangle^{2}\), where \(I\) is the light intensity, depends on the characteristics of the light source itself but not on the light intensity scale. The factor \(k\) ranges from 0, for an ideal fully coherent light source, to 1, for a totally incoherent light source, and is determined with a set of PMTs measuring the same light source. For any PMT pair \(i\) and \(j\), \(k\) is given by the average measured charges \(q_{i}\) and \(q_{j}\), respectively, and the covariance \(\text{Cov}[q_{i},q_{j}]\) of the charge measurements, as:
\[k=\frac{\text{Cov}[q_{i},q_{j}]}{\left\langle q_{i}\right\rangle\left\langle q_{j}\right\rangle} \tag{4.8}\]
In order to decrease the dependence of the gain measurement on the determination of the \(k\) factor, to which the sensitivity is more limited, the PMT gain is analysed in high-gain laser calibration runs taken with the filter wheel in position 8, with 0.316% transmission (optical density of 2.5). For these runs the light intensity is lower, leading also to a lower average PMT anode charge \(\langle q\rangle\); thus the \(k\) term in Equation 4.7 has a smaller effect on the gain measurement.
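A toy numerical check of Equations 4.7 and 4.8 is sketched below. The simulated light source has a 2% coherent intensity jitter; since the toy omits the dynode multiplication noise, \(f=1\) is passed to recover the true gain, whereas real TileCal data use \(f=1.3\).

```python
import numpy as np

E_CHARGE = 1.6e-19  # electron charge, C

def coherence_factor(q_i, q_j):
    """Equation 4.8: k from two PMTs viewing the same light source."""
    return np.cov(q_i, q_j)[0, 1] / (np.mean(q_i) * np.mean(q_j))

def pmt_gain(q, k, f=1.3):
    """Equation 4.7: statistical gain estimate from the per-pulse anode
    charge q; f is the excess noise factor."""
    return (np.var(q) / np.mean(q) - k * np.mean(q)) / (f * E_CHARGE)

rng = np.random.default_rng(2)
gain_true, n_pe = 1.0e5, 100.0
intensity = 1.0 + 0.02 * rng.standard_normal(20000)  # coherent jitter
q_i = rng.poisson(n_pe * intensity) * gain_true * E_CHARGE
q_j = rng.poisson(n_pe * intensity) * gain_true * E_CHARGE
k = coherence_factor(q_i, q_j)
print(f"k = {k:.2e}, gain = {pmt_gain(q_i, k, f=1.0):.3e}")  # gain ~ 1e5
```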
Moreover, since the PMT gain determination exhibits significant fluctuations, the average over a set of runs within \(\pm 10\) days around the laser reference run is taken to set the PMT reference gain, \(G_{i}^{\text{ref}}\). The PMT gain \(G_{i}\) is used as an independent measure of the PMT signal and as the basis to evaluate the optics corrections in the Combined method. The global correction is determined from the average ratio between the PMT relative response and the PMT relative gain, using PMTs reading the D-layer and the BC1, BC2, B13, B14 and B15 cells:
\[\alpha_{\text{G}}=\left\langle\frac{R_{i}/R_{i}^{\text{ref}}}{G_{i}/G_{i}^{\text{ref}}}\right\rangle^{\text{D,B-cells}} \tag{4.9}\]
The fibre corrections are determined in approximately the same way as the global correction, except that the average is taken over all the channels connected to a common long fibre f(\(i\)), with the global correction taken into account to avoid double correcting:
\[\alpha_{\text{f}(i)}=\frac{1}{\alpha_{\text{G}}}\left\langle\frac{R_{i}/R_{i}^{\text{ref}}}{G_{i}/G_{i}^{\text{ref}}}\right\rangle_{\text{f}(i)} \tag{4.10}\]
As for the Direct method, PMTs having a bad status, with \(\Delta\)HV \(>\)10 V, or with saturated channels are discarded from the analysis.
### Evolution of the optics corrections
Figure 8: Evolution of the (a) global correction and (b) LB10C fibre correction associated with even/odd numbered PMTs in LBA10/LBC10 over time in 2018. The corrections are determined using laser high gain runs with the Combined method and are calculated as a weighted geometric mean. The corresponding errors are included in the data points.
Figure 8 shows the time evolution in 2018 of the global correction and of the LB10C fibre correction (associated with the even/odd-numbered PMTs in LBA10/LBC10), both shown in percentage and determined with the Combined method using laser runs taken in high gain. The global correction in 2018 is stable in time within 1%, and its magnitude is of the same order. During Run 2, the magnitude of this correction did not exceed 2.5%. The fibre correction shown is generally representative of the 384 clear fibres. For all years, the magnitude of the fibre corrections did not exceed 1% and was also found to be constant in time.
The global correction dominates the scale of the PMT calibration. Its precision should match the global scale uncertainty on the PMT calibration assessed from the laser and caesium comparisons presented in Section 4.4, and thus be better than 0.4%. The accuracy of the global correction was further assessed by using two symmetric sets of PMTs, one composed of PMTs reading the TileCal A side and another of PMTs installed on the C side, to derive independent corrections. The corrections obtained for the A and C sides agreed well below the percent level for all years in Run 2, attesting to the robustness of the Combined method at disentangling the effects of fluctuations in the monitored light intensity common to all PMTs.
### Comparison with caesium calibration
The response variation of the PMTs measured with the laser system should match the full detector response variation obtained with the caesium system within short periods of time, where fluctuations from the scintillators and WLS fibres can be safely neglected. Thus, the comparison between the laser and caesium measurements constitutes a validation procedure for the laser algorithm itself, here employed to validate the Combined method.
During 2015 and 2016, three periods of low integrated luminosity were available within consecutive caesium scans. Figure 9 shows the response variation between July 17 and November 3, 2015, obtained with the caesium system as a function of the response variation obtained with the laser. The results are displayed at the channel level, separating the channels by layer (A, B/BC and D). The great majority of channels have the same response variation for the laser and for the caesium.
The corresponding distribution of the ratio between the caesium constants (\(f_{\mathrm{Cs}}\)) and the laser calibration constants (\(f_{\mathrm{Las}}\)), calculated to address the response variation of the PMTs during the same period of time, is shown in Figure 10, separated by layer and Long/Extended barrel. Each distribution is fitted with a Gaussian function to measure its average and standard deviation. The differences observed between the caesium and the laser systems are more evident in the extended barrel and in the A layer. These regions of the calorimeter are less shielded, and thus the effects of radiation damage to the scintillators and WLS fibres are faster. The average difference is well below 0.1% and the standard deviation is 0.6%.
For the three periods analysed, the maximum average difference observed was 0.4%. This value is taken as the uncertainty on the scale of the PMT calibration with the laser.
### Uncertainties on the PMT calibration
Besides the systematic uncertainty on the PMT calibration scale, the uncertainty on the relative PMT inter-calibration, mostly originating from the fibre correction procedure and from the channel-level readout, is evaluated. To do so, an indirect comparison between the responses to the caesium source and to the laser,
Figure 10: Ratio between the caesium calibration constants (\(f_{\rm{Cs}}\)) and the laser calibration constants calculated with Combined method (\(f_{\rm{Las}}\)) for channels in Layer A, B/BC and D in the (a) Long Barrel and (b) Extended Barrel. Special channels not calibrated by the caesium system, such as the E-cells, are not included.
Figure 9: Response variation (in %) measured by caesium (y-axis) and by laser employing the Combined method (x-axis) between July 17 and November 3, 2015 for (a) all TileCal channels and (b) the channels in the A-, B/BC- and D-layers. Special channels not calibrated by the caesium system, such as the E-cells, are not included.
measured with the left and right PMTs reading the same cell, is performed by evaluating the following observable:
\[\Delta f_{\text{Cs/Las}}^{\text{L-R}}=\left(\frac{f_{\text{Cs}}^{\text{L}}}{f_{\text{Las}}^{\text{L}}}-\frac{f_{\text{Cs}}^{\text{R}}}{f_{\text{Las}}^{\text{R}}}\right) \tag{4.11}\]
where \(f_{\text{Las}}^{\text{L(R)}}\) and \(f_{\text{Cs}}^{\text{L(R)}}\) are the calibration constants corresponding to the cell relative response to the laser and to the caesium source measured by the left (right) channel. With this quantity, the scintillator effects common to both left/right readouts cancel out. Assuming that the WLS fibre response from the left and right sides of the cell behaves similarly, the width of the distribution of \(\Delta f_{\text{Cs/Las}}^{\text{L-R}}\) is driven by the uncertainties of the laser and caesium measurements. The inter-calibration systematic uncertainty on the laser calibration was then determined by disentangling the contribution from the caesium uncertainty, constrained with measurements of \(f_{\text{Cs}}^{\text{L}}-f_{\text{Cs}}^{\text{R}}\) and \(f_{\text{Las}}^{\text{L}}-f_{\text{Las}}^{\text{R}}\). The results obtained with 2018 data are shown in Figure 11. A dependence of the systematic uncertainty on the integrated luminosity, more pronounced for the extended barrel, is observed. The effect is due to a correlation between the integrated PMT charge and the response down-drift, with a consequent increase in the spread of the response for a given PMT sample.
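For illustration, a toy computation of the observable of Equation 4.11 is sketched below; the Gaussian spreads assigned to the constants are invented, and in the real analysis the caesium contribution is constrained with the \(f_{\text{Cs}}^{\text{L}}-f_{\text{Cs}}^{\text{R}}\) and \(f_{\text{Las}}^{\text{L}}-f_{\text{Las}}^{\text{R}}\) measurements.

```python
import numpy as np

def delta_f_lr(f_cs_l, f_las_l, f_cs_r, f_las_r):
    """Equation 4.11: left-right double-ratio difference, cancelling the
    scintillator effects common to both sides of a cell."""
    return f_cs_l / f_las_l - f_cs_r / f_las_r

rng = np.random.default_rng(5)
n_cells = 1000
f_cs = 1.0 + 0.004 * rng.standard_normal((2, n_cells))   # toy spreads
f_las = 1.0 + 0.005 * rng.standard_normal((2, n_cells))
d = delta_f_lr(f_cs[0], f_las[0], f_cs[1], f_las[1])
print(f"width of Delta f = {np.std(d):.4f}")  # bounds the combined spread
```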
The total uncertainty on the laser calibration of a PMT, corresponding to the quadratic sum of the 0.4% scale systematic and the luminosity-dependent inter-calibration uncertainty, yields, in the Long Barrel (\(\sigma_{\text{Las,tot}}^{\text{LB}}\)) and in the Extended Barrel (\(\sigma_{\text{Las,tot}}^{\text{EB}}\)):
\[\begin{split}\sigma_{\text{Las,tot}}^{\text{LB}}[\%]&=0.4\oplus(0.3+0.0016\times L)\ [\%]\\ \sigma_{\text{Las,tot}}^{\text{EB}}[\%]&=0.4\oplus(0.3+0.0032\times L)\ [\%]\end{split} \tag{4.12}\]
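Assuming \(L\) is the integrated luminosity in fb\({}^{-1}\) since the reference run (consistent with the luminosities quoted in Figure 11), the parametrisation can be evaluated as in this small sketch:

```python
import math

def laser_uncertainty_pct(lumi_fb, barrel="LB"):
    """Equation 4.12: total laser PMT-calibration uncertainty (%) as a
    function of the integrated luminosity since the reference run."""
    slope = 0.0016 if barrel == "LB" else 0.0032
    return math.hypot(0.4, 0.3 + slope * lumi_fb)  # quadratic sum

for lumi in (0.0, 30.0, 63.3):
    print(lumi, laser_uncertainty_pct(lumi, "LB"),
          laser_uncertainty_pct(lumi, "EB"))
```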
Figure 11: Uncertainties on the PMT inter-calibration with the Laser II system using the Combined method as a function of the integrated luminosity for the Long Barrel and for the Extended Barrel. The results are obtained using laser and caesium calibration data collected in 2018. The two points at 63.3 fb\({}^{-1}\) result from two successive caesium scans without LHC beam. The uncertainty is parametrised as a function of the luminosity by fitting the data points with a linear function. A global scale systematic resulting from direct comparison between laser and caesium data was found to be 0.4%. This value should be summed in quadrature to obtain the total laser uncertainty. In 2018, three caesium scans were performed in LB (red points) and four in EB (blue points).
### Overview of the PMT response
The laser system is used to measure the evolution of the PMT response as a function of time. The Combined method, discussed in Section 4.2, is utilised to calculate the response variation with respect to a set of reference runs. In particular, Equation 4.3 is used to obtain the response variation for each PMT. Channels marked with a bad data-quality status, with unstable high voltage, or flagged as problematic by any calibration system are discarded. For each cell type, the average response is obtained by a Gaussian fit to the distribution of the PMT response variations, using the \(\chi^{2}\) fit method. The Gaussian approximation is used in order to obtain an average variation that is not affected by outliers.
The mean response variation in the PMTs for each cell type, averaged over \(\phi\), measured with the laser system during the entire \(pp\)-collision data-taking period in 2018, is shown in Figure 12. The most affected cells are those located at the inner radius and in the gap and crack regions, with down-drifts of up to 4.5% and 6%, respectively. Those cells are the most irradiated, and their readout PMTs experience the largest anode currents.
Figure 13 shows the average response variation of the channels per layer and along the azimuthal angle \(\phi\) for the same period in 2018. Each \(\phi\) bin corresponds to one LB/EB module, averaged over the A and C sides. Channels with a low signal amplitude, a bad data-quality status or unstable high voltage are discarded in the average response calculation. It can be seen that PMTs reading the
Figure 12: The mean response variation in the PMTs for each cell type, averaged over \(\phi\), observed during the entire \(pp\) collisions data-taking period in 2018 (between laser calibration runs taken on 18 April 2018 and 22 October 2018) calculated using the Combined method. For each cell type, the response variation is defined as the mean of a Gaussian fit to the response variations in the channels associated with given cell type. A total of 64 modules in \(\phi\) were used for each cell type, with the exclusion of known pathological channels.
cells in the layers closest to the beam axis, composed of the A cells, are the most affected. The next layers, formed of the BC and D cells, are significantly less affected. Larger uniformity across the modules in \(\phi\) is observed in layers with a larger number of channels (e.g. 40 channels in the LB A layer, see Figure 12), where the effect of discarding one bad-quality channel has less impact. On the other hand, layers for which the spread in the response of the channels is larger (e.g. EB layer A compared with LB layer A, see Figure 12) are more affected in their \(\phi\) uniformity by the removal of bad channels.
Figure 14(a) shows the time evolution of the mean response variation in the PMTs for each layer observed during the entire Run 2. The PMT response variation strongly depends on the luminosity delivered by the LHC; therefore, the delivered luminosity is also shown for comparison. The observed PMT response variation is the result of three competing factors: i) the constant up-drift observed when the PMTs are at rest; ii) the down-drift during high-instantaneous-luminosity periods, when the PMTs are under stress; iii) the fast partial recovery after stress observed during technical stops. These effects result in a 6% accumulated mean response variation in the PMTs for the cells located at the inner layer at the end of Run 2. For the B/BC and D layers, the average PMT response degradation during \(pp\) collisions was almost totally recovered during technical stops, resulting in a \(-1.5\%\) accumulated PMT response variation at the end of Run 2 for the B/BC layer, and even in a +0.5% balance for the D layer.
Figure 14(b) shows the Gaussian width distribution as a function of time observed for each layer during the entire Run 2. The Gaussian width for all layers increases with time during the high-instantaneous-luminosity periods, when the PMTs are under stress. This is caused by the differing behaviour over time of PMTs located at different \(|\eta|\) positions. During technical stops, when the PMTs are at rest, some inversion of this effect is observed, resulting from the recovery of the most affected
Figure 13: The mean response variation in the PMTs for each cell type, averaged over \(\eta\), observed during the entire \(pp\) collisions data-taking period in 2018 (between laser calibration runs taken on 18 April 2018 and 22 October 2018) in LB (a) and EB (b), calculated using the Combined method. For each cell type, the response variation is defined as the mean of a Gaussian fit to the response variations in the channels associated with given cell type. Known pathological channels were excluded.
PMTs towards the average response in a given layer or cell type.
## 5 Monitoring with Laser during physics runs
### Time monitoring
The TileCal does not only provide a measurement of the energy deposited in the calorimeter, but also measures the time when particles and jets hit the calorimeter cells. This information is used, in particular, in the removal of signals that do not originate from \(pp\) collisions, as well as in time-of-flight measurements of hypothetical heavy, slow particles that would reach the calorimeter.
The time calibration is also important for the energy reconstruction itself. As explained in Section 2.2, physics collision events are reconstructed with the OF algorithm (see Eq. (1)), whose weights depend on the expected phase. If the real signal phase differs significantly from the expected one, the reconstructed amplitude is underestimated. Consequently, the time synchronisation of all calorimeter channels is an important issue. While the final time calibration is performed with \(pp\) collision data, laser data are extensively used to check its stability and to spot potential problems.
Laser calibration events are fired during empty bunch-crossings of physics runs with a frequency of about 1 Hz. These events, also referred to as laser-in-gap events, were originally proposed for the
Figure 14: The mean response variation in the PMTs (a) and Gaussian width (b) for each layer, as a function of time, observed during the entire Run 2 (between stand-alone laser calibration runs taken on 17 July 2015 and 22 October 2018). For each layer, the response variation is defined as the mean of a Gaussian fit to the variations in the channels associated with given layer. Known pathological channels are excluded. The laser calibration runs were not taken during the ATLAS end-of-year technical stops. Moreover, the laser system was not operational due to technical problems in the period September 10–27, 2016. Thus, no laser data can be seen in the plots for these time intervals. The LHC delivered luminosity is shown for comparison in grey. The vertical dashed lines show the start of \(pp\) collisions in respective years.
PMT response monitoring. However, they are also extensively used to monitor the stability of the time calibration.
The monitoring tool creates a 2D histogram for each channel and fills it with the reconstructed time (\(t_{\mathrm{chan}}^{\mathrm{laser}}\)) and the luminosity block of each event. These histograms are stored and automatically examined for anomalies, which include: the average \(t_{\mathrm{chan}}^{\mathrm{laser}}\) being off zero in at least a few consecutive luminosity blocks; an unstable \(t_{\mathrm{chan}}^{\mathrm{laser}}\); or a fraction of events off zero by more than 20 ns.
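These anomaly checks can be sketched as follows; the tool's actual implementation and thresholds are not given in this text, so the tolerance values below are illustrative assumptions.

```python
import numpy as np

def flag_timing_anomalies(times_by_lb, mean_tol_ns=2.0, n_consecutive=3,
                          offset_ns=20.0, frac_tol=0.01):
    """Flag a channel from its reconstructed laser times per luminosity
    block: a sustained non-zero mean (timing jump), or a sizeable
    fraction of events off zero by more than offset_ns."""
    means = np.array([np.mean(t) for t in times_by_lb])
    off = np.abs(means) > mean_tol_ns
    jump = any(off[i:i + n_consecutive].all()
               for i in range(len(off) - n_consecutive + 1))
    all_times = np.concatenate(times_by_lb)
    frac_off = np.mean(np.abs(all_times) > offset_ns)
    return {"timing_jump": jump, "events_off_zero": frac_off > frac_tol}

rng = np.random.default_rng(3)
blocks = [0.3 * rng.standard_normal(100) for _ in range(50)]
for b in blocks[30:]:
    b += 15.0  # emulate a +15 ns timing jump from block 30 onwards
print(flag_timing_anomalies(blocks))  # -> timing_jump: True
```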
The first of these anomalies typically indicates a sudden change of the timing settings of the corresponding digitiser, a so-called timing jump. These timing jumps are corrected by adjusting the associated time constant in the affected period, as shown in Figure 15. While timing jumps were very frequent during Run 1 [4] and a lot of effort was invested in their correction, they appeared very rarely during Run 2 thanks to the improved stability of the electronics. This allowed the focus to shift to other problems observed with the monitoring tool.
A few channels suffer from \(t_{\mathrm{chan}}^{\mathrm{laser}}\) being sometimes off by 1 or 2 bunch-crossings, i.e. \(\pm 25\) or \(\pm 50\) ns. This feature affects all three channels managed by the same Data Management Unit (TileDMU) [7]. The problem is intermittent, with a rate at the percent level; nevertheless, the observed bunch-crossing offset and the affected events are fully correlated across the three channels. An example is shown in Figure 16(a). Such events also occur in physics collision data at a rate very similar to that in the laser data. Studies have shown that a difference of 25 ns between the actual and assumed time phases degrades the reconstructed energy by 35%. For this reason, a dedicated software tool was developed to detect cases affected by the bunch-crossing offset in physics data on the fly and prevent them from propagating to the subsequent object reconstruction. Figure 16(b) compares the reconstructed time in affected channels before and after this tool is applied. The affected events close to +25 ns are clearly
Figure 15: An example of the timing jump of +15 ns in EBC09 channels 30–35 before (a) and after (b) the time constant correction as identified with the laser-based monitoring tool. The dashed line indicates the expected mean time value.
reduced.
### Dependence of the PMT response on the anode current
The assumption of a linear relationship between the PMT signal amplitude and the cell energy deposit requires the PMT response to be independent of the current. For most cells, the range of currents is small enough that any non-linearity is negligible. In contrast, highly exposed cells, such as the E cells, experience a large current range between low- and high-luminosity runs, and between a caesium calibration run and a physics run. Therefore, those cells provide the necessary data to investigate such effects and are used to study the PMT response as a function of the anode current. Particular attention is paid to the difference in response between the E1 and E2 cells and the E3 and E4 cells. The latter are the most exposed TileCal cells, where the larger particle fluence results in larger PMT currents. PMTs with active HV dividers [21] are installed in the readout of these cells to mitigate the current dependence, in principle affording greater stability. How well they do so must be understood when using these cells across a wide current range.
The measurement of the anode current comes from the TileCal readout of minimum-bias events that are collected during each run. These minimum-bias data are read out via slow current integrators, originally installed for the readout of the low signals from the radioactive caesium source used in the calorimeter calibration. The integrators average the current in each cell over
Figure 16: The reconstructed time of laser events as a function of luminosity blocks (a): three channels belonging the the same TileDMU are superimposed. The majority of events, centred around zero, are well timed-in. The events with the bunch-crossing offset are centred at +25 ns and these events are fully correlated across the three channels. The reconstructed time in physics events in the same three channels before (original) and after (corrected) the algorithm mitigating the bunch-crossing offset events applied (b): the algorithm significantly reduces events centred around +25 ns.
a long time window of 10-20 ms to suppress fluctuations in the event-to-event energy deposition, diverting only a small fraction of each PMT's output from the primary signal. Large depositions from hard scattering are also suppressed on these long time scales.
Laser-in-gap data and the current measurements of minimum-bias events were analysed for three runs taken in 2018. This particular set of runs was selected to explore a wide current range while minimising the overlap of currents. The pedestal, present primarily due to electronics noise (coming mostly from the front-end electronics used to shape the signal for the ADC) as well as to beam-induced and other non-collision backgrounds, is first subtracted from the PMT signal amplitudes caused by the laser pulses. The signal amplitude is not normalised to the reference diode, as is done to determine the PMT calibration described in Section 4.2, since the small instability associated with this monitoring device is often larger than the effects being studied, and so are the uncertainties associated with its correction. Instead, a cleaner approach to minimise the impact of laser intensity fluctuations is adopted: the measurements of the channels from the E cells of interest are normalised to a reference TileCal PMT with a negligible current range on the same module. In this study, the left PMT of the D6 cell is used as the reference. The E-cell normalised response \(R^{\rm E/D6_{L}}\) is defined per module as the ratio between the signal amplitudes of the E-cell PMT (\(A^{\rm E}\)) and of the D6 left PMT (\(A^{\rm D6_{L}}\)):
\[R^{\rm E/D6_{L}}=\frac{A^{\rm E}}{A^{\rm D6_{L}}} \tag{5.1}\]
The minimum-bias current decays throughout a physics run as the proton beams decay, so the normalised E cell response changes as well if there is a dependence on the current. To determine this dependence, the actual cell response at any given current is compared to the nominal response at zero current. Therefore, the measurement in any given luminosity block is normalised to the mean measurement in the zero current period, i.e. before collisions begin:
\[\frac{R^{\rm E/D6_{L}}}{R^{\rm E/D6_{L}}_{\rm current=0}} \tag{5.2}\]
The baseline used for normalisation is the average E-cell response ratio before the stable-beam declaration, i.e. before the first luminosity block with non-zero luminosity in the collision run. For each luminosity block of the chosen run, the average and RMS of the E/D6 cell response ratio to laser-in-gap pulses are calculated and normalised to the equivalent quantity at zero current. This is plotted as a function of the average anode current measured with the integrator readout of physics signals during the same luminosity block, as shown in Figure 17 for the combination of the three selected runs.
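The normalisation chain of Equations 5.1 and 5.2 reduces to a few lines; the amplitudes below are toy values.

```python
import numpy as np

def normalised_e_cell_response(a_e, a_d6, zero_current_mask):
    """Per-luminosity-block E-cell response, normalised to the D6 left
    PMT (Eq. 5.1) and then to the average of the zero-current,
    pre-collision luminosity blocks (Eq. 5.2)."""
    ratio = np.asarray(a_e, float) / np.asarray(a_d6, float)
    return ratio / ratio[zero_current_mask].mean()

amps_e = np.array([10.0, 10.1, 9.9, 9.6, 9.2])      # toy E-cell amplitudes
amps_d6 = np.array([20.0, 20.1, 19.9, 20.0, 20.1])  # toy D6-left amplitudes
mask = np.array([True, True, True, False, False])   # blocks before collisions
print(normalised_e_cell_response(amps_e, amps_d6, mask))
```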
Data are pruned by luminosity block if the laser pedestal is not stable or if the number of measurements in the luminosity block is less than 100. Luminosity blocks with a small number of measurements typically overlap with emittance scans or beam adjustments performed by ATLAS, during which the laser is disabled, so these are discarded. To further smooth the data, the measurements are averaged in steps of 0.05 \(\mu\)A. The minimum current is chosen to avoid using data from fluctuations in the zero-current measurement. The data are fitted with a piecewise pair of linear fits:
* A low current fit from 0.08 \(\mu\)A until the transition current
* A high current fit from the transition current using all data up through 10 \(\mu\)A
The transition current is chosen by calculating the combined \(\chi^{2}/n_{\text{DoF}}\) of the two linear fits for candidate transition currents in steps of 0.05 \(\mu\)A, up to a maximum possible transition current of 2.30 \(\mu\)A. The current yielding the minimum \(\chi^{2}/n_{\text{DoF}}\) is chosen as the endpoint of the first linear fit and the start of the second, with piecewise continuity enforced. The procedure is also illustrated in Figure 17. The transition current between the low- and high-current regimes ranges between 0.8 and 2.1 \(\mu\)A. The PMT response dependence on the current is stronger for the E1 and E2 cells than for the E3 and E4 cells, where the active dividers are installed, especially in the high-current regime. These results provide evidence that the active dividers effectively stabilise the PMT response across a wide range of operating currents.
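A sketch of the transition-current scan is given below, with unit measurement errors and synthetic data; the real analysis uses the measured per-point uncertainties.

```python
import numpy as np

def transition_scan(current, response, t_min=0.08, t_max=2.30, step=0.05):
    """Scan candidate transition currents: fit a low-current line, then a
    high-current line constrained to pass through the low-fit value at
    the transition (continuity), and keep the minimum chi2/n_DoF."""
    current = np.asarray(current, float)
    response = np.asarray(response, float)
    best = None
    for t in np.arange(t_min + step, t_max + step / 2.0, step):
        lo = current <= t
        hi = ~lo
        if lo.sum() < 3 or hi.sum() < 3:
            continue
        p_lo = np.polyfit(current[lo], response[lo], 1)
        y_t = np.polyval(p_lo, t)
        slope_hi = (np.sum((current[hi] - t) * (response[hi] - y_t))
                    / np.sum((current[hi] - t) ** 2))
        resid_lo = response[lo] - np.polyval(p_lo, current[lo])
        resid_hi = response[hi] - (y_t + slope_hi * (current[hi] - t))
        chi2_ndof = (np.sum(resid_lo ** 2) + np.sum(resid_hi ** 2)) \
            / (current.size - 3)
        if best is None or chi2_ndof < best[0]:
            best = (chi2_ndof, t, p_lo, slope_hi)
    return best

rng = np.random.default_rng(4)
cur = np.sort(rng.uniform(0.08, 10.0, 300))
resp = np.where(cur < 1.5, 1.0 - 0.002 * cur,
                1.0 - 0.002 * 1.5 - 0.012 * (cur - 1.5))
resp = resp + 0.001 * rng.standard_normal(cur.size)
_, t_star, _, _ = transition_scan(cur, resp)
print(f"transition current ~ {t_star:.2f} uA")  # ~ 1.5
```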
Such a study can be used to derive a correction that calibrates the PMT response as a function of current, using the fitted normalised response ratio. From Figure 17, a correction of at most 2-3% would be necessary at the highest currents for E1 and E2 cells, respectively. This requires a precise current measurement throughout the range of currents that may be experienced by the different cells. The study demonstrates that such a correction would be most important for cells with passive dividers experiencing higher currents, including A cells.
Figure 17: Normalised E/D6 cell response ratio as a function of current for an example channel from each family. The low current fit is in blue, while the high current fit is in red. “Switch lines” indicates the transition current between the low and high current fits. “Fit = 1” indicates the current at which the normalised E/D6 cell response ratio intercepts 1, with a negative value resulting in a non-physical intercept at 0 current. The fitted ratio can be applied as a function of current to correct for the current-dependence of the PMT response.
## 6 Channel monitoring and PMT linearity
### Automated channel monitoring
Channels with pathological problems need to be promptly identified during data-taking periods. Therefore, an automated daily monitoring based on the laser system is set up to identify and diagnose channel issues by analysing the recent laser calibration runs. After each laser run, several monitoring figures are produced by automated software to track the PMT stability and provide the list of problematic channels together with the possible sources of the issues. All TileCal channels, including the channels masked after data quality checks, are analysed and flagged according to the algorithm described below.
The automated laser monitoring algorithm is based on the analysis of the PMT response variation, measured for each channel using the Direct method (explained in Section 4.2), and on the comparison with other data (applied HV, global behaviour of the group of channels associated with the same cell type).
The time evolution of the channels' response is monitored daily using LG and HG laser runs taken in the 15 preceding days, and three categories of channels are defined:
* **Normal channels:** Channels that have no deviation or a deviation compatible with the mean deviation of similar cells. These channels can be calibrated safely and do not require special attention.
* **Suspicious channels:** Channels with a deviation slightly higher than the mean deviation of similar cells, or a deviation compatible with the one expected from the variation of the HV supply. These channels can be calibrated safely, but some follow-up may be needed.
* **Channels to be checked:** Channels with large deviations (>10%) that cannot be explained by the mean deviation observed in cells of similar type nor by HV changes, or channels showing non-linear behaviour during the 15 preceding days (jumps or fast drifts). These channels should not be calibrated unless the origin of the effect is understood. In most cases, especially in the case of a fast drift, the channels need to be masked.
While this categorisation assists during channel calibration, the automated monitoring of laser data also identifies channels with pathological behaviour and determines the source of the issues encountered, complementing data quality assessment activities. During the data-taking period, laser runs are chosen at intervals of approximately 10 days. For each chosen date, both HG and LG runs are analysed and channels are classified according to their problems as follows (a simplified sketch of this flagging logic is given after the list):
* **Bad channels:** Channels with large PMT drift (>10%) or exhibiting anomalous behaviour during the 15 preceding days.
* **HV unstable:** Channels with large PMT drift (>10%) but compatible with HV variation.
* **No laser data:** Channels with low laser signal amplitude (e.g. caused by a laser fibre problem).
* **Bad laser data:** Channels with corrupted laser calibration data or having problematic reference.
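A toy version of this flagging logic is sketched below. The 10% threshold comes from the criteria above, but the function signature, the HV-compatibility test, and all variable names are illustrative assumptions rather than the actual monitoring code.

```python
def classify_channel(drift, hv_expected_drift, history_ok, laser_amp, data_ok,
                     threshold=10.0):
    """Return the list of problem flags for one channel (toy sketch)."""
    flags = []
    if laser_amp is None or laser_amp <= 0:
        flags.append("No laser data")        # e.g. laser fibre problem
    elif not data_ok:
        flags.append("Bad laser data")       # corrupted data or bad reference
    elif abs(drift) > threshold:
        # assumed compatibility test: drift explained by the known HV change
        if hv_expected_drift is not None and \
                abs(drift - hv_expected_drift) < threshold:
            flags.append("HV unstable")
        else:
            flags.append("Bad channel")
    elif not history_ok:                     # jumps or fast drifts in 15 days
        flags.append("Bad channel")
    return flags or ["OK"]
```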
A channel is reported as problematic if it manifests any of the issues listed above. Figure 18 shows the fraction of problematic channels observed in 2017 and 2018 as a function of time. The fraction of such channels did not exceed 4% and 3%, respectively.
### PMT linearity monitoring
The TileCal PMT calibration is performed using laser light with a constant intensity. This procedure helps to ensure that the calorimeter measures the same output over time for the same input energy deposition, i.e. that its response is stable. However, to guarantee that the calibration factors are accurate across the entire dynamic range of the PMT response, and that the output signal is directly proportional to the energy deposit, the PMT linearity needs to be assessed.
The linearity of the TileCal PMT channels was monitored during the Run 2 operation with laser calibration data acquired between 2016 and 2019. The dataset corresponded to a combination of standard laser calibration low gain runs using different filter wheel positions, with the laser intensity varied in the range of 12k to 18k DAC counts. The linearity of a given PMT channel is evaluated by comparing the PMT signal to the signal of the reference photodiode D6 of the Laser II system. The response of the channels should increase linearly with the light intensity, in the same way that the PMT channels respond to increases in the energy deposited in the calorimeter.
PMTs lose linearity shortly before the saturation point. In addition, the TileCal ADCs saturate at an upper limit of 1023 counts. The saturation amplitude is given by \(A_{\mathrm{max}}\)\([\mathrm{pC}]=(1023-p)/f_{\mathrm{ADC\to pC}}\), where \(p\) and \(f_{\mathrm{ADC\to pC}}\) are the pedestal and CIS constant values, respectively. Values above the ADC saturation amplitude are not reliable, and any non-linear behaviour above it should not be attributed exclusively to the PMT. Typically, TileCal readout channels start to lose linearity above \(\sim 750\) pC and reach saturation at \(\sim 850\) pC. This behaviour can be observed for the TileCal
Figure 18: The fraction of the problematic channels identified by the laser monitoring algorithm in 2017 (a) and 2018 (b).
channels that receive enough light. To avoid this issue, amplitudes above \(A_{\rm max}-\sigma_{A}\), where \(\sigma_{A}\) is the standard deviation of the amplitudes, are excluded from the analysis.
The PMT signal amplitude versus the reference diode signal is plotted for a set of runs of varied
Figure 19: (a) Signal in EBA01 PMT 5 (channel 4) versus signal in photodiode 6 using the dataset taken on 2016-07-10. The red line shows the obtained fit. The inset shows the magnified distribution of the low laser-intensity region. (b) Schematic representation of the area obtained by intercepting the joint data points and the fit function, in grey. The deviation from linearity is defined as the ratio between the grey area and the fit function integral between the first (\(x_{0}\)) and last (\(x_{1}\)) points.
Figure 20: The percentage of PMTs within 1 % and 2 % deviation from linearity as a function of time is shown for the weekly calibration runs taken between 2016-07-10 and 2019-01-18.
light output taken on the same day. A linear fit is performed iteratively for each data point using all the points with smaller amplitudes. The fit comprising the most points within one standard deviation of the fitted line is chosen as the final one for further analysis. An example can be seen in Figure 19a.
The deviation from linearity, in percent, is defined as the ratio between the area delimited by the joined data points and the linear fit, and the integral of the linear fit, as shown in Figure 19b.
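A simplified numpy illustration of the iterative fit and the area-based deviation measure follows, assuming arrays of diode and PMT amplitudes (the variable names and the exact selection of the best fit are illustrative):

```python
import numpy as np

def deviation_from_linearity(x, y):
    """Iterative linear fit plus area-based non-linearity, in per cent."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best, best_inside = (0.0, 0.0), 0
    for n in range(3, len(x) + 1):              # fit the first n points
        a, b = np.polyfit(x[:n], y[:n], 1)
        resid = y[:n] - (a * x[:n] + b)
        inside = int(np.sum(np.abs(resid) < resid.std()))
        if inside >= best_inside:               # keep fit with the most points
            best, best_inside = (a, b), inside  # within one std. deviation
    a, b = best
    area = np.trapz(np.abs(y - (a * x + b)), x)   # grey area of Figure 19b
    fit_integral = np.trapz(a * x + b, x)         # fit integral, x_0 to x_1
    return 100.0 * area / fit_integral
```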
Figure 20 shows the percentage of PMTs within 1 % and 2 % deviation from linearity as a function of time for the weekly calibration runs taken between 2016 and 2019. The percentage of channels with deviation from linearity less than 1% is \((99.66\pm 0.11)\%\) and less than 2% is \((99.72\pm 0.09)\%\) considering this period.
## 7 Conclusion
The Laser II calibration system of the ATLAS Tile Calorimeter probes individually the 9852 PMTs of the detector, together with the readout electronics of each channel. It is one of the three dedicated systems ensuring the calibration of the full calorimeter response. The Laser II system has undergone a substantial upgrade during the LHC Long Shutdown 1 that improved its stability for the calibration of the calorimeter in Run 2. The readout electronics was renewed and a new light splitter was installed; the system now includes ten photodiodes to monitor the light across its path in the Optics box.
Laser runs were used to regularly determine the PMT response and calculate the calibration constants \(f_{\mathrm{Las}}\), which were updated weekly for the cell energy reconstruction. The PMT response fluctuations are highly correlated with the LHC operations, with the response decreasing with integrated current and recovering during the subsequent technical stops of the collider. The calibrations obtained with the laser and caesium source are consistent within 0.4%, which dominates the systematic uncertainty on the PMT calibration scale. In addition, a sub-dominant uncertainty on the PMT relative inter-calibration was found to have a luminosity dependence. The linearity of the PMTs was studied to ensure that the calibration factors are accurate across the dynamic range of the PMT response.
Laser events were also used to evaluate the timing of the readout electronics, monitor the stability of time calibration and detect pathological behaviours in the calorimeter channels in data quality activities, contributing to the high TileCal performance in Run 2.
The authors would like to acknowledge the entire TileCal community for their contribution with the discussions related to this work, the operations acquiring laser calibration data, the input from the data quality activities, and the careful review of this report.
|
2308.04035 | Cross-Dataset Adaptation for Instrument Classification in Cataract Surgery Videos | Jay N. Paranjape, Shameema Sikder, Vishal M. Patel, S. Swaroop Vedula | 2023-07-31T18:14:18Z | http://arxiv.org/abs/2308.04035v1 |

# Cross-Dataset Adaptation for Instrument Classification in Cataract Surgery Videos
###### Abstract
Surgical tool presence detection is an important part of the intra-operative and post-operative analysis of a surgery. State-of-the-art models, which perform this task well on a particular dataset, however, perform poorly when tested on another dataset. This occurs due to a significant domain shift between the datasets resulting from the use of different tools, sensors, data resolution etc. In this paper, we highlight this domain shift in the commonly performed cataract surgery and propose a novel end-to-end Unsupervised Domain Adaptation (UDA) method called the Barlow Adaptor that addresses the problem of distribution shift without requiring any labels from another domain. In addition, we introduce a novel loss called the Barlow Feature Alignment Loss (BFAL) which aligns features across different domains while reducing redundancy and the need for higher batch sizes, thus improving cross-dataset performance. The use of BFAL is a novel approach to address the challenge of domain shift in cataract surgery data. Extensive experiments are conducted on two cataract surgery datasets and it is shown that the proposed method outperforms the state-of-the-art UDA methods by 6%. The code can be found at [https://github.com/JayParanjape/Barlow-Adaptor](https://github.com/JayParanjape/Barlow-Adaptor)
Keywords:Surgical Tool Classification Unsupervised Domain Adaptation Cataract Surgery Surgical Data Science.
## 1 Introduction
Surgical instrument identification and classification are critical to deliver several priorities in surgical data science [21]. Various deep learning methods have been developed to classify instruments in surgical videos using data routinely generated in institutions [2]. However, differences in image capture systems and protocols lead to nontrivial dataset shifts, causing a significant drop in performance of the deep learning methods when tested on new datasets [13]. Using cataract surgery as an example, Figure 1 illustrates the drop in accuracy of existing methods to classify instruments when trained on one dataset and tested on
another dataset [19, 28]. Cataract surgery is one of the most common procedures [18], and methods to develop generalizable networks will enable clinically useful applications.
Domain adaptation methods aim to mitigate the drop in algorithm performance across domains [13]. Unsupervised Domain Adaptation (UDA) methods are particularly useful when the source dataset is labeled and the target dataset is unlabeled. In this paper, we describe a novel end-to-end UDA method, which we call the Barlow Adaptor, and its application for instrument classification in video images from cataract surgery. We define a novel loss function called the Barlow Feature Alignment Loss (BFAL) that aligns the features learnt by the model between the source and target domains, without requiring any labeled target data. It encourages the model to learn non-redundant features that are domain agnostic and thus tackles the problem of UDA. BFAL can be added as an add-on to existing methods with minimal code changes. The contributions of our paper are threefold:
1. We define a novel loss for feature alignment called BFAL that does not require large batch sizes and encourages learning non-redundant, domain agnostic features.
2. We use BFAL to generate an end-to-end system called the Barlow Adaptor that performs UDA. We evaluate the effectiveness of this method and compare it with existing UDA methods for instrument classification in cataract surgery images.
3. We motivate new research on methods for generalizable deep learning models for surgical instrument classification using cataract surgery as the test-bed. Our work proposes a solution to the problem of lack of generalizability of deep learning models that was identified in previous literature on cataract surgery instrument classification.
Figure 1: Dataset shift between the CATARACTS dataset (CAT) [6] and D99 [7, 9] dataset. Results for models trained on one dataset and tested on another show a significant drop in performance.
## 2 Related Work
**Instrument Identification in Cataract Surgery Video Images.** The motivation for instrument identification is its utility in downstream tasks such as activity localization and skill assessment [3, 22, 8]. The current state-of-the-art instrument identification method called Deep-Phase [28] uses a ResNet architecture to identify instruments and then to identify steps in the procedure. However, a recent study has shown that while these methods work well on one dataset, there is a significant drop in performance when tested on a different dataset [16]. Our analyses reiterate similar findings on drop in performance (Figure 1) and highlight the effect of domain shift between data from different institutions even for the same procedure.
**Unsupervised Domain Adaptation.** UDA is a special case of domain adaptation, where a model has access to annotated training data from a source domain and unannotated data from a target domain [13]. Various methods have been proposed in the literature to perform UDA. One line of research involves aligning the feature distributions between the source and target domains. Maximum Mean Discrepancy (MMD) is commonly used as a distance metric between the source and target distributions [15]. Other UDA methods use a convolutional neural network (CNN) to generate features and then use MMD as an additional loss to align distributions [11, 12, 1, 27, 20, 25]. While MMD is a first-order statistic, Deep CORAL [17] penalizes the difference in the second-order covariance between the source and target distributions. Our method uses feature alignment by enforcing a stricter loss function during training.
Another line of research for UDA involves adversarial training. Domain Adaptive Neural Network (DANN) [5] involves a minimax game, in which one network minimizes the cross entropy loss for classification in the source domain, while the other maximizes the cross entropy loss for domain classification. Few recent methods generate pseudo labels on the target domain and then train the network on them. One such method is Source Hypothesis Transfer (SHOT) [10], which performs source-free domain adaptation by further performing information maximization on the target domain predictions. While CNN-based methods are widely popular for UDA, there are also methods which make use of the recently proposed Vision Transformer (ViT) [4], along with an ensemble of the above described UDA based losses. A recent approach called Cross Domain Transformer (CDTrans) uses cross-domain attention to produce pseudo labels for training that was evaluated in various datasets [24]. Our proposed loss function is effective for both CNN and ViT-based backbones.
## 3 Proposed Method
In the UDA task, we are given \(n_{s}\) observations from the source domain \(\mathcal{D}_{S}\). Each of these observations is in the form of a tuple \((x_{s},y_{s})\), where \(x_{s}\) denotes an image from the source training data and \(y_{s}\) denotes the corresponding label, which is the instrument index present in the image. In addition, we are given
observations from the target domain \(\mathcal{D}_{T}\). Each of these can be represented by \(x_{t}\), which represents the image from the target training data. However, there are no labels present for the target domain during training. The goal of UDA is to predict the labels \(y_{t}\) for the target domain data.
**Barlow Feature Alignment Loss (BFAL).** We introduce a novel loss, which encourages features between the source and target to be similar to each other while reducing the redundancy between the learnt features. BFAL works on pairs of feature projections of the source and target. More specifically, let \(f_{s}\in\mathbb{R}^{B\times D}\) and \(f_{t}\in\mathbb{R}^{B\times D}\) be the features corresponding to the source and target domain, respectively. Here \(B\) represents the batch size and \(D\) represents the feature dimension. Similar to [26], we project these features into a \(P\)-dimensional space using a fully connected layer called the Projector, followed by a batch normalization to whiten the projections. Let the resultant projections be denoted by \(p_{s}\in\mathbb{R}^{B\times P}\) for the source and \(p_{t}\in\mathbb{R}^{B\times P}\) for the target domains. Next, we compute the correlation matrix \(\mathbb{C}_{1}\in\mathbb{R}^{P\times P}\). Each element of \(\mathbb{C}_{1}\) is computed as follows
\[\mathbb{C}_{1}^{ij}=\frac{\sum_{b=1}^{B}p_{s}^{bi}p_{t}^{bj}}{\sqrt{\sum_{b=1}^ {B}(p_{s}^{bi})^{2}}\sqrt{\sum_{b=1}^{B}(p_{t}^{bj})^{2}}}. \tag{1}\]
Finally, the BFAL is computed using the L2 loss between the elements of \(\mathbb{C}_{1}\) and the identity matrix \(\mathbb{I}\) as follows
\[\mathbb{L}_{BFA}=\underbrace{\sum_{i=1}^{P}(1-\mathbb{C}_{1}^{ii})^{2}}_{feature \,alignment}\ +\ \ \underbrace{\mu\sum_{i=1}^{P}\sum_{j\neq i}(\mathbb{C}_{1}^{ij})^{2}}_{ redundancy\,reduction}\, \tag{2}\]
where \(\mu\) is a constant. Intuitively, the first term of the loss function can be thought of as a feature alignment term, since we push the diagonal elements of the correlation matrix towards 1. In other words, we encourage the feature projections between the source and target to be perfectly correlated. On the other hand, by pushing the off-diagonal elements to 0, we decorrelate different components of the projections. Hence, this term can be considered a redundancy reduction term, since we are pushing each feature vector component to be independent of the others.
BFAL is inspired by a recent technique in self-supervised learning called Barlow Twins [26], where the authors show the effectiveness of such a formulation at lower batch sizes. In our experiments, we observe that even a batch size of 16 gives good results compared to other existing methods. Furthermore, BFAL does not require large amounts of data to converge.
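A PyTorch sketch of BFAL under the definitions above is given below, assuming `p_s` and `p_t` are the already-projected source and target features of shape `(B, P)`; the default value of `mu` is the one reported in Section 4.

```python
import torch

def bfal(p_s: torch.Tensor, p_t: torch.Tensor, mu: float = 0.0039):
    """Barlow Feature Alignment Loss, Eqs. (1)-(2)."""
    # normalised cross-correlation matrix C1 of Eq. (1), shape (P, P)
    num = p_s.T @ p_t
    denom = p_s.pow(2).sum(0).sqrt().unsqueeze(1) * \
        p_t.pow(2).sum(0).sqrt().unsqueeze(0)
    c = num / (denom + 1e-8)
    on_diag = (1.0 - torch.diagonal(c)).pow(2).sum()   # feature alignment
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy
    return on_diag + mu * off_diag
```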
**Barlow Adaptor.** We propose an end-to-end method that utilizes data from the labeled source domain and the unlabeled target domain. The architecture corresponding to our method is shown in Figure 2.
There are two main sub-parts of the architecture - the Feature Extractor \(F\), and the Source Classifier \(C\). First, we divide the training images randomly into batches of pairs \(\{x_{s},x_{t}\}\) and apply \(F\) on them, which gives us the features
extracted from these sets of images. For the Feature Extractor, we show the effectiveness of our novel loss using ViT and ResNet50, both of which have been pre-trained on ImageNet. The features obtained are denoted as \(f_{s}\) and \(f_{t}\) for the source and target domains, respectively. Next, we apply \(C\) on these features to get logits for the classification task. The source classifier is a feed-forward neural network, which is initialized from scratch. These logits are used, along with the source labels \(y_{s}\), to compute the source cross entropy loss as \(\mathbb{L}_{CE}=\frac{-1}{B}\sum_{b=1}^{B}\sum_{m=1}^{M}y_{s}^{bm}\log(p_{s}^{bm})\),
where \(M\) represents the number of classes and \(B\) the batch size, while \(m\) and \(b\) represent their respective indices.
The features \(f_{s}\) and \(f_{t}\) are further used to compute the Correlation Alignment(CORAL) loss and the BFAL, which enforce the feature extractor to align its weights so as to learn features that are domain agnostic as well as non-redundant. The BFAL is calculated as mentioned in the previous subsection. The CORAL loss is computed as depicted in Equation 4, following the UDA method Deep CORAL [17]. While the BFAL focuses on reducing redundancy, CORAL works by aligning the distributions between the source and target domain data. This is achieved by taking the difference between the covariance matrices of the source and target features - \(f_{s}\) and \(f_{t}\) respectively. The final loss is the weighted sum of the three individual losses as follows:
\[\mathbb{L}_{final}=\mathbb{L}_{CE}+\lambda(\mathbb{L}_{CORAL}+\mathbb{L}_{BFA}), \tag{3}\]
Figure 2: Architecture corresponding to the Barlow Adaptor. Training occurs using pairs of images from the source and target domain. They are fed into the feature extractor, which generates features used for the CORAL loss. Further, a projector network \(P\) projects the features into a \(P\) dimensional space. These are then used to calculate the Barlow Feature Alignment Loss. One branch from the source features goes into the source classifier network that is used to compute the cross entropy loss with the labeled source data. [Backprop = backpropagation; src = source dataset, tgt = target dataset]
where
\[\mathbb{L}_{CORAL}=\frac{1}{4D^{2}}\|\mathbb{C}_{s}-\mathbb{C}_{t}\|_{F}^{2},\quad\mathbb{C}_{s}=\frac{1}{B-1}\left(f_{s}^{T}f_{s}-\frac{1}{B}(\mathbf{1}^{T}f_{s})^{T}(\mathbf{1}^{T}f_{s})\right), \tag{4}\]
\[\mathbb{C}_{t}=\frac{1}{B-1}\left(f_{t}^{T}f_{t}-\frac{1}{B}(\mathbf{1}^{T}f_{t})^{T}(\mathbf{1}^{T}f_{t})\right). \tag{5}\]
Each of these three losses plays a different role in the UDA task. The cross entropy loss encourages the model to learn discriminative features between images with different instruments. The CORAL loss pushes the features of the source and target towards having a similar distribution. Finally, the BFAL tries to make the features of the source and the target non-redundant and identical. BFAL is a stricter loss than CORAL, as it forces the features not only to have the same distribution but also to be equal. Further, it also differs from CORAL in learning independent features, as it explicitly penalizes non-zero off-diagonal entries in the correlation matrix. While using BFAL alone gives good results, using it in addition to CORAL gives slightly better results empirically. We note these observations in our ablation studies. Between the cross entropy loss and the BFAL, an adversarial game is played where the former makes the features more discriminative and the latter tries to make them equal. The optimal features thus learnt differ in the aspects required to identify instruments but are equal with respect to any domain-related aspect. This property of the Barlow Adaptor is especially useful for surgical domains, where the background has similar characteristics for most of the images within a domain. For example, in cataract surgery images, the position of the pupil or the presence of blood during the usage of certain instruments might be used by the model for classification along with the instrument features. These features depend highly upon the surgical procedures and the skill of the surgeon, making them highly domain-specific and possibly unavailable in the target domain. Using BFAL during training attempts to prevent the model from learning such features.
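For completeness, a PyTorch sketch of the CORAL term and the combined objective of Equation 3 follows, reusing the `bfal` helper sketched above; the default `lam` is the value reported in Section 4, and the feature and projection tensors are assumed to come from the feature extractor and the Projector, respectively.

```python
import torch
import torch.nn.functional as F

def coral(f_s: torch.Tensor, f_t: torch.Tensor):
    """Deep CORAL loss, Eqs. (4)-(5), on feature batches of shape (B, D)."""
    B, D = f_s.shape
    ones = torch.ones(1, B, device=f_s.device)
    c_s = (f_s.T @ f_s - (ones @ f_s).T @ (ones @ f_s) / B) / (B - 1)
    c_t = (f_t.T @ f_t - (ones @ f_t).T @ (ones @ f_t) / B) / (B - 1)
    return (c_s - c_t).pow(2).sum() / (4 * D * D)  # squared Frobenius norm

def barlow_adaptor_loss(logits_s, y_s, f_s, f_t, p_s, p_t, lam=0.001):
    """Total training objective of Eq. (3)."""
    return F.cross_entropy(logits_s, y_s) + \
        lam * (coral(f_s, f_t) + bfal(p_s, p_t))
```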
## 4 Experiments and Results
We evaluate the proposed UDA method for the task of instrument classification using two cataract surgery image datasets. In our experiments, one dataset is used as the source domain and the other is used as the target domain. We use micro and macro accuracies as our evaluation metrics. Micro accuracy denotes the number of correctly classified observations divided by the total number of observations. In contrast, macro accuracy denotes the average of the classwise accuracies and is effective in evaluating classes with fewer samples.
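As a small illustration of the two metrics, a numpy sketch:

```python
import numpy as np

def micro_macro_accuracy(y_true, y_pred, n_classes):
    """Micro: fraction of all samples classified correctly.
    Macro: unweighted mean of the per-class accuracies."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    micro = float((y_true == y_pred).mean())
    per_class = [float((y_pred[y_true == c] == c).mean())
                 for c in range(n_classes) if np.any(y_true == c)]
    return micro, float(np.mean(per_class))
```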
**Datasets.** The first dataset we use is CATARACTS [6], which consists of 50 videos with framewise annotations available for 21 surgical instruments. The dataset is divided into 25 training videos and 25 testing videos. We separate 5 videos from the training set and use them as the validation set for our experiments. The second dataset is called D99 in this work [7, 9], which consists of 105 videos of cataract surgery with annotations for 25 surgical instruments. Of the
105 videos, we use 65 videos for training, 10 for validation and 30 for testing. We observe a significant distribution shift between the two datasets as seen in Figure 1. This is caused by several factors such as lighting, camera resolution, and differences in instruments used for the same steps. For our experiments in this work, we use 14 classes of instruments that are common to both datasets. Table 1 shows a mapping of instruments between the two datasets. For each dataset, we normalize the images using the means and standard deviations calculated from the respective training images. In addition, we resize all images to \(224\times 224\) size and apply random horizontal flipping with a probability of 0.5 before passing them to the model.
**Experimental Setup.** We train the Barlow Adaptor for multi-class classification with the above-mentioned 14 classes in PyTorch. For the ResNet50 backbone, we use weights pretrained on ImageNet [14] for initialization. For the ViT backbone, we use the base-224 class of weights from the TIMM library [23]. The Source Classifier \(C\) and the Projector \(P\) are randomly initialized. We use the validation sets to select the hyperparameters for the models. Based on these empirical results, we choose \(\lambda\) from Equation 3 to be 0.001 and \(\mu\) from Equation 2 to be 0.0039. We use SGD as the optimizer with a momentum of 0.9 and a batch size of 16. We start the training with a learning rate of 0.001 and reduce it by a factor of 0.33 every 20 epochs. The entire setup is trained with a single NVIDIA Quadro RTX 8000 GPU. We use the same set of hyperparameters for the CNN and ViT backbones in both datasets.
**Results.** Table 2 shows results comparing the performance of the Barlow Adaptor with recent UDA methods. We highlight the effect of domain shift by comparing the source-only models and the target-only models, where we observe a significant drop of 27% and 43% in macro accuracy for the CATARACTS dataset and the D99 dataset, respectively. Using the Barlow Adaptor, we observe an increase in macro accuracy by 7.2% over the source only model. Similarly, we observe an increase in macro accuracy of 9% with the Barlow Adaptor when the source is CATARACTS and the target is the D99 dataset compared with the source only model. Furthermore, estimates of macro and micro accuracy are larger with the Barlow Adaptor than those with other existing methods. Finally,
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline CATARACTS & D99 & CATARACTS & D99 \\ \hline Secondary Incision Knife & Paracentesis Blade & Bonn Forceps & 0.12 Forceps \\ Charleux Cannula & Anterior Chamber Cannula & Irrigation & Irrigation \\ Capsulorhexis Forceps & Utrata Forceps & Cotton & Weckcell Sponge \\ Hydrodissection Cannula & Hydrodissection Cannula & Implant Injector & IOL Injector \\ Phacoemulsifier Handpiece & Phaco Handpiece & Suture Needle & Suture \\ Capsulorhexis Cystotome & Cystotome & Needle Holder & Needle Driver \\ Primary Incision Knife & Keratome & Micromanipulator & Chopper \\ \hline \end{tabular}
\end{table}
Table 1: Mapping of surgical tools between CATARACTS (left) and D99 (right)
improved accuracy with the Barlow Adaptor is seen with both ResNet and ViT backbones.
**Ablation Study.** We tested the performance gain due to each part of the Barlow Adaptor. Specifically, the Barlow Adaptor has CORAL loss and BFAL as its two major feature alignment losses. We remove one component at a time and observe a decrease in performance with both ResNet and ViT backbones (Table 3). This shows that each loss has a part to play in domain adaptation. Further ablations are included in the supplementary material.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{D99 \(\rightarrow\) CAT} & \multicolumn{2}{c|}{CAT \(\rightarrow\) D99} \\ \hline Method & Macro Acc & Micro Acc & Macro Acc & Micro Acc \\ \hline Source Only (ResNet50 backbone) & 27.9\% & 14.9\% & 14.25\% & 16.9\% \\ MMD with ResNet50 backbone [15] & 32.2\% & 15.9\% & 20.6\% & 24.3\% \\ \hline Source Only (ViT backbone) & 30.43\% & 14.14\% & 13.99\% & 17.11\% \\ MMD with ViT backbone [15] & 31.32\% & 13.81\% & 16.42\% & 20\% \\ CORAL with ViT backbone [17] & 28.7\% & 16.5\% & 15.38\% & 18.5\% \\ \hline DANN [5] & 22.4\% & 11.6\% & 16.7\% & 19.5\% \\ Deep CORAL [17] & 32.8\% & 14\% & 18.6\% & 22\% \\ CDTrans [24] & 29.1\% & 14.7\% & 20.9\% & 24.7\% \\ \hline Barlow Adaptor with ResNet50 (Ours) & **35.1\%** & **17.1\%** & **24.62\%** & **28.13\%** \\ Barlow Adaptor with ViT (Ours) & 31.91\% & 12.81\% & 17.35\% & 20.8\% \\ \hline Target Only (ResNet50) & 55\% & 67.2\% & 57\% & 62.2\% \\ Target Only (ViT) & 49.80\% & 66.33\% & 56.43\% & 60.46\% \\ \hline \end{tabular}
\end{table}
Table 2: Macro and micro accuracies for cross domain tool classification. Here, source-only denotes models that have only been trained on one domain and tested on the other. Similarly, target-only denotes models that have been trained on the test domain and thus act as an upper bound. Deep CORAL [17] is similar to using CORAL with ResNet backbone, so we don’t list the latter separately. Here, CAT represents the CATARACTS dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{ViT Feature Extractor} & \multicolumn{2}{c|}{ResNet50 Feature Extractor} \\ \hline Method & D99 \(\rightarrow\) CAT & CAT \(\rightarrow\) D99 & D99 \(\rightarrow\) CAT & CAT \(\rightarrow\) D99 \\ \hline Source Only(\(\mathbb{L}_{CE}\)) & 30.43\% & 16.7\% & 27.9\% & 14.9\% \\ Only CORAL(\(\mathbb{L}_{CORAL}\)) & 28.7\% & 15.38\% & 32.8\% & 18.6\% \\ Only BFAL(\(\mathbb{L}_{BFA}\)) & 29.8\% & 17.01\% & 32.3\% & 24.46\% \\ Barlow Adaptor(Eq 3) & **32.1**\% & **17.35**\% & **35.1\%** & **24.62**\% \\ \hline \end{tabular}
\end{table}
Table 3: Findings from ablation studies to evaluate the Barlow Adaptor. Here, Source Only is the case where neither CORAL nor BFAL is used. We use Macro Accuracy for comparison. Here, CAT represents the CATARACTS dataset.
## 5 Conclusion
Domain shift between datasets of cataract surgery images limits generalizability of deep learning methods for surgical instrument classification. We address this limitation using an end-to-end UDA method called the Barlow Adaptor. As part of this method, we introduce a novel loss function for feature alignment called the BFAL. Our evaluation of the method shows larger improvements in classification performance compared with other state-of-the-art methods for UDA. BFAL is an independent module and can be readily integrated into other methods as well. BFAL can be easily extended to other network layers and architectures as it only takes pairs of features as inputs.
## 6 Acknowledgement
This research was supported by a grant from the National Institutes of Health, USA; R01EY033065. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
|
2309.09638 | Neural Network-Based Rule Models With Truth Tables | Adrien Benamira, Tristan Guérand, Thomas Peyrin, Hans Soegeng | 2023-09-18T10:13:59Z | http://arxiv.org/abs/2309.09638v1 |

# Neural Network-Based Rule Models With Truth Tables
###### Abstract
Understanding the decision-making process of a machine/deep learning model is crucial, particularly in security-sensitive applications. In this study, we introduce a neural network framework that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks.
Our proposed framework, called _Truth Table rules_ (TT-rules), is built upon _Truth Table nets_ (TTnets), a family of deep neural networks initially developed for formal verification. By extracting the set of necessary and sufficient rules \(\mathcal{R}\) from the trained TTnet model (global interpretability), yielding the same output as the TTnet (exact interpretability), TT-rules effectively transforms the neural network into a rule-based model. This rule-based model supports binary classification, multi-label classification, and regression tasks for tabular datasets. Furthermore, our TT-rules framework optimizes the rule set \(\mathcal{R}\) into \(\mathcal{R}_{opt}\) by reducing the number and size of the rules. To enhance model interpretation, we leverage Reduced Ordered Binary Decision Diagrams (ROBDDs) to visualize these rules effectively.
After outlining the framework, we evaluate the performance of TT-rules on seven tabular datasets from finance, healthcare, and justice domains. We also compare the TT-rules framework to state-of-the-art rule-based methods. Our results demonstrate that TT-rules achieves equal or higher performance compared to other interpretable methods while maintaining a balance between performance and complexity. Notably, TT-rules presents the first accurate rule-based model capable of fitting large tabular datasets, including two real-life DNA datasets with over 20K features. Finally, we extensively investigate a rule-based model derived from TT-rules using the Adult dataset.
## 1 Introduction
Deep Neural Networks (DNNs) have been widely and successfully employed in various machine learning tasks, but concerns regarding their security and trustworthiness persist. One of the primary issues associated with DNNs, as well as ensemble ML models in general, is their lack of explainability and the challenge of incorporating human knowledge into them due to their inherent complexity [40, 41]. Therefore, there is a significant research focus on achieving global and exact interpretability for these systems, especially in safety-critical applications [4, 20].
In contrast, rule-based models [23], including tree-based models [11], are specifically designed to offer global and exact explanations, providing insights into the decision-making process that yields the same output as the model. However, they generally exhibit lower performance compared to other models like DNNs or ensemble ML models [28]. Additionally, they encounter scalability issues when dealing with large datasets and lack flexibility in addressing various types of tasks, often being limited to binary classification [14].
To the best of our knowledge, there is currently no family of DNNs that possesses both global and exact interpretability akin to rule-based models, while also demonstrating scalability on real-life datasets without the need for an explainer. This limitation is significant since explainer methods often provide only local, inexact, and potentially misleading explanations [40, 41, 43].
**Our approach.** This paper introduces a novel neural network framework that effectively combines the interpretability of rule-based models with the high performance of DNNs. Our framework, called TT-rules, builds upon the advancements made by Benamira _et al._[10] and Agarwal _et al._[3]. The latter proposed a neural network architecture that achieves interpretability by utilizing several DNNs, each processing a single continuous input feature, and a linear layer for merging them. The effectiveness of aggregating local features on image datasets to achieve high accuracy has been demonstrated by Brendel _et al._[15]. Similarly, Agarwal _et al._[3] showed that aggregating local features on tabular datasets can yield high accuracy. Furthermore, Benamira _et al._[10] introduced a new Convolutional Neural Network (CNN) filter function called the Learning Truth Table (LTT) block. The LTT block has the unique property of its complete distribution being computable in constant and practical time, regardless of the architecture. This allows the transformation of the LTT block from weights into an exact mathematical Boolean formula. Since an LTT block is equivalent to a CNN filter, the entire neural network model, known as Truth Table Net (TTnet), can itself be represented as a Boolean formula.
To summarize, while Agarwal _et al._[3] focused on continuous inputs, and Benamira _et al._[10] focused on discrete inputs, our approach leverages the strengths of both works to achieve high accuracy while maintaining global and exact interpretability.
**Our contributions.** To optimize the rule set \(\mathcal{R}\), our TT-rules framework employs two post-training steps. Firstly, we automatically integrate _"Don't Care Terms"_ (\(DCT\)), utilizing human logic, into the truth tables. This reduces the size of each rule in the set \(\mathcal{R}\). Secondly, we introduce and analyze an inter-rule correlation score to decrease the number of rules in \(\mathcal{R}\). These optimizations, specific to the TT-rules framework, automatically and efficiently transform the set \(\mathcal{R}\) into an optimized set \(\mathcal{R}_{opt}\) in constant time. We also quantify the trade-offs among performance, the number of rules, and their sizes. At this stage, we obtain a rule-based model from the trained DNN TTnet, which can be used for prediction by adding up the rules in \(\mathcal{R}_{opt}\) according to the binary or floating-point linear layer. To enhance the interpretability of the model, we convert all rule equations into Reduced Ordered Binary Decision Diagrams (ROBDDs).
**Our claims.** A) The TT-rules framework demonstrates versatility and effectiveness across various tasks, including binary classification, multi-classification, and regression. A-1) Our experiments encompass five machine learning datasets: Diabetes [22] in healthcare, Adult [22], HELOC [1], and California Housing [34] in finance, and Compas [7] in the justice domain. The results clearly indicate that the TT-rules framework surpasses most interpretable models in terms of Area Under Curve/Root Mean Square Error (AUC/RMSE), including linear/logistic regression, decision trees, generalized linearized models, and neural additive models. A-2) On two datasets, the TT-rules framework performs comparably to XGBoost and DNN models. A-3) We conducted a comparative analysis of the performance-complexity tradeoff between our proposed TT-rules framework and other state-of-the-art rule-based models, such as generalized linearized models [47], RIPPER [18, 19], decision trees (DT) [36], and ORS [48], specifically focusing on binary classification tasks. Our findings demonstrate that the TT-rules framework outperforms all the aforementioned models, except for the generalized linearized models, in terms of the performance-complexity tradeoff.
B) Scalability is a key strength of our model, enabling it to handle large datasets with tens of thousands of features, such as DNA datasets [44, 37, 33], which consist of over 20K features. Our model not only scales efficiently but also performs feature reduction, compressing the initial 20K features of the first DNA datasets [44, 37] into 1K rules, and reducing the 23K features of the second DNA dataset [33] into 9K rules.
C) A distinctive feature of our framework lies in its inherent global and exact interpretability. C-1) To showcase its effectiveness, we provide a concrete use case with the Adult dataset and thoroughly investigate its interpretability. C-2) We explore the potential for incorporating human knowledge into our framework. C-3) Additionally, we highlight how experts can leverage the rules to detect concept shifts, further emphasizing the interpretability aspect of our framework.
**Outline.** This paper is structured as follows. Section 2 presents a comprehensive literature review on rule-based models. In Section 3, we establish the notations and fundamental concepts that will be utilized throughout the paper. Section 4 offers a detailed analysis of the TT-rules framework, exploring its intricacies and functionalities. In Section 5, we present the experimental results obtained and compare them with the current state-of-the-art approaches. Additionally, we showcase the scalability of our framework and illustrate its applicability through a compelling case study. The limitations of the proposed approach are discussed in Section 6, followed by the concluding remarks in Section 7.
## 2 Related work
### Classical rule-based models
Rule-based models are widely used for interpretable classification and regression tasks. This class encompasses various models such as decision trees [11], rule lists [42, 6, 21], linear models, and rule sets [30, 18, 19, 38, 47]. Rule sets, in particular, offer high interpretability due to their straightforward inference process [30]. However, traditional rule sets face limitations when applied to large tabular datasets, binary classification tasks, and capturing complex feature relationships. These limitations result in reduced accuracy and limited practicality in real-world scenarios [48, 46]. To overcome these challenges, we leverage the recent work of Benamira _et al._[10], who proposed an architecture specifically designed to be encoded into CNF formulas [12]. This approach has demonstrated scalability on large datasets like ImageNet and can be extended to multi-label classification tasks. In this study, our objective is to extend Benamira's approach to handle binary and multi-class classification tasks, as well as regression tasks, across a wide range of tabular datasets ranging from 17 to 20K features.
### DNN-based rule models
There have been limited investigations into the connection between DNNs and rule-based models. Two notable works in this area are DNF-net [2] and RRL [46]. DNF-net focuses on the activation function but lacks available code, while RRL specifically addresses classification tasks. Although RRL achieved high accuracy on the Adult dataset, its interpretability raises concerns due to its complex nature, involving millions of terms, and its training process that is time-consuming [46]. Neural Additive Models (NAMs) [3] represent another type of neural network architecture that combines the flexibility of DNNs with the interpretability of additive models. While NAMs have demonstrated superior performance compared to traditional interpretable models, they do not strictly adhere to the rule-based model paradigm and can pose challenges in interpretation, especially when dealing with a large number of features. In this paper, we conduct a comparative analysis to evaluate the performance and interpretability of our TT-rules framework in comparison to NAMs [3].
## 3 Background
### Rule-based models
#### 3.1.1 Rules format : DNF and ROBDD
Rule-based models are a popular method for generating decision predicates expressed in DNF. For instance, in the Adult dataset [22], a rule for determining whether an individual would earn more than 50K$/year might look like:
\[((\text{Age}>34)\wedge\text{Married})\vee(\text{Male}\wedge(\text{Capital Loss}<1\text{k/year}))\]
Although a rule is initially expressed in DNF format, a decision tree format is often preferred. To achieve this, the DNF is transformed into its equivalent Reduced Ordered Binary Decision Diagram (ROBDD) graph: a directed acyclic graph used to represent a Boolean function [31, 5, 16, 8].
#### 3.1.2 Inference with a rule-based model
In a binary classification problem, we are presented with a set of rules \(\mathcal{R}\) and a corresponding set of weights \(\mathcal{W}\). These rules and weights can be separated into two distinct sets, namely \(\mathcal{R}_{+}\) and \(\mathcal{W}_{+}\) for class 1, and \(\mathcal{R}_{-}\) and \(\mathcal{W}_{-}\) for class 0. Given an input \(I\), we can define the rule-based model as follows:
\[Classifier(I,\mathcal{R})=\left\{\begin{array}{ll}1&\text{if }S_{+}(I)-S_{-}(I)>0\\ 0&\text{otherwise.}\end{array}\right.\]
Here, \(S_{+}(I)\) and \(S_{-}(I)\) denote the scores for class 1 and class 0, respectively. These scores are calculated using the following equations:
\[\left\{\begin{array}{l}S_{+}(I)=\sum_{(r_{+},w_{+})\in(\mathcal{R}_{+},\mathcal{W}_{+})}w_{+}\times\mathbb{I}_{r_{+}(I)\text{ is True}}\\ S_{-}(I)=\sum_{(r_{-},w_{-})\in(\mathcal{R}_{-},\mathcal{W}_{-})}w_{-}\times\mathbb{I}_{r_{-}(I)\text{ is True}}\end{array}\right.\]
where \(\mathbb{I}_{r(I)\text{ is True}}\) represents the binary indicator that is equal to 1 if the input \(I\) satisfies the rule \(r\), and 0 otherwise. This rule-based model can be easily extended to multi-class classification and regression tasks.
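A minimal Python sketch of this inference rule, with purely illustrative toy rules and weights (in the TT-rules framework the weights come from the final linear layer):

```python
def rule_classifier(sample, rules_pos, w_pos, rules_neg, w_neg):
    """Binary rule-set inference: class 1 iff S_plus(I) - S_minus(I) > 0."""
    s_pos = sum(w for rule, w in zip(rules_pos, w_pos) if rule(sample))
    s_neg = sum(w for rule, w in zip(rules_neg, w_neg) if rule(sample))
    return 1 if s_pos - s_neg > 0 else 0

# toy usage on a dict of binary features
rules_pos = [lambda x: x["married"] and not x["go_uni"]]
rules_neg = [lambda x: x["born_uk"]]
sample = {"married": 1, "go_uni": 0, "born_uk": 0}
print(rule_classifier(sample, rules_pos, [0.7], rules_neg, [0.4]))  # -> 1
```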
#### 3.1.3 Comparing rule-based models
When comparing rule-based models, it is common to evaluate their quality based on three main criteria. The first is their performance, which can be measured using metrics such as AUC, accuracy, or RMSE. The second criterion is the number of rules used in the model. Finally, the overall complexity of the model is also taken into account, which is given as the sum of the size of each rule, for all rules in the model [23].
### Truth Table net (TTnet)
The paper [10] proposed a new CNN filter function called Learning Truth Table (LTT) block for which one can compute the complete distribution in practical and constant time, regardless of the network architecture. Then, this LTT block is inserted inside a DNN as CNN filters are integrated into deep convolutional neural networks.
#### 3.2.1 Overall LTT design
An LTT block must meet two essential criteria:
* (A) The LTT block distribution must be entirely computable in practical and constant time, regardless of the complexity of the DNN.
* (B) Once LTT blocks are assembled into a layer and layers into a DNN, the latter DNN should be scalable, especially on large-scale datasets such as ImageNet.
To meet these criteria, Benamira _et al._[10] proposed the following LTT design rules:
1. Reduce the input size of the CNN filter to \(n\leq 9\).
2. Use binary inputs and outputs.
3. Ensure that the LTT block function uses a nonlinear function.
As a result, each filter in our architecture becomes a truth table with a maximum input size of 9 bits.
**Notations.** We denote the \(f^{th}\) 1D-LTT of a layer with input size \(n\), stride \(s\), and no padding as \(\Phi_{f}\). Let the input feature with a single input channel \(chn_{input}=1\) be represented as \((v_{0}\dots v_{L-1})\), where \(L\) is the length of the input feature. We define \(y_{i,f}\) as the output of the function \(\Phi_{f}\) at position \(i\):
\[y_{i,f}=\Phi_{f}(v_{i\times s},v_{i\times s+1},\dots,v_{i\times s+(n-1)})\]
Following the aforementioned rules (1) and (2), \(y_{i,f}\) and \((v_{i\times s},v_{i\times s+1},\dots,v_{i\times s+(n-1)})\) are binary values, and \(n\leq 9\). As a result, we can express the 1D-LTT function \(\Phi_{f}\) as a truth table by enumerating all \(2^{n}\) possible input combinations. The truth table can then be converted into an optimal (in terms of literals) \(\mathsf{DNF}\) formula using the Quine-McCluskey algorithm [13] for interpretation.
**Example 1: From LTT weights to truth table and \(\mathsf{DNF}\).** In this example, we consider a pre-trained 1D-LTT \(\Phi_{f}\) with input size \(n=4\), a stride of size \(1\), and no padding. The architecture of \(\Phi_{f}\) is given in Figure 1b, composed of two CNN filter layers: the first one has parameters \(\mathcal{W}_{1}\) with (input channel, output channel, kernel size, stride) = \((1,4,3,1)\), while the second, \(\mathcal{W}_{2}\), has \((4,1,2,1)\). The inputs and outputs of \(\Phi_{f}\) are binary, and we denote the inputs as [\(x_{0}\), \(x_{1}\), \(x_{2}\), \(x_{3}\)]. To compute the complete distribution of \(\Phi_{f}\), we generate all \(2^{4}=16\) possible input/output pairs, as shown in Figure 1a, and obtain the truth table in Table 1. This truth table fully characterizes the behavior of \(\Phi_{f}\). We then transform the truth table into a \(\mathsf{DNF}\) using the Quine-McCluskey algorithm [13]. This algorithm provides an optimal (in terms of literals) \(\mathsf{DNF}\) formula that represents the truth table. The resulting \(\mathsf{DNF}\) formula for \(\Phi_{f}\) can be used to compute the output of \(\Phi_{f}\) for any input. Overall, this example demonstrates the applicability of the LTT design rules in the construction of DNNs, as it meets both criteria: the LTT block is computable in constant time and the DNN remains scalable on large datasets.
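The enumeration step can be sketched in a few lines of PyTorch, assuming `ltt_block` is a small binarized convolutional block mapping an \(n\)-bit window to a single binary output (as in Figure 1):

```python
import itertools
import torch

def extract_truth_table(ltt_block, n=4):
    """Enumerate all 2^n binary inputs of an LTT block and record its
    binary output, yielding the block's complete truth table."""
    table = []
    with torch.no_grad():
        for bits in itertools.product([0.0, 1.0], repeat=n):
            x = torch.tensor(bits).view(1, 1, n)   # (batch, channel, length)
            y = ltt_block(x)                       # binary activation inside
            table.append((tuple(int(b) for b in bits), int(y.item())))
    return table
```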
#### 3.2.2 Overall TTnet design
We integrated LTT blocks into the neural network, just as CNN filters are integrated into a deep convolutional neural network: each LTT layer is composed of multiple LTT blocks and there are multiple LTT layers in total. Additionally, there is a pre-processing layer and a final layer. These two layers provide flexibility in adapting to different applications: scalability, formal verification, and logic circuit design.
## 4 Truth Table Rules (TT-rules)
The Truth Table rules framework consists of three essential components. The first step involves extracting the precise set of rules \(\mathcal{R}\) once the TTnet has been trained. Next, we optimize \(\mathcal{R}\) by reducing the rule's size through _Don't Care Terms_ (DCT) injection. At this point, \(\mathcal{R}\) is equivalent to the Neural Network model: inferring with \(\mathcal{R}\) is the same as inferring with the model. Last, we minimize the number of rules using the Truth Table correlation metric. Both techniques serve to enhance the model's complexity while minimizing any potential loss of accuracy.
### From LTT block to set of rules \(\mathcal{R}\)
General.We now introduce a method to convert \(\Phi_{f}\) from the general \(\mathsf{DNF}\) form into rule set \(\mathcal{R}\). In the previous section, we described the general procedure for transforming an LTT block into a \(\mathsf{DNF}\) logic gate expression. This expression is independent of the spatial position of the feature. This means that we have:
\[\left\{\begin{array}{l}y_{0,f}=\Phi_{f}(v_{0},v_{1},\dots,v_{n-1})\\...\\ y_{i,f}=\Phi_{f}(v_{i\times s},v_{i\times s+1},\dots,v_{i\times s+(n-1)})\\...\\ y_{\lfloor\frac{L-n}{s}\rfloor,f}=\Phi_{f}(v_{L-n},v_{L-n+1},\dots,v_{L-1})\end{array}\right.\]
When we apply the LTT \(\mathsf{DNF}\) expression to a specific spatial position on the input, we convert the \(\mathsf{DNF}\) into a rule. To convert the general \(\mathsf{DNF}\) form into a set of rules \(\mathcal{R}\), we divide the input into patches and substitute the \(\mathsf{DNF}\) literals with the corresponding feature names. The number of rules for one filter corresponds to the number of patches: \(\lfloor\frac{L-n}{s}\rfloor+1\). An example of this process is given in Table 1, and another one is provided below.
**Example 2: conversion of DNF expressions to rules.** We established the \(\Phi_{f}\) expression in DNF form as \(x_{3}\land\overline{x_{0}}\land\overline{x_{1}}\land\overline{x_{2}}\). To obtain the rules, we need to consider the padding and the stride of the LTT block. Consider the following 5-feature binary input (\(L=5\)): [Male, Go Uni., Married, Born in US, Born in UK]. In our case, with a stride at 1 and no padding, we get 2 patches: [Male, Go Uni., Married, Born US] and [Go Uni., Married, Born US, Born UK]. After the substitution of the literal by the corresponding feature, we get 2 rules \(\mathcal{R}=\{\text{Rule}_{0}^{\text{DNF}},\text{Rule}_{1}^{\text{DNF}}\}\):
\[\left\{\begin{array}{l}\text{Rule}_{0,f}^{\text{DNF}}=\text{Born US}\land\overline{\text{Male}}\land\overline{\text{Go Uni.}}\land\overline{\text{Married}}\\ \text{Rule}_{1,f}^{\text{DNF}}=\text{Born UK}\land\overline{\text{Go Uni.}}\land\overline{\text{Married}}\land\overline{\text{Born US}}\end{array}\right.\]
and therefore, the output of the LTT block \(\Phi_{f}\) becomes:
\[\left\{\begin{array}{l}y_{0,f}=\text{Rule}_{0,f}^{\text{DNF}}(v_{0},v_{1}, v_{2},v_{3})\\ y_{1,f}=\text{Rule}_{1,f}^{\text{DNF}}(v_{1},v_{2},v_{3},v_{4})\end{array}\right.\]
We underline the logic redundancy in Rule\({}_{1}^{\text{DNF}}\): if someone is born in the UK, he/she is necessarily not born in the US. We solve this issue by injecting _Don't Care Terms_ (\(DCT\)) into the truth table as we will see in the next section.
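A small sketch of this patch-wise substitution is shown below. The DNF is represented as a list of conjunctions over kernel-local literal indices; `dnf_to_rules` is a hypothetical helper name for illustration, not from the paper's code:

```python
def dnf_to_rules(dnf_terms, feature_names, n, stride):
    """dnf_terms: list of conjunctions, each a list of (literal_index,
    is_negated) pairs over the kernel window [0, n)."""
    rules = []
    num_patches = (len(feature_names) - n) // stride + 1
    for p in range(num_patches):
        window = feature_names[p * stride: p * stride + n]
        conjs = [" AND ".join(("NOT " if neg else "") + window[i]
                              for i, neg in term) for term in dnf_terms]
        rules.append(" OR ".join(conjs))
    return rules

features = ["Male", "Go Uni.", "Married", "Born US", "Born UK"]
dnf = [[(3, False), (0, True), (1, True), (2, True)]]  # x3 & ~x0 & ~x1 & ~x2
for rule in dnf_to_rules(dnf, features, n=4, stride=1):
    print(rule)
# Born US AND NOT Male AND NOT Go Uni. AND NOT Married
# Born UK AND NOT Go Uni. AND NOT Married AND NOT Born US
```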
### Automatic post-training optimizations: from \(\mathcal{R}\) to \(\mathcal{R}_{opt}\)
In this subsection, we present automatic post-training optimizations that are unique to our model and require the complete computation of the LTT truth table.
#### 4.2.1 Reducing the rule's size with Don't Care Terms (\(Dct\)) injection
We propose a method for reducing the size of rules by injecting _Don't Care Terms_ (\(DCT\)) into the truth table. These terms represent situations where the LTT block output can be either 0 or 1 for a specific input, without affecting the overall performance of the DNN. We use the Quine-McCluskey algorithm to assign the optimal value to the \(DCT\) and reduce the DNF equations. These \(DCT\) can be incorporated into the model either with background knowledge or automatically with the one hot encodings and the Dual Step Function described in the TTnet paper [10].
To illustrate this method, we use Example 2, where we apply human common sense to inject \(DCT\) into the truth table. For instance, since no one can be born in both the UK and the US at the same time, the literals \(x_{2}\) and \(x_{3}\) must not be 1 at the same time for the second rule. By injecting \(DCT\) into the truth table as \([0,1,0,DCT,0,0,0,DCT,0,0,0,DCT,0,0,0,DCT]\), we obtain the new reduced rule: \(\text{Rule}_{1,reduced}^{\text{DNF}}=\text{Born UK}\land\overline{\text{Go Uni.}}\land\overline{\text{Married}}\). This method significantly decreases the size of the rules while maintaining the same accuracy, as demonstrated in Table 4 in Section 5.
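The reduction can be reproduced with an off-the-shelf two-level minimizer; below is a minimal sketch using SymPy's `SOPform` (a Quine-McCluskey-style minimizer) in place of the paper's own implementation:

```python
from sympy import symbols
from sympy.logic import SOPform

# For Rule_1 the window is (x0, x1, x2, x3) = (Go Uni., Married, Born US, Born UK)
x0, x1, x2, x3 = symbols("x0 x1 x2 x3")

minterms = [[0, 0, 0, 1]]                    # original rule fires only on 0001
dontcares = [[0, 0, 1, 1], [0, 1, 1, 1],     # Born US and Born UK cannot
             [1, 0, 1, 1], [1, 1, 1, 1]]     # both be 1 -> don't-care rows

print(SOPform([x0, x1, x2, x3], minterms, dontcares))
# x3 & ~x0 & ~x1 (up to ordering): Born UK AND NOT Go Uni. AND NOT Married
```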
#### 4.2.2 Reducing the number of rules with Truth Table Correlation metric
To reduce the number of rules obtained with the TT-rules framework, we introduce a new metric called Truth Table Correlation (\(TTC\)).
\begin{table}
\begin{tabular}{|c|c|c|c||c|} \hline \(x_{0}\) & \(x_{1}\) & \(x_{2}\) & \(x_{3}\) & \(\Phi_{f}\) \\ \hline
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Truth table of the LTT block \(\Phi_{f}\) characterized by the weights \(\mathtt{W}_{1}\) and \(\mathtt{W}_{2}\) (the output is 1 only for the input \((0,0,0,1)\), matching the \(\mathsf{DNF}\) \(x_{3}\land\overline{x_{0}}\land\overline{x_{1}}\land\overline{x_{2}}\)), with \(L=5\) and binary input feature names [Is the Sex Male? (Male), Did the person go to University? (Go Uni.), Is the person married? (Married), Is the person born in the US? (Born US), Is the person born in the UK? (Born UK)].
Figure 1: A Learning Truth Table (LTT) filter example in one dimension.
This metric addresses the issue of rule redundancy by measuring the correlation between two different LTT blocks, which may learn similar rules since they are completely decoupled from each other. The idea is to identify and remove redundant rules and keep only the most relevant ones.
The \(TTC\) metric is defined as follows:
\[TTC(y_{1},y_{2})=\left\{\begin{array}{ll}\frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}-1&\text{if }\left|\frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}-1\right|>\frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}\\ \frac{HW(y_{1},\overline{y_{2}})}{|y_{1}|}&\text{otherwise.}\end{array}\right.\]
Here, \(y_{1}\) and \(y_{2}\) are the outputs of the LTT blocks, \(\overline{y_{2}}\) is the negation of \(y_{2}\), \(|y_{1}|\) represents the number of elements in \(y_{1}\), and \(HW\) is the Hamming distance function. The Hamming distance between two equal-length strings of symbols is the number of positions at which the corresponding symbols differ. The \(TTC\) metric varies from -1 to 1: when \(TTC=-1\), the LTT blocks are exactly opposite, while they are identical if \(TTC=1\). We systematically filter redundant rules with a threshold correlation of \(\pm 0.9\). If the correlation is positive, we delete one of the two filters and reuse the output of the remaining filter in its place. If the correlation is negative, we delete one of the two filters and use the negated output of the remaining filter. By using this metric, we can reduce the number of rules and the complexity of the model while minimizing accuracy degradation.
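A minimal NumPy sketch of this metric, assuming our reconstruction of the piecewise definition above (the two branches select the signed value with the larger magnitude):

```python
import numpy as np

def ttc(y1, y2):
    """Truth Table Correlation between two LTT block outputs (0/1 arrays)."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    # HW(y1, ~y2) / |y1| equals the fraction of positions where y1 == y2
    s = np.mean(y1 == y2)
    return s - 1 if abs(s - 1) > s else s

assert ttc([0, 1, 1, 0], [0, 1, 1, 0]) == 1.0   # identical blocks
assert ttc([0, 1, 1, 0], [1, 0, 0, 1]) == -1.0  # exactly opposite blocks
print(ttc([0, 1, 1, 0], [0, 1, 1, 1]))  # 0.75: below the 0.9 threshold,
                                        # so both filters are kept
```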
### Overall TT-rules architecture
#### 4.3.1 Pre-processing and final layer
To maintain interpretability, the pre-processing layer applies batch normalization followed by a step function, and the final layer consists of a single linear layer. The batch normalization allows the model to learn thresholds for the continuous features (such as the condition \(\text{YoE}>11\) in Fig. 2). We propose two types of training for the final linear layer. The first uses a final sparse binary layer, which forces all weights to be binary and sparse according to a BinMask as in [27]. In order to train without much loss in performance when using the Heaviside step function, Benamira _et al._ [10] adopted the Straight-Through Estimator (STE) proposed by [26]. The second is designed for scalability and employs floating-point weights, which allows the model to be extended to regression tasks. To reduce overfitting, dropout is applied in the second case.
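For illustration, a minimal PyTorch sketch of an STE-based Heaviside step (a generic version of the technique, not the paper's exact implementation): the forward pass thresholds, while the backward pass lets gradients through unchanged:

```python
import torch

class StepSTE(torch.autograd.Function):
    """Heaviside step forward; straight-through (identity) gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # STE: treat the step as identity when backpropagating
        return grad_output

x = torch.randn(8, requires_grad=True)
y = StepSTE.apply(x)        # binary activations
y.sum().backward()          # gradients still flow despite the hard threshold
print(y, x.grad)
```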
#### 4.3.2 Estimating complexity before training
In our TT-rules framework, the user is unable to train a final rule-based model with a fixed and pre-selected complexity. However, the complexity can be estimated. The number of rules is determined by multiplying the number of filters \(F\) by the number of patches \(\lfloor\frac{L-n}{s}\rfloor\). The complexity of each rule is based on the size of the function \(n\), and on average, we can expect \(n2^{n-1}\) Boolean gates per rule, before \(DCT\) injection. Therefore, the overall complexity is given by \(n\times 2^{n-1}\times\lfloor\frac{L-n}{s}\rfloor\times F\).
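This estimate is straightforward to compute; a one-function sketch:

```python
import math

def estimated_complexity(n, L, stride, num_filters):
    """Expected Boolean-gate count before DCT injection:
    n * 2**(n-1) gates per rule, one rule per patch, for each filter."""
    num_patches = math.floor((L - n) / stride)
    return n * 2 ** (n - 1) * num_patches * num_filters

# e.g., kernel size n = 4 over L = 100 binary features, stride 1, 10 filters
print(estimated_complexity(n=4, L=100, stride=1, num_filters=10))  # 30720
```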
#### 4.3.3 Training and extraction time
Training.Compared to other rule-based models, our architecture scales well in terms of training time. The machine learning tabular datasets can be trained in 1-5 minutes with 5-fold cross-validation. For the large DNA tabular datasets, our model can be trained in 45 minutes with 5-fold cross-validation, which is not possible with other rule-based models such as GL and RIPPER.
Extraction time for \(\mathcal{R}_{opt}\).Our model is capable of extracting optimized rules quickly. Each truth table can be computed in \(2^{n}\) operations, with \(n\leq 9\) in our configurations. In terms of wall-clock time, our model takes 7 to 17 seconds for Adult [22], 7 to 22 seconds for Compas [7], and 20 to 70 seconds for Diabetes [22].
## 5 Results
In this section, we present the results of applying the TT-rules framework to seven datasets, which allow us to demonstrate the effectiveness of our approach and provide evidence for the three claims stated in the introduction.
### Experimental set-up
Evaluation measures and training conditions.We used RMSE, AUC, and accuracy to evaluate the regression, binary classification, and multi-class classification tasks, respectively. Rules and complexity are defined in Section 3.1.3. All results are presented after grid search and 5-fold cross-validation. All the training details are given in the supplementary material. We compare the performance of our method with that of several other algorithms, including Linear/Logistic Regression [36], Decision Trees (DT) [36], Generalized Linear Models (GL) [47], Neural Additive Models (NAM) [3], XGBoost [17], and Deep Neural Networks (DNNs) [36]. The supplementary materials provide details on the training conditions used for these competing methods. Experiments are available on demand. Our workstation consists of an eight-core Intel(R) Core(TM) i7-8650U CPU clocked at 1.90 GHz with 16 GB RAM.
\begin{table}
\begin{tabular}{l|c|c c c|c} \hline \hline & **Regression** (RMSE) & \multicolumn{3}{c|}{**Binary classification** (AUC)} & **Multi-classification** (Accuracy) \\ \hline & California Housing & Compas & Adult & HELOC & Diabetes \\ continuous/binary \# & 8/144 features & 9/17 features & 14/100 features & 24/330 features & 43/296 features \\ \hline Linear/log & 0.728 \(\pm\) 0.015 & 0.721 \(\pm\) 0.010 & 0.883 \(\pm\) 0.002 & 0.798 \(\pm\) 0.013 & 0.581 \(\pm\) 0.002 \\ DT & 0.514 \(\pm\) 0.017 & 0.731 \(\pm\) 0.020 & 0.872 \(\pm\) 0.002 & 0.771 \(\pm\) 0.012 & 0.572 \(\pm\) 0.002 \\ GL & 0.425 \(\pm\) 0.015 & 0.735 \(\pm\) 0.013 & 0.904 \(\pm\) 0.001 & 0.803 \(\pm\) 0.001 & NA \\ NAM & 0.562 \(\pm\) 0.007 & 0.739 \(\pm\) 0.010 & - & - & - \\ TT-rules (Ours) & 0.394 \(\pm\) 0.017 & 0.742 \(\pm\) 0.007 & 0.906 \(\pm\) 0.005 & 0.800 \(\pm\) 0.001 & 0.584 \(\pm\) 0.003 \\ \hline XGBoost & 0.532 \(\pm\) 0.014 & 0.736 \(\pm\) 0.001 & 0.913 \(\pm\) 0.002 & 0.802 \(\pm\) 0.001 & 0.591 \(\pm\) 0.001 \\ DNNs & 0.492 \(\pm\) 0.009 & 0.732 \(\pm\) 0.004 & 0.902 \(\pm\) 0.002 & 0.800 \(\pm\) 0.010 & 0.603 \(\pm\) 0.004 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of our method on machine learning datasets to Linear/Logistic Regression [36], Decision Trees (DT) [36], GL [47], NAM [3], XGBoost [17], and DNNs. Results are obtained with a large TT-rules model, without optimizations. Means and standard deviations are reported from 5-fold cross-validation.
Machine learning datasets.We utilized a variety of healthcare and non-healthcare datasets for our study. For multi-class classification, we used the Diabetes 130 US-Hospitals dataset1 from the UCI Machine Learning Repository [22]. For binary classification on DNA data, we used two single-cell RNA-seq analysis datasets, one for head and neck cancer2 [37] and another for melanoma3 [44], while the TCGA lung cancer dataset4 [33] was used for regression. For binary classification on tabular data, we used the Adult dataset5 from the UCI Machine Learning Repository [22], the Compas dataset6 introduced by ProPublica [7], and the HELOC dataset [1]. We also employed the California Housing dataset7 [34] for the regression task. Further details regarding each dataset can be found in the supplementary materials.
Footnote 1: [https://bit.ly/diabetes_130_uci](https://bit.ly/diabetes_130_uci)
Footnote 2: [https://bit.ly/acck_head_rna](https://bit.ly/acck_head_rna)
Footnote 3: [https://hyl.melamona_rna](https://hyl.melamona_rna)
Footnote 4: [https://bit.ly/cega_lung_rna](https://bit.ly/cega_lung_rna)
Footnote 5: [https://bit.ly/Adult_uci](https://bit.ly/Adult_uci)
Footnote 6: [https://bit.ly/Compas_data](https://bit.ly/Compas_data)
Footnote 7: [https://bit.ly/california_statlib](https://bit.ly/california_statlib)
DNA datasets.Our TT-rules framework's scalability is demonstrated using two types of DNA datasets: the single-cell RNA-seq analysis datasets for head and neck and melanoma cancers [37, 44] for binary classification, and the TCGA lung cancer dataset [33] for regression. These datasets contain 23689 and 20530 features, respectively, and are commonly used in real-life machine learning applications [32, 25, 39, 45].
### Performances comparison - Claim A)
#### 5.2.1 AUC/RMSE/Accuracy - Claim A-1) & A-2)
First, Table 2 demonstrates that our method can handle all types of tasks, including regression, binary classification, and multi-class classification. Moreover, it outperforms most of the other interpretable methods (decision tree, RIPPER, linear/log, NAM) in various prediction tasks, except for GL [47], which performs better than our method on the HELOC dataset. It is worth noting that GL does not support multi-class classification. Additionally, our method shows superior performance to more complex models such as XGBoost and DNNs on California Housing and Compas datasets. Therefore, our method can be considered comparable or superior to the current state-of-the-art methods while providing global and exact interpretability, which will be demonstrated in Section 5.4.
#### 5.2.2 Complexity - Claim A-3)
Impact of post-training optimization.The optimizations proposed in Section 4.2 succeed in reducing the complexity of our model, as defined in Section 3.1.3, at the cost of a small accuracy loss, as seen in Table 4. The complexity went down by a factor of \(1.35\times\), \(2.22\times\), and \(1.47\times\) on the Adult, Compas, and Diabetes datasets, respectively. The accuracy went down by \(0.004\) and \(0.009\) for the Adult and Diabetes datasets, respectively, and stayed the same for Compas.
Comparison with rule-based models.Table 3 presents a comparison of various rule-based models, including ours, on the Compas, Adult, and HELOC datasets, in terms of accuracy, number of rules, and complexity. We note that we report accuracy rather than AUC for these binary classification tasks, as RIPPER and ORS do not provide probabilities. We propose two TT-rules models: a big model with floating-point weights for high performance, as shown in Table 2, and a small model with sparse binary weights, which is also our most compact model in terms of the number of rules and complexity. Our big model outperforms the others in terms of accuracy on the Compas dataset and performs similarly to GL [47] on the Adult and HELOC datasets. Although GL provides a better trade-off between performance and complexity, we highlight that GL does not support multi-class classification tasks and does not scale to larger datasets such as the DNA datasets, as shown in the next section. We also propose the small model as an alternative to our high-performing model. Our small model achieves accuracy that is \(0.023\), \(0.009\), and \(0.006\) lower than our best model but requires \(3.2\times\), \(2.2\times\), and \(9.8\times\) fewer rules on the Compas, Adult, and HELOC datasets, respectively, reducing the complexity of our model by \(14.3\times\), \(34\times\), and \(180\times\) on these three datasets.
### Scalability - Claim B)
Our TT-rules framework demonstrates excellent scalability to real-life datasets with up to 20K features. This result is not surprising, considering the original TTnet paper [10] showed the architecture's ability to scale to ImageNet. Furthermore, our framework outperforms other rule-based models, which failed to converge on such large datasets (GL [47], RIPPER [18, 19]). NAMs were not trained as we considered the resulting 20K
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{2}{c|}{TT-rules \(\mathcal{R}\)} & \multicolumn{2}{c}{TT-rules \(\mathcal{R}_{opt}\)} \\ \hline & Acc. & Complexity & Acc. & Complexity \\ \hline
**Adult** & \(0.846\pm 0.003\) & \(909\pm 212\) & \(0.842\pm 0.003\) & \(673\pm 145\) \\
**Compas** & \(0.664\pm 0.013\) & \(343\pm 41\) & \(0.664\pm 0.013\) & \(155\pm 22\) \\
**Diabetes** & \(0.574\pm 0.008\) & \(22\mathrm{K}\pm 2800\) & \(0.565\pm 0.009\) & \(15\mathrm{K}\pm 2225\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Reduction of the complexity of some TT-rules models after applying optimizations from Section 4.2 on Adult [22], Compas [7] and Diabetes [22] datasets.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Compas**} & \multicolumn{3}{c|}{**Adult**} & \multicolumn{3}{c}{**HELOC**} \\ \hline & Accuracy & Rules & Complexity & Accuracy & Rules & Complexity & Accuracy & Rules & Complexity \\ \hline GL & \(0.685\pm 0.012\) & \(16\pm 2\) & \(20\pm 6\) & \(0.852\pm 0.001\) & \(16\pm 1\) & \(23\pm 1\) & \(0.732\pm 0.001\) & \(104\pm 5\) & \(104\pm 5\) \\ RIPPER & \(0.560\pm 0.006\) & \(12\pm 2\) & \(576\pm 48\) & \(0.833\pm 0.009\) & \(43\pm 15\) & \(14154\pm 4937\) & \(0.691\pm 0.019\) & \(17\pm 4\) & \(792\pm 186\) \\ DT & \(0.673\pm 0.015\) & \(78\pm 1\) & \(12090\pm 155\) & \(0.837\pm 0.004\) & \(398\pm 5\) & \(316410\pm 3975\) & \(0.709\pm 0.011\) & \(70\pm 1\) & \(9522\pm 136\) \\ ORS & \(0.670\pm 0.015\) & \(11\pm 1\) & \(460\pm 42\) & \(0.844\pm 0.006\) & \(9\pm 3\) & \(747\pm 249\) & \(0.704\pm 0.012\) & \(16\pm 6\) & \(1888\pm 708\) \\ \hline TT-rules big (Ours) & \(0.687\pm 0.005\) & \(42\pm 3\) & \(4893\pm 350\) & \(0.851\pm 0.003\) & \(288\pm 12\) & \(22896\pm 954\) & \(0.733\pm 0.010\) & \(807\pm 30\) & \(103763\pm 3857\) \\ TT-rules small (Ours) & \(0.664\pm 0.013\) & \(13\pm 2\) & \(155\pm 22\) & \(0.842\pm 0.003\) & \(130\pm 10\) & \(673\pm 145\) & \(0.727\pm 0.010\) & \(82\pm 30\) & \(574\pm 210\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy and complexity on the Compas, Adult and HELOC datasets for different methods. All the TT-rules are computed with our automatic post-training optimizations as described in Section 4.2. TT-rules big refers to a TTnet trained with a final linear regression with weights as floating points, whereas TT-rules small refers to a TTnet trained with a sparse binary linear regression.
graphs to be barely interpretable. Regarding performance, the TT-rules framework achieved an impressive RMSE of 0.029 on the DNA regression problem, compared to 0.092 for linear models, 0.028 for DNNs, and 0.42 for Random Forests. On the DNA binary classification dataset, the TT-rules framework achieved an accuracy of 83.48%, compared to 83.33% for linear models, outperforming DNNs and Random Forests by 10.8% and 10.4%, respectively. Our approach not only scales but also reduces the input feature set, acting as a feature selection method. We generated a set of 1064 rules out of 20530 features for the regression problem, corresponding to a drastic reduction in complexity. For the binary classification dataset, we generated 9472 rules, which more than halved the input size from 23689 to 9472.
### TT-rules application case study - Claim C)
In this section, we present the results of applying the TT-rules framework to the Adult dataset [22], for a specific trained model shown in Figure 2.
Exact and global interpretability - Claim C-1).For global and exact interpretability, we first apply the TT-rules framework to obtain \(\mathcal{R}\) and \(\mathcal{R}_{opt}\). Then we transform the rules in \(\mathcal{R}_{opt}\) into their equivalent ROBDD representation. This transformation is fast and automatic, and the result can be observed in Figure 2: the resulting decision mechanism is small and easily understandable. In the Adult dataset, the goal is to predict whether an individual \(I\) earned more than $50K per year in 1994. Given an individual's feature inputs \(I\), the first rule of Figure 2 can be read as follows: if \(I\) has completed more than 11 years of education, then the rule is satisfied. If not, then the rule is satisfied if \(I\) earns more than $54,200 in investments per month or loses more than $228. If the rule is satisfied, \(I\) receives one positive point. If \(I\) has more positive points than negative points, the model predicts that \(I\) will earn more than $50K per year.
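The scoring scheme can be made concrete with a small sketch; the thresholds follow the reading of rule 1 above (simplified to flat amounts), and `r1` and the feature names are illustrative, not the paper's code:

```python
def predict(sample, rules):
    """Each rule maps a sample to a score in {-1, 0, 1}; predict the positive
    class ('>$50K') when positive points outnumber negative points."""
    return int(sum(rule(sample) for rule in rules) > 0)

def r1(s):  # illustrative encoding of rule 1 from Figure 2
    satisfied = (s["YoE"] > 11 or s["CapitalGains"] > 54_200
                 or s["CapitalLosses"] > 228)
    return 1 if satisfied else 0

person = {"YoE": 13, "CapitalGains": 0, "CapitalLosses": 0}
print(predict(person, [r1]))  # -> 1 (predicted to earn more than $50K)
```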
Human knowledge injection - Claim C-2).Figure 2 illustrates our model's capability to incorporate human knowledge by allowing the modification of existing rules. However, it is important to note that we do not claim to achieve automatic human knowledge injection. The illustration simply highlights the possibility of manual rule modification in our framework.
Mitigating contextual drift in DNN through global and exact interpretability - Claim C-3).It is essential to recognize that machine learning models may not always generalize well to new data from different geographic locations or contexts, a phenomenon known as "contextual drift" or "concept drift" [24]. The global and exact interpretation of DNNs is vital in this regard, as it allows for human feedback on the model's rules and the potential for these rules to be influenced by contextual drift. For example, as depicted in Figure 2, this accurate model trained on US data is highly biased towards the US and is likely to perform poorly if applied in South America due to rule number 3. This highlights once again the significance of having global and exact interpretability of DNNs, as emphasized by recent NIST Artificial Intelligence Risk Management Framework [4].
## 6 Limitations and future works
Although our TT-rules framework provides a good balance between interpretability and accuracy, we observed that the generalized linear model (GL) offers a better trade-off. Specifically, for approximately the same performance, GL offers significantly less complexity. As such, future work could explore ways to identify feature interactions that work well together, similar to what GL does. Exploring automatic rule addition as an alternative to the human-based approach used in our work could also be a fruitful direction for future research.
Another interesting avenue is to apply TT-rules to time series tasks, where the interpretable rules generated by our model can provide insights into the underlying dynamics of the data. Finally, another promising area for future work would be to propose an agnostic global explainer for any model based on the TT-rules framework.
## 7 Conclusion
In conclusion, our proposed TT-rules framework provides a new and optimized approach for achieving global and exact interpretability in regression and classification tasks. With its ability to scale to large datasets and its potential for feature reduction, the TT-rules framework appears as a valuable tool towards explainable artificial intelligence.

Figure 2: Our neural network model trained on the Adult dataset in the form of Boolean decision trees: the output of the DNN and the output of these decision trees are the same, reaching 83.6% accuracy. Added features are represented in orange rectangles. By modifying existing rules and incorporating \(r_{5}\), the **Human Added Rule**, we reach 84.6% accuracy. On the same test set, Random Forest reaches 85.1% accuracy and a Decision Tree of depth 10 reaches 84.4%. There is no contradiction in the rules: one person cannot be born in both Mexico and Nicaragua. The term YoE refers to the Years of Education, and the Capital Gains (Losses) refer to the amount of capital gained (lost) over the year. Each rule \(r_{i}\) is a function \(r_{i}:\{0,1\}^{n}\mapsto\{-1,0,1\}\), i.e., for each data sample \(I\) and each rule \(r_{i}\) we associate a score in \(\{-1,0,1\}\). The prediction of our classifier is then as stated above.
|
2309.11077 | Weak Supervision for Label Efficient Visual Bug Detection | As video games evolve into expansive, detailed worlds, visual quality becomes
essential, yet increasingly challenging. Traditional testing methods, limited
by resources, face difficulties in addressing the plethora of potential bugs.
Machine learning offers scalable solutions; however, heavy reliance on large
labeled datasets remains a constraint. Addressing this challenge, we propose a
novel method, utilizing unlabeled gameplay and domain-specific augmentations to
generate datasets & self-supervised objectives used during pre-training or
multi-task settings for downstream visual bug detection. Our methodology uses
weak-supervision to scale datasets for the crafted objectives and facilitates
both autonomous and interactive weak-supervision, incorporating unsupervised
clustering and/or an interactive approach based on text and geometric prompts.
We demonstrate on first-person player clipping/collision bugs (FPPC) within the
expansive Giantmap game world, that our approach is very effective, improving
over a strong supervised baseline in a practical, very low-prevalence, low data
regime (0.336 $\rightarrow$ 0.550 F1 score). With just 5 labeled "good"
exemplars (i.e., 0 bugs), our self-supervised objective alone captures enough
signal to outperform the low-labeled supervised settings. Building on
large-pretrained vision models, our approach is adaptable across various visual
bugs. Our results suggest applicability in curating datasets for broader image
and video tasks within video games beyond visual bugs. | Farrukh Rahman | 2023-09-20T06:00:02Z | http://arxiv.org/abs/2309.11077v1 | # Weak Supervision for Label Efficient Visual Bug Detection
###### Abstract
As video games evolve into expansive, detailed worlds, visual quality becomes essential, yet increasingly challenging. Traditional testing methods, limited by resources, face difficulties in addressing the plethora of potential bugs. Machine learning offers scalable solutions; however, heavy reliance on large labeled datasets remains a constraint. Addressing this challenge, we propose a novel method, utilizing unlabeled gameplay and domain-specific augmentations to generate datasets & self-supervised objectives used during pre-training or multi-task settings for downstream visual bug detection. Our methodology uses weak-supervision to scale datasets for the crafted objectives and facilitates both autonomous and interactive weak-supervision, incorporating unsupervised clustering and/or an interactive approach based on text and geometric prompts. We demonstrate on first-person player clipping/collision bugs (FPPC) within the expansive Giantmap game world, that our approach is very effective, improving over a strong supervised baseline in a practical, very low-prevalence, low data regime (0.336 \(\rightarrow\) 0.550 F1 score). With just 5 labeled "good" exemplars (i.e., 0 bugs), our self-supervised objective alone captures enough signal to outperform the low-labeled supervised settings. Building on large-pretrained vision models, our approach is adaptable across various visual bugs. Our results suggest applicability in curating datasets for broader image and video tasks within video games beyond visual bugs.
Weak Supervision for Label Efficient Visual Bug Detection
## 1 Background & Introduction
Visual quality in video games is one of the key drivers of customer satisfaction. With modern games transitioning towards expansive, open worlds with intricate visuals and systems, the potential for bugs rapidly grows. Traditional manual testing methods, constrained by time and resources, grapple with these challenges. Advances in Computer Vision (CV) and Machine Learning (ML) present promising alternatives, offering automated and scalable visual testing solutions, thereby reallocating resources to explore other game dimensions [2]. Notably, the success of deep learning in CV is largely credited to extensive labeled datasets [11, 2], often curated from the vast quantities of digital content on the web. However, curating such massive labeled datasets for a single game is impractical. Manual capturing and labeling of visual bugs at scale would render detection methods redundant, more so given the rarity of such bugs. Recently proposed computer-vision-based methods facilitate automated visual testing at scale by **1.** leveraging game engines to increase data availability amenable to deep learning approaches [2, 3, 4, 5, 6] and/or **2.** using anomaly detection based approaches treating bugs as out-of-distribution (OOD) occurrences relative to normal frames.
## 2 Approach
Several practical challenges arise in the domain of visual bug detection, which shape our objectives. First, there is the issue of limited labeled data: the timeframes during which visual testing can be conducted are narrow, especially with fresh content, so methods amenable to low-data regimes and/or faster transfer learning are highly coveted. A second challenge is access to source code; engines such as [12, 60] continue to integrate ML features that increase the data available for models to consume, yet this is impractical to scale across every game (e.g., building hooks into every new sub-release of a given game). We seek methods that can be applied in scenarios where access to the source code is not guaranteed. Related to this is the notion of out-of-distribution (OOD) scenarios: even if we could gather data at a given point during development, as new content is added we want our model to adapt to new scenarios with minimal new data. An additional point here is that our input data at test time is constrained to RGB frames. A third practical constraint is that bugs are often rare, so methods that remain performant in low-prevalence scenarios are valuable.
### Datasets
We use the Giantmap-5 (now GM4, as one object was removed) environment and active area as introduced in [10], developed in Unreal [11]. We further extend it by introducing 46 new objects of interest (OOI), shown in fig. 2. In this study, we treat the Giantmap environment as our target video game title for our chosen visual bug, first-person player clipping (FPPC). FPPC manifests when collision meshes for either the player or object are set incorrectly or naively, creating visual aberrations that would not occur in the physical world; see fig. 2 for FPPC on the 4 objects in our GM4 environment. From this environment we create i.i.d. screenshots programmatically by first generating an object distribution over the map with a specified density, then spawning the player near objects within a certain distance from the center of the object to sample varied clipping and normal samples. This capability allows us to scale data generation significantly; however, we seek to push the boundaries of label efficiency, treating Giantmap as our target title. How far can we push in-distribution performance, and how does it fare in OOD scenarios? To this effect, we constrain training data to 15 total samples for the GM4-tiny dataset and 156 samples for the GM4-base dataset, whilst generating 3k-sample in-distribution validation and test sets. Moreover, we generate a low-prevalence (0.007) video _deployment_ set on GM50 (4 ID + 46 OOD objects) to evaluate our methods, in an effort to mimic what a developer might collect from automated or human play testing. Additionally, we gather separate human gameplay on GM50 to use with the small amount of labeled data generated. In summary, we are given a small amount of i.i.d. in-distribution screenshot data and unlabeled OOD video, and are expected to evaluate on an OOD, low-prevalence video.

Figure 1: General overview of our method: **1. Segmentation Stage:** Given unlabeled gameplay video, we apply a geometric promptable segmentation model (SAM) to automatically extract masks. **2. Filtering Stage:** The obtained masks are then filtered in an unsupervised manner and/or optionally via text-interactive filtering using a text-image model (CLIP). **3. Augmentation Stage:** Labeled '_good_' target instances, and/or unlabeled target instances, are augmented using the filtered masks, producing samples used to train a surrogate objective.
### Method
Our method can be viewed as a self-supervised objective scaled through weak supervision. As shown generally in fig. 1, it consists of 3 main stages, described in more detail below. We use the first-person player clipping task to show the efficacy of the approach, as it is a challenging visual bug.
**Segmentation Stage:** Given unlabeled gameplay video of a target video game, we apply a pre-trained, promptable segmentation model, SAM [], to extract masks in an automated manner. SAM takes as input an image and one or more geometric prompts. In the absence of a prompt, points are placed uniformly across the image, which represents the automatic/zero-shot segmentation prompt. Priors can be injected into the prompt to guide SAM to ignore or further sample certain regions of the input frame.
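As an illustration, the sketch below uses the `segment_anything` package to build an automatic mask generator whose point grid skips a region of the frame; the checkpoint path and the skipped region are assumptions for this sketch, not values from our experiments:

```python
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # assumed path

# Uniform grid of normalized (x, y) point prompts, skipping the bottom-right
# region where the first-person weapon typically sits (the injected prior).
grid = np.array([[x, y]
                 for x in np.linspace(0.05, 0.95, 16)
                 for y in np.linspace(0.05, 0.95, 16)
                 if not (x > 0.55 and y > 0.55)])
generator = SamAutomaticMaskGenerator(sam, points_per_side=None,
                                      point_grids=[grid])

# frame: an HxWx3 uint8 RGB array from gameplay video
# masks = generator.generate(frame)  # list of dicts with a 'segmentation' key
```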
**Filtering Stage:** Since the environment is an outdoor park set in spring, certain semantic visual features are abundant, e.g., trees, walking trails, or grass. We develop a filtering & deduplication step using CLIP [], a text-image model, to extract embeddings of each masked region. For _autonomous filtering_, we first cluster embeddings using Hierarchical Agglomerative Clustering (HAC) [], [], then re-sample masks from each cluster, aiming to balance the mask distribution. For _interactive filtering_, a user may apply prior knowledge to select for or against certain masks via a text prompt, after which we perform clustering. The text prompts are embedded using the CLIP text encoder, and cosine distances are computed with each mask embedding. Text-prompting capability can autonomously incorporate prior knowledge; for instance, if prior knowledge indicates that foliage, trees, and grass are not relevant, text prompts around these semantics can be cached and applied as pre-processing prior to unsupervised clustering. The final set of masks represents the set of semantics at the playthrough/game level expected to be observed in a scene, intrinsically making them good candidates for visual bug augmentation. Moreover, the policy under which the data is collected also contributes to the mask distribution; we make an explicit assumption that the semantics of a target game are captured in the unsupervised playthroughs.
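A minimal sketch of this filtering step, using the Hugging Face CLIP interface and scikit-learn's agglomerative clustering; the similarity threshold and samples-per-cluster values are illustrative assumptions, and `metric="cosine"` assumes scikit-learn >= 1.2:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(crops):  # crops: list of PIL images of masked regions
    inputs = processor(images=crops, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return (feats / feats.norm(dim=-1, keepdim=True)).detach().numpy()

def text_filter(emb, negative_prompts, thresh=0.24):
    """Drop masks too similar to any cached negative prompt (e.g. 'grass')."""
    inputs = processor(text=negative_prompts, return_tensors="pt", padding=True)
    t = model.get_text_features(**inputs)
    t = (t / t.norm(dim=-1, keepdim=True)).detach().numpy()
    return (emb @ t.T).max(axis=1) < thresh  # boolean keep-mask

def rebalance(emb, k=50, per_cluster=5, seed=0):
    """HAC with cosine distance, then re-sample evenly from each cluster."""
    labels = AgglomerativeClustering(n_clusters=k, metric="cosine",
                                     linkage="average").fit_predict(emb)
    rng = np.random.default_rng(seed)
    keep = [rng.choice(np.where(labels == c)[0],
                       min(per_cluster, int((labels == c).sum())),
                       replace=False)
            for c in range(k)]
    return np.concatenate(keep)
```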
**Augmentation Stage:** Masks along with target images are used to create a self-supervised objective through domain-specific augmentation. Target images can be obtained from a small labeled set, or directly from the source unlabeled data. As the masks represent semantics of the target game, we utilize them to create augmented positive examples denoting bugs and negative examples denoting _"normal"_ or _"no-bug"_. If variants of a particular bug exist (e.g., stretched vs. low-res texture), multiple classes can be augmented. As the method is tailored to the downstream task, in certain scenarios the source and target image can be identical. Our method is flexible and can be applied across a variety of visual bug types.

Figure 2: (left 4 images) In-distribution clipping examples from the GM-4 set. (Right image) 46 out-of-distribution objects added in GM-50.
**First-person Weapon Clipping approach:** We instantiate our general method for first-person (or egocentric) player clipping (FPPC), fig. 3. During segmentation prompting, we prefer to ignore the bottom-right corner of the image, where the weapon is typically placed, thus preventing the detected masks from being saturated with weapon masks. From the unsupervised gameplay video, the video was first down-sampled temporally, as videos naturally have visual information redundancy among adjacent frames. Semantic redundancy, however, is useful, as the same object viewed from different viewpoints increases both the probability of acquiring a good mask and instance diversity. From this subsample, two further sets are sampled: 300 frames to build a tiny dataset of 217 masks, and 20k frames to build a larger set of 17k masks. The filtering step is unchanged from fig. 1. For our specific setting, we elect to paste the mask _over_ the weapon in a given target image. This creates a _"pseudo-clipping"_, or _"weapon obstruction"_, signal which we hypothesize is correlated with our target downstream clipping task. Conversely, the mask is copied _under_ the weapon (respecting the weapon's mask) to create a negative sample. In order to achieve this, we require labeled-_good_ images as targets. We label 5 random frames from the human gameplay video and use them as target images. Each target image is paired with each mask for 2 rounds of augmentation (pseudo-clip vs. no-clip). During the augmentation, the source mask can be further augmented before it is pasted onto the target images; we apply random rotation and random horizontal flip augmentations. Post augmentation, the tiny mask set generates 2.2k total samples, while the large set generates 170k; these are used to pre-train, multi-task, and few-shot fine-tune on our target task.
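A minimal sketch of the over/under compositing, assuming the source cutout has already been resized and aligned to the target frame's coordinates (placement logic omitted):

```python
import numpy as np

def composite(target_rgb, weapon_mask, src_rgb, src_mask, over=True):
    """Paste a source-object cutout onto a labeled-good target frame.
    over=True  -> on top of the weapon: 'pseudo-clipping' positive sample
    over=False -> behind the weapon (weapon pixels win): negative sample"""
    out = target_rgb.copy()
    region = src_mask if over else (src_mask & ~weapon_mask)
    out[region] = src_rgb[region]  # boolean-mask pixel assignment
    return out

# target_rgb, src_rgb: HxWx3 uint8 arrays; weapon_mask, src_mask: HxW bools
# positive = composite(target_rgb, weapon_mask, src_rgb, src_mask, over=True)
# negative = composite(target_rgb, weapon_mask, src_rgb, src_mask, over=False)
```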
Figure 3: Our method from fig. 1 instantiated w.r.t. first-person player clipping. From an unlabeled video, 5 target frames (2 shown) are labeled and processed by SAM (in dark blue) with geometric prompts. Source geometric prompts guide SAM to disregard the 'prior region' (i.e., the weapon region), while target prompts emphasize only that region. After filtering, source masks, along with target masks and target images, proceed to the augmentation phase. Here, positives are created by overlaying the source mask _over_ the target image's weapon area, while negatives are positioned _behind_ the weapon, respecting the target weapon mask. Classifying positives vs. negatives serves as our self-supervised objective for FPPC.
## 3 Experiments & Results
**In-Distribution performance on GiantMap-4:** We report the in-distribution balanced test accuracy of the various architectures evaluated in tab. 1, 2. We evaluate ResNet [] variants and the Vision Transformer (ViT) []. Within each architecture we further evaluate various pre-training methodologies, including supervised, weakly-supervised, and self-supervised learning: IN1k [] supervised pre-training using the traditional [] and A1 ResNet training recipes from [], [], the DINOv1 [] self-supervised pretext task (for both ResNet and ViT), as well as the weakly-supervised CLIP [] ViT-based image encoder. We use a few-shot fine-tuning approach given recent results indicating its superiority when training in these regimes [], []. Moreover, we evaluate using a crop prior compared with the full frame. Specifically regarding FPPC, given that it mainly manifests around the weapon, we can ignore the other parts of the frame. Naturally, the prior is significantly more data efficient, see tab. 1. In parallel, treating the problem as an object detection problem was also explored; however, the crop prior approach shows greater data efficiency given that no regression of bbox coordinates is required (ref. supplemental). Our results show **1.** few-shot fine-tuning can be efficient and **2.** when pre-trained, Vision Transformers seem to outperform traditional CNNs in low-labeled settings, similar to observations in other visual domains []. Moreover, we observe that self-supervised pre-training (DINOv1) is competitive with, or slightly surpasses, supervised pre-training when transferring to our task; i.e., DINO is able to extract relevant features that transfer well into the low-data regime, tab. 2. Given our strong baseline for balanced low-labeled in-distribution performance, we select a ViT pre-trained with DINO as our backbone for all future experiments, where we will evaluate in a challenging out-of-distribution (OOD), low-prevalence setting observed in practice. In this imbalanced setting, we use the F1 score (harmonic mean of precision and recall) as our primary metric.
### Weak Supervision
Given the supervised fine-tuning (SFT) performance on our low-prevalence deployment set (tabs. 3, 5), we seek to improve it by applying our method from section 2.2.
**Mask Filtering:** To analyze the masks produced by SAM [], we sample 30k frames from an unlabeled human gameplay video from GM50, generate masks using SAM, and label them. Our labeling scheme was a combination of GM50 Objects of Interest (OOI) along with other general semantic categories. As observed in fig. 3(a), firestand, pathway, ground, and trees dominate the distribution: the latter two are omnipresent in scenes, and the former appear frequently due to the data gathering policy. This creates redundancy in the signal we inject via augmentation. To combat this, we use CLIP [] to extract embeddings and HAC []
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Model Architecture** & **Pretrain Method** & **Prior** & **Accuracy** \\ \hline ResNet-50 & IN1k+sup & Crop & 0.831 \(\pm\) 0.06 \\ ResNet-50 & IN1k+sup & Crop & 0.796 \(\pm\) 0.03 \\ ResNet-18 & IN1k+sup & Crop & 0.753 \(\pm\) 0.04 \\ ViT-base-16 & IN1k+sup & Crop & 0.913 \(\pm\) 0.03 \\ ViT-base-16 & CLIP & Crop & 0.949 \(\pm\) 0.03 \\ ViT-base-16 & DINOv1 & Crop & 0.852 \(\pm\) 0.03 \\ ViT-base-16 & IN1k+sup & Crop & 0.852 \(\pm\) 0.03 \\ \hline ResNet-50 & IN1k+sup & - & 0.733 \(\pm\) 0.05 \\ ViT-base-16 & DINOv1 & - & 0.832 \(\pm\) 0.05 \\ ViT-base-16 & CLIP & - & 0.655 \(\pm\) 0.02 \\ ViT-base-16 & IN1k+sup & - & 0.738 \(\pm\) 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table 1: In-distribution test performance for training on GM4-Tiny Dataset (15 total samples). Results over 3 trials.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Model Architecture** & **Pretrain Method** & **Prior** & **Accuracy** \\ \hline ResNet-50 & IN1k+sup & Crop & 0.958 \\ ViT-base-16 & IN1k+sup & Crop & 0.9657 \\ ViT-base-16 & CLIP & Crop & **0.979** \\ ViT-base-16 & DINOv1 & Crop & 0.976 \\ ViT-base-16 & IN1k+sup & Crop & 0.9418 \\ ViT-base-16 & IN1k+sup & Crop & 0.966 \\ \hline ResNet-50 & IN1k+sup & - & 0.922 \\ ViT-base-16 & DINOv1 & - & 0.967 \\ ViT-base-16 & CLIP & - & 0.89 \\ ViT-base-16 & IN1k+sup & - & 0.961 \\ \hline \hline \end{tabular}
\end{table}
Table 2: In-distribution test performance for training on the GM4-base dataset, 156 total samples. \(\pm\)0.02 over 3 trials.
(\(k=50\)) with cosine distance to cluster masks in an unsupervised manner; \(k\) was selected naively with _a priori_ knowledge of the 50 OOI on the map. Realistically \(k>50\), as other non-OOI also contribute to the visual semantics of GM50. We observe that re-sampling after using either the heuristic of fig. 4b to select \(k\) or overclustering, fig. 4c (\(k=100\)), somewhat ameliorates class imbalance. See fig. 5 for qualitative examples of our clusters. Interestingly, clusters capture multiple views of both OOI, fig. 5, and other map objects, fig. 5d; the food stand is not an OOI yet it is captured, a promising sign for OOD generalization. Further, we observe that objects with overlapping visual semantics, especially fine-grained ones such as variants of statues, fig. 5b, tend to cluster together. We explore explicit removal of non-relevant yet highly frequent masks such as sky, trees, and pathways, in hopes of further increasing the signal in our weak dataset. As we are already using the CLIP image encoder to extract visual features, we can pair it with text encoder embeddings that may be supplied interactively or stored as a priori knowledge; e.g., clipping with grass and foliage is near universally a non-issue. We filter via stored text prompts tailored towards pathways, trees, etc., resulting in the distribution of fig. 4d. While the non-relevant masks have been filtered, the overall class balance has gotten worse: by removing omnipresent non-relevant classes (\(\sim\)50% of the masks), any remaining over-represented classes (fire stand) overwhelm the distribution. We rebalance by performing clustering and re-sampling post text filtering. There exist other interesting approaches not explored here, e.g., clustering followed by interactive labeling to prune away entire clusters.
**Self-supervision: Pre-training vs multi-task:** Given two mask sets, Tiny (217 masks) and Large (17k masks), we create multiple datasets to serve the self-supervised objective.
Figure 4: Mask label frequencies. **(a)** ground truth **(b)** 50 re-sampled from k=50 clusters, **(c)** 50 samples re-sampled from k=100 clusters, **(d)** text prompt based filtering on semantic categories trees, foliage, roads, sky
Figure 5: Mask clustering (\(k=50\)): **(a)** multiple views of objects are captured, **(b)** certain fine-grained objects tend to cluster together, **(c)** the sky, an "object" not relevant to our visual bug, **(d)** map object.
The first pair, TinyAug and LargeAug, consists of paired data with limited rotation augmentation of the individual masks. The second pair, HeavyTiny and HeavyLarge, uses heavy rotations to increase diversity. We pair these objectives with labeled GM4-tiny and GM4-base in a sequential pre-training or a simultaneous multi-task training setting. The multi-task objective is a weighted combination \(L=\lambda L_{w}+(1-\lambda)L_{t}\), where \(L_{w}\) denotes our SSL objective and \(L_{t}\) the target objective. We evaluate our models in the low-prevalence OOD setting on GM50 across 3 settings, each denoting the amount of "real" labeled data available during training: **1.** only a few (5) labeled _"good"_ exemplars and 0 positives (i.e., 0 real bug samples), trained with weak supervision only (tab. 4); **2.** a tiny amount of labeled data (15 examples total, tab. 3); and **3.** a small amount of labeled data (156 samples total, tab. 5). Our results indicate that our self-supervision alone, absent any positive (bug) examples, is sufficient to surpass the best fully-supervised training in the low-labeled, low-prevalence regime (0.529 vs 0.336 F1). Further fine-tuning on a small amount of labeled data (tab. 5) enhances performance to 0.550. Overall, pre-training and multi-task training are competitive with one another; however, pre-training edges out. In addition, we observed that pre-training was simpler to optimize, as the loss weight (\(\lambda\)) is a sensitive hyperparameter. LargeAug, created from thousands of masks, produces worse results overall than TinyAug, which has 217 masks. This is likely due to the aforementioned distribution imbalance in the masks producing information-redundant samples, further exacerbated by scale. Similarly, for raw unfiltered masks, results indicate that rebalancing and filtering are a progressive step; however, with the right mask augmentations, sufficient diversity is introduced to make raw masks competitive. We make similar observations with our method on texture bugs (ref. supplemental).
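A minimal PyTorch sketch of the multi-task objective \(L=\lambda L_{w}+(1-\lambda)L_{t}\) described above; the model, optimizer, and the two batches are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def multitask_step(model, optimizer, ssl_batch, sup_batch, lam=0.5):
    """One update of L = lam * L_w (SSL objective) + (1 - lam) * L_t (target)."""
    x_w, y_w = ssl_batch   # weakly-labeled, mask-augmented pairs
    x_t, y_t = sup_batch   # small real labeled set (e.g., GM4-tiny)
    loss_w = F.cross_entropy(model(x_w), y_w)   # L_w
    loss_t = F.cross_entropy(model(x_t), y_t)   # L_t
    loss = lam * loss_w + (1 - lam) * loss_t
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

As noted above, \(\lambda\) is sensitive; sequential pre-training sidesteps this by running the two objectives one after the other.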
\begin{table}
\begin{tabular}{l c} \hline \hline
**Dataset** & **F1** \\ \hline LargeAug-Raw & 0.054 \\ LargeAug & 0.429 \\ TinyAug-Raw & 0.296 \\ TinyLargeAug-Raw & 0.480 \\ TinyAug & 0.529 \\ TinyHeavyAug & 0.493 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Low-prevalence, OOD deployment F1 scores on GM50 in few-shot setting (ie. self-supervised objective only. 5 labeled negative examples, 0 positive examples). LargeAug=17k Masks, TinyAug=217 masks. Raw suffix denotes unfiltered.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Dataset** & **Train Method** & **F1** \\ \hline Supervised GM4-tiny & SFT & 0.153 \\ TinyAug + GM4-tiny & Pretrain + SFT & 0.479 \\ LargeAug + GM4-tiny & Pretrain + SFT & 0.397 \\ TinyAug + GM4-tiny & multi-task & 0.484 \\ LargeAug + GM4-tiny & multi-task & 0.484 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Low-prevalence, OOD deployment F1 results on GM50. GM4-tiny training dataset (15 labeled examples). LargeAug=17k masks, TinyAug=217 masks.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Dataset** & **Train Method** & **F1** \\ \hline Supervised GM4-tiny & SFT & 0.336 \\ LargeAug + GM4-tiny & Pretrain + SFT & 0.419 \\ TinyAug + GM4-tiny & Pretrain + SFT & 0.516 \\ TinyHeavyAug-raw + GM4 & multi-task & 0.510 \\ LargeAug + GM4-tiny & multi-task & 0.533 \\ TinyHeavyAug + GM4 & Pretrain+SFT & 0.492 \\ TinyAug + GM4 & Pretrain+SFT & **0.550** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Low-prevalence, OOD deployment F1 results on GM-50. GM4-base training dataset (175 total labeled examples). Multi-task and pre-training on the self-supervised objective greatly increases performance over baseline 0.336 F1 score obtained from SFT. TinyAug = small mask set. Raw suffix = unfiltered.
## 4 Discussion, Limitations and Future work
Our method, which utilizes weak supervision to scale up a self-supervised objective, improves performance through both multi-task training and pre-training, consistently outperforming training solely on a supervised low-labeled dataset. Our self-supervision, however, is domain-crafted, in contrast with recent general, less biased approaches [2, 3, 4, 5]; we only make use of unlabeled data as a means to obtain representative object-centric masks. Additional information exists in unsupervised videos that could be captured through general self-supervised objectives; for instance, we could use rebalanced masks with DINO [2, 5] to adapt the backbone. Our GM environment has shared, yet inverted, objectives compared to PUG [2]: [2] use interactive Unreal environments as simulators to obtain photorealistic data in a controlled manner, whereas our target distributions are the simulators themselves. A limitation of our approach is its reliance on the policy under which data was gathered. The integration of Reinforcement Learning agents, such as [2], is an intriguing avenue for future research. Additionally, Fig. 4(b) highlights a challenge: our filtering approach allows text prompts to specify preference-based semantics, yet it struggles when these semantics are fine-grained or not well represented within the embedding; thus, the text-image model has difficulty performing in a zero-shot context. Future work might consider more advanced text-image models or strategies that combine text-image prompting with other learning methods. Additionally, models adapted from SAM [2, 5] can be applied during the segmentation stage to enhance extraction of semantic masks.
The rapid testing cycles and cadence of new content make traditional label-intensive learning impractical for visual bug detection. Although game engines increasingly integrate ML capabilities, relying solely on engine integration is not scalable; our work moves towards techniques that do not rely on source-code access. Further, new game content can be viewed as OOD data, and we have taken steps towards methods that are robust and generalize to such scenarios, specifically for objects. Future work may explore the scalability and generality of our methodology across various visual bug types and OOD settings: what data requirements exist for domain adaptation to different art styles (e.g., non-photorealistic games), environments, and lighting? Moreover, constraining ourselves to RGB-only input for practical reasons fails to exploit the richness of multimodality, limiting the depth of visual cues our models may capture. Multimodal data can be used during training and constrained or estimated at test time, maintaining practicality. Further, our augmentation strategy uses traditional CV techniques; other synthetic or generative methods may also be an interesting line of future work.
## 5 Conclusion
Visual bug detection poses unique challenges due to rapidly evolving content, constraints in labeled data availability, and the need to generalize to out-of-distribution scenarios. In this study, we explored a weakly-supervised, three-staged approach to address these challenges, specifically targeting first-person player clipping (FPPC) within Giantmap. Our findings harness the potential of large pretrained visual models to enhance our training data. Our approach allows for the injection of priors through prompting, both geometric and text-based. A significant advantage of promptable filtering is its simplicity, making it accessible to non-ML professionals and allowing them to integrate their expert knowledge into the self-supervised objective. Additionally, our framework shows promise in generating expansive, curated datasets within video games, with the potential to foster both a comprehensive understanding of video game scenes and the development of visual bug detection models. |
2309.11711 | MoDA: Leveraging Motion Priors from Videos for Advancing Unsupervised
Domain Adaptation in Semantic Segmentation | Unsupervised domain adaptation (UDA) has been a potent technique to handle
the lack of annotations in the target domain, particularly in the semantic
segmentation task. This study introduces a different UDA scenario where the
target domain contains unlabeled video frames. Drawing upon recent advancements
of self-supervised learning of the object motion from unlabeled videos with
geometric constraint, we design a \textbf{Mo}tion-guided \textbf{D}omain
\textbf{A}daptive semantic segmentation framework (MoDA). MoDA harnesses the
self-supervised object motion cues to facilitate cross-domain alignment for
segmentation task. First, we present an object discovery module to localize and
segment target moving objects using object motion information. Then, we propose
a semantic mining module that takes the object masks to refine the pseudo
labels in the target domain. Subsequently, these high-quality pseudo labels are
used in the self-training loop to bridge the cross-domain gap. On domain
adaptive video and image segmentation experiments, MoDA shows the effectiveness
utilizing object motion as guidance for domain alignment compared with optical
flow information. Moreover, MoDA exhibits versatility as it can complement
existing state-of-the-art UDA approaches. Code at
https://github.com/feipanir/MoDA. | Fei Pan, Xu Yin, Seokju Lee, Axi Niu, Sungeui Yoon, In So Kweon | 2023-09-21T01:31:54Z | http://arxiv.org/abs/2309.11711v2 | MoDA: Leveraging Motion Priors from Videos for Advancing Unsupervised Domain Adaptation in Semantic Segmentation
###### Abstract
Unsupervised domain adaptation (UDA) is an effective approach to handle the lack of annotations in the target domain for the semantic segmentation task. In this work, we consider a more practical UDA setting where the target domain contains sequential frames of unlabeled videos, which are easy to collect in practice. A recent study suggests self-supervised learning of the object motion from unlabeled videos with geometric constraints. We design a motion-guided domain adaptive semantic segmentation framework (MoDA) that utilizes self-supervised object motion to learn effective representations in the target domain. MoDA differs from previous methods that use temporal consistency regularization for the target domain frames. Instead, MoDA deals separately with the domain alignment on the foreground and background categories using different strategies. Specifically, MoDA contains foreground object discovery and foreground semantic mining to align the foreground domain gaps by taking instance-level guidance from the object motion. Additionally, MoDA includes background adversarial training, which contains a background category-specific discriminator to handle the background domain gaps. Experimental results on multiple benchmarks highlight the effectiveness of MoDA against existing approaches in domain adaptive image segmentation and domain adaptive video segmentation. Moreover, MoDA is versatile and can be used in conjunction with existing state-of-the-art approaches to further improve performance.
Unsupervised domain adaptation, Semantic Segmentation, Domain Adaptive Video Segmentation, Geometric Learning.
## I Introduction
Fully-supervised semantic segmentation [1, 2] is a data-hungry task requiring every pixel of the training images to be assigned a semantic label. However, annotating a dataset collected from real-world scenes [3] is expensive and time-consuming, since human operators must manually label all pixels. Recent advancements in computer graphics have provided new solutions for the semantic segmentation community: with synthesized rendering pipelines, we can quickly generate a labeled virtual dataset, such as GTA5 [4] and SYNTHIA [5] (called the source domain). To bridge the gap between simulated and real scenes (called the target domain), domain adaptation techniques address the domain shift or distribution change. This scenario is called unsupervised domain adaptation (UDA) because no labels are provided in the target domain.
Current UDA methods [6, 7, 8, 9, 10, 11, 12] for semantic segmentation assume that the target domain comprises non-sequential images. Specifically, they adopt Cityscapes [3] with 2,975 unlabeled non-sequential images as the target domain. However, we assume sequential video frames are accessible in the target domain, since one can easily collect a set of sequential video frames in a real scene. In this work, we consider a more practical UDA setting where the target domain consists of unlabeled sequential image pairs, each pair consisting of an image and its adjacent frame, while the source domain follows the same setting as existing UDA approaches [6, 7, 9, 13, 14]. Under this setting, one method for domain alignment is to leverage the consistency across the sequential image pairs, as proposed in [15], which involves computing the optical flow between the sequential image pairs in the target domain and using it to warp the predictions from
Fig. 1: We propose to use the object motion as complementary guidance for domain adaptation in semantic segmentation. This is different from temporal consistency regularization which cannot handle similar errors on the sequential target frames. The object motion is predicted by self-supervised learning from unlabeled target video frames with geometric constraints.
its sequential pair onto the current image. Based on this, temporal consistency between the predictions made on the two images is established. However, using optical flow alone leads to limited performance gains, as it fails to address similar prediction errors made on the same object across sequential image pairs in the target domain, as shown for the truck in Fig. 1.
The recent trend in dynamic scene understanding involves learning the object motion and the camera's ego-motion from unlabeled sequential image pairs. Existing works [16, 17, 18] suggest learning a motion network to disentangle the local object motion from the global camera ego-motion in static scenes with self-supervised geometric constraints. A bird's eye view illustration in Fig. 2 (a) shows an example of the object motion and the camera's ego-motion. Regardless of the camera's ego-motion, the disentangled object motion is capable of _segmenting_ the moving objects out from static scenes. We propose that the object motion learned from geometric constraints can be used as _complementary_ guidance to learn effective representations in the target domain. Specifically, the segmentation network suffers from the cross-domain gap due to the lack of semantic labels in the target domain. However, the motion network is trained separately from the segmentation network, using only target frames in a _self-supervised_ manner with geometric constraints (finding temporal correspondences of frames within the target domain), which _does not require_ any semantic labels. The object motion from the motion network is therefore not affected by the cross-domain gaps caused by the labeling issue.
In this work, we present a motion-guided unsupervised domain adaptation (MoDA) method for the semantic segmentation task, which utilizes self-supervised object motion from geometric constraints as a prior. MoDA divides all categories into two sets: the foreground categories that can move and the background categories that cannot. On this basis, MoDA treats foreground and background categories separately for domain alignment using different strategies. To bridge the domain gaps in the foreground categories, MoDA presents two novel, complementary modules, namely foreground object discovery (FOD) and foreground semantic mining (FSM), which take instance-level guidance from motion to improve the pixel-wise semantic predictions in the target domain. Moreover, MoDA introduces background adversarial training (BAT), which includes a background category-specific discriminator for the domain alignment of the background categories. Experimental results on benchmark datasets show the effectiveness of MoDA compared with state-of-the-art approaches on both domain adaptive image segmentation and domain adaptive video segmentation. Additionally, MoDA is versatile and can be incorporated into existing state-of-the-art approaches for a further performance boost.
In summary, our contributions are listed as follows:
1. We consider a more practical unsupervised domain adaptation setting where the target domain contains sequential frames of unlabeled videos, which are easy to collect in practice. We apply the object motion learned by the self-supervised motion network based on geometric constraints (without extra labels) to semantic segmentation.
2. We propose a motion-guided domain adaptation (MoDA) method using the self-supervised object motion learned from geometric constraints for semantic segmentation. MoDA handles the domain gaps in the foreground and background categories with different adaptation strategies. Specifically, MoDA contains FOD and FSM to align the foreground categories by taking instance-level guidance from the _self-supervised_ object motion. Additionally, MoDA includes BAT to reduce the cross-domain discrepancy on the background categories.
3. MoDA shows superior performance on the benchmarks for domain adaptive image segmentation and domain adaptive video segmentation, and is complementary to state-of-the-art approaches with various architectures for further performance gains.
## II Related Works
**Domain adaptive image segmentation.** Domain adaptive image segmentation can be regarded as a practical application of transfer learning that leverages labeled data in the source domain to solve problems in a target domain. The goal [19, 20] is to align the domain shift between the labeled source and the unlabeled target domain. In this community, adversarial learning [10, 11, 21, 22, 23] and self-training [7, 24, 25, 26, 27, 28, 29] approaches are widely adopted and demonstrate compelling adaptation results. [21] designs two discriminator networks to implement inter-domain and intra-domain alignment. [30] proposes to average the predictions from the source and target domains to stabilize the self-training process, further incorporating uncertainty [27] to minimize the prediction variance. Existing works consider domain alignment on the category level [31] and the instance level [32] to learn domain-invariant features. [33] proposes to handle more diverse data in the target domain, adopting image-level annotations from the target domain to bridge the domain gap. [34] proposes combined learning of depth and segmentation for domain adaptation with self-supervision from geometric constraints. Despite this impressive progress, we consider using motion priors as guidance for domain alignment in this work, dealing separately with foreground and background categories via different alignment strategies.
**Self-training for domain alignment.** In self-training, the network is trained with pseudo labels from the target domain, which can be pre-computed offline or calculated online during training [6, 7, 13, 14]. [7] proposes to estimate category-level prototypes on the fly and refine the pseudo labels iteratively to enhance the adaptation effect. [9] utilizes data augmentation and momentum updates to regulate cross-domain consistency. [35] proposes to align the cross-domain gaps in a structural affinity space for the segmentation task. In this work, we introduce motion masks that provide complementary object geometric information as a prior. Specifically, we develop the motion-guided self-training and the moving
object label mining module to refine the target pseudo labels and thus improve the adaptation performance.
**Domain adaptive video segmentation.** Exploiting motion information like optical flow [36] to separate objects in videos and regularize segmentation training is well explored in the video segmentation field. In UDA, several attempts introduce temporal supervision signals to enforce domain alignment. [37] regularizes cross-domain temporal consistency with adversarial training to minimize the distribution discrepancy. [15] proposes to capture spatiotemporal consistency in the source domain by data augmentation across frames. In this work, we propose to utilize the 3D object motion [16] of the target sequential image pairs, which provides rich information for localizing and segmenting the moving objects.
## III Preliminary
### _Notation and Overview_
We explore a new setting for UDA in semantic segmentation where the target domain contains sequential image pairs. We have a set of source images \(X^{\mathbb{S}}=\{x_{n}^{\mathbb{S}}\}_{n=1}^{N^{\mathbb{S}}}\) with corresponding segmentation annotations \(Y^{\mathbb{S}}=\{y_{n}^{\mathbb{S}}\}_{n=1}^{N^{\mathbb{S}}}\), where \(N^{\mathbb{S}}\) is the number of source images. We also have a set of unlabeled sequential image pairs \(X^{\mathbb{T}}=\{(x_{n,1}^{\mathbb{T}},x_{n,2}^{\mathbb{T}})\}_{n=1}^{N^{\mathbb{T}}}\) in the target domain, where \(x_{n,2}^{\mathbb{T}}\) is an adjacent frame of \(x_{n,1}^{\mathbb{T}}\) and \(N^{\mathbb{T}}\) is the number of target image pairs. Note that \(x^{\mathbb{T}}\in\mathbb{R}^{H,W,3}\), \(x^{\mathbb{S}}\in\mathbb{R}^{\bar{H},\bar{W},3}\), and \(y^{\mathbb{S}}\in\mathbb{B}^{\bar{H},\bar{W},C}\) consists of pixel-wise one-hot vectors, where \(C\) is the number of all categories \(\mathbb{C}\). We divide all the categories \(\mathbb{C}\) into the _foreground categories_ that can move and the _background categories_ that cannot move 1. For the geometric part, our motion network and depth network are denoted by \(G_{m}\) and \(G_{p}\). For the semantic part, our segmentation network is denoted by \(G_{e}\); it outputs a pixel-wise softmax prediction over all the categories \(\mathbb{C}\). Our objective is to adapt the segmentation model \(G_{e}\) to the unlabeled target domain.
Footnote 1: The foreground categories are those that are movable such as person, car, and motorcycle. The background categories are those unable to move by themselves such as building, road, and tree
In this section, we first train the motion network \(G_{m}\) to learn object motion prediction in the target domain illustrated in Sec. III-B. Then, we conduct motion mask preprocessing to obtain the instance-level motion masks in the target domain shown in Sec. III-C.
### _Target Domain Geometric Learning_
The joint acquisition of knowledge concerning moving objects and the motion of the ego camera within a locally static scene has received considerable attention in the area of dynamic scene understanding [16, 17, 18, 38]. Recent investigations have introduced methods to disentangle the local objects' independent motion (called _object motion_) from the global camera's motion (called _ego-motion_) in a self-supervised manner with geometric constraints [16, 17]. A bird's eye view illustration in Fig. 2 (a) shows an example of the object motion and the camera's ego-motion. We use the object motion to segment the moving objects out from the static scene.
Given an unlabeled target frame and its adjacent frame \(\{x_{1}^{\mathbb{T}},x_{2}^{\mathbb{T}}\}\), our depth network \(G_{p}\) is trained to estimate their depth maps \(\{d_{1}^{\mathbb{T}},d_{2}^{\mathbb{T}}\}\in\mathbb{R}^{H,W}\). These frames and their corresponding depth maps are then concatenated as input \(\{x_{1}^{\mathbb{T}},d_{1}^{\mathbb{T}},x_{2}^{\mathbb{T}},d_{2}^{\mathbb{T}}\}\) and fed into the motion network \(G_{m}\), which is trained to generate the camera's ego-motion \([R,t]\) (in 6 DoF) and the object motion \(\Psi\in\mathbb{R}^{H,W,3}\) (in 3D space: the \(x\), \(y\), and \(z\)-axes), where \(R\in\mathbb{R}^{3}\) is the camera's ego rotation and \(t\in\mathbb{R}^{3}\) is the camera's ego translation. On this basis, we use the camera's ego-motion \([R,t]\), the object motion \(\Psi\), and the adjacent frame \(x_{2}^{\mathbb{T}}\) to reconstruct the original image \(x_{1}^{\mathbb{T}}\) with an inverse warping operation
\[\hat{x}_{1}^{\mathbb{T}}=\mathcal{F}(x_{2}^{\mathbb{T}},d_{1}^{\mathbb{T}},R,t,\Psi,K), \tag{1}\]
where \(\hat{x}_{1}^{\mathbb{T}}\) is the image reconstructed by warping the adjacent image \(x_{2}^{\mathbb{T}}\) (as reference), \(\mathcal{F}\) is the projection operation using camera geometry [17], and \(K\in\mathbb{R}^{3\times 3}\) is the camera's intrinsic parameters. We adopt the photometric loss and the regularization losses [16] to optimize the motion network \(G_{m}\) and the depth network \(G_{p}\) together, as shown in Fig. 2 (b).
For the photometric loss, we apply the occlusion-aware photometric reconstruction loss \(\mathcal{L}_{p}\) which is the sum of the SSIM [39] structural similarity loss and the L1 distance loss.
\[\mathcal{L}_{p}=\big{(}1-\text{SSIM}(x_{1}^{\mathbb{T}},\hat{x}_{1}^{\mathbb{T}})\big{)}+\|x_{1}^{\mathbb{T}}-\hat{x}_{1}^{\mathbb{T}}\|_{1}\,\mathbb{1}_{d_{1(u,v)}^{\mathbb{T}}>d_{1,\text{warp}(u,v)}^{\mathbb{T}}}, \tag{2}\]
where \(\mathbb{1}_{d_{1(u,v)}^{\mathbb{T}}>d_{1,\text{warp}(u,v)}^{\mathbb{T}}}\) is the occlusion-aware mask introduced by [40]. To encourage the desired object motion map \(\Psi\) for the moving objects, we adopt two additional losses: the motion smoothness regularization loss \(\mathcal{L}_{sm}\) and the motion sparsity regularization loss \(\mathcal{L}_{sp}\), defined by
\[\mathcal{L}_{sm}=\sum_{c}\sum_{h,w}\sqrt{(\partial_{h}\Psi^{c})^{2}+(\partial _{w}\Psi^{c})^{2}}, \tag{3}\]
\[\mathcal{L}_{sp}=\sum_{c}2\sigma(|\Psi^{c}|)\sum_{h,w}\sqrt{1+\frac{|\Psi^{c}_{h,w}|}{\sigma(|\Psi^{c}|)}}, \tag{4}\]
where \(\Psi^{c}\in\mathbb{R}^{H,W}\) is the \(c^{\text{th}}\) channel of \(\Psi\) (note that \(\Psi\) has 3 channels) and \(\sigma(\cdot)\) returns the mean value of the input tensor. Following [16], we also use the depth smoothness regularization loss \(\mathcal{L}_{d}\) and the cycle consistency regularization loss \(\mathcal{L}_{cy}\), given by
\[\mathcal{L}_{d}=|\partial_{h}d_{1}^{\mathbb{T}}|e^{-|\partial_{h}x_{1}^{ \mathbb{T}}|}+|\partial_{w}d_{1}^{\mathbb{T}}|e^{-|\partial_{w}x_{1}^{\mathbb{ T}}|}, \tag{5}\]
\[\mathcal{L}_{cy}=\frac{\|RR^{\prime}-\mathbb{1}\|^{2}}{\|R-\mathbb{1}\|^{2}+\|R^{\prime}-\mathbb{1}\|^{2}}+\frac{\|R^{\prime}t+t^{\prime}\|^{2}}{\|t\|^{2}+\|t^{\prime}\|^{2}}, \tag{6}\]
where \(R^{\prime}\) and \(t^{\prime}\) are the inverse rotation and translation of \(R\) and \(t\). Subsequently, we optimize \(G_{m}\) and \(G_{p}\) for unsupervised object motion learning through
\[\mathcal{L}=\mathcal{L}_{p}+\mathcal{L}_{sm}+\mathcal{L}_{sp}+\mathcal{L}_{d}+ \mathcal{L}_{cy}. \tag{7}\]
We categorize all the losses into two groups: the photometric loss \(\mathcal{L}_{p}\) and the regularization losses \(\{\mathcal{L}_{sm},\mathcal{L}_{sp},\mathcal{L}_{d},\mathcal{L}_{cy}\}\). The whole geometric learning process is shown in Fig. 2 (b).
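As an illustration, here is a minimal PyTorch sketch of the photometric term of Eq. 2, assuming a standard 3x3 mean/variance SSIM and (B, 3, H, W) image tensors in \([0,1]\); this is a sketch under those assumptions, not the reference implementation of [16].

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Standard SSIM map computed with 3x3 average-pooling statistics."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).clamp(0, 1)

def photometric_loss(x1, x1_hat, d1, d1_warp):
    """Eq. 2: SSIM term plus occlusion-masked L1 term (depths: (B,1,H,W))."""
    occ = (d1 > d1_warp).float()            # occlusion-aware mask [40]
    l_ssim = (1 - ssim(x1, x1_hat)).mean()
    l_l1 = (occ * (x1 - x1_hat).abs()).mean()
    return l_ssim + l_l1
```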
### _Motion Mask Preprocessing in Target Domain_
Our motion mask preprocessing (MMP) aims at localizing the moving instances based on the object motion information in the target domain. An exemplary procedure of MMP is shown in Fig. 3 (a). Given the motion network \(G_{m}\) optimized by the photometric and regularization losses, we first predict the object motion map \(\Psi\in\mathbb{R}^{H,W,3}\) from the adjacent frames \(x_{1}^{\mathbb{T}},x_{2}^{\mathbb{T}}\in\mathbb{R}^{H,W,3}\) as input. On this basis, we extract a binary motion mask \(\Psi^{\mathbb{M}}\in\mathbb{B}^{H,W}\) from \(\Psi\) via
\[\Psi^{\mathbb{M}}_{(i)}=\left\{\begin{array}{ll}1,&\text{if }|\Psi_{(i,d)}|>0, \forall d\in\{1,2,3\}\\ 0,&\text{otherwise}\end{array}\right., \tag{8}\]
where \(i\) indicates the pixel coordinate and \(\Psi^{\mathbb{M}}_{(i)}\) is the mask value at the image pixel \(x_{(i)}^{\mathbb{T}}\). Note that the object motion \(\Psi\) predicted by \(G_{m}\) is relative to the scene, i.e., it is disentangled from the camera's ego-motion. Therefore, we can use \(\Psi^{\mathbb{M}}\) to localize and segment all the moving objects in the scene.
The binary motion mask \(\Psi^{\mathbb{M}}\) can contain multiple moving instances, as it is common for multiple cars and motorcycles to move together within the same scene. To differentiate the moving instances, we utilize connected component labeling [41], which identifies each moving instance in \(\Psi^{\mathbb{M}}\) with a unique label. The goal of connected component labeling is to assign each connected component (or blob) in the binary image a unique label. Because each blob is labeled, we can infer the total number of individual blobs; connected component labeling therefore helps us differentiate and identify all the moving instances in the scene. We first run connected component labeling on \(\Psi^{\mathbb{M}}\) to get a new map \(\tilde{\Psi}^{\mathbb{M}}\) with unique labels
\[\tilde{\Psi}^{\mathbb{M}}=\Lambda(\Psi^{\mathbb{M}}), \tag{9}\]
where \(\Lambda\) is the connected component labeling process [41]. For each unique label \(m\) in \(\tilde{\Psi}^{\mathbb{M}}\), we extract a binary motion mask \(\psi^{m}\in\mathbb{B}^{H,W}\) for the \(m^{\text{th}}\) moving instance. In this regard, we generate a set of binary instance-level motion masks \(\{\psi^{m}\}_{m=1}^{M}\) via
\[\{\psi^{m}\}_{m=1}^{M}=\Delta(\tilde{\Psi}^{\mathbb{M}}), \tag{10}\]
where \(\Delta\) denotes the instance-level motion mask extraction mentioned above (an example is shown in Fig. 3 (a)), and \(M\) is the number of the moving instance masks in \(\Psi^{\mathbb{M}}\).
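A minimal sketch of MMP (Eqs. 8-10) using scipy's connected-component labeling; the small tolerance `eps` is an added assumption for numerical noise, since Eq. 8 tests strict non-zeroness.

```python
import numpy as np
from scipy import ndimage

def instance_motion_masks(psi, eps=1e-3):
    """psi: (H, W, 3) object-motion map -> list of (H, W) boolean masks."""
    binary = (np.abs(psi) > eps).all(axis=-1)         # Eq. 8
    labeled, num = ndimage.label(binary)              # Eq. 9: unique blob labels
    return [labeled == m for m in range(1, num + 1)]  # Eq. 10: one mask per blob
```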
The object motion differs from optical flow in two aspects. First, optical flow is a motion representation in 2D space; it is not accurate for detecting motion along the front-to-back (\(z\)) axis, which is a common motion pattern in the real world. Such an issue does not exist for the object motion, which lies in 3D space. Second, optical flow represents the motion of all pixels with respect to the camera's movement; it is therefore a mixture of the object motion and the ego-motion. In contrast, the object motion [16, 17] represents the independent movement of the objects, which is disentangled
Fig. 3: (a) We provide an exemplary procedure for motion mask preprocessing (MMP). (b) The comparison of the optical flow and the object motion while a black car is moving along the \(z\)-axis in (b); note that the object motion maps here are projected into 2D space from the 3D space for visualization.
Fig. 2: We propose to learn object motion given unlabeled target video frames. (a) is the visualization of a bird’s eye view of the dynamic scene where a yellow bus is moving toward the camera; we visualize the _object motion_ of the yellow bus and the _ego motion_ from the camera itself. (b) shows the diagram for geometric training to learn the object motion from a pair of adjacent frames from the target domain; the motion network and the depth network are trained in a _self-supervised_ manner with geometric constraints; the blue arrows and \(\mathrm{green}\) arrows represent the gradients directions from the photometric loss and the regularization losses.
from the camera's ego-motion through the learning process in Sec. III-B. For a car moving along the \(z\)-axis, we visualize the optical flow and the object motion in Fig. 3 (b), where the object (black car) motion and the camera's ego-motion are similar (toward the \(z\)-axis with similar velocity). The object motion successfully detects the black car's motion along the \(z\)-axis, whereas the optical flow cannot easily detect it because the black car's motion is similar to the camera's ego-motion.
## IV Methodology
### _Overview_
Initially, we generate the target pseudo labels using the segmentation network optimized with self-training in Sec. IV-B. The target pseudo labels contain noisy predictions on foreground and background categories due to the cross-domain gaps. To handle this, we present a motion-guided domain adaptation (MoDA) framework that aligns the domain gaps of the foreground and background categories separately with different strategies. For foreground category alignment, MoDA contains two newly designed modules, namely foreground object discovery (FOD) in Sec. IV-C and foreground semantic mining (FSM) in Sec. IV-D, which take the instance-level moving object masks from Sec. III-C. Moreover, MoDA aligns the background categories with adversarial training using a background category-specific discriminator in Sec. IV-E. The overall training process is presented in Sec. IV-F.
### _Target Self-training_
Given a labeled source frame \(x^{\mathbb{S}}\) and its ground-truth map \(y^{\mathbb{S}}\), we first train the segmentation network \(G_{e}\) with the supervised segmentation loss
\[\mathcal{L}_{ce}^{\mathbb{S}}=-\sum_{i=1}^{\tilde{H},\tilde{W}}\sum_{c=1}^{C}y _{(i,c)}^{\mathbb{S}}\log(p_{(i,c)}^{\mathbb{S}}), \tag{11}\]
where the source prediction \(p^{\mathbb{S}}=G_{e}(x^{\mathbb{S}})\in\mathbb{R}^{\bar{H},\bar{W},C}\), \(p_{(i,c)}^{\mathbb{S}}\) denotes the predicted softmax probability of the \(c^{\text{th}}\) category at the pixel \(x_{(i)}^{\mathbb{S}}\), \(G_{e}\) is the segmentation network, and \(C\) is the total number of categories. The segmentation network trained solely on the source domain lacks generalization when applied to the target domain. To bridge the domain gap, current self-training methods [9, 26] optimize the cross-entropy loss iteratively with target pseudo labels. For simplicity, let the target pseudo label at the \(t^{\text{th}}\) iteration be denoted by \(\tilde{y}^{\mathbb{T}}\in\mathbb{B}^{H,W,C}\). The cross-entropy loss is defined by
\[\mathcal{L}_{ce}^{\mathbb{T}}=-\sum_{i=1}^{H,W}\sum_{c=1}^{C}\tilde{y}_{(i,c)}^ {\mathbb{T}}\log(p_{(i,c)}^{\mathbb{T}}), \tag{12}\]
where the target prediction \(p^{\mathbb{T}}=G_{e}(x^{\mathbb{T}})\in\mathbb{R}^{H,W,C}\) and \(p_{(i,c)}^{\mathbb{T}}\) denotes the predicted softmax probability of the \(c^{\text{th}}\) category at the pixel \(x_{(i)}^{\mathbb{T}}\). We obtain the target pseudo label \(\tilde{y}^{\mathbb{T}}\) from the target prediction \(\tilde{p}^{\mathbb{T}}\) via
\[\tilde{y}_{(i,c)}^{\mathbb{T}}=\mathbb{1}(c=\operatorname*{arg\,max}_{c^{ \prime}}\tilde{p}_{(i,c^{\prime})}^{\mathbb{T}}),\forall c\in\mathbb{C}, \tag{13}\]
where \(\tilde{p}^{\mathbb{T}}=\tilde{G}_{e}(x^{\mathbb{T}})\) and \(\tilde{G}_{e}\) is the momentum network of \(G_{e}\). For simplicity, we denote the process of Eq. 13 as \(\tilde{y}^{\mathbb{T}}=\Gamma(\tilde{p}^{\mathbb{T}})\), where \(\Gamma\) is the operation of taking the most probable category from the softmax probability. Note that the pseudo labels are generated online [9, 26]. \(\tilde{G}_{e}\)'s network parameters \(\tilde{\theta}\) are updated from \(G_{e}\)'s parameters \(\theta\) via an exponential moving average (EMA) [42]: \(\tilde{\theta}_{t+1}=0.99\,\tilde{\theta}_{t}+0.01\,\theta_{t}\). Following [9], we update the momentum network every 100 iterations, and we adopt color jitter, Gaussian blur, and random flipping as data augmentation to increase training stability.
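A minimal PyTorch sketch of this online self-training step (Eqs. 12-13) with the EMA momentum update; `student` and `teacher` are assumed to be two copies of the segmentation network.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(alpha).add_(sp, alpha=1 - alpha)

def self_training_step(student, teacher, x_t):
    """Eq. 13 (argmax pseudo label from teacher) + Eq. 12 (cross-entropy)."""
    with torch.no_grad():
        pseudo = teacher(x_t).argmax(dim=1)   # (B, H, W) pseudo labels
    logits = student(x_t)                     # (B, C, H, W)
    return F.cross_entropy(logits, pseudo)
```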
### _Foreground Object Discovery (FOD) in Target Domain_
Our goal is to use motion as guidance to improve the quality of the target pseudo labels. However, applying the instance-level motion masks \(\{\psi^{m}\}_{m=1}^{M}\) to boost the quality of target pseudo labels encounters two challenges. First, the instance-level motion masks provide only a coarse segmentation of the moving objects due to the side effects of the motion regularization losses; directly using them leads to noisy pseudo labels, which might affect the performance of domain alignment. Second, there are special cases where a moving instance contains multiple objects, e.g., a motorcycle and its rider (two objects) bound into one moving instance. We should take these special cases into consideration.
To this end, we propose a self-supervised foreground object discovery (FOD) module to learn accurate moving object masks from the instance-level motion masks \(\{\psi^{m}\}_{m=1}^{M}\). The diagram of FOD is shown in Fig. 4. In this step, the segmentation network pre-trained on the source domain (Sec. IV-B) is employed to extract a dense feature map \(I\in\mathbb{R}^{H^{\prime},W^{\prime},V}\) given the target frame \(x^{\mathbb{T}}\) as input. We then adopt a self-supervised design to promote objectness in the features' attention. Given an instance-level motion mask \(\psi^{m}\in\mathbb{B}^{H,W}\) of \(x^{\mathbb{T}}\) (generated by Eq. 10), we bilinearly downsample \(\psi^{m}\) to the spatial size of \(I\) and select all the instance-level features covered by \(\psi^{m}\), denoted by \(I_{\psi}\in\mathbb{R}^{E,V}\), which is computed by
\[I_{\psi}=\Upsilon\big{(}I\odot\texttt{repmat}(\texttt{bd}(\psi^{m}))\big{)}, \tag{14}\]
where bd represents the bilinear downsampling, repmat indicates the repeating operation that makes the shape of \(\psi^{m}\) the same as that of \(I\), \(\odot\) is the element-wise product, and \(\Upsilon\) selects all non-zero feature vectors.
Fig. 4: The diagram of foreground object discovery (FOD). Given a target image and its instance-level motion masks, we compute an objectness score map by computing the similarity of each query with all the keys in Eq. 15 and the processing in Eq. 16.
To generate the binary moving object masks, we construct queries and keys from \(I_{\psi}\). Our queries \(Q\in\mathbb{R}^{F,V}\) are generated by a bilinear downsampling of \(I_{\psi}\), where \(F\) is the downsampled size, and our keys \(K\in\mathbb{R}^{E,V}\) are \(I_{\psi}\) itself. Given a query \(Q_{f}\in\mathbb{R}^{V}\) in \(Q\), we calculate its cosine similarity with all keys in \(K\). Thus, we produce an _objectness_ score map \(S\in\mathbb{R}^{E,F}\) by
\[S_{e,f}=\texttt{cosim}(Q_{f},K_{e}), \tag{15}\]
where \(K_{e}\in\mathbb{R}^{V}\) is the \(e^{\text{th}}\) key of \(K\), and \(\texttt{cosim}\) represents the cosine similarity, i.e., the dot product of two \(\mathcal{L}_{2}\)-normalized vectors. Next, the _objectness_ score map is normalized into a soft map whose scores lie in the range \([0,1]\). To extract the binary moving object masks, a threshold value \(\tau\) is applied to the soft map. The resulting binary moving object masks are ranked by their objectness scores, and redundant masks are eliminated through non-maximum suppression (NMS). The entire procedure to get the moving object masks \(\{o^{j}\}_{j=1}^{J}\) is represented by
\[\{o^{j}\}_{j=1}^{J}=\texttt{NMS}\big{(}\texttt{rank}(\texttt{norm}(S))\big{)}, \tag{16}\]
where \(o^{j}\in\mathbb{B}^{H,W}\) and \(J\) is the number of the object masks predicted from \(\psi^{m}\). The whole process of foreground object discovery is shown in Fig. 4.
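A simplified PyTorch sketch of FOD (Eqs. 14-16). Query construction is reduced to index subsampling instead of bilinear downsampling, and NMS to a greedy IoU filter over the in-mask feature locations; both simplifications are our assumptions for brevity.

```python
import torch
import torch.nn.functional as F

def discover_objects(feat, mask, tau=0.5, n_queries=16, iou_max=0.7):
    """feat: (V, H', W') dense features; mask: (H, W) instance motion mask."""
    m = F.interpolate(mask[None, None].float(), size=feat.shape[-2:],
                      mode="bilinear")[0, 0] > 0.5
    keys = feat[:, m].T                              # (E, V), Eq. 14
    if keys.shape[0] == 0:
        return []
    idx = torch.linspace(0, keys.shape[0] - 1, n_queries).long()
    queries = keys[idx]                              # subsampled queries (F, V)
    S = F.normalize(keys, dim=1) @ F.normalize(queries, dim=1).T  # Eq. 15
    S = (S - S.min()) / (S.max() - S.min() + 1e-8)   # normalize to [0, 1]
    cand = [S[:, f] > tau for f in range(queries.shape[0])]
    cand.sort(key=lambda c: c.float().mean().item(), reverse=True)
    kept = []                                        # greedy NMS, Eq. 16
    for c in cand:
        if all(((c & k).sum() / ((c | k).sum() + 1e-8)).item() < iou_max
               for k in kept):
            kept.append(c)
    return kept  # boolean masks over the E in-mask feature locations
```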
### _Foreground Semantic Mining (FSM) in Target Domain_
To align the domain gaps, we utilize the moving object masks generated by foreground object discovery (Sec. IV-C) to improve the quality of the target pseudo labels. Specifically, we propose foreground semantic mining (FSM), which takes the moving object masks as guidance to refine the noisy target predictions on the foreground categories. FSM is based on the assumption of the _rigidity of moving objects_, e.g., vehicles and motorbikes on traffic roads: if a vehicle is moving, all parts of the vehicle move together. Based on this rigidity, all the pixels covered by a moving object mask in an image must share the same categorical label. Subsequently, we have the following remark:
**Remark 1**.: _If a moving object mask is present, then the image pixels that it covers should have a semantic label that corresponds to the same moving categorical label._
Given a target pseudo label \(\tilde{y}^{\mathbb{T}}\), we choose the dominant category \(c^{*}\) among the pixels of \(\tilde{y}^{\mathbb{T}}\) covered by the moving object mask \(o^{j}\); concretely, \(c^{*}\) is the moving category with the highest occurrence. Based on **Remark 1**, we introduce a semantic mining weight \(w\in\mathbb{R}^{H,W,C}\) to update the target pseudo label via
\[w_{(i,c)}=\lambda o_{(i)}^{j}\mathbbm{1}(c=c^{*}), \tag{17}\]
where \(\mathbbm{1}(\cdot)\) is the indicator function, \(o_{(i)}^{j}\) is the object mask value on the pixel \(x_{(i)}^{\mathbb{T}}\), and \(\lambda\) is a non-negative hyperparameter for scaling. Then, we update the target pseudo label using the following equation
\[\tilde{y}^{{}^{\prime}\mathbb{T}}=\Gamma(\texttt{softmax}((w+1)\odot p^{ \mathbb{T}})), \tag{18}\]
where \(\tilde{y}^{{}^{\prime}\mathbb{T}}\) is the target pseudo label updated by our foreground semantic mining, and \(\Gamma\) is the operation of taking the most probable category, as in Eq. 13. An illustration of using object motion masks to update noisy target pseudo labels is shown in Fig. 5. We use the updated target pseudo label to train the segmentation network \(G_{e}\) with \(\mathcal{L}_{FSM}\), defined by
\[\mathcal{L}_{FSM}=-\sum_{i=1}^{H,W}\sum_{c=1}^{C}\tilde{y}_{(i,c)}^{{}^{\prime}\mathbb{T}}\log\big{(}G_{e}(x^{\mathbb{T}})_{(i,c)}\big{)}. \tag{19}\]
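A minimal PyTorch sketch of FSM (Eqs. 17-18) for a single image; `moving_cats`, the list of foreground category indices, is a hypothetical argument.

```python
import torch
import torch.nn.functional as F

def mine_foreground(p, obj_mask, moving_cats, lam=0.8):
    """p: (C, H, W) softmax prediction; obj_mask: (H, W) moving object mask."""
    pseudo = p.argmax(dim=0)                 # current pseudo label
    covered = pseudo[obj_mask.bool()]        # labels under the object mask
    counts = torch.stack([(covered == c).sum() for c in moving_cats])
    c_star = moving_cats[counts.argmax().item()]   # dominant moving category
    w = torch.zeros_like(p)
    w[c_star] = lam * obj_mask.float()             # Eq. 17
    return F.softmax((w + 1) * p, dim=0).argmax(dim=0)  # Eq. 18
```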
### _Cross-domain Background Adversarial Training (BAT)_
We introduce a novel background adversarial training (BAT) method to align the cross-domain background categories. We draw inspiration from the existing adversarial domain adaptation method [11], which aligns the domain gaps using the scene structure similarity between the source and target domains. However, our BAT differs in that we solely focus on aligning the background categories. Our assumption is that the scene structure of the source and target domains mostly consists of background categories, e.g., the sky is usually at the top, the road is usually at the bottom, and buildings and trees are usually at the sides of a traffic road scene. This yields superior performance to [11] because the background categories in the source and target images share more structural similarity. The proposed BAT contains a background category-specific discriminator \(D\). The background adversarial training loss \(\mathcal{L}_{BAT}\) is defined by
\[\begin{split}\mathcal{L}_{BAT}=-\sum_{h,w}&\Big{[}\log\big{(}1-D([G_{e}(x^{\mathbb{S}})]_{bg})_{(h,w,0)}\big{)}\\ &+\log\big{(}D([G_{e}(x^{\mathbb{T}})]_{bg})_{(h,w,1)}\big{)}\Big{]},\end{split} \tag{20}\]
where \([G_{e}(\cdot)]_{bg}\) takes the predictions over the background categories from the segmentation network \(G_{e}\). Since the target data have no ground truth, we use the target pseudo labels to select the background category predictions.
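A minimal PyTorch sketch of the BAT loss of Eq. 20, written in the cross-entropy form of a two-way domain classifier; the discriminator `D` and the background channel indices `bg_idx` are assumed given.

```python
import torch
import torch.nn.functional as F

def bat_loss(D, p_src, p_tgt, bg_idx):
    """p_*: (B, C, H, W) segmentation outputs; D returns (B, 2, h, w) logits."""
    d_s = D(p_src[:, bg_idx])                 # background channels only
    d_t = D(p_tgt[:, bg_idx])
    src_lbl = torch.zeros(d_s.shape[0], *d_s.shape[2:], dtype=torch.long,
                          device=d_s.device)  # domain label 0 = source
    tgt_lbl = torch.ones(d_t.shape[0], *d_t.shape[2:], dtype=torch.long,
                         device=d_t.device)   # domain label 1 = target
    return F.cross_entropy(d_s, src_lbl) + F.cross_entropy(d_t, tgt_lbl)
```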
Fig. 5: An illustration of foreground semantic mining (FSM) that takes the moving object masks as guidance to refine the target predictions on the foreground categories.
### _Overall Framework_
The proposed MoDA consists of motion mask preprocessing (MMP), foreground object discovery (FOD), foreground semantic mining (FSM), and background adversarial training (BAT). We provide an overview of MoDA in Fig 6. Our overall objective function \(\mathcal{L}_{MoDA}\) for training the segmentation network \(G_{e}\) is composed of
\[\mathcal{L}_{MoDA}=\mathcal{L}_{FSM}+\mathcal{L}_{BAT}. \tag{21}\]
The final loss function \(\mathcal{L}\) to optimize the segmentation network \(G_{e}\) is represented by
\[\mathcal{L}=\mathcal{L}_{ce}^{\text{S}}+\mathcal{L}_{ce}^{\text{T}}+\mathcal{ L}_{MoDA}. \tag{22}\]
#### IV-F1 Combine with temporal consistency regularization
One additional baseline uses the temporal consistency across adjacent target frames. Specifically, we propagate the prediction of the previous frame to the current frame using the optical flow estimated between the frames and subsequently enforce consistency between the prediction of the current frame and the propagated prediction from the previous frame. Given two adjacent frames \(\{x_{1}^{\mathbb{T}},x_{2}^{\mathbb{T}}\}\), we forward them as input to get the predictions \(\{p_{1}^{\mathbb{T}},p_{2}^{\mathbb{T}}\}\). Moreover, we use FlowNet [43] to estimate the optical flow \(f_{1\to 2}^{\mathbb{T}}\) from \(x_{1}^{\mathbb{T}}\) to \(x_{2}^{\mathbb{T}}\). We then warp the prediction \(p_{1}^{\mathbb{T}}\) into the propagated prediction \(\vec{p}_{2}^{\mathbb{T}}\). The optical flow regularization loss \(\mathcal{L}_{OFR}\) is formulated as
\[\mathcal{L}_{OFR}=\|p_{1}^{\text{T}}-\vec{p}_{2}^{\text{T}}\|_{2}. \tag{23}\]
We conduct an additional baseline experiment using this temporal consistency regularization (TCR) loss \(\mathcal{L}_{OFR}\), which considers the temporal consistency of the target frames; results are shown in Table VII.
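A minimal PyTorch sketch of the flow-warping consistency term of Eq. 23; the flow is assumed to be in pixel units, and backward warping via `grid_sample` is used as the standard approximation.

```python
import torch
import torch.nn.functional as F

def tcr_loss(p1, p2, flow):
    """p1, p2: (B, C, H, W) predictions; flow: (B, 2, H, W) optical flow."""
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    xs, ys = xs.to(flow), ys.to(flow)
    gx = 2 * (xs[None] + flow[:, 0]) / (W - 1) - 1   # normalize to [-1, 1]
    gy = 2 * (ys[None] + flow[:, 1]) / (H - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)             # (B, H, W, 2)
    p1_warp = F.grid_sample(p1, grid, mode="bilinear", align_corners=True)
    return ((p1_warp - p2) ** 2).mean()              # Eq. 23
```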
## V Experiment
In this section, we first describe the datasets and implementation details in Sec. V-A. Next, we report detailed domain adaptation results for semantic segmentation and perform ablation studies to illustrate the effectiveness of our method. We evaluate the proposed MoDA on the domain adaptive image segmentation task in Sec. V-B1 and on the domain adaptive video segmentation task in Sec. V-B2. The ablation study and hyperparameter analysis are presented in Sec. V-C.
### _Experiment Setup_
#### V-A1 Datasets
For the task of domain adaptive image segmentation, we use GTA5 [4] and SYNTHIA [5] as the source domains. GTA5 contains 24,966 training images with resolution \(2,048\times 1,024\) and 19 categories. SYNTHIA consists of 9,400 images with resolution \(1,280\times 760\) and 16 categories. For the target domain, we use the 2,975 images and their adjacent frames collected from the videos of the Cityscapes dataset [3]. Our newly created target dataset thus contains 5,950 training images and is named _Cityscapes-AF_. We also include 500 validation images from the Cityscapes dataset [3] for evaluation. For the task of domain adaptive video segmentation, we adopt VIPER [44] as the source domain and Cityscapes-AF as the target domain. VIPER [44] contains 133,670 synthetic frames with corresponding pixel-wise annotations from 77 videos generated by game engines.
#### V-A2 Implementation details
We conduct experiments on two types of architectures: CNN-based and Transformer-based. For the CNN-based architecture, we follow the setting in [9] and adopt DeepLab-V2 [2] with ResNet-101 [45] as the segmentation network, pre-trained on ImageNet [46]. For the Transformer-based architecture, we adopt MiT-B5 [47] as the encoder and incorporate MoDA into existing state-of-the-art approaches [6, 13] using their pre-trained weights. We train the network on the source data with ABN [48] and on the target data with the self-training loss (Eq. 12), the foreground semantic mining loss (Eq. 19), and the background adversarial training loss (Eq. 20). Our batch size is 16, with 8 source and 8 target images at resolution \(1,024\times 512\). We adopt color jitter,
Fig. 6: The overall framework of our motion-guided domain adaptation (MoDA) for semantic segmentation. MoDA proposes foreground object discovery (FOD) and foreground semantic mining (FSM) for cross-domain alignment on the foreground categories. Moreover, MoDA adopts background adversarial training (BAT) which contains a background category-specific discriminator for alignment on the background categories.
random blur, and greyscaling for data augmentation (without random crop and fusion). The threshold \(\tau\) is set to 0.5. The optimizer for segmentation is SGD [49] with a learning rate of \(2.5\times 10^{-4}\), momentum 0.9, and weight decay \(5\times 10^{-4}\). We optimize the discriminator using Adam with an initial learning rate of \(10^{-4}\). For the momentum network, we set the EMA coefficient to 0.99. We implement MoDA in PyTorch and run training on two NVIDIA RTX A6000 GPUs.
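For concreteness, the optimizer configuration above corresponds to the following PyTorch sketch; the 1x1-conv networks are placeholders for the actual segmentation network and discriminator.

```python
import torch
import torch.nn as nn

seg_net = nn.Conv2d(3, 19, kernel_size=1)        # placeholder segmentation net
discriminator = nn.Conv2d(11, 2, kernel_size=1)  # placeholder BAT discriminator

seg_opt = torch.optim.SGD(seg_net.parameters(), lr=2.5e-4,
                          momentum=0.9, weight_decay=5e-4)
disc_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
```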
### _Evaluation Results_
#### V-B1 Comparison with the state-of-the-art unsupervised domain adaptation approaches
We evaluate the performance of MoDA in Table I and Table II. For a fair comparison, all unsupervised domain adaptation baselines are trained with GTA5/SYNTHIA as the source and Cityscapes-AF as the target domain. For the CNN-based architecture, our _baseline_ model contains a segmentation network and a momentum network with the ResNet-101-based [45] DeepLab-v2 [2] architecture. To ensure a fair comparison, the baseline model is a vanilla self-training model following [9], trained with source data using ABN [48] and with target data using data augmentation including color jittering, random blur, and random flipping. We also analyze MoDA's efficacy on the alignment of the foreground and background categories separately. The results of **MoDA-_fg_** are generated by aligning foreground categories only, using FOD & FSM while removing BAT. The results of **MoDA-_bg_** are obtained by aligning background categories using BAT only, with FOD & FSM removed. Moreover, we provide qualitative results of MoDA in Fig. 7 and Fig. 8. Additionally, we demonstrate that MoDA complements existing UDA approaches by incorporating MoDA for a performance boost, denoted as **+MoDA**. The results of **+MoDA** are obtained by a model trained with target pseudo labels generated from the existing approaches.
We include existing ResNet-101 based approaches for comparison: AdaptSegNet [11], AdvEnt [10], IntraDA [21], SIM [50], CRST [24], CAG-UDA [51], IAST [52], DACS [26], ProDA [7], and SePiCo [14]. Additionally, we experiment with MoDA on two transformer-based UDA approaches, DAFormer [13] and HRDA [6]. Note that all the approaches above are trained with GTA5/SYNTHIA as the source and Cityscapes-AF as the target domain to make a fair evaluation.
**GTA5\(\rightarrow\)Cityscapes-AF.** Table I presents comparison results on the adaptation from GTA5 \(\rightarrow\) Cityscapes-AF. Our MoDA, MoDA-_fg_, and MoDA-_bg_ achieve \(51.0\%\), \(49.1\%\), and \(47.2\%\) mIoU across the total 19 categories, respectively, all higher than the baseline, demonstrating the effectiveness of the three proposed modules FOD, FSM, and BAT. MoDA-_fg_ outperforms the baseline by \(3.2\%\) mIoU. Notably, in the detailed category-level comparison, MoDA-_fg_ considerably improves the baseline on the foreground categories, such as motorcycle, bike, train, bus, and truck. The improvement on the foreground categories is attributed to the use of the proposed FOD & FSM for domain alignment. Similarly, the score of MoDA-_bg_ reaches \(47.2\%\), outperforming the baseline by \(1.2\%\), with noticeable gains on background categories like road, sidewalk, wall, and traffic light. This demonstrates the efficacy of BAT in bridging the gaps of the background categories. _+MoDA_ provides consistent performance improvements on ProDA [7] from \(57.9\%\) to \(61.3\%\), SePiCo [14] from \(59.6\%\) to \(62.0\%\), DAFormer [13] from \(68.3\%\) to \(73.4\%\), and HRDA [6] from \(73.9\%\) to \(75.2\%\). This shows the versatility of MoDA in combining with state-of-the-art approaches of both CNN-based and transformer-based architectures.
Fig. 7: Qualitative results of MoDA utilizing the object motion as guidance for domain adaptation.
**SYNTHIA\(\rightarrow\)Cityscapes-AF**. In Table II, we compare domain adaptation results from SYNTHIA to Cityscapes-AF. In both the 16- and 13-category settings, MoDA, MoDA-_fg_, and MoDA-_bg_ achieve higher mIoU scores than the baseline. Under the 16- and 13-category settings, the mIoU scores of MoDA reach \(46.9\%\) and \(55.7\%\), which are \(4.3\%\) and \(3.6\%\) higher than the baseline. On the foreground alignment, MoDA-_fg_ achieves \(45.6\%\) and \(52.5\%\), which are \(3.0\%\) and \(3.5\%\) higher than the baseline under the 16- and 13-category settings. MoDA-_bg_ reaches \(43.5\%\) and \(49.8\%\), outperforming the baseline by \(0.9\%\) in both settings. Additionally, the use of _+MoDA_ consistently improves the performance of the state-of-the-art approaches ProDA [7], SePiCo [14], DAFormer [13], and HRDA [6].
#### V-B2 Comparison with the state-of-the-art domain adaptive video segmentation approaches
We evaluate MoDA in the setting of domain adaptive video segmentation: VIPER\(\rightarrow\)Cityscapes-AF. We choose the state-of-the-art domain adaptive video segmentation approaches PixMatch [53], DA-VSN [37], and TPS [15] as our baseline models. The experimental results in Table III show that MoDA outperforms TPS [15] by \(3.9\%\) mIoU, demonstrating the superiority of MoDA's use of object motion over existing domain adaptive video segmentation approaches.
### _Ablation Study_
#### V-C1 Ablation study on the components of MoDA
We conduct an ablation study on the effectiveness of foreground object discovery (FOD), foreground semantic mining (FSM), and background adversarial training (BAT) in Table IV. Evaluated on GTA5\(\rightarrow\)Cityscapes-AF, using only FOD & FSM (without BAT), the performance drops to \(49.1\%\) mIoU. On the other hand, using only BAT (without FOD & FSM), MoDA experiences a more significant decline to \(47.2\%\) mIoU. Moreover, using FSM & BAT (without FOD), the performance drops to \(49.6\%\) mIoU. Combining all three modules FOD, FSM, and BAT, MoDA achieves \(51.0\%\) mIoU.
#### V-C2 Unreliable motion masks
We evaluate the effect of unreliable motion masks from the motion network on MoDA. MoDA includes foreground object discovery (FOD), which finds accurate moving object masks with a self-supervised attention mechanism. According to Table IV, using FSM & BAT (without FOD), the performance reaches \(49.6\%\) mIoU; adding FOD to the framework increases the score to \(51.0\%\) mIoU. This shows the important role of FOD in handling unreliable motion masks from the motion network.
#### V-C3 Background adversarial training
We compare background adversarial training (BAT) with standard adversarial training [11]. We present an ablation study of the mIoU performance on the background categories on GTA5\(\rightarrow\)Cityscapes-AF. Our BAT alignment achieves \(53.4\%\),
a gain of \(2.7\%\) over standard adversarial training. These results suggest that separating the foreground and background categories in BAT leads to better alignment on the background categories.
#### V-C4 Hyperparameter \(\lambda\)
We conduct an ablation study on the hyperparameter \(\lambda\) in foreground semantic mining (Eq. 17). We report different values of \(\lambda\) and the resulting performance on GTA5\(\rightarrow\)Cityscapes-AF in Table VI. A bigger value of \(\lambda\) puts more weight on foreground semantic mining in the target pseudo label update. Our ablation results indicate that MoDA reaches its best performance at \(\lambda=0.8\). Additionally, the results suggest that the mIoU performance of MoDA is not significantly affected by values of \(\lambda\) greater than 0.8.
#### V-C5 Temporal consistency regularization
We provide a quantitative analysis of using object motion as guidance in comparison with temporal consistency regularization (Sec. IV-F1). We show the evaluation on GTA5\(\rightarrow\)Cityscapes-AF in Table VII. Using temporal consistency regularization (TCR), we achieve \(47.4\%\) mIoU, which is \(1.4\%\) higher than the baseline model [9]. Using MoDA, we obtain \(51.0\%\) mIoU, which is much higher than using TCR; this shows the superiority of MoDA as guidance for domain alignment. We further combine MoDA with TCR to boost the performance to \(52.2\%\) mIoU.
#### V-C6 What happens to the potentially movable, but static objects (e.g., parked cars, standing persons)?
First of all, the overall training pipeline of our approach does not harm static objects such as parked cars or standing pedestrians during domain transfer. Since MoDA uses motion-guided object masks to update the noisy predictions of the pseudo labels,
Fig. 8: Comparison of the qualitative results generated from MoDA and the baseline model on GTA5\(\rightarrow\)Cityscapes-AF benchmark.
Fig. 9: The proposed MoDA can learn effective representations for the _parked_ objects.
the performance on static objects is also improved by updating the segmentation network with these new pseudo labels. As shown in Fig. 9, MoDA generates more accurate predictions than the baseline [9] on the parked vehicle, which is not moving.
## VI Conclusion
This paper proposed MoDA, a novel motion-guided domain adaptation method for the semantic segmentation task. MoDA addresses domain alignment separately for the foreground and background categories using different strategies. For foreground categories, MoDA employs foreground object discovery (FOD) and foreground semantic mining (FSM), using motion as guidance at the object level. For background alignment, MoDA introduces background adversarial training (BAT), which includes a background category-specific discriminator. Our experiments on various benchmarks demonstrate MoDA's effectiveness compared to existing approaches. Furthermore, MoDA is adaptable and can be used alongside state-of-the-art methods to further enhance performance.
|
2301.02629 | Intersection theory on non-archimedean analytic spaces | We develop the intersection theory of non-archimedean analytic spaces and
prove the projection formula and the GAGA principle. As an application, we
naturally define the category of finite correspondences of analytic spaces. | Yulin Cai | 2022-10-31T06:45:49Z | http://arxiv.org/abs/2301.02629v2 | # Intersection theory on non-archimedean analytic spaces
###### Abstract
We develop the intersection theory of non-archimedean analytic spaces and prove the projection formula and the GAGA principle. As an application, we naturally define the category of finite correspondences of analytic spaces.
###### Contents
* 1 Introduction
* 2 Preliminary
* 3 Meromorphic functions and Cartier divisors
* 4 Cycles, flat pull-backs and proper push-forwards
* 5 Proper intersection and intersection multiplicities
* 6 Projection formula
* 7 GAGA
* 8 The category of finite correspondences
## 1 Introduction
The intersection theory of non-archimedean analytic spaces has been studied in [11, Section 2] and [1, Section 2.2], and the author believes that some experts have a concrete idea of such a theory.
In [11], Gubler considers Cartier divisors on rigid analytic spaces and formal schemes, and defines their intersection with irreducible analytic subsets. This theory allows him to define the local heights of subvarieties over non-archimedean fields.
In [1], Ayoub develops the theory of motives on rigid analytic spaces using homotopy theory. He uses presheaves on the category of affinoid spaces to construct the category of finite correspondences (for rigid analytic spaces) \(\operatorname{RigCor}(K)\). Such a construction avoids the intersection theory of analytic spaces.
In this paper, we will develop the intersection theory of non-archimedean analytic spaces, following ideas similar to the case of algebraic varieties. We will show the flat base change formula, the projection formula and the GAGA principle relating the intersection theories of analytic spaces and of algebraic varieties. As an application, we will give a direct construction of \(\operatorname{RigCor}(K)\) (simply denoted by \(\operatorname{Cor}_{K}\) in this paper), as [13, Lecture 1] does for algebraic varieties. In fact, we can define the higher Chow groups of analytic spaces as in [4] for algebraic varieties, and this definition is different from Ayoub's in [1, Introduction générale].
In Section 2, we give some basic notions in the theory of Berkovich spaces, e.g. the support of a coherent sheaf, the Zariski image and codimension. We also extend [7, Proposition 4.12] to an abstract form, Lemma 2.15, which is a key lemma for this paper. With this lemma, we can solve the compatibility problems in our theory, e.g. see Lemma 4.6 and Lemma 5.4.
In Section 3, we define and study the Cartier divisors on an analytic space \(X\), which form a group \(\operatorname{Div}(X)\). The group of divisors up to linear equivalence is denoted by \(\operatorname{CaCl}(X)\). As in the theory of schemes, we have an injective homomorphism \(\operatorname{CaCl}(X)\hookrightarrow\operatorname{Pic}(X)\), and it is an isomorphism if \(X\) is reduced.
In Section 4, we give the notion of cycles and associate a cycle with a coherent sheaf; in particular, we can associate a cycle with a closed subspace. As in the theory of algebraic varieties, the flat pull-backs and proper push-forwards of cycles are defined. We prove the following flat base change formula.
**Proposition 1.1** (Proposition 4.28).: _Let_
\[\begin{array}{ccc}Y^{\prime}&\stackrel{g^{\prime}}{\longrightarrow}&Y\\ \downarrow{\scriptstyle f^{\prime}}&&\downarrow{\scriptstyle f}\\ X^{\prime}&\stackrel{g}{\longrightarrow}&X\end{array}\]
_be a Cartesian diagram of separated, strictly \(K\)-analytic spaces with \(f\) proper and \(g\) flat. Then \(f^{\prime}\) is proper, \(g^{\prime}\) is flat and \(g^{*}\circ f_{*}=f^{\prime}_{*}\circ g^{\prime*}\) on \(Z^{*}(Y)\)._
In Section 5, we define the intersection product of properly intersecting cycles. We give two definitions, a local one using scheme theory and a global one using the Tor formula. For a flat morphism \(f:Y\to X\) of \(K\)-analytic spaces of pure dimension, the pull-back \(f^{*}:Z^{*}(X)\to Z^{*}(Y)\) preserves intersection products.
Since we have flat pull-backs, proper push-forwards and intersection products, the expected projection formula is proved in Section 6.
**Theorem 1.2** (Projection formula).: _Let \(f:Y\to X\) be a flat, proper morphism of regular, separated, strictly \(K\)-analytic spaces. Let \(\alpha\in Z^{*}(Y)\) and \(\beta\in Z^{*}(X)\). Assume that \(\alpha\) and \(f^{*}\beta\) intersect properly. Then \(f_{*}(\alpha)\) and \(\beta\) intersect properly and_
\[f_{*}(\alpha)\cdot\beta=f_{*}(\alpha\cdot f^{*}\beta).\]
In Section 7, we compare the intersection theories of algebraic varieties and of non-archimedean analytic spaces. We prove the GAGA principle, i.e. Proposition 7.3.
In Section 8, we define the category of finite correspondences \(\operatorname{Cor}_{K}\). This category is also defined by Ayoub [1] via a different construction.
### Notation and terminology
Throughout this paper, we fix a complete non-archimedean field \(K\) with a non-trivial valuation. By a \(K\)-analytic space, we mean a Berkovich space over \(K\), see [3, Definition 1.2.3]. The structure sheaf on a \(K\)-analytic space \(X\) with respect to the G-topology is denoted by \(\mathcal{O}_{X}\). If necessary, we will use the notation \(X_{G}\) for the G-topology instead of the ordinary topology on \(X\). The (\(K\)-analytic) dimension of \(X\) is denoted by \(\dim_{K}X\), or \(\dim X\) when there is no confusion about the field.
Given a point \(x\in X\), \(\mathscr{H}(x)\) denotes its complete residue field and \(\dim_{x}X\) denotes the local dimension of \(X\) at \(x\).
We shall simply say "coherent sheaf on \(X\)" for "coherent \(\mathcal{O}_{X}\)-module (with respect to G-topology)", and denote \(\operatorname{Pic}(X)\) for the group of invertible sheaves on \(X\). Assume that \(X\) is good, let \(\mathcal{F}\) be a coherent sheaf on \(X\) and \(x\in X\). We denote by \(\mathcal{F}_{x}\) the stalk at \(x\) of \(\mathcal{F}\) viewed as a sheaf of the underlying ordinary topology of \(X\), i.e.
\[\mathcal{F}_{x}:=\varinjlim_{U}\mathcal{F}(U)=\varinjlim_{V}\mathcal{F}(V).\]
where \(U\) runs through open neighborhoods of \(x\), and \(V\) runs through affinoid neighborhoods of \(x\).
We will write \(\operatorname{Irr}(X)\) for the set of all irreducible components of \(X\), and write \(\overline{\operatorname{Irr}(X)}\) for the set of all irreducible Zariski-closed subsets of \(X\). Notice that \(\overline{\operatorname{Irr}(X)}\) has a partial order: \(W\leq Z\) if \(W\subset Z\).
By an algebraic variety over \(K\), we mean a separated scheme of finite type over \(K\).
For a commutative ring \(A\), \(R(A)\) denotes the set of all regular elements of \(A\) and \(\operatorname{Frac}(A)=R(A)^{-1}A\), the maximal localization containing \(A\) as a subring.
## 2 Preliminary
For the convenience of the reader and for further use, in this section we provide some basic concepts and results that are either found in the literature or easily formulated.
### Support of a coherent sheaf
(cf. [8, Section 2.5])
**Definition 2.1**.: _Let \(X\) be a \(K\)-analytic space, \(\mathcal{F}\) be a coherent sheaf on \(X\), and \(\operatorname{Ann}(\mathcal{F})\) be the (coherent) annihilator ideal of \(\mathcal{F}\) (on the site \(X_{G}\)). The_ **support of \(\mathcal{F}\)** _is the closed analytic subspace of \(X\) defined by \(\operatorname{Ann}(\mathcal{F})\), denoted by \(\operatorname{Supp}(\mathcal{F})\)._
**Remark 2.2**.:
1. _Recall that the annihilator_ \(\operatorname{Ann}(\mathcal{F})\) _of_ \(\mathcal{F}\) _is defined as follows: for any analytic domain_ \(V\)_,_ \[\operatorname{Ann}(\mathcal{F})(V):=\{a\in\mathcal{O}_{X}(V)\mid a\cdot \mathcal{F}(V)=0\},\] _which is a coherent ideal. In particular, for any analytic domain_ \(V\)_, we have_ \(\operatorname{Ann}(\mathcal{F})|_{V}=\operatorname{Ann}(\mathcal{F}|_{V})\)_._
2. _If_ \(X=\mathcal{M}(A)\) _is affinoid and_ \(\mathcal{F}=\widetilde{M}\) _for some finitely generated_ \(A\)_-module, then it is easy to see that_ \[\operatorname{Ann}(\mathcal{F})=\widetilde{\operatorname{Ann}(M)}.\]
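To illustrate, take \(X=\mathcal{M}(K\{T\})\) and \(\mathcal{F}=\widetilde{M}\) with \(M=K\{T\}/(T^{2})\). Then
\[\operatorname{Ann}(\mathcal{F})=\widetilde{(T^{2})},\qquad\operatorname{Supp}(\mathcal{F})=\mathcal{M}\big(K\{T\}/(T^{2})\big),\]
a closed analytic subspace whose underlying set is the single rigid point \(T=0\), carrying a non-reduced structure that records the multiplicity \(2\).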
From the definition, we can easily deduce the following lemma.
**Lemma 2.3**.: _Let \(X\) be a \(K\)-analytic space, \(\mathcal{F}\) a coherent sheaf on \(X\), and \(Z=\operatorname{Supp}(\mathcal{F})\). Then there is a unique coherent sheaf \(\mathcal{G}\) on \(Z\) such that \(\mathcal{F}=i_{*}\mathcal{G}\), where \(i:Z\hookrightarrow X\) is the canonical immersion._
Proof.: By uniqueness, we can glue the coherent sheaf \(\mathcal{G}\) from local data, so we may assume that \(X=\mathcal{M}(A)\). The lemma is straightforward in this case.
### Zariski image of a morphism
As in the theory of schemes, we can define the Zariski image of a morphism of analytic spaces, which has a natural structure of an analytic space. We follow the idea of [14, Subsection 29.6].
**Lemma 2.4**.: _Let \(X\) be a \(K\)-analytic space, \(\mathcal{F}\) a coherent sheaf on \(X\), and \(\mathcal{G}\subset\mathcal{F}\) an \(\mathcal{O}_{X}\)-submodule. Then there is a unique coherent \(\mathcal{O}_{X}\)-submodule \(\mathcal{G}^{\prime}\subset\mathcal{G}\) with the following property: for any coherent \(\mathcal{O}_{X}\)-module \(\mathcal{H}\), the canonical map_
\[\operatorname{Hom}_{\mathcal{O}_{X}}(\mathcal{H},\mathcal{G}^{\prime})\to \operatorname{Hom}_{\mathcal{O}_{X}}(\mathcal{H},\mathcal{G})\]
_is bijective. In particular, \(\mathcal{G}^{\prime}\) is the largest coherent sheaf contained in \(\mathcal{G}\)._
Proof.: Let \(\{\mathcal{G}_{i}\}_{i\in I}\) be the set of coherent sheaves contained in \(\mathcal{G}\). We consider the morphism of \(\mathcal{O}_{X}\)-modules
\[\varphi:\bigoplus_{i\in I}\mathcal{G}_{i}\to\mathcal{F}.\]
We claim its image \(\mathcal{G}^{\prime}\subset\mathcal{G}\) is coherent. Let \({}^{p}\mathcal{G}^{\prime}\subset\mathcal{G}\) be the image of \(\varphi\) as presheaves. Then \(\mathcal{G}^{\prime}\) is the sheafification of \({}^{p}\mathcal{G}^{\prime}\), and for any affinoid domain \(V=\mathcal{M}(A)\), \({}^{p}\mathcal{G}^{\prime}(V)=\sum\limits_{i}\mathcal{G}_{i}(V)\subset \mathcal{F}(V)\) is a finitely generated \(A\)-module. By Tate's acyclicity theorem, we have \(\mathcal{G}^{\prime}(V)={}^{p}\mathcal{G}^{\prime}(V)\). So \(\mathcal{G}^{\prime}\) is coherent, and it is the largest coherent sheaf contained in \(\mathcal{G}\).
The map
\[\operatorname{Hom}_{\mathcal{O}_{X}}(\mathcal{H},\mathcal{G}^{\prime})\to \operatorname{Hom}_{\mathcal{O}_{X}}(\mathcal{H},\mathcal{G})\]
is obviously injective. For any homomorphism \(\psi:\mathcal{H}\to\mathcal{G}\subset\mathcal{F}\), the image \(\operatorname{Im}(\psi)\subset\mathcal{G}\) is a coherent sheaf, so \(\operatorname{Im}(\psi)\subset\mathcal{G}^{\prime}\) and \(\psi\) factors through \(\mathcal{G}^{\prime}\). This shows that \(\mathcal{G}^{\prime}\) has the desired property.
For the uniqueness, suppose \(\mathcal{G}^{\prime\prime}\) is another coherent \(\mathcal{O}_{X}\)-submodule with the universal property. Then the bijectivity of \(\operatorname{Hom}_{\mathcal{O}_{X}}(\mathcal{G}^{\prime},\mathcal{G}^{\prime\prime})\to\operatorname{Hom}_{\mathcal{O}_{X}}(\mathcal{G}^{\prime},\mathcal{G})\) implies that the inclusion \(\mathcal{G}^{\prime}\subset\mathcal{G}\) factors through a homomorphism \(\mathcal{G}^{\prime}\to\mathcal{G}^{\prime\prime}\subset\mathcal{G}\), so \(\mathcal{G}^{\prime}\subset\mathcal{G}^{\prime\prime}\); conversely \(\mathcal{G}^{\prime\prime}\subset\mathcal{G}^{\prime}\) since \(\mathcal{G}^{\prime}\) is the largest coherent subsheaf of \(\mathcal{G}\). Hence \(\mathcal{G}^{\prime}=\mathcal{G}^{\prime\prime}\).
**Proposition 2.5**.: _Let \(f:Y\to X\) be a morphism of \(K\)-analytic spaces. Then there is a closed analytic subspace \(Z\) of \(X\) such that_
1. _the morphism_ \(f\) _factors through_ \(Z\)_;_
2. _(Universal property) if_ \(f\) _factors through a closed analytic subspace_ \(Z^{\prime}\) _of_ \(X\)_, then_ \(Z^{\prime}\) _contains_ \(Z\) _as a closed analytic subspace._
_The closed analytic space \(Z\) of \(X\) is called the_ **Zariski image** _of \(f\), denoted by \(\operatorname{Im}_{\text{zar}}(f)\)._
Proof.: By (b), if \(Z\) exists, then it is unique. It remains to show the existence. Let \(\mathcal{I}:=\operatorname{Ker}(\mathcal{O}_{X}\to f_{*}\mathcal{O}_{Y})\). By Lemma 2.4, we take the largest coherent \(\mathcal{O}_{X}\)-submodule \(\mathcal{J}\subset\mathcal{I}\) and set \(Z=V(\mathcal{J})\). It remains to check (a) and (b).
(a) We have \(f(Y)\subset Z\). Indeed, for any affinoid domain \(V=\mathcal{M}(A)\subset X\) and any affinoid domain \(U=\mathcal{M}(B)\subset f^{-1}(V)\), we have \(\mathcal{J}(V)\subset\mathcal{I}(V)\subset\operatorname{Ker}(A\to B)\), so \(U\to V\) factors through \(\mathcal{M}(A/\mathcal{J}(V))=Z\cap V\) and \(f(U)\subset Z\). Hence \(f(Y)\subset Z\). We denote the map \(Y\to Z\) by \(\overline{f}\). We shall construct \(\overline{f}^{\#}:\mathcal{O}_{Z}(V\cap Z)\to\mathcal{O}_{Y}(f^{-1}(V))\) for any affinoid domain \(V\subset X\). Since \(\mathcal{J}(V)\subset\mathcal{I}(V)\), the homomorphism \(\mathcal{O}_{X}(V)\to\mathcal{O}_{Y}(f^{-1}(V))\) factors through \(\mathcal{O}_{Z}(V\cap Z)=\mathcal{O}_{X}(V)/\mathcal{J}(V)\); we denote the induced map \(\mathcal{O}_{Z}(V\cap Z)\to\mathcal{O}_{Y}(f^{-1}(V))\) by \(\overline{f}^{\#}\), which is compatible on intersections of affinoid domains. Hence we have a morphism \(\overline{f}:Y\to Z\) and \(f=i\circ\overline{f}\).
(b) If \(f\) factors through a closed subspace \(Z^{\prime}\) of \(X\) with \(Z^{\prime}=V(\mathcal{J}^{\prime})\), then \(\mathcal{J}^{\prime}\subset\mathcal{I}\). By the choice of \(\mathcal{J}\), we have \(\mathcal{J}^{\prime}\subset\mathcal{J}\), so \(Z^{\prime}\subset Z\).
**Remark 2.6**.: _(1) Locally, if \(f:\mathcal{M}(B)\to\mathcal{M}(A)\) is given by \(\varphi:A\to B\), then \(\operatorname{Im}_{\text{zar}}(f)=\mathcal{M}(A/\operatorname{Ker}(\varphi))\)._
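To illustrate, if \(f:\mathcal{M}(K\{T\})\to\mathcal{M}(K\{S,T\})\) is induced by the surjection
\[\varphi:K\{S,T\}\to K\{T\},\qquad S\mapsto 0,\quad T\mapsto T,\]
then \(\operatorname{Ker}(\varphi)=(S)\) and \(\operatorname{Im}_{\text{zar}}(f)=\mathcal{M}(K\{S,T\}/(S))\), the axis \(S=0\) in the closed bidisc; here the Zariski image coincides with the set-theoretic image since \(f\) is a closed immersion.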
We may expect the Zariski image to coincide with the Zariski closure of the set-theoretic image. This is indeed the case if \(Y\) is reduced or \(f\) is quasi-compact.
**Lemma 2.7**.: _Let \(f:Y\to X\) be a morphism of \(K\)-analytic spaces. If \(Y\) is reduced, then \(\operatorname{Im}_{\text{zar}}(f)=\overline{f(Y)}^{X_{\text{zar}}}\) with the reduced closed subspace structure._
Proof.: As a map, \(f\) factors through \(\overline{f(Y)}^{X_{\text{zar}}}\). Since \(Y\) is reduced, \(f\) factors through \(\overline{f(Y)}^{X_{\text{zar}}}\) with the reduced structure, see [7, PROPOSITION 4.2 (iii)]. It remains to show the universal property of \(Y\to\overline{f(Y)}^{X_{\text{zar}}}\). If \(f\) factors through a closed analytic subspace \(Z\) of \(X\), then \(\overline{f(Y)}^{X_{\text{zar}}}\subset Z\) as a subset. The containment is also a morphism of analytic spaces since \(\overline{f(Y)}^{X_{\text{zar}}}\) is endowed with the reduced structure.
**Lemma 2.8**.: _Let \(f:Y\to X\) be a morphism of \(K\)-analytic spaces. Assume that \(f\) is quasi-compact. Then the following hold._
1. \(\mathcal{I}=\operatorname{Ker}(\mathcal{O}_{X}\to f_{*}\mathcal{O}_{Y})\) _is coherent. In particular,_ \(\operatorname{Im}_{\text{zar}}(f)=V(\mathcal{I})\)_._
2. \(\overline{f(Y)}^{X_{\text{zar}}}=\operatorname{Im}_{\text{zar}}(f)\)_. In other words,_ \(Y\to\operatorname{Im}_{\text{zar}}(f)\) _is dominant._
3. _For any analytic domain_ \(V\subset X\)_, the subspace_ \(\operatorname{Im}_{\text{zar}}(f)\cap V\) _is the Zariski image of_ \(f|_{f^{-1}(V)}:f^{-1}(V)\to V\)_._
Proof.: (1) We may assume that \(X=\mathcal{M}(A)\) is affinoid. We take a G-covering \(Y=\bigcup\limits_{i=1}^{n}V_{i}\) by affinoid domains, and set \(Y^{\prime}=\coprod\limits_{i=1}^{n}V_{i}\), with \(\pi:Y^{\prime}\to Y\) the canonical morphism, which is surjective. For any analytic domain \(V\subset Y\), the map
\[\pi^{\#}:\mathcal{O}_{Y}(V)\to\mathcal{O}_{Y^{\prime}}(\pi^{-1}(V))=\bigoplus \limits_{i=1}^{n}\mathcal{O}_{Y}(V\cap V_{i})\]
is injective. We consider \(f^{\prime}:=f\circ\pi:Y^{\prime}\to X\). Then
\[\mathcal{I}=\operatorname{Ker}(\mathcal{O}_{X}\to f^{\prime}_{*}\mathcal{O}_{ Y^{\prime}}).\]
Since \(Y^{\prime}\) is affinoid, \(\mathcal{I}=\big(\operatorname{Ker}(A\to\mathcal{O}_{Y^{\prime}}(Y^{\prime}))\big)^{\sim}\), which is coherent. This implies (1).
(3) This follows from (1).
(2) By (3), it suffices to assume that \(X=\mathcal{M}(A)\) is affinoid. We use the notation of (1). Notice that \(\overline{f(Y)}^{X_{\text{zar}}}=\overline{f^{\prime}(Y^{\prime})}^{X_{\text{zar}}}\) since \(\pi\) is surjective, so we may further assume that \(Y=\mathcal{M}(B)\) is affinoid. In this case \(\operatorname{Im}_{\text{zar}}(f)=\mathcal{M}(A/\operatorname{Ker}(A\to B))\) by (1), and \(f(Y)\) is Zariski-dense in it.
**Proposition 2.13**.: _Let \(X\) be a \(K\)-analytic space, and \(Z,Y\in\operatorname{\mathrm{Irr}}(X)\) with \(Z\subset Y\). Then_
\[\operatorname{\mathrm{codim}}(Z,Y)=\max\{m\mid Z=Y_{0}\subsetneq Y_{1}\subsetneq \dots\subsetneq Y_{m}=Y\},\]
_where \(Y_{i}\in\overline{\operatorname{\mathrm{Irr}}(X)}\). Moreover, each maximal chain has the same length, i.e. every \(K\)-analytic space is catenary with respect to the Zariski topology._
Proof.: Firstly, if \(Z\subsetneq Y\), then \(\operatorname{codim}(Z,Y)\geq 1\); this can be seen locally. Hence "\(\geq\)" holds. Conversely, it suffices to show that if \(\operatorname{codim}(Z,Y)\geq 2\), then there is \(W\in\overline{\operatorname{Irr}(X)}\) such that \(Z\subsetneq W\subsetneq Y\). Indeed, we take an affinoid domain \(V\) of \(Y\) with \(Z\cap V\neq\emptyset\), say \(V=\mathcal{M}(A)\) and \(Z\cap V=\mathcal{M}(A/I)\). Then we know that
\[\operatorname{\mathrm{codim}}(Z,Y)=\operatorname{\mathrm{codim}}(\operatorname {\mathrm{Spec}}(A/I),\operatorname{\mathrm{Spec}}(A))\geq 2.\]
So we can find a prime ideal \(\mathfrak{p}\in\operatorname{Spec}(A)\) such that \(W:=\overline{\mathcal{M}(A/\mathfrak{p})}^{Y_{\text{Zar}}}\) satisfies \(Z\subsetneq W\subsetneq Y\). Applying the same method, we see that each maximal chain has the same length (this is in fact due to the additivity of codimension).
**Remark 2.14**.:
1. _In particular, we see that the codimension is independent of the base field_ \(K\)_._
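For instance, in the closed bidisc \(X=\mathcal{M}(K\{S,T\})\) there is a maximal chain of irreducible Zariski-closed subsets
\[\mathcal{M}\big(K\{S,T\}/(S,T)\big)\subsetneq\mathcal{M}\big(K\{S,T\}/(S)\big)\subsetneq X,\]
so the rigid point \(S=T=0\) has codimension \(2\) in \(X\), matching the height of the ideal \((S,T)\) in \(K\{S,T\}\).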
### A key lemma
For a set \(S\) satisfying certain conditions, we can determine whether \(S\) satisfies a property \(P\) or not. In this case, we say that the property \(P\) is well-defined on \(S\). It is not well-defined if \(S\) does not satisfy these conditions in the first place.
The following result, generalized from [7, Proposition 4.12], is crucial for globalizing local statements about irreducible closed subsets.
**Lemma 2.15**.: _Let \(X\) be a \(K\)-analytic space. Let \(P\) be a property on irreducible components satisfying the following properties:_
* _there is a G-covering \(X=\bigcup\limits_{i\in I}V_{i}\) by affinoid domains such that the property \(P\) is well-defined (this means that we can determine whether \(P\) is satisfied or not) on each irreducible component of \(V_{i}\) (or simply say that \(P\) is well-defined on \(V_{i}\));_
* _if_ \(P\) _is well-defined on an irreducible component_ \(Z\) _of an affinoid domain_ \(V\)_, then_ \(P\) _is well-defined on each irreducible component of_ \(W\) _for any affinoid domain_ \(W\subset V\)_. Moreover, in this case, for any irreducible component_ \(T\) _of_ \(W\cap Z\)_, we have_ \(T\) _satisfies_ \(P\iff Z\) _satisfies_ \(P\)_._
_Then there exist Zariski-closed subsets \(X_{P}^{+},X_{P}^{-}\) of \(X\) which are characterized by the following properties: for any affinoid domain \(V\) on which \(P\) is well-defined, we have_
\[X_{P}^{+}\cap V =\bigcup_{\begin{subarray}{c}T\in\operatorname{\mathrm{Irr}}(V), \\ T\text{ satisfies }P\end{subarray}}T,\] \[X_{P}^{-}\cap V =\bigcup_{\begin{subarray}{c}T\in\operatorname{\mathrm{Irr}}(V), \\ T\text{ doesn't satisfy }P\end{subarray}}T.\]
_Notice that \(X=X_{P}^{+}\cup X_{P}^{-}\)._
Proof.: For any affinoid domain \(V\) on which \(P\) is well-defined, set
\[\mathcal{C}^{+}(V) :=\{T\in\operatorname{\mathrm{Irr}}(V)\mid T\text{ satisfies }P\},\] \[\mathcal{C}^{-}(V) :=\{T\in\operatorname{\mathrm{Irr}}(V)\mid T\text{ doesn't satisfy }P\},\] \[\mathcal{E}^{+}(V) :=\bigcup_{T\in\mathcal{C}^{+}(V)}T,\] \[\mathcal{E}^{-}(V) :=\bigcup_{T\in\mathcal{C}^{-}(V)}T.\]
Let \(V\) be an affinoid domain on which \(P\) is well-defined, and \(W\subset V\) an affinoid domain. Let \(Z\) be an irreducible component of \(V\) and \(T\) an irreducible component of \(W\cap Z\). By our assumption, \(T\in\mathcal{C}^{+}(W){\Longleftrightarrow}\)\(Z\in\mathcal{C}^{+}(V)\). By [7, COROLLAIRE 4.11], we have \(\mathcal{E}^{+}(W)=\mathcal{E}^{+}(V)\cap W\) and \(\mathcal{E}^{-}(W)=\mathcal{E}^{-}(V)\cap W\).
Let \(X_{P}^{+}\) (resp. \(X_{P}^{-}\)) be the union of the \(\mathcal{E}^{+}(V)\) (resp. \(\mathcal{E}^{-}(V)\)), where \(V\) runs through affinoid domains on which \(P\) is well-defined. Then for any affinoid domain \(V\) of \(X\) on which \(P\) is well-defined, we have \(X_{P}^{+}\cap V=\mathcal{E}^{+}(V)\) and \(X_{P}^{-}\cap V=\mathcal{E}^{-}(V)\). Since \(P\) is well-defined on each \(V_{i}\) for some G-covering \(X=\bigcup\limits_{i\in I}V_{i}\) by affinoid domains, and \(\mathcal{E}^{+}(V_{i}),\mathcal{E}^{-}(V_{i})\subset V_{i}\) are Zariski-closed, \(X_{P}^{+}\) and \(X_{P}^{-}\) are Zariski-closed in \(X\).
## 3 Meromorphic functions and Cartier divisors
The sheaf of meromorphic functions and Cartier divisors are defined on a ringed space in [10, Section 20, Section 21]. On a G-ringed space, these definitions do not work directly, since the restriction of a regular element is not necessarily regular. Fortunately, this can be remedied on analytic spaces (cf. [11, Section 2]). In this section and the next, we follow the ideas of [10, Section 20, Section 21] to discuss meromorphic functions, Cartier divisors and cycles.
### Meromorphic functions
For a (commutative) ring \(A\), denote by \(R(A)\subset A\) the set of all regular elements, i.e. non-zero-divisors. Then \(R(A)\) is a multiplicative set, and the corresponding localization \(\operatorname{Frac}(A):=R(A)^{-1}A\) is the maximal localization containing \(A\) as a subring.
**Definition 3.1**.: _Let \(X\) be a \(K\)-analytic space. For any affinoid domain \(V=\mathcal{M}(A)\subset X\), we set \(K_{X}^{\prime}(V):=\operatorname{Frac}(A)\); this defines a presheaf on the affinoid domains of \(X\). The associated sheaf \(K_{X}\) with respect to the G-topology on \(X\) is called the_ **sheaf of meromorphic functions** _on \(X\). An element of \(K_{X}(X)\) is called a_ **meromorphic function** _on \(X\). The subsheaf of invertible elements of \(K_{X}\) is denoted by \(K_{X}^{*}\)._
**Remark 3.2**.:
1. _For affinoid domains \(U=\mathcal{M}(B)\subset V=\mathcal{M}(A)\) of \(X\) and \(f\in R(A)\), the restriction of \(f\) to \(U\) lies in \(R(B)\); this implies that our definition of \(K_{X}\) is well-defined._ Proof. This follows from the fact that \(A\to B\) is flat; alternatively, one can reduce to the case of a rational domain \(B=A\{p_{1}^{-1}T_{1},\cdots,p_{n}^{-1}T_{n}\}/(gT_{1}-f_{1},\cdots,gT_{n}-f_{n})\) and check it directly.
2. _For any analytic domain \(V\subset X\), we have_
\[K_{X}(V)=\left\{(s_{i})_{i}\in\prod_{i}K_{X}^{\prime}(V_{i})\;\middle|\; \begin{array}{l}V=\bigcup_{i}V_{i}\text{ is a G-covering of }V\text{ by affinoid domains,}\\ \text{and }s_{i}|_{V_{ijk}}=s_{j}|_{V_{ijk}}\text{ for some G-covering }V_{i}\cap V_{j}=\bigcup_{k}V_{ijk}\\ \text{by affinoid domains}\end{array}\right\}\Bigg{/}\sim,\]
_where \((s_{i})_{i}\sim(s_{j}^{\prime})_{j}\) if for any \(i,j\) there exists a G-covering \(V_{i}\cap V_{j}^{\prime}=\bigcup_{k}V_{ijk}\) by affinoid domains such that \(s_{i}|_{V_{ijk}}=s_{j}^{\prime}|_{V_{ijk}}\). If \(X\) is separated, then this simplifies to_
\[K_{X}(V)=\left\{(s_{i})_{i}\in\prod_{i}K_{X}^{\prime}(V_{i})\;\middle|\; \begin{array}{l}V=\bigcup_{i}V_{i}\text{ is a G-covering of }V\text{ by affinoid}\\ \text{domains, and }s_{i}|_{V_{i}\cap V_{j}}=s_{j}|_{V_{i}\cap V_{j}}\end{array}\right\}\Bigg{/}\sim,\]
_where \((s_{i})_{i}\sim(s_{j}^{\prime})_{j}\) if \(s_{i}|_{V_{i}\cap V_{j}^{\prime}}=s_{j}^{\prime}|_{V_{i}\cap V_{j}^{\prime}}\) for any \(i,j\)._
3. _For any affinoid domain_ \(V\subset X\)_, the canonical map_ \(K_{X}^{\prime}(V)\to K_{X}(V)\) _is injective. In particular,_ \(\mathcal{O}_{X}\subset K_{X}\)_._
Proof.: Given an affinoid domain \(V\) and any finite G-covering \(V=\bigcup\limits_{i=1}^{n}V_{i}\) by affinoid domains, let \(A=\mathcal{O}_{X}(V)\) and \(A_{i}=\mathcal{O}_{X}(V_{i})\). We consider the restriction map \(\operatorname{Frac}(A)\to\prod\limits_{i=1}^{n}\operatorname{Frac}(A_{i})\). Let \(a/b\in\operatorname{Frac}(A)\) be such that its restriction to \(\operatorname{Frac}(A_{i})\) is \(0\) for any \(i\), i.e. \(a=0\) in \(A_{i}\). This implies that \(a=0\) in \(A\) by Tate's acyclicity theorem. Hence \(K^{\prime}_{X}(V)\hookrightarrow K_{X}(V)\). We take a G-covering \(X=\bigcup\limits_{i\in I}V_{i}\) by affinoid domains. Then the injective maps \(\mathcal{O}_{X}(V_{i})\hookrightarrow K^{\prime}_{X}(V_{i})\) induce \(\mathcal{O}_{X}\hookrightarrow K_{X}\).
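To illustrate, on the closed unit disc \(X=\mathcal{M}(K\{T\})\) the Tate algebra \(K\{T\}\) is an integral domain, so every non-zero element is regular and
\[K_{X}^{\prime}(X)=\operatorname{Frac}(K\{T\});\]
in particular, \(1/T\) is a meromorphic function on \(X\), although \(T\) is not invertible in \(K\{T\}\).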
**Definition 3.3**.: _Keep the notion in Definition 3.1. For an \(\mathcal{O}_{X}\)-module \(\mathcal{F}\), we call \(\mathcal{F}\otimes_{\mathcal{O}_{X}}K_{X}\) the sheaf of meromorphic sections of \(\mathcal{F}\), and we have a canonical map_
\[\operatorname{id}_{\mathcal{F}}\otimes i:\mathcal{F}\to\mathcal{F}\otimes_{ \mathcal{O}_{X}}K_{X}.\]
_The sheaf \(\mathcal{F}\) is called_ **strictly without torsion** _if \(\operatorname{id}_{\mathcal{F}}\otimes i\) is injective._
_A global section of \(\mathcal{F}\otimes_{\mathcal{O}_{X}}K_{X}\) is called a_ **meromorphic section** _of \(\mathcal{F}\) on \(X\)._
_If \(\mathcal{F}\) is coherent on \(X\), we say a meromorphic section \(s\) on \(X\) is defined on a Zariski-open subset \(V\) if \(s|_{V}\) is in the image of \(\mathcal{F}(V)\) via \(\operatorname{id}_{\mathcal{F}}\otimes i\). If, moreover, \(\mathcal{F}\) is strictly without torsion, then there is a maximal Zariski-open subset \(V\) on which \(s\) is defined; such a \(V\) is called the_ **domain of definition** _of \(s\), denoted by \(\operatorname{dom}(s)\) (i.e. \(s\in\mathcal{F}(\operatorname{dom}(s))\))._
**Remark 3.4**.:
1. _Notice that_ \(\mathcal{F}\to\mathcal{F}\otimes_{\mathcal{O}_{X}}K_{X}\) _is the sheafification of the presheaf given by_ \[V\mapsto\mathcal{F}(V)\otimes_{\mathcal{O}_{X}(V)}K^{\prime}_{X}(V)\] _for any affinoid domain_ \(V\)_. So for any analytic domain_ \(V\subset X\)_, we have_ \((\mathcal{F}\otimes_{\mathcal{O}_{X}}K_{X})|_{V}\simeq\mathcal{F}|_{V}\otimes_ {\mathcal{O}_{V}}K_{V}\)_. In particular,_ \(K_{X}|_{V}=K_{V}\)_._
2. _A locally free_ \(\mathcal{O}_{X_{G}}\)_-module_ \(\mathcal{F}\) _is strictly without torsion. Moreover,_ \(\mathcal{F}\otimes_{\mathcal{O}_{X}}K_{X}\) _is a_ \(K_{X}\)_-module, here, we view_ \((X_{G},K_{X})\) _as a G-ringed space._
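For instance, take \(X=\mathcal{M}(K\{T\})\) and \(\mathcal{F}=\mathcal{O}_{X}\). The meromorphic section \(s=1/T\) has
\[\operatorname{dom}(s)=X\setminus\mathcal{M}\big(K\{T\}/(T)\big),\]
the Zariski-open complement of the rigid point \(T=0\), on which \(T\) is invertible.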
For a good, strictly \(K\)-analytic space, the sheaf of meromorphic functions can be given in a similar way in [10, Section 20], and will have some good properties, i.e. properties for schemes can be extended to good analytic spaces.
If \(X\) is good, strictly \(K\)-analytic, and \(x\in X\) is rigid, we have that
\[\mathcal{O}_{X,x}=\varinjlim_{V}\mathcal{O}_{X}(V),\]
where \(V\) runs through affinoid domains containing \(x\), see [2, Section 2.3]. In particular, it suffices that \(V\) runs through (strictly) affinoid neighborhoods of \(x\) in \(X\).
**Proposition 3.5**.: _Let \(X\) be a good, strictly \(K\)-analytic space. For any analytic domain \(V\subset X\), set_
\[\mathcal{R}(V):=\{s\in\mathcal{O}_{X}(V)\mid s_{x}\in R(\mathcal{O}_{X,x})\ \text{for any}\ x\in V\}\subset\mathcal{O}_{X}(V),\]
_which defines a sheaf on \(X\). Then the following statements hold:_
1. _For any affinoid domain \(V\subset X\), we have \(\mathcal{R}(V)=R(\mathcal{O}_{X}(V))\). In particular, \(K^{\prime}_{X}(V)=\operatorname{Frac}(\mathcal{O}_{X}(V))\), and \(K_{X}\) is the sheafification of the following presheaf: for any analytic domain \(V\subset X\),_ \[V\mapsto\mathcal{R}(V)^{-1}\mathcal{O}_{X}(V).\]
2. _For any rigid point_ \(x\in X\)_, we have_ \(K^{\prime}_{X,x}\simeq\operatorname{Frac}(\mathcal{O}_{X,x})\)_. For any analytic domain_ \(V\subset X\)_, the canonical homomorphism_ \(K^{\prime}_{X}(V)\hookrightarrow\prod\limits_{x\in V\ rigid}K^{\prime}_{X,x}\) _is injective._
Proof.: Notice first that the presheaf \(\mathcal{R}\) is a sheaf. Indeed, \(\mathcal{R}\) is a subpresheaf of \(\mathcal{O}_{X}\), and if \(V=\bigcup\limits_{i\in I}V_{i}\) is a G-covering of an analytic domain \(V\) and \(a_{i}\in\mathcal{R}(V_{i})\) satisfy \(a_{i}|_{V_{i}\cap V_{j}}=a_{j}|_{V_{i}\cap V_{j}}\), then there exists \(a\in\mathcal{O}_{X}(V)\) with \(a|_{V_{i}}=a_{i}\), and \(a\in\mathcal{R}(V)\).
(1) For any affinoid domain \(V\subset X\) and \(a\in\mathcal{O}_{X}(V)\), we have that \(a\) is regular \(\iff\)\(a\in\mathcal{O}_{X,x}\) is regular for any \(x\in V\). Indeed, "\(\Longrightarrow\)" follows from flatness; for "\(\Longleftarrow\)", if \(a\in\mathcal{O}_{X,x}\) is regular, then there is an affinoid neighborhood \(V_{x}\) of \(x\) in \(V\) such that \(a\in R(\mathcal{O}_{X}(V_{x}))\) (since \(\operatorname{Ker}(\mathcal{O}_{X}(V)\xrightarrow{\;a\;}\mathcal{O}_{X}(V))\) is finitely generated). Then \(a\in R(\mathcal{O}_{X}(V))\) since \(V=\bigcup\limits_{x\in V}V_{x}\) is a G-covering. So \(\mathcal{R}(V)=R(\mathcal{O}_{X}(V))\), and hence \(K^{\prime}_{X}(V)=\operatorname{Frac}(\mathcal{O}_{X}(V))\).
(2) By definition, we have a map
\[\varinjlim_{V}K^{\prime}_{X}(V)\to\mathcal{R}_{x}^{-1}\mathcal{O}_{X,x}\]
which is surjective, where \(V\) runs through affinoid neighborhoods of \(x\). If \(a/b\in K^{\prime}_{X}(V)\), with \(V\) an affinoid neighborhood of \(x\), is such that \(a/b=0\) in \(\mathcal{R}_{x}^{-1}\mathcal{O}_{X,x}\), then there is \(c\in\mathcal{R}_{x}\) such that \(ac=0\). We may assume that \(c\in\mathcal{O}_{X}(V)\); then \(a/b=0\) in \(K^{\prime}_{X}(V)\).
It remains to show that \(\mathcal{R}_{x}=R(\mathcal{O}_{X,x})\). We have an injective map \(\mathcal{R}_{x}\hookrightarrow R(\mathcal{O}_{X,x})\) by definition. Conversely, for \(a\in R(\mathcal{O}_{X,x})\), we consider an affinoid neighborhood \(V\) of \(x\) with \(A=\mathcal{O}_{X}(V)\) such that \(a\in A\). Since \(\operatorname{Ann}(a)\) is finitely generated and \(a\in R(\mathcal{O}_{X,x})\), we can find an affinoid neighborhood \(U\subset V\) of \(x\) with \(B=\mathcal{O}_{X}(U)\) such that \(\operatorname{Ann}(a)\otimes_{A}B=0\). So \(a\in R(B)\). By (1), we conclude that \(\mathcal{R}_{x}=R(\mathcal{O}_{X,x})\).
If \(a/b\in K^{\prime}_{X}(V)\) is such that \(a/b=0\) in \(K^{\prime}_{X,x}\) for any rigid \(x\in V\), then for each such \(x\) there exists an affinoid neighborhood \(V_{x}\) of \(x\) such that \(a/b=0\) in \(K^{\prime}_{X}(V_{x})\). Since \(\mathcal{R}(V_{x})=R(\mathcal{O}_{X}(V_{x}))\), we have \(a=0\) in \(\mathcal{O}_{X}(V_{x})\); hence \(a=0\) in \(K^{\prime}_{X}(V)\) and \(a/b=0\).
### Cartier divisors
**Definition 3.6**.: _Let \(K\) be a complete non-archimedean field, and \(X\) a \(K\)-analytic space. We denote the group \(H^{0}(X_{G},K^{*}_{X}/\mathcal{O}^{*}_{X})\) by \(\operatorname{Div}(X)\). The elements of \(\operatorname{Div}(X)\) are called_ **Cartier divisors** _of \(X_{G}\)._
_Let \(f\in H^{0}(X_{G},K^{*}_{X})\), its image in \(\operatorname{Div}(X)\) is called a_ **principal Cartier divisor** _and denoted by \(\operatorname{div}(f)\)._
_We say that two Cartier divisors \(D_{1},D_{2}\) are_ **linearly equivalent** _if \(D_{1}-D_{2}\) is principal, and write \(D_{1}\sim D_{2}\). We denote by \(\operatorname{CaCl}(X)\) the group of equivalence classes of Cartier divisors._
_A Cartier divisor \(D\) is called_ **effective** _if it is in the image of the canonical map \(H^{0}(X_{G},(\mathcal{O}_{X}\cap K^{*}_{X})/\mathcal{O}^{*}_{X})\to H^{0}(X_{G},K^{*}_{X}/\mathcal{O}^{*}_{X})\), write \(D\geq 0\). The set of effective Cartier divisors is denoted by \(\operatorname{Div}_{+}(X)\)._
**Remark 3.7**.:
1. _The exact sequence of sheaves_ \[1\to\mathcal{O}_{X}^{*}\to K_{X}^{*}\to K_{X}^{*}/\mathcal{O}_{X}^{*}\to 1\] _induces a long exact sequence in cohomology. In particular, a Cartier divisor \(D\in\operatorname{Div}(X)\) can be represented by a family \(\{(U_{i},f_{i})\}_{i\in I}\), where \(X=\bigcup\limits_{i\in I}U_{i}\) is a G-covering by affinoid domains and \(f_{i}\in K_{X}^{*}(U_{i})\), with \(f_{i}/f_{j}\in\mathcal{O}_{X}^{*}\) G-locally on \(U_{i}\cap U_{j}\)._
2. _If \(D_{1}=\{(U_{i},f_{i})\}_{i\in I}\) and \(D_{2}=\{(V_{j},g_{j})\}_{j\in J}\), then \(D_{1}+D_{2}=\{(W_{ijk},f_{i}g_{j})\}_{i\in I,j\in J}\), where \(U_{i}\cap V_{j}=\bigcup\limits_{k}W_{ijk}\) is a G-covering by affinoid domains. In particular, if \(X=\mathcal{M}(A)\) is affinoid and \(\mathcal{X}=\operatorname{Spec}(A)\), then we have an injection \(\operatorname{Div}(\mathcal{X})\hookrightarrow\operatorname{Div}(X)\)._
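To illustrate, on \(X=\mathcal{M}(K\{T\})\) the single pair \(\{(X,T)\}\) defines the effective principal Cartier divisor \(\operatorname{div}(T)\), and the addition rule above gives
\[\operatorname{div}(T)+\operatorname{div}(T-1)=\{(X,T(T-1))\}=\operatorname{div}\big(T(T-1)\big).\]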
**Proposition 3.8**.: _Keep the notion in Definition 3.6._
1. _For any divisor \(D=\{(U_{i},f_{i})\}_{i\in I}\in\operatorname{Div}(X)\), we can associate a subsheaf \(\mathcal{O}_{X}(D)\subset K_{X}\) defined by \(\mathcal{O}_{X}(D)|_{U_{i}}=f_{i}^{-1}\mathcal{O}_{X}|_{U_{i}}\), which is an invertible sheaf independent of the choice of representative. Moreover, \(D\geq 0\iff\mathcal{O}_{X}(-D)\subset\mathcal{O}_{X}\)._
2. _The construction above gives a homomorphism of groups_ \(\rho:\operatorname{Div}(X)\to\operatorname{Pic}(X),\quad D\mapsto\mathcal{O}_ {X}(D)\)_._
3. _The homomorphism_ \(\rho\) _induces an injective homomorphism_ \(\operatorname{CaCl}(X)\to\operatorname{Pic}(X)\) _with image_ \[\operatorname{Im}\rho=\{L\in\operatorname{Pic}(X)\mid L\hookrightarrow K_{X}\}.\]
4. _If_ \(X\) _is affinoid and reduced, then_ \(\rho:\operatorname{CaCl}(X)\to\operatorname{Pic}(X)\) _is an isomorphism._
Proof.: We follow the idea of the proof of [12, Proposition 7.1.18].
(1) Assume that \(\{(V_{j},g_{j})\}_{j\in J}\) is another representative of \(D\). Then
\[\mathcal{O}_{X}(D)|_{U_{i}\cap V_{j}}=f_{i}^{-1}\mathcal{O}_{X}|_{U_{i}\cap V_ {j}}=(g_{j}u)^{-1}\mathcal{O}_{X}|_{U_{i}\cap V_{j}}=g_{j}^{-1}\mathcal{O}_{X} |_{U_{i}\cap V_{j}}\]
where \(u\in\mathcal{O}_{X}(U_{i}\cap V_{j})^{*}\), this implies \(\mathcal{O}_{X}(D)\) is independent of the choice of representative. By construction, \(\mathcal{O}_{X}(D)\in\operatorname{Pic}(X)\), and \(D\geq 0\) if and only if \(\mathcal{O}_{X}(D)\subset\mathcal{O}_{X}\).
(2) The map is a homomorphism. Indeed, let \(D_{1}=\{(U_{i},f_{i})\}_{i\in I}\) and \(D_{2}=\{(U_{i},g_{i})\}_{i\in I}\); then
\[\rho(D_{1}+D_{2})|_{U_{i}}=f_{i}^{-1}g_{i}^{-1}\mathcal{O}_{X}|_{U_{i}}\simeq f _{i}^{-1}\mathcal{O}_{X}|_{U_{i}}\otimes_{\mathcal{O}_{X}|_{U_{i}}}g_{i}^{-1} \mathcal{O}_{X}|_{U_{i}},\]
and this isomorphism is compatible on the intersection \(U_{i}\cap U_{j}\).
(3) If \(D=\{(U_{i},f_{i})\}_{i\in I}=\operatorname{div}(f)\) is a principal divisor with \(f\in H^{0}(X_{G},K_{X}^{*})\) and \(f_{i}=f|_{U_{i}}\in K_{X}^{\prime}(U_{i})\), where \(X=\bigcup\limits_{i\in I}U_{i}\) is a G-covering of \(X\) by affinoid domains, then \(f^{-1}\in\mathcal{O}_{X}(D)(X)\), since \(f^{-1}|_{U_{i}}=f_{i}^{-1}\in\mathcal{O}_{X}(D)(U_{i})\) and these local sections glue.
So we can define the morphism \(\mathcal{O}_{X}\to\mathcal{O}_{X}(D),\ \ a\mapsto af^{-1}\). It is an isomorphism since it is an isomorphism on each \(U_{i}\). Hence we have a homomorphism \(\operatorname{CaCl}(X)\to\operatorname{Pic}(X)\).
If \(D=\{(U_{i},f_{i})\}_{i\in I}\in\operatorname{Div}(X)\) is such that \(\mathcal{O}_{X}(D)\simeq\mathcal{O}_{X}\), then there is \(g\in\mathcal{O}_{X}(D)(X)\) such that the morphism \(\mathcal{O}_{X}\overset{\sim}{\to}\mathcal{O}_{X}(D),\ \ a\mapsto ag\) is an isomorphism. Since \(\mathcal{O}_{X}(D)|_{U_{i}}\simeq f_{i}^{-1}\mathcal{O}_{X}|_{U_{i}}=g|_{U_{i}}\mathcal{O}_{X}|_{U_{i}}\) and \(f_{i}^{-1}\in K_{X}^{\prime}(U_{i})\), we have \(g|_{U_{i}}=f_{i}^{-1}u_{i}\in K_{X}^{\prime}(U_{i})\subset K_{X}(U_{i})\) with \(u_{i}\in\mathcal{O}_{X}^{*}(U_{i})\), so \(g\in K_{X}^{*}(X)\) and \(D=\{(U_{i},f_{i})\}_{i\in I}=\{(U_{i},g^{-1}|_{U_{i}})\}_{i\in I}\) is principal.
By definition, we know that \(\mathcal{O}_{X}(D)\subset K_{X}\). Conversely, for \(L\in\operatorname{Pic}(X)\) with \(L\subset K_{X}\), there is a G-covering \(X=\bigcup\limits_{i\in I}U_{i}\) by affinoid domains such that \(\mathcal{O}_{X}|_{U_{i}}\simeq L|_{U_{i}}\). We take \(g_{i}\in L(U_{i})\) to be the image of \(1\). Then \(g_{i}\in K_{X}(U_{i})\) and \(L|_{U_{i}}=g_{i}\mathcal{O}_{X}|_{U_{i}}\); moreover, there is \(f_{i}\in K_{X}^{*}(U_{i})\) such that \(f_{i}g_{i}=1\) because of the isomorphism. On \(U_{i}\cap U_{j}\), we have
\[L|_{U_{i}\cap U_{j}}=f_{i}^{-1}\mathcal{O}_{X}|_{U_{i}\cap U_{j}}=f_{j}^{-1} \mathcal{O}_{X}|_{U_{i}\cap U_{j}},\]
so there is \(u\in\mathcal{O}_{X}^{*}(U_{i}\cap U_{j})\) such that \(f_{i}^{-1}|_{U_{i}\cap U_{j}}=uf_{j}^{-1}|_{U_{i}\cap U_{j}}\). Then \(L=\mathcal{O}_{X}(D)\), where \(D=\{(U_{i},f_{i})\}_{i\in I}\in\operatorname{Div}(X)\).
(4) Let \(\mathcal{X}=\operatorname{Spec}(\mathcal{O}_{X}(X))\); then \(\operatorname{CaCl}(\mathcal{X})\simeq\operatorname{Pic}(\mathcal{X})\), see [12, Corollary 1.19]. We have a commutative diagram
\[\begin{array}{ccc}\operatorname{CaCl}(\mathcal{X})&\stackrel{\sim}{\longrightarrow}&\operatorname{Pic}(\mathcal{X})\\ \downarrow&&\downarrow{\scriptstyle\simeq}\\ \operatorname{CaCl}(X)&\stackrel{\rho}{\longrightarrow}&\operatorname{Pic}(X),\end{array}\]
so our claim holds. The isomorphism \(\operatorname{Pic}(\mathcal{X})\simeq\operatorname{Pic}(X)\) follows from \(\mathcal{C}\!oh(\mathcal{X})\simeq\mathcal{C}\!oh(X)\) and Tate's acyclicity theorem, see the proof of [3, Proposition 1.3.4 (iii)].
**Remark 3.9**.:
1. _We know that \(H^{1}(X_{G},\mathcal{O}_{X}^{*})\simeq\operatorname{Pic}(X)\), so \(\rho\) is the connecting map of the long exact sequence in Remark 3.7._
**Example 3.10**.: _Let \(L\) be a line bundle on a normal \(K\)-analytic space \(X\). Let \(s\in H^{0}(X,L\otimes_{\mathcal{O}_{X}}K_{X})\) be a rational section which is non-zero on each irreducible component. Let \(X=\bigcup\limits_{i\in I}U_{i}\) be a G-covering of \(X\) by integral affinoid domains such that \(L|_{U_{i}}\) is free, generated by an element \(e_{i}\). Then there exist \(f_{i}\in K_{X}^{*}(U_{i})\) such that \(s|_{U_{i}}=f_{i}e_{i}\). Moreover, \(\operatorname{div}(s):=\{(U_{i},f_{i})\}_{i\in I}\) is a Cartier divisor such that \(\mathcal{O}_{X}(\operatorname{div}(s))\simeq L\)._
### Inverse image of a Cartier divisor
Next we consider the restriction of Cartier divisors on a closed analytic subspace.
**Definition 3.11**.: _Let \(D\in\operatorname{Div}(X)\), and let \(Z\in\overline{\operatorname{Irr}(X)}\) be endowed with the reduced analytic space structure. We say \(D\)_ **intersects**_\(Z\)_**properly** _if there is a G-covering \(X=\bigcup\limits_{i\in I}U_{i}\) by affinoid domains such that \(D=\{(U_{i},a_{i}/b_{i})\}_{i\in I}\) with the images \(\overline{a_{i}},\overline{b_{i}}\in R(\mathcal{O}_{Z}(U_{i}\cap Z))\). The set of Cartier divisors intersecting \(Z\) properly is a subgroup of \(\operatorname{Div}(X)\), denoted by \(G_{Z/X}\)._
**Remark 3.12**.:
1. _There is a natural homomorphism_ \(G_{Z/X}\to\operatorname{Div}(Z)\) _denoted by_ \(D\mapsto D|_{Z}\)_, compatible with the homomorphism_ \(\mathcal{O}_{X}\to i_{*}\mathcal{O}_{Z}\)_. Moreover, we have a canonical isomorphism_ \(\mathcal{O}_{X}(D)|_{Z}\simeq\mathcal{O}_{Z}(D|_{Z})\)_._
## 4 Cycles, flat pull-backs and proper push-forwards
### Cycles
**Definition 4.1**.: _Let \(X\) be a \(K\)-analytic space. A_ **prime cycle** _on \(X\) is an element in \(\overline{\operatorname{Irr}(X)}\). A_ **cycle** _on \(X\) is a formal sum \(\alpha=\sum\limits_{Z\in\overline{\operatorname{Irr}(X)}}n_{Z}[Z]\) with \(n_{Z}\in\mathbb{Z}\) which is G-locally finite, i.e. the set_
\[\{Z\in\overline{\operatorname{Irr}(X)}\,|\,\,Z\cap V\neq\emptyset,n_{Z}\neq 0\}\]
_is finite for any affinoid domain \(V\). The coefficient \(n_{Z}\) is called the_ **multiplicity of \(\alpha\) at \(Z\)**_, denoted by \(\operatorname{mult}_{Z}(\alpha)\). We say that a cycle \(\alpha\) is_ **positive** _if \(\operatorname{mult}_{Z}(\alpha)\geq 0\) for any \(Z\in\overline{\operatorname{Irr}(X)}\). The set of cycles (resp. positive cycles) is denoted by \(Z(X)\) (resp. \(Z_{+}(X)\))._
_The union of the \(Z\) such that \(n_{Z}\neq 0\) is called the_ **support of \(\alpha\)**_, denoted by \(\operatorname{Supp}(\alpha)\). It is a Zariski-closed subset of \(X\). By convention, \(\operatorname{Supp}(0)=\emptyset\)._
_A cycle \(\alpha\) is (purely)_ **of codimension \(r\)** _(resp._ **of dimension \(r\)**_) if any \(Z\in\overline{\operatorname{Irr}(X)}\) with \(n_{Z}\neq 0\) has codimension \(r\) (resp. dimension \(r\)). The cycles of codimension \(r\) (resp. of dimension \(r\)) form a subgroup \(Z^{r}(X)\) (resp. \(Z_{r}(X)\)) of the group of cycles on \(X\)._
**Remark 4.2**.:
1. _For a positive cycle \(\alpha=\sum\limits_{Z\in\overline{\operatorname{Irr}(X)}}n_{Z}[Z]\) and any \(Z\in\overline{\operatorname{Irr}(X)}\) with \(n_{Z}\geq 1\), we can endow \(Z\) with the reduced structure; then \(Z=V(\mathcal{I}_{Z})\) is an integral closed analytic subspace of \(X\), where \(\mathcal{I}_{Z}\) is the coherent sheaf of ideals defining \(Z\). We view \(\alpha\) as the closed analytic subspace defined by the sheaf of ideals \(\mathcal{I}_{\alpha}:=\prod\limits_{Z\in\overline{\operatorname{Irr}(X)}}\mathcal{I}_{Z}^{n_{Z}}\), and we have a canonical closed immersion \(j:\alpha=V(\mathcal{I}_{\alpha})\hookrightarrow X\). This induces a homomorphism of semigroups_
\[Z_{+}(X)\to\{\text{closed analytic subspaces of }X\}=\{\text{coherent sheaves of ideals on }X\}.\]
2. By Proposition 2.13, we know that \(Z^{r}(X)\) does not depend on the base field \(K\), but \(Z_{r}(X)\) does.
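To illustrate, on \(X=\mathcal{M}(K\{T\})\) the positive cycle \(\alpha=2[Z]\) with \(Z=\mathcal{M}(K\{T\}/(T))\) has
\[\mathcal{I}_{\alpha}=\mathcal{I}_{Z}^{2}=\widetilde{(T^{2})},\]
so the associated closed analytic subspace is \(\mathcal{M}(K\{T\}/(T^{2}))\).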
**Example 4.3**.: _Let \(X=\mathcal{M}(A)\) be a \(K\)-affinoid space. Set \(\mathcal{X}=\operatorname{Spec}(A)\). Then_
\[\operatorname{Div}(X) \hookrightarrow\operatorname{Div}(\mathcal{X}),\] \[Z^{*}(X) \simeq Z^{*}(\mathcal{X}).\]
_The first arrow is also an isomorphism if \(X\) is regular, see Proposition 4.13._
**Lemma 4.4**.: _Let \(X\) be a \(K\)-analytic space. Let \(\alpha\in Z_{+}(X)\) with associated sheaf of ideal \(\mathcal{I}_{\alpha}\). Then \(V(\mathcal{I}_{\alpha})=\operatorname{Supp}(\alpha)\) with \(\operatorname{Irr}(V(\mathcal{I}_{\alpha}))=\{\text{maximal elements in }\alpha\}\)._
Proof.: This is local, and we can deduce this lemma from the example above.
The following lemma is obvious.
**Lemma 4.5**.: _Let \(X=\bigcup\limits_{i\in I}V_{i}\) be a G-covering of \(X\) by affinoid domains, and let \(\alpha,\beta\in\operatorname{Div}(X)\) (resp. \(Z^{*}(X)\)). Then \(\alpha=\beta\iff\alpha|_{V_{i}}=\beta|_{V_{i}}\) for any \(i\in I\)._
Proof.: It suffices to show the "if" part. If \(\alpha,\beta\in\operatorname{Div}(X)\), then the result follows from the expression of Cartier divisors. If \(\alpha=\sum\limits_{Z}n_{Z}[Z],\beta=\sum\limits_{Z}m_{Z}[Z]\in Z^{k}(X)\) are such that \(\alpha|_{V_{i}}=\beta|_{V_{i}}\) for any \(i\in I\), then \(n_{Z}[Z\cap V_{i}]=m_{Z}[Z\cap V_{i}]\) for any \(Z\in\overline{\operatorname{Irr}(X)}\) with \(Z\cap V_{i}\neq\emptyset\), so \(n_{Z}=m_{Z}\).
### Cycle associated to a coherent sheaf
We will construct a homomorphism \(\operatorname{Div}(X)\to Z^{1}(X)\) as in algebraic geometry. Recall that for a Noetherian affine scheme \(\mathcal{X}=\operatorname{Spec}(A)\), a coherent sheaf \(\mathcal{F}=\widetilde{M}\) on \(\mathcal{X}\), and an irreducible component \(Z\) of \(\operatorname{Supp}(\mathcal{F})\), we set \(\operatorname{mult}_{Z}(\mathcal{F}):=\operatorname{length}_{A_{\mathfrak{p}}}(M_{\mathfrak{p}})\), called the multiplicity of \(Z\) in \(\mathcal{F}\), where \(\mathfrak{p}\in\mathcal{X}\) is the prime ideal corresponding to \(Z\). For a divisor \(D\in\operatorname{Div}(\mathcal{X})\) and a codimension-one prime cycle \(Z=\overline{\{z\}}\in Z^{1}(\mathcal{X})\), we set \(\operatorname{mult}_{Z}(D):=\operatorname{mult}_{\mathcal{O}_{\mathcal{X},z}}(D_{z})\), the multiplicity of \(Z\) in \(D\). For an affinoid space \(\mathcal{M}(A)\), we use similar notation.
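To illustrate, for \(A=K\{T\}\), \(M=A/(T^{3})\) and \(Z=\mathcal{M}(A/(T))\) corresponding to \(\mathfrak{p}=(T)\), the localization \(A_{\mathfrak{p}}\) is a discrete valuation ring with uniformizer \(T\), so
\[\operatorname{mult}_{Z}(\widetilde{M})=\operatorname{length}_{A_{\mathfrak{p}}}\big(A_{\mathfrak{p}}/(T^{3})\big)=3.\]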
**Lemma 4.6**.: _Let \(X\) be a \(K\)-analytic space. Let \(\mathcal{F}\) be a coherent sheaf on \(X\). For any irreducible component \(Z\) of \(\operatorname{Supp}(\mathcal{F})\) with reduced analytic space structure, and an affinoid domain \(V\subset X\) with \(Z\cap V\neq\emptyset\), we set_
\[\operatorname{mult}_{Z}(\mathcal{F}):=\operatorname{mult}_{T}(\mathcal{F}|_{V})\]
_where \(T\) is an irreducible component of \(Z\cap V\) with \(\overline{T}^{\operatorname{Supp}(\mathcal{F})_{\operatorname{Zar}}}=Z\). Then \(\operatorname{mult}_{Z}(\mathcal{F})\) is a positive integer which is independent of the choice of \(T\) and \(V\). We call \(\operatorname{mult}_{Z}(\mathcal{F})\) the_ **multiplicity of \(Z\) in \(\mathcal{F}\)**_._
Proof.: For a fixed irreducible component \(Z\) of \(\operatorname{Supp}(\mathcal{F})\), and any affinoid domain \(V,W\subset X\) with \(W\subset V\), \(Z\cap W\neq\emptyset\), we claim that
\[\operatorname{mult}_{T}(\mathcal{F}|_{V})=\operatorname{mult}_{T^{\prime}}( \mathcal{F}|_{W})\]
where \(T\in\operatorname{Irr}(Z\cap V)\) (resp. \(T^{\prime}\in\operatorname{Irr}(Z\cap W)\)) with \(\overline{T^{\prime}}^{V_{\operatorname{Zar}}}=T\) and \(\overline{T}^{X_{\operatorname{Zar}}}=Z\). Indeed, let \(V=\mathcal{M}(A),W=\mathcal{M}(B)\) and \(\mathcal{F}|_{V}=\widetilde{M}\). We shall show that
\[\operatorname{length}_{A_{\mathfrak{p}}}(M_{\mathfrak{p}})=\operatorname{ length}_{B_{\mathfrak{q}}}(M_{\mathfrak{p}}\otimes_{A_{\mathfrak{p}}}B_{\mathfrak{q}})\]
where \(\mathfrak{p}\subset A\) (resp. \(\mathfrak{q}\subset B\)) is the prime ideal corresponding to \(T\) (resp. \(T^{\prime}\)). Notice that the map \(W\to\operatorname{Spec}(B)\), \(y\mapsto\operatorname{Ker}(|\cdot|_{y})\), is surjective, so we can find \(y\in W\) such that \(\operatorname{Ker}(|\cdot|_{y})=\mathfrak{q}\). Let \(x\in V\) be the image of \(y\); then \(\operatorname{Ker}(|\cdot|_{x})=\mathfrak{p}\). We have \(\mathscr{H}(x)=\mathscr{H}(y)\) and
\[\operatorname{length}_{A_{\mathfrak{p}}}(M_{\mathfrak{p}})=\dim_{k(\mathfrak{ p})}(M\otimes_{A}k(\mathfrak{p}))=\dim_{\mathscr{H}(x)}(M\otimes_{A}\mathscr{H}(x)),\]
it is similar for \(\operatorname{length}_{B_{\mathfrak{q}}}(M_{\mathfrak{p}}\otimes_{A_{\mathfrak{p}}}B_ {\mathfrak{q}})\). Hence our claim holds.
To show the lemma, we apply Lemma 2.15. Fix an irreducible component \(Z\) of \(\operatorname{Supp}(\mathcal{F})\), and let \(m=\operatorname{mult}_{T}(\mathcal{F}|_{V})\) for some affinoid domain \(V\subset X\) with \(Z\cap V\neq\emptyset\), where \(T\in\operatorname{Irr}(Z\cap V)\) with \(\overline{T}^{X_{\operatorname{Zar}}}=Z\). For \(V\) given as before, we say that an irreducible component \(T\in\operatorname{Irr}(Z\cap V)\) satisfies \(P\) if \(\operatorname{mult}_{T}(\mathcal{F}|_{V})=m\). After replacing \(X\) by \(Z\), we see from our claim that \(P\) satisfies the hypotheses of Lemma 2.15. Then there are Zariski-closed subsets \(Z_{P}^{+},Z_{P}^{-}\) of \(Z\) such that
\[Z_{P}^{+}\cap V=\bigcup_{\begin{subarray}{c}T\in\operatorname{Irr}(Z\cap V), \\ T\text{\ satisfies }P\end{subarray}}T,\]
\[Z_{P}^{-}\cap V=\bigcup_{\begin{subarray}{c}T\in\operatorname{Irr}(Z\cap V),\\ T\text{\ doesn't satisfy }P\end{subarray}}T,\]
and \(Z=Z_{P}^{+}\cup Z_{P}^{-}\). Since \(Z\) is irreducible and there is some \(T\subset Z_{P}^{+}\), we have \(Z=Z_{P}^{+}\). This implies the lemma.
**Definition 4.7**.: _Keep the notion in Lemma 4.6. For a coherent sheaf \(\mathcal{F}\) on \(X\) with \(\operatorname{codim}(\operatorname{Supp}(\mathcal{F}),X)\geq k\), we set_
\[[\mathcal{F}]^{k}:=\sum_{\begin{subarray}{c}Z\in\operatorname{Irr}(\operatorname{Supp}(\mathcal{F}))\\ \operatorname{codim}(Z,X)=k\end{subarray}}\operatorname{mult}_{Z}(\mathcal{F})[Z]\in Z^{k}(X),\]
_called the_ **cycle associated to \(\mathcal{F}\) with codimension \(k\)**_._
**Remark 4.8**.:
* _By Lemma_ 4.6_, it is not hard to obtain the following result. Let_ \(V=\mathcal{M}(A)\subset X\) _be an affinoid domain, and_ \(\mathcal{F}\) _a coherent sheaf on_ \(X\)_. Set_ \(\mathcal{V}=\operatorname{Spec}(A)\) _and let_ \(\mathcal{F}_{V}^{\operatorname{al}}\) _be the coherent sheaf on_ \(\mathcal{V}\) _corresponding to_ \(\mathcal{F}|_{V}\)_. Then_ \[[\mathcal{F}|_{V}]^{k}=[\mathcal{F}_{V}^{\operatorname{al}}]^{k},\] _where we identify_ \(Z^{*}(V)\simeq Z^{*}(\mathcal{V})\)_._
**Definition 4.9**.: _Keep the notion in Lemma 4.6. For a closed analytic subspace \(Y\) of \(X\) with \(\operatorname{codim}(Y,X)\geq k\), we set_
\[\operatorname{mult}_{Z}(Y):=\operatorname{mult}_{Z}(\mathcal{O}_{Y}),\]
_for any \(Z\in\operatorname{Irr}(Y)\), called the_ **multiplicity of \(Z\) in \(Y\)**_, and set_
\[[Y]^{k}:=\sum_{\begin{subarray}{c}Z\in\operatorname{Irr}(Y)\\ Z\in Z^{k}(X)\end{subarray}}\operatorname{mult}_{Z}(Y)[Z]\in Z^{k}(X),\]
_called the_ **cycle associated to \(Y\) with codimension \(k\)**_._
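For instance, the closed analytic subspace \(Y=\mathcal{M}(K\{T\}/(T^{2}))\) of \(X=\mathcal{M}(K\{T\})\) has the single irreducible component \(Z=\mathcal{M}(K\{T\}/(T))\) with \(\operatorname{mult}_{Z}(Y)=2\), hence
\[[Y]^{1}=2[Z]\in Z^{1}(X).\]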
### Weil divisors
**Definition 4.10**.: _Let \(X\) be a \(K\)-analytic space. An element in \(Z^{1}(X)\) is called a_ **Weil divisor** _on \(X\)._
**Lemma 4.11**.: _Let \(X\) be a \(K\)-analytic space. Let \(D\in\operatorname{Div}(X)\). For any prime cycle \(Z\in Z^{1}(X)\), and any affinoid domain \(V\subset X\) with \(Z\cap V\neq\emptyset\), \(D|_{V}\in K_{X}^{\prime}(V)\), we set_
\[\operatorname{mult}_{Z}(D):=\operatorname{mult}_{T}(D|_{V})\]
_where \(T\in\operatorname{Irr}(Z\cap V)\) with \(\overline{T}^{X_{\operatorname{Zar}}}=Z\). Then \(\operatorname{mult}_{Z}(D)\) is independent of the choice of \(T\) and \(V\). We call \(\operatorname{mult}_{Z}(D)\) the_ **multiplicity of \(Z\) in \(D\)**_._
Proof.: The proof is similar to that of Lemma 4.6.
For any prime cycle \(Z\in Z^{1}(X)\) and any affinoid domain \(V,W\subset X\) with \(W\subset V\), \(Z\cap W\neq\emptyset\), \(D|_{V}\in K^{\prime}_{X}(V)\), we claim that
\[\operatorname{mult}_{T}(D|_{V})=\operatorname{mult}_{T^{\prime}}(D|_{W}),\]
where \(T\in\operatorname{Irr}(Z\cap V)\) (resp. \(T^{\prime}\in\operatorname{Irr}(Z\cap W)\)) with \(\overline{T^{\prime}}^{V_{\operatorname{Zar}}}=T\) and \(\overline{T}^{X_{\operatorname{Zar}}}=Z\). Indeed, since both sides are additive, we can assume that \(D|_{V}=f\in R(\mathcal{O}_{X}(V))\). Let \(Y\subset V\) be the closed analytic subspace determined by \(f\in\mathcal{O}_{X}(V)\); then our claim follows from Lemma 4.6.
To show the lemma, we apply Lemma 2.15. Let \(m=\operatorname{mult}_{T}(D|_{V})\) for some affinoid domain \(V\subset X\) with \(Z\cap V\neq\emptyset\) and \(D|_{V}\in K^{\prime}_{X}(V)\), where \(T\in\operatorname{Irr}(Z\cap V)\) with \(\overline{T}^{X_{\operatorname{Zar}}}=Z\). For \(V\) given as before, we say that an irreducible component \(T\in\operatorname{Irr}(Z\cap V)\) satisfies \(P\) if \(\operatorname{mult}_{T}(D|_{V})=m\). After replacing \(X\) by \(Z\), we see from our claim that \(P\) satisfies the hypotheses of Lemma 2.15. Then there are Zariski-closed subsets \(Z_{P}^{+},Z_{P}^{-}\) of \(Z\) such that
\[Z_{P}^{+}\cap V =\bigcup_{\begin{subarray}{c}T\in\operatorname{Irr}(Z\cap V),\\ T\text{ satisfies }P\end{subarray}}T,\] \[Z_{P}^{-}\cap V =\bigcup_{\begin{subarray}{c}T\in\operatorname{Irr}(Z\cap V),\\ T\text{ doesn't satisfy }P\end{subarray}}T,\]
and \(Z=Z_{P}^{+}\cup Z_{P}^{-}\). Since \(Z\) is irreducible and some \(T\subset Z_{P}^{+}\), we have \(Z=Z_{P}^{+}\). This implies the lemma.
**Definition 4.12**.: _Let \(X\) be a \(K\)-analytic space. For any \(D\in\operatorname{Div}(X)\), we set_
\[[D]:=\sum_{\begin{subarray}{c}Z\in\overline{\operatorname{Irr}(X)}\\ \operatorname{codim}(Z,X)=1\end{subarray}}\operatorname{mult}_{Z}(D)[Z]\in Z^{1}(X),\]
_called the_ **Weil divisor associated to \(D\)**_. In particular, for any \(f\in K_{X}^{*}(X)\), we write \((f):=[\operatorname{div}(f)]\in Z^{1}(X)\); such a divisor \((f)\) is called a_ **principal divisor**_. The principal divisors form a subgroup \(\operatorname{Rat}^{1}(X)\) of \(Z^{1}(X)\). We denote the quotient of \(Z^{1}(X)\) by the subgroup of principal divisors by \(\operatorname{Cl}(X):=Z^{1}(X)/\operatorname{Rat}^{1}(X)\), called the_ **class group** _of \(X\). We say that two divisors \(Z,Z^{\prime}\) are_ **rationally equivalent**_, and write \(Z\sim_{\operatorname{rat}}Z^{\prime}\), if they have the same class in \(\operatorname{Cl}(X)\)._
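To illustrate, on \(X=\mathcal{M}(K\{T\})\) with \(A=K\{T\}\), the function \(f=T^{2}(T-1)\in K_{X}^{*}(X)\) yields
\[(f)=2[x_{0}]+[x_{1}]\in Z^{1}(X),\]
where \(x_{0}\) and \(x_{1}\) are the rigid points \(T=0\) and \(T=1\): indeed \(\operatorname{length}_{A_{(T)}}(A_{(T)}/(f))=2\) since \(T-1\) is a unit in \(A_{(T)}\), and similarly \(\operatorname{length}_{A_{(T-1)}}(A_{(T-1)}/(f))=1\).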
Recall, a \(K\)-analytic space \(X\) is regular at \(x\in X\) if there is a good analytic domain \(V\) of \(X\) containing \(x\) such that \(\mathcal{O}_{V,x}\) is regular. We say \(X\) is regular if \(X\) is regular at every point \(x\in X\). This is equivalent to the condition that for any affinoid domain \(V\simeq\mathcal{M}(A)\subset X\), the algebra \(A\) is regular, see [8, Lemma-Definition 2.4.1, Lemma 2.4.5].
**Proposition 4.13**.: _The map \([\cdot]:\operatorname{Div}(X)\to Z^{1}(X)\) is a homomorphism which sends effective divisors to positive cycles. This induces a homomorphism_
\[[\cdot]:\operatorname{CaCl}(X)\to\operatorname{Cl}(X).\]
_If \(X\) is normal (resp. regular), then these two maps are injective (resp. isomorphisms)._
Proof.: It is easy to see that \([\cdot]:\operatorname{Div}(X)\to Z^{1}(X)\) is a homomorphism and induces \([\cdot]:\operatorname{CaCl}(X)\to\operatorname{Cl}(X)\). If \(X\) is normal, by Lemma 4.5, to show \([\cdot]:\operatorname{Div}(X)\to Z^{1}(X)\) is injective, we can assume \(X\) is affinoid. For \(D\in\operatorname{Div}(X)\) such that \(\operatorname{mult}_{Z}(D)=0\) for any \(Z\in Z^{1}(X)\), we take an affinoid domain \(V\subset X\) with \(Z\cap V\neq\emptyset\) and \(D|_{V}\in K^{\prime}_{X}(V)\). Then \(D|_{V}\in\mathcal{O}_{X}^{*}(V)\) since \(\operatorname{mult}_{Q}(D|_{V})=0\) for any prime cycle \(Q\in Z^{1}(V)\). This implies that \(D=0\). As for the quotient, if \([D]=(f)\) for some \(f\in K^{*}_{X}(X)\), then \(D=\operatorname{div}(f)\); this implies that \([\cdot]:\operatorname{CaCl}(X)\to\operatorname{Cl}(X)\) is injective.
Assume that \(X\) is regular. To show that \([\cdot]:\operatorname{Div}(X)\to Z^{1}(X)\) is surjective, we first assume that \(X=\mathcal{M}(A)\) is affinoid and set \(\mathcal{X}=\operatorname{Spec}(A)\). In this case, \(\operatorname{Div}(\mathcal{X})\simeq Z^{1}(\mathcal{X})\), see [12, Proposition 7.2.16]. Hence, we have a commutative diagram
so our claim holds for affinoid spaces. We can glue Cartier divisors on affinoid domains together by injectivity of \([\cdot]\). Hence \([\cdot]:\operatorname{Div}(X)\to Z^{1}(X)\) is surjective.
### Rational equivalence of cycles
As in the classical definition of the Chow group of an algebraic variety, we can extend the class group to any codimension.
**Definition 4.14**.: _Let \(X\) be a \(K\)-analytic space. For any \((k+1)\)-dimensional irreducible closed analytic subspace \(W\) of \(X\) and any \(f\in K_{W}^{*}(W)\), we have a \(k\)-cycle \([\operatorname{div}(f)]\in Z_{k}(W)\subset Z_{k}(X)\) given in Definition 4.12. A \(k\)-cycle \(\alpha\) is_ **rationally equivalent to zero**_, written \(\alpha\sim 0\), if there are a finite number of \((k+1)\)-dimensional irreducible closed analytic subspaces \(W_{i}\) of \(X\), and \(f_{i}\in K_{W_{i}}^{*}(W_{i})\) such that_
\[\alpha=\sum_{i}[\operatorname{div}(f_{i})].\]
_Since \([\operatorname{div}(f^{-1})]=-[\operatorname{div}(f)]\), the cycles rationally equivalent to zero form a subgroup \(\operatorname{Rat}_{k}(X)\subset Z_{k}(X)\)._
_The group of \(k\)-cycles modulo rational equivalence on \(X\) is the quotient_
\[A_{k}(X):=Z_{k}(X)/\operatorname{Rat}_{k}(X).\]
_Define \(Z_{*}(X)\) (resp. \(A_{*}(X)\)) to be the direct sum of the \(Z_{k}(X)\) (resp. \(A_{k}(X)\)) for \(k\in\mathbb{Z}\). A cycle class on \(X\) is an element of \(A_{*}(X)\)._
_A cycle class is_ **positive** _if it can be represented by a positive cycle._
**Remark 4.15**.:
1. _The subgroup_ \(\operatorname{Rat}_{k}(X)\subset Z_{k}(X)\) _is well-defined by Lemma_ 2.13_._
2. \(A_{k}(X)=A_{k}(X_{\operatorname{red}})\) _for any_ \(k\in\mathbb{Z}\)_._
3. _If_ \(X\) _is of pure dimension_ \(n\)_, then_ \(A_{n}(X)=Z_{n}(X)\) _is the free abelian group generated by the irreducible components of_ \(X\)_._
### Flat pull-backs
We have introduced Cartier divisors and cycles. Next, we consider their pull-backs along flat morphisms.
Recall the definition of flatness in the sense of [8, Definition 4.1.8]: a morphism \(f:Y\to X\) of \(K\)-analytic spaces is naively flat if for any \(y\in Y\), there exist a good analytic domain \(V\subset Y\) containing \(y\) and a good analytic domain \(U\subset X\) containing \(f(V)\) such that \(\mathcal{O}_{V,y}\) is flat over \(\mathcal{O}_{U,f(y)}\). We say \(f\) is flat if moreover \(Y^{\prime}:=Y\times_{X}X^{\prime}\to X^{\prime}\) is naively flat for any morphism \(X^{\prime}\to X\). If \(f\) is flat, then \(\mathcal{O}_{Y}(V)\) is flat over \(\mathcal{O}_{X}(U)\) for any affinoid domains \(V\subset Y\) and \(U\subset X\) with \(f(V)\subset U\). The converse is not true in general unless \(f\) is locally finite. Notice that for any analytic domain \(V\) of \(X\), the natural morphism \(V\hookrightarrow X\) is flat.
**Definition 4.16**.: _A morphism \(f:Y\to X\) of \(K\)-analytic spaces has_ **relative dimension \(r\)** _if for any \(Z\in\overline{\operatorname{Irr}(X)}\), \(f^{-1}(Z)=\emptyset\) or any irreducible component \(Z^{\prime}\) of \(f^{-1}(Z)\) has \(\dim_{K}Z^{\prime}=\dim_{K}Z+r\)._
**Remark 4.17**.:
1. _The notion of relative dimension_ \(r\) _is an analogue of the one in algebraic geometry, see_ _[_9_, B.2.5]__. Our definition is different from the one in_ _[_8_, 1.4.13]__. We don't assume that such morphisms are surjective._
**Lemma 4.18**.: _Let \(f:Y\to X\) be a flat morphism of \(K\)-analytic spaces. Then \(f\) has relative dimension \(r\) if and only if \(Y_{x}=\emptyset\) or \(Y_{x}\) is of equidimension \(r\) for any \(x\in X\). In particular, if \(f:Y\to X\) is flat with \(X,Y\) equidimensional, then \(f\) has relative dimension \(\dim_{K}Y-\dim_{K}X\)._
Proof.: We apply [8, Lemma 4.5.11], which says that \(\dim_{y}Y=\dim_{y}Y_{x}+\dim_{x}X\) for any \(y\in Y_{x}\).
Assume that \(f\) has relative dimension \(r\). If \(x\in X\) such that \(Y_{x}\neq\emptyset\), then for any \(Z\in\operatorname{Irr}(X)\) containing \(x\), we have \(\dim_{K}f^{-1}(Z)-\dim_{K}Z=r\). This implies that \(\dim_{y}Y_{x}=\dim_{y}Y-\dim_{x}X=r\) for any \(y\in Y_{x}\) since \(\dim_{x}X=\max\limits_{x\in Z\in\operatorname{Irr}(X)}\{\dim_{K}Z\}\).
Conversely, for any \(Z\in\operatorname{Irr}(X)\) with \(f^{-1}(Z)\neq\emptyset\), without loss of generality, we can assume that \(Z=X\). We take \(y\in Y\) and \(x=f(y)\). Then \(\dim_{y}Y=\dim_{x}X+\dim_{y}Y_{x}=\dim_{K}X+r\). This implies that \(f\) has relative dimension \(r\).
If \(X,Y\) are equidimensional, then \(\dim_{y}Y_{x}=\dim_{y}Y-\dim_{x}X\) implies that \(Y_{x}\) is equidimensional for any \(y\in Y\), \(x=f(y)\).
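**Example**.: _(A standard illustration of Lemma 4.18.) Let \(\mathbb{B}^{r}:=\mathcal{M}(K\{T_{1},\dots,T_{r}\})\) be the closed unit polydisc. The projection \(\operatorname{pr}:X\times\mathbb{B}^{r}\to X\) is flat (being a base change of \(\mathbb{B}^{r}\to\mathcal{M}(K)\)), and its fiber over any \(x\in X\) is \(\mathcal{M}(\mathscr{H}(x)\{T_{1},\dots,T_{r}\})\), which is of equidimension \(r\). Hence \(\operatorname{pr}\) has relative dimension \(r\)._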
**Definition 4.19**.: _Let \(f:Y\to X\) be a flat morphism of \(K\)-analytic spaces._
1. _The canonical morphism_ \(f^{\#}:\mathcal{O}_{X}\to f_{*}\mathcal{O}_{Y}\) _extends to a morphism_ \(f^{\#}:K_{X}^{*}/\mathcal{O}_{X}^{*}\to f_{*}(K_{Y}^{*}/\mathcal{O}_{Y}^{*})\)_, then we have a homomorphism_ \[f^{*}:\operatorname{Div}(X)\to\operatorname{Div}(Y).\] _This induces a homomorphism_ \(f^{*}:\operatorname{CaCl}(X)\to\operatorname{CaCl}(Y)\)_._
2. _Assume that_ \(X,Y\) _are equidimensional. For any integral closed subspace_ \(Z\subset X\) _of pure codimension_ \(k\)_, we set_ \[f^{*}[Z]:=[f^{-1}(Z)]\in Z^{k}(Y).\] _This extends by linearity to a pull-back homomorphism_ \(f^{*}:Z^{k}(X)\to Z^{k}(Y)\)_._
**Remark 4.20**.:
1. _The flat pull-backs are functorial and we have a commutative diagram_
**Proposition 4.21**.: _Let \(f:Y\to X\) be a flat morphism of \(K\)-analytic spaces of pure dimension. For a coherent sheaf \(\mathcal{F}\) on \(X\) with \(\operatorname{codim}(\operatorname{Supp}(\mathcal{F}),X)\geq k\), we have \(\operatorname{codim}(\operatorname{Supp}(f^{*}\mathcal{F}),Y)\geq k\) and_ \[[f^{*}\mathcal{F}]^{k}=f^{*}[\mathcal{F}]^{k}.\]
_In particular, if \(Z\) is a closed analytic subspace of \(X\) of pure codimension \(k\), then \(f^{*}[Z]=[f^{-1}(Z)]\)._
Proof.: We can reduce the statement to the case of affinoid spaces by Lemma 4.5; then the proposition follows from the analogous result in scheme theory by Remark 4.8 (1). For the result in scheme theory, see the proof of [14, Lemma 42.14.4 (2)].
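**Example**.: _In the special case of the embedding \(i:V\hookrightarrow X\) of an affinoid domain (which is flat), Proposition 4.21 together with Lemma 4.6 yields_
\[i^{*}[Z]=\sum_{T\in\operatorname{Irr}(Z\cap V)}[T]\]
_for any integral closed subspace \(Z\subset X\) of pure codimension \(k\) with \(Z\cap V\neq\emptyset\); this restriction compatibility is used repeatedly above._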
### Proper push-forward of cycles
For an affinoid space \(X=\mathcal{M}(A)\), it may happen that \(\dim_{\operatorname{Krull}}A<\dim_{K}X\). In order to avoid this dimension problem, we assume that all \(K\)-analytic spaces (including affinoid domains) in this subsection are strict. In this case \(\dim_{\operatorname{Krull}}A=\dim_{K}X\).
Recall a theorem of Kiehl.
**Theorem 4.22** ([2] Proposition 3.3.5).: _Let \(f:Y\to X\) be a proper morphism of \(K\)-analytic spaces, and \(\mathcal{F}\) a coherent \(\mathcal{O}_{Y}\)-module. Then \(R^{n}f_{*}\mathcal{F},n\geq 0\), are coherent \(\mathcal{O}_{X}\)-modules. In particular, we have Remmert's mapping theorem, saying that \(f(Y)\) is a Zariski-closed subset of \(X\)._
A result similar to the following lemma is given in [11, 2.6].
**Lemma 4.23**.: _Let \(f:Y\to X\) be a surjective finite morphism of integral, strictly \(K\)-analytic spaces. For any (strictly) affinoid domain \(V\subset X\) and \(T\in\operatorname{Irr}(V)\), we set_
\[\deg(Y/X):=\sum_{\begin{subarray}{c}Q\in\operatorname{Irr}(f^{-1}(V))\\ f(Q)=T\end{subarray}}[\operatorname{Frac}(A_{Q}):\operatorname{Frac}(A_{T})],\]
_where \(A_{T},A_{Q}\) are the affinoid algebras corresponding to \(T,Q\) with reduced structure. Then \(\deg(Y/X)\) is independent of the choice of \(V\) and \(T\), called the_ **degree of \(f\)**_._
Proof.: Applying the usual technique with Lemma 2.15, it suffices to show that for any affinoid domains \(V,W\subset X\) with \(W\subset V\), and any \(T\in\operatorname{Irr}(V)\), \(T^{\prime}\in\operatorname{Irr}(W)\), we have
\[\sum_{\begin{subarray}{c}Q\in\operatorname{Irr}(f^{-1}(V))\\ f(Q)=T\end{subarray}}[\operatorname{Frac}(A_{Q}):\operatorname{Frac}(A_{T})]= \sum_{\begin{subarray}{c}Q^{\prime}\in\operatorname{Irr}(f^{-1}(W))\\ f(Q^{\prime})=T^{\prime}\end{subarray}}[\operatorname{Frac}(A_{Q^{\prime}}): \operatorname{Frac}(A_{T^{\prime}})].\]
This in fact follows from Lemma 4.6 and Proposition 4.21 in the affinoid case. Let \(V=\mathcal{M}(A),f^{-1}(V)=\mathcal{M}(B)\) and \(W=\mathcal{M}(A^{\prime})\), then \(f^{-1}(W)=\mathcal{M}(B^{\prime})\), where \(B^{\prime}=A^{\prime}\otimes_{A}B\). Let \(\mathcal{F}\) be the corresponding coherent sheaf associated to \(B\) as an \(A\)-module on \(V\), and \(i:W\to V\) the canonical morphism, then
\[[\mathcal{F}]^{0}=\sum_{T\in\operatorname{Irr}(V)}(\sum_{\begin{subarray}{c }Q\in\operatorname{Irr}(f^{-1}(V))\\ f(Q)=T\end{subarray}}[\operatorname{Frac}(A_{Q}):\operatorname{Frac}(A_{T})])[ T],\]
and we know that \(\sum_{\begin{subarray}{c}Q\in\operatorname{Irr}(f^{-1}(V))\\ f(Q)=T\end{subarray}}[\operatorname{Frac}(A_{Q}):\operatorname{Frac}(A_{T})]\) is independent of the choice of \(T\) by Lemma 4.6.
We also have
\[i^{*}[\mathcal{F}]^{0}=\sum_{T\in\operatorname{Irr}(V)}(\sum_{ \begin{subarray}{c}Q\in\operatorname{Irr}(f^{-1}(V))\\ f(Q)=T\end{subarray}}[\operatorname{Frac}(A_{Q}):\operatorname{Frac}(A_{T})]) \sum_{T^{\prime}\in\operatorname{Irr}(T\cap W)}[T^{\prime}],\]
\[[i^{*}\mathcal{F}]^{0}=\sum_{T^{\prime}\in\operatorname{Irr}(W)}(\sum_{ \begin{subarray}{c}Q^{\prime}\in\operatorname{Irr}(f^{-1}(W))\\ f(Q^{\prime})=T^{\prime}\end{subarray}}[\operatorname{Frac}(A_{Q^{\prime}}): \operatorname{Frac}(A_{T^{\prime}})])[T^{\prime}].\]
By Proposition 4.21, comparing the coefficients of \([T^{\prime}]\) for each irreducible component \(T^{\prime}\), we see that our claim holds.
We have the following equivalent conditions.
**Lemma 4.24**.: _Let \(f:Y\to X\) be a morphism of integral, separated, strictly \(K\)-analytic spaces. Then the following are equivalent._
1. _(i)_ \(f\) _is surjective and finite._
2. _(ii)_ \(f\) _is surjective, proper, and_ \(\dim_{K}Y=\dim_{K}X\)_._
3. _(iii.a)_ \(f\) _is proper, and for any_ \(x\in X\)_,_ \(\dim_{\mathscr{H}(x)}f^{-1}(x)=0\)_._
4. _(iii.b)_ \(f\) _is proper, and for any rigid point_ \(x\in X\)_,_ \(f^{-1}(x)\neq\emptyset\) _and has finitely many rigid points as an_ \(\mathscr{H}(x)\)_-analytic space._
5. _(iv.a)_ \(f\) _is surjective and proper, and there is a point_ \(x\in X\) _such that_ \(\dim_{\mathscr{H}(x)}f^{-1}(x)=0\)_._
6. _(iv.b)_ \(f\) _is surjective and proper, and there is a rigid point_ \(x\in X\) _such that_ \(\dim_{\mathscr{H}(x)}f^{-1}(x)=0\)_, i.e._ \(f^{-1}(x)\neq\emptyset\) _and has finitely many rigid points._
Proof.: Obviously, (i) \(\implies\) (iii.a), (iii.b) \(\implies\) (iv.b) \(\implies\) (iv.a).

(iii.a) \(\implies\) (ii). This follows from [8, 1.4.14 (3)].

(ii) \(\implies\) (iii.b). Since \(f\) is quasi-compact, after taking irreducible components of affinoid domains of \(X,Y\), we can assume that \(X=\mathcal{M}(A)\), \(Y=\mathcal{M}(B)\) are affinoid, integral and \(\dim A=\dim B\). Moreover, since the original morphism is surjective, we know that the corresponding morphism \(\varphi:\operatorname{Spec}(B)\to\operatorname{Spec}(A)\) is dominant. For any closed point \(x\in\operatorname{Spec}(A)\) with \(\varphi^{-1}(x)\neq\emptyset\), by basic properties of strict affinoid algebras, we know that \(\operatorname{codim}(x,\operatorname{Spec}(A))=\dim A\). Since \(\varphi\) is dominant, then \(\dim B\geq\operatorname{codim}(x,\operatorname{Spec}(A))+\dim\varphi^{-1}(x)\). So \(\dim\varphi^{-1}(x)=0\). Notice that \(K\to A\to\mathscr{H}(x)\) is finite, then \(\mathscr{H}(x)\) is the residue field of \(\operatorname{Spec}(A)\) at \(x\), and \(B\otimes_{A}\mathscr{H}(x)=B\widehat{\otimes}_{A}\mathscr{H}(x)\). Hence the rigid points of \(f^{-1}(x)\) are exactly the closed points of \(\varphi^{-1}(x)\), which are finitely many since \(B\widehat{\otimes}_{A}\mathscr{H}(x)\) is Noetherian.

(iii.b) \(\Longrightarrow\) (i). The separatedness ensures that \(X,Y\) are also rigid \(K\)-analytic spaces, see [3, Theorem 1.6.1]. Then the result follows from [5, Corollary 9.6.6] and [2, Proposition 3.3.2].
(iv.a) \(\Longrightarrow\) (ii). Notice that we have proved the equivalence (i) \(\Longleftrightarrow\) (ii) \(\Longleftrightarrow\) (iii.a) \(\Longleftrightarrow\) (iii.b). By [6, THEOREME 4.9], the set
\[\{y\in Y\mid\dim_{y}f\geq 1\}\]
is Zariski-closed in \(Y\). So
\[\{x\in X\mid\dim_{\mathscr{H}(x)}f^{-1}(x)\geq 1\}=f(\{y\in Y\mid\dim_{y}f\geq 1\})\]
is Zariski-closed in \(X\), i.e. \(U:=\{x\in X\mid\dim_{\mathscr{H}(x)}f^{-1}(x)\leq 0\}\) is Zariski-open in \(X\). Then
\(\dim_{K}f^{-1}(U)=\dim_{K}U\) by the equivalence of (iii.a) and (ii). Since \(\dim_{K}Y=\dim_{K}f^{-1}(U),\dim_{K}X=\dim_{K}U\), we have (ii).
With the lemmas above, we have the following definition.
**Definition 4.25**.: _Let \(f:Y\to X\) be a proper morphism of separated, strictly \(K\)-analytic spaces. For any irreducible closed subspace \(Z\) of \(Y\), the image \(f(Z)\) is a Zariski-closed subset of \(X\). We set_
\[\deg(Z/f(Z)):=\begin{cases}\text{the degree of $f:Z\to f(Z)$}&\text{ if $\dim_{K}f(Z)=\dim_{K}Z$;}\\ 0&\text{ if $\dim_{K}f(Z)<\dim_{K}Z$}\end{cases}\]
_(notice that \(\dim_{K}f(Z)=\dim_{K}Z\) is equivalent to \(f:Z\to f(Z)\) being finite). Define \(f_{*}[Z]:=\deg(Z/f(Z))[f(Z)]\), and extend linearly to a homomorphism (of graded groups)_
\[f_{*}:Z_{*}(Y)\to Z_{*}(X).\]
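**Example**.: _(A simple illustration.) Let \(\mathbb{B}=\mathcal{M}(K\{T\})\) and let \(f:\mathbb{B}\to\mathbb{B}\) be induced by \(T\mapsto T^{d}\). Then \(K\{T\}\) is a free module of rank \(d\) over the subalgebra \(K\{T^{d}\}\), so \(f\) is finite and surjective with \(\deg(\mathbb{B}/\mathbb{B})=d\) in the sense of Lemma 4.23, and consequently_
\[f_{*}[\mathbb{B}]=d\,[\mathbb{B}].\]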
**Remark 4.26**.:
1. _For_ \(Z\) _above, we know that_ \(f(Z)\) _with the reduced subspace structure is the Zariski image of_ \(Z\to X\) _by Lemma_ 2.7_._
We can easily prove the following lemma.
**Lemma 4.27**.: _Let \(f:Y\to X\) and \(g:Z\to Y\) be proper morphisms of separated strictly \(K\)-analytic spaces. Then \(f_{*}\circ g_{*}=(f\circ g)_{*}\)._
**Proposition 4.28**.: _Let_
_be a Cartesian diagram of separated, strictly \(K\)-analytic spaces with \(f\) proper and \(g\) flat. Then \(f^{\prime}\) is proper, \(g^{\prime}\) is flat and \(g^{*}\circ f_{*}=f^{\prime}_{*}\circ g^{\prime*}\) on \(Z^{*}(Y)\)._
Proof.: The morphism \(f^{\prime}\) is proper by [3], and \(g^{\prime}\) is flat by definition.
For the equality, notice that it holds if \(f\) is a closed immersion. In general, to show \(g^{*}(f_{*}\alpha)=f^{\prime}_{*}(g^{\prime*}(\alpha))\), we can assume that \(\alpha=[Y]\) with \(Y\) irreducible. Moreover, we can assume that \(X=f(Y)\).
If \(\dim_{K}X<\dim_{K}Y\), then the left-hand side is \(0\). For any \(x^{\prime}\in X^{\prime}\), let \(x=g(x^{\prime})\). We have
\[(f^{\prime})^{-1}(x^{\prime})=\mathcal{M}(\mathscr{H}(x^{\prime}))\times_{X^ {\prime}}Y^{\prime}=\mathcal{M}(\mathscr{H}(x^{\prime}))\times_{X}Y=\mathcal{ M}(\mathscr{H}(x^{\prime}))\times_{\mathscr{H}(x)}f^{-1}(x).\]
Since \(f\) is not finite, by Lemma 4.24 (iv.a), we have \(\dim_{\mathscr{H}(x^{\prime})}(f^{\prime})^{-1}(x^{\prime})=\dim_{\mathscr{H}(x)}f^{-1}(x)>0\). This means that \(f^{\prime}\) is not finite, and \(f^{\prime}_{*}([Y^{\prime}])=0\).
If \(\dim_{K}X=\dim_{K}Y\), then \(f:Y\to X\) is finite. By Lemma 4.5, it suffices to consider the affinoid case. Then the result follows from Proposition 4.21, and can be proved similarly to Lemma 4.23.
With the proposition above, we can always assume that the base space is affinoid. We can use this to reduce the following result to the scheme case, see [14, Lemma 42.12.4] for the scheme version.
**Proposition 4.29**.: _Let \(f:Y\to X\) be a proper morphism of separated strictly \(K\)-analytic spaces._
1. _Let_ \(Z\subset Y\) _be a closed subspace with_ \(\dim_{K}Z\leq k\)_. Then_ \[f_{*}[Z]_{k}=[f_{*}\mathcal{O}_{Z}]_{k}.\]
2. _Let_ \(\mathcal{F}\) _be a coherent sheaf on_ \(Y\) _such that_ \(\dim_{K}(\operatorname{Supp}(\mathcal{F}))\leq k\)_. Then_ \[f_{*}[\mathcal{F}]_{k}=[f_{*}\mathcal{F}]_{k}.\]
Proof.: Obviously, it suffices to show (2). By Lemma 2.3, there is a coherent sheaf \(\mathcal{G}\) on \(Z:=\operatorname{Supp}(\mathcal{F})\) such that \(\mathcal{F}=i_{*}\mathcal{G}\). Let \(Z^{\prime}\) be the Zariski image of \(Z\to X\). Notice that \(f(Z)=Z^{\prime}\) by Lemma 2.8 and properness of \(f\). So we have the following commutative diagram
By the functorial property of push-forward, it suffices to show \((f|_{Z})_{*}[\mathcal{G}]=[(f|_{Z})_{*}\mathcal{G}]\). So we can assume that \(\dim_{K}Y=k\) and \(f:Y\to X\) is proper and dominant. Moreover, we can assume that \(X\) is affinoid. So \(\dim_{K}X\leq k\).
We write
\[f_{*}[\mathcal{F}]_{k}=\sum_{W}n_{W}[W]\quad\text{and}\quad[f_{*}\mathcal{F}] _{k}=\sum_{W}m_{W}[W]\]
where \(W\) runs through the irreducible components of \(X\) of dimension \(k\). For a fixed irreducible component \(W\), to show \(n_{W}=m_{W}\), it suffices to show that \((f_{*}[\mathcal{F}]_{k})|_{V}=([f_{*}\mathcal{F}]_{k})|_{V}\) for some affinoid domain \(V\subset X\) with \(V\cap W\neq\emptyset\). We can take a Zariski-open subset \(U\subset X\) such that \(U\cap W^{\prime}=\emptyset\) and \(U\cap f(T)=\emptyset\) for any irreducible component \(W^{\prime}\) of \(X\) which is distinct from \(W\), and any irreducible component \(T\) of \(Y\) which doesn't dominate \(W\). We can take an affinoid domain of \(U\). So we can assume \(X=\mathcal{M}(A)\) is equidimensional and each irreducible component of \(Y\) dominates some irreducible component of \(X\). By [2, Corollary 3.3.8], we know that \(Y\) is finite over \(X\). So we reduce to the case where \(Y,X\) are affinoid and \(f\) is finite. This is an algebraic result, see the last part of the proof of [14, Lemma 41.13.3].
## 5 Proper intersection and intersection multiplicities
### Proper intersection
**Lemma 5.1**.: _Let \(X\) be a regular \(K\)-analytic space of pure dimension, and \(Y,\widetilde{Y}\in\overline{\operatorname{Irr}(X)}\). Then for every irreducible component \(Z\) of \(Y\cap\widetilde{Y}\), we have_
\[\operatorname{codim}(Z,X)\leq\operatorname{codim}(Y,X)+\operatorname{codim}( \widetilde{Y},X).\]
Proof.: The proof is based on the corresponding result in scheme theory. We can assume that \(X\) is irreducible. For any affinoid domain \(V\subset X\), we have \(\operatorname{codim}(T,V)=\operatorname{codim}(Y,X)\), where \(T\) is an irreducible component of \(V\cap Y\). Then we can apply the corresponding result in scheme theory.
**Definition 5.2**.: _Let \(X\) be a regular \(K\)-analytic space of pure dimension._
1. _Let_ \(Y,\widetilde{Y}\in\overline{\operatorname{Irr}(X)}\)_. We say that_ \(Y\) _and_ \(\widetilde{Y}\) **intersect properly** _if_ \(\operatorname{codim}(Z,X)\geq\operatorname{codim}(Y,X)+\operatorname{codim}(\widetilde{Y},X)\) _for every irreducible component_ \(Z\) _of_ \(Y\cap\widetilde{Y}\)_._
2. _Let_ \(\alpha=\sum\limits_{i\in I}n_{i}[Y_{i}]\in Z^{s}(X)\) _and_ \(\beta=\sum\limits_{j\in J}m_{j}[\widetilde{Y}_{j}]\in Z^{r}(X)\)_. We say that_ \(\alpha\) _and_ \(\beta\) **intersect properly** _if_ \(Y_{i}\) _and_ \(\widetilde{Y}_{j}\) _intersect properly for all_ \(i\) _and_ \(j\)_._
**Lemma 5.3**.: _Let \(X\) be a regular \(K\)-analytic space of pure dimension, and \(Y,\widetilde{Y}\subset X\) irreducible Zariski-closed subspaces. Then the following statements are equivalent:_
1. \(Y,\widetilde{Y}\) _intersect properly;_
2. _For any_ \(x\in Y\cap\widetilde{Y}\)_, there is an affinoid domain_ \(V\) _containing_ \(x\) _such that any_ \(Q\in\mathrm{Irr}(Y\cap V),\widetilde{Q}\in\mathrm{Irr}(\widetilde{Y}\cap V)\) _intersect properly on_ \(V\)_;_
3. _For any affinoid domain_ \(V\) _with_ \(Y\cap V\neq\emptyset\)_,_ \(\widetilde{Y}\cap V\neq\emptyset\)_, and any_ \(Q\in\mathrm{Irr}(Y\cap V),\widetilde{Q}\in\mathrm{Irr}(\widetilde{Y}\cap V)\)_, we have that_ \(Q\) _and_ \(\widetilde{Q}\) _intersect properly._
Proof.: For any affinoid domain \(V\subset X\) with \(Y\cap V\neq\emptyset\) and any \(Q\in\mathrm{Irr}(Y\cap V)\), we have \(\operatorname{codim}(Q,V)=\operatorname{codim}(Y,X)\). Then the lemma follows.
### Multiplicities and intersection products
In this subsection, we will apply the intersection theory on a regular catenary Noetherian scheme to define multiplicities. Another definition using the Tor formula will be given in the next subsection.
Recall, on a regular, catenary Noetherian scheme \(\mathcal{X}\), let \(Q,\widetilde{Q}\) be irreducible closed subschemes with \(\operatorname{codim}(Q,\mathcal{X})=s,\operatorname{codim}(\widetilde{Q},\mathcal{X})=t\). Then the intersection product of \(Q\) and \(\widetilde{Q}\) is defined by
\[Q\cdot\widetilde{Q}=\sum_{T}e_{T}[T]:=\sum_{i}(-1)^{i}[\operatorname{Tor}_{i}^{\mathcal{O}_{\mathcal{X}}}(\mathcal{O}_{Q},\mathcal{O}_{\widetilde{Q}})]^{s+t}\in Z^{s+t}(\mathcal{X}),\]
i.e.
\[e_{T}=e(\mathcal{X},Q\cdot\widetilde{Q},T)=\sum_{i}(-1)^{i}\mathrm{length}_{\mathcal{O}_{\mathcal{X},T}}(\mathrm{Tor}_{i}^{\mathcal{O}_{\mathcal{X},T}}(\mathcal{O}_{Q,T},\mathcal{O}_{\widetilde{Q},T}))\]
where \(T\) runs through \(\mathrm{Irr}(Q\cap\widetilde{Q})\) with \(\mathrm{codim}(T,\mathcal{X})=s+t\), and \(\mathcal{O}_{\mathcal{X},T}\) (resp. \(\mathcal{O}_{Q,T}\), resp. \(\mathcal{O}_{\widetilde{Q},T}\)) denotes the local ring of \(\mathcal{X}\) (resp. \(Q\), resp. \(\widetilde{Q}\)) at the generic point of \(T\).
**Lemma 5.4**.: _Let \(X\) be a regular \(K\)-analytic space of pure dimension, and \(Y,\widetilde{Y}\subset X\) irreducible Zariski-closed subspaces with \(\mathrm{codim}(Y,X)=s,\mathrm{codim}(\widetilde{Y},X)=t\). Assume that \(Y\) and \(\widetilde{Y}\) intersect properly. For any irreducible component \(Z\) of \(Y\cap\widetilde{Y}\) with \(\mathrm{codim}(Z,X)=s+t\), and any affinoid domain \(V\subset X\) with \(Z\cap V\neq\emptyset\), we set_
\[e(X,Y\cdot\widetilde{Y},Z):=\sum_{Q,\widetilde{Q}}e(V,Q\cdot\widetilde{Q},T)\]
_where \(T\in\mathrm{Irr}(Z\cap V)\) and \((Q,\widetilde{Q})\) runs through \(\mathrm{Irr}(Y\cap V)\times\mathrm{Irr}(\widetilde{Y}\cap V)\) such that \(T\in\mathrm{Irr}(Q\cap\widetilde{Q})\). Then \(e(X,Y\cdot\widetilde{Y},Z)\) is a positive integer which is independent of the choice of \(V\) and \(T\). We call \(e(X,Y\cdot\widetilde{Y},Z)\) the_ **multiplicity of \(Z\) on \(Y\cap\widetilde{Y}\)**_._
Proof.: The idea of the proof is similar to the proofs of Lemma 4.6 and Lemma 4.11. It is sufficient to show that for any affinoid domains \(V,W\subset X\) with \(W\subset V\), \(Z\cap W\neq\emptyset\), we have that
\[\sum_{Q,\widetilde{Q}}e(V,Q\cdot\widetilde{Q},T)=\sum_{Q^{\prime},\widetilde {Q}^{\prime}}e(W,Q^{\prime}\cdot\widetilde{Q}^{\prime},T^{\prime})\]
where \(T\in\mathrm{Irr}(Z\cap V)\), \((Q,\widetilde{Q})\) runs through \(\mathrm{Irr}(Y\cap V)\times\mathrm{Irr}(\widetilde{Y}\cap V)\) such that \(T\in\mathrm{Irr}(Q\cap\widetilde{Q})\), and \(T^{\prime},Q^{\prime},\widetilde{Q}^{\prime}\) are given similarly, with \(\overline{T^{\prime}}^{V_{\operatorname{Zar}}}=T\), \(\overline{T}^{X_{\operatorname{Zar}}}=Z\). Let \(V=\mathcal{M}(A),W=\mathcal{M}(B)\) and \(f:\operatorname{Spec}(B)\to\operatorname{Spec}(A)\) the morphism of schemes given by \(W\subset V\). In the following, we view every irreducible subset as lying in the corresponding affine scheme. We fix a pair \((Q,\widetilde{Q})\). Let \(f^{*}[Q]=\sum\limits_{i=1}^{m}[Q^{\prime}_{i}],f^{*}[\widetilde{Q}]=\sum\limits_{j=1}^{\widetilde{m}}[\widetilde{Q}^{\prime}_{j}]\), \([Q]\cdot[\widetilde{Q}]=\sum\limits_{p=1}^{k}e(V,Q\cdot\widetilde{Q},T_{p})[T_{p}]\) with \(T_{1}=T\), and \(f^{*}[T_{p}]=\sum\limits_{q=1}^{l_{p}}[T^{\prime}_{pq}]\) with \(T^{\prime}_{11}=T^{\prime}\). Notice that each coefficient of \([Q^{\prime}_{i}]\) in \(f^{*}[Q]\) is \(1\) by Lemma 4.6, and similarly for \(f^{*}[\widetilde{Q}]\) and \(f^{*}[T_{p}]\). We have
\[f^{*}[Q]\cdot f^{*}[\widetilde{Q}]=f^{*}([Q]\cdot[\widetilde{Q}]),\]
i.e.
\[\sum_{i,j}[Q^{\prime}_{i}]\cdot[\widetilde{Q}^{\prime}_{j}]=\sum_{i,j,p,q}e(W,Q^{\prime}_{i}\cdot\widetilde{Q}^{\prime}_{j},T^{\prime}_{pq})[T^{\prime}_{pq}]=\sum_{p,q}e(V,Q\cdot\widetilde{Q},T_{p})[T^{\prime}_{pq}],\]
where \(e(W,Q^{\prime}_{i}\cdot\widetilde{Q}^{\prime}_{j},T^{\prime}_{pq})=0\) if \(T^{\prime}_{pq}\not\in\operatorname{Irr}(Q^{\prime}_{i}\cap\widetilde{Q}^{\prime}_{j})\). Comparing the coefficient of \([T^{\prime}_{11}]\), we have \(e(V,Q\cdot\widetilde{Q},T)=\sum\limits_{i,j}e(W,Q^{\prime}_{i}\cdot\widetilde{Q}^{\prime}_{j},T^{\prime})\). When \((Q,\widetilde{Q})\) runs through \(\operatorname{Irr}(Y\cap V)\times\operatorname{Irr}(\widetilde{Y}\cap V)\) such that \(T\in\operatorname{Irr}(Q\cap\widetilde{Q})\), we have the equality we want.
**Definition 5.5**.: _Keep the notation of Lemma 5.4. We define the_ **intersection product of \(Y\) and \(\widetilde{Y}\)** _as_
\[Y\cdot\widetilde{Y}=\sum_{Z}e_{Z}[Z]\in Z^{s+t}(X),\]
_where \(Z\) runs through the set \(\operatorname{Irr}(Y\cap\widetilde{Y})\) with \(\operatorname{codim}(Z,X)=s+t\), and \(e_{Z}=e(X,Y\cdot\widetilde{Y},Z)\)._
_In general, let \(\alpha=\sum\limits_{i\in I}n_{i}[Y_{i}]\in Z^{s}(X)\) and \(\beta=\sum\limits_{j\in J}m_{j}[\widetilde{Y}_{j}]\in Z^{r}(X)\). Assume that \(\alpha\) and \(\beta\) intersect properly. We define_
\[\alpha\cdot\beta:=\sum_{i,j}n_{i}m_{j}Y_{i}\cdot\widetilde{Y}_{j}.\]
From the associativity of intersections in scheme theory, we have the associativity for our definition.
**Corollary 5.6**.: _Keep the notation of Lemma 5.4. Let \(Y,\widetilde{Y},\widetilde{\widetilde{Y}}\) be irreducible Zariski-closed subspaces of \(X\). Assume that \(Y,\widetilde{Y},\widetilde{\widetilde{Y}}\) intersect properly pairwise and that \(\operatorname{codim}(Y\cap\widetilde{Y}\cap\widetilde{\widetilde{Y}},X)=\operatorname{codim}(Y,X)+\operatorname{codim}(\widetilde{Y},X)+\operatorname{codim}(\widetilde{\widetilde{Y}},X)\). Then_
\[Y\cdot(\widetilde{Y}\cdot\widetilde{\widetilde{Y}})=(Y\cdot\widetilde{Y}) \cdot\widetilde{\widetilde{Y}}\]
_as cycles on \(X\)._
Proof.: This follows from Lemma 4.5 and the corresponding algebraic result, see [14, Lemma 43.20.1].
**Lemma 5.7**.: _Let \(f:X\to Y\) be a flat morphism of regular \(K\)-analytic spaces. Let \(\mathcal{F},\mathcal{G}\) be coherent sheaves on \(Y\) with \(\operatorname{codim}(\operatorname{Supp}(\mathcal{F}),Y)\geq r,\operatorname{codim}(\operatorname{Supp}(\mathcal{G}),Y)\geq s\), and \(\operatorname{codim}(\operatorname{Supp}(\mathcal{F})\cap\operatorname{Supp}(\mathcal{G}),Y)\geq r+s\). In this case, the cycles \([f^{*}\mathcal{F}]^{r}\) and \([f^{*}\mathcal{G}]^{s}\) intersect properly and_
\[f^{*}([\mathcal{F}]^{r}\cdot[\mathcal{G}]^{s})=[f^{*}\mathcal{F}]^{r}\cdot[f^{ *}\mathcal{G}]^{s}.\]
Proof.: This is from Lemma 4.5 and [14, Lemma 43.21.1] for regular, catenary Noetherian schemes.
The lemma implies the following corollary directly.
**Corollary 5.8**.: _Let \(f:X\to Y\) be a flat morphism of regular \(K\)-analytic spaces. Let \(\alpha\in Z^{r}(Y),\beta\in Z^{s}(Y)\). Assume that \(\alpha\) and \(\beta\) intersect properly. Then \(f^{*}\alpha\) and \(f^{*}\beta\) intersect properly and \(f^{*}(\alpha\cdot\beta)=f^{*}\alpha\cdot f^{*}\beta\)._
### Intersection multiplicities using Tor formula
We could define the multiplicities following the idea in [14, Section 43] by using \(\operatorname{Tor}_{i}^{\mathcal{O}_{X}}(\mathcal{F},\mathcal{G})\).
Firstly, it is not hard to see that \(\operatorname{Tor}_{i}^{\mathcal{O}_{X}}(\mathcal{F},\mathcal{G})\) is a coherent sheaf on \(X\). Indeed, if \(X=\mathcal{M}(A)\) is affinoid, then \(\mathcal{C}oh(X)\simeq\mathcal{C}oh(\operatorname{Spec}(A))\). Since \(A\) is Noetherian, we see that \(\operatorname{Tor}_{i}^{\mathcal{O}_{X}}(\mathcal{F},\mathcal{G})\) is a coherent sheaf on \(X\). For the general case, the local constructions glue, since the formation of \(\operatorname{Tor}\) is compatible with restriction to affinoid domains.
We show the following results.
**Proposition 5.9**.: _Let \(X\) be a regular, strictly \(K\)-analytic space._
1. _Let_ \(Y,\widetilde{Y}\) _be irreducible Zariski-closed subspaces of_ \(X\) _with_ \(\operatorname{codim}(Y,X)=s,\operatorname{codim}(\widetilde{Y},X)=t\)_. Assume that_ \(Y,\widetilde{Y}\) _intersect properly. Then_ \[Y\cdot\widetilde{Y}=\sum_{i}(-1)^{i}[\operatorname{Tor}_{i}^{\mathcal{O}_{X}}( \mathcal{O}_{Y},\mathcal{O}_{\widetilde{Y}})]^{s+t}.\]
2. _Let_ \(\mathcal{F},\mathcal{G}\) _be coherent sheaves on_ \(X\) _with_ \(\operatorname{codim}(\operatorname{Supp}(\mathcal{F}),X)\geq s\)_,_ \(\operatorname{codim}(\operatorname{Supp}(\mathcal{G}),X)\geq t\)_. Assume that_ \([\mathcal{F}]^{s},[\mathcal{G}]^{t}\) _intersect properly. Then_ \[[\mathcal{F}]^{s}\cdot[\mathcal{G}]^{t}=\sum_{i}(-1)^{i}[\operatorname{Tor}_{i}^{\mathcal{O}_{X}}(\mathcal{F},\mathcal{G})]^{s+t}.\]
Proof.: Obviously, (2) implies (1). By Lemma 4.5, Lemma 5.3 and Lemma 5.7, we can assume that \(X\) is strictly affinoid. Then this is [14, Lemma 43.19.4] for regular, catenary Noetherian schemes.
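**Example**.: _(An elementary illustration of the Tor formula.) Let \(X=\mathcal{M}(K\{x,y\})\), \(Y=Z(y)\) and \(\widetilde{Y}=Z(y-x^{2})\). These two curves meet only at the origin \(0\) and intersect properly. Since \(y,\,y-x^{2}\) form a regular sequence in the regular local ring \(\mathcal{O}_{X,0}\), all higher \(\operatorname{Tor}\) vanish, and Proposition 5.9 gives_
\[Y\cdot\widetilde{Y}=\operatorname{length}\big(K\{x,y\}/(y,\,y-x^{2})\big)\,[0]=\operatorname{length}\big(K[x]/(x^{2})\big)\,[0]=2\,[0],\]
_reflecting the tangency of the two curves at \(0\)._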
## 6 Projection formula
For a \(K\)-analytic space \(X\), we denote by \(D(\mathcal{Coh}(X))\) the derived category of \(\mathcal{Coh}(X)\). We have the derived tensor product \(\otimes^{\mathbf{L}}\) in \(D(\mathcal{Coh}(X))\), see [14, Definition 20.26.14]. If \(f:Y\to X\) is a morphism of \(K\)-analytic spaces, then we have a left derived functor
\[Lf^{*}:D(\mathcal{Coh}(X))\to D(\mathcal{Coh}(Y))\]
see [14, Section 21.18]. If \(f\) is proper, we have a right derived functor
\[Rf_{*}:D(\mathcal{Coh}(Y))\to D(\mathcal{Coh}(X)),\]
see [14, Section 21.19]. By adjointness of \((Lf^{*},Rf_{*})\), we have a morphism
\[Rf_{*}(\mathcal{E})\otimes^{\mathbf{L}}_{\mathcal{O}_{X}}\mathcal{F}\to Rf_{* }(\mathcal{E}\otimes^{\mathbf{L}}_{\mathcal{O}_{Y}}Lf^{*}\mathcal{F}),\]
see [14, Section 21.50]. As in [14, Lemma 36.22.1], we have a similar result for \(K\)-analytic spaces.
**Lemma 6.1**.: _Let \(f:Y\to X\) be a proper morphism of strictly \(K\)-analytic spaces. Then for any \(\mathcal{F}\) in \(D(\mathcal{Coh}(X))\) and \(\mathcal{E}\) in \(D(\mathcal{Coh}(Y))\), the canonical morphism_
\[Rf_{*}(\mathcal{E})\otimes^{\mathbf{L}}_{\mathcal{O}_{X}}\mathcal{F}\to Rf_{* }(\mathcal{E}\otimes^{\mathbf{L}}_{\mathcal{O}_{Y}}Lf^{*}\mathcal{F})\]
_is an isomorphism._
Proof.: The proof is similar to the proof of [14, Lemma 36.22.1]. We can assume that \(X=\mathcal{M}(A)\) is affinoid. In this case, \(D(\mathcal{Coh}(X))\) is the derived category of finitely generated \(A\)-modules, which is a subcategory of \(D(A)\), the derived category of \(A\)-modules. We fix a coherent sheaf \(\mathcal{E}\) on \(Y\). For an object \(M\) in \(D(A)\), we say that \(T(M)\) holds if the morphism
\[Rf_{*}(\mathcal{E})\otimes^{\mathbf{L}}_{\mathcal{O}_{X}}\widetilde{M}\to Rf_{ *}(\mathcal{E}\otimes^{\mathbf{L}}_{\mathcal{O}_{Y}}Lf^{*}\widetilde{M})\]
is an isomorphism, where \(\widetilde{M}\) is the corresponding sheaf of \(M\) on \(X\).
If \(M=\bigoplus\limits_{i}M_{i}\) and each \(T(M_{i})\) holds, then so does \(T(M)\). Let \(N\to L\to M\to N[1]\) be a distinguished triangle in \(D(A)\). If \(T\) holds for two of \(N,L,M,\) then it holds for the third. Also \(T(A[n])\) holds for any shift of \(A\) in \(D(A)\). Hence \(T(M)\) holds for any object \(M\) in \(D(A)\), see [14, Remark 15.59.11].
**Theorem 6.2** (Projection formula).: _Let \(f:Y\to X\) be a flat, proper morphism of regular, separated, strictly \(K\)-analytic spaces. Let \(\alpha\in Z^{*}(Y)\) and \(\beta\in Z^{*}(X)\). Assume that \(\alpha\) and \(f^{*}\beta\) intersect properly. Then \(f_{*}(\alpha)\) and \(\beta\) intersect properly and_
\[f_{*}(\alpha)\cdot\beta=f_{*}(\alpha\cdot f^{*}\beta).\]
Proof.: Our proof is an analytic version of the proof of [14, Lemma 43.22.1].
By Lemma 5.3, Corollary 5.8 and Lemma 4.5, we can assume that \(X=\mathcal{M}(A)\) is affinoid and integral. Moreover, we assume \(\alpha=[Z],\beta=[W]\) for some integral closed subspaces \(Z\subset Y\), \(W\subset X\) of dimensions \(r\) and \(s\).
If \(\dim_{K}f(Z)\neq\dim_{K}Z\), then \(f_{*}[Z]=0\), so \(f_{*}[Z]\) and \([W]\) intersect properly. It is sufficient to show that \(f_{*}([Z]\cdot f^{*}[W])=0\). We consider the morphism \(Z\to f(Z)\), where \(f(Z)\) is endowed with the reduced subspace structure. By Lemma 4.24, every fiber of \(Z\to f(Z)\) has dimension \(\geq 1\). This implies that every fiber of the morphism \(Z\cap f^{-1}(W)\to f(Z)\cap W\) has dimension \(\geq 1\), and \(\dim_{K}(Z\cap f^{-1}(W))>\dim_{K}(f(Z)\cap W)\). Since every irreducible component \(T\) of \(Z\cap f^{-1}(W)\) has dimension \(\dim_{K}(Z\cap f^{-1}(W))\), we conclude that \(\dim_{K}T>\dim_{K}f(T)\). This implies what we want.
If \(\dim_{K}f(Z)=\dim_{K}Z=r\), then \(Z\to f(Z)\) is finite. Let \(T\) be an irreducible component of \(f(Z)\cap W\), and let \(T_{i}\subset Z\cap f^{-1}(W)\), \(i=1,\cdots,t\), be the irreducible components of \(Z\cap f^{-1}(W)\) dominating \(T\). Since \(Z\cap f^{-1}(W)\to f(Z)\cap W\) is finite, \(f\) is flat, and \(Z,f^{-1}(W)\) intersect properly, we have
\[\dim_{K}T=\dim_{K}T_{i}=\dim_{K}Y-(\dim_{K}Y-r+\dim_{K}X-s)=r+s-\dim_{K}X.\]
Then \(f(Z)\) and \(W\) intersect properly. To show the equality, we follow the same idea as the proof of [14, Lemma 42.23.1]. Since \(f\) is flat, by Lemma 6.1, we have
\[Rf_{*}(\mathcal{O}_{Z})\otimes^{\mathbb{L}}_{\mathcal{O}_{X}}\mathcal{O}_{W} \simeq Rf_{*}(\mathcal{O}_{Z}\otimes^{\mathbb{L}}_{\mathcal{O}_{Y}}f^{*} \mathcal{O}_{W}).\]
So for any generic point \(\xi\in\operatorname{Spec}(A)\) corresponding to an irreducible component of \(f(Z)\cap W\), we have
\[(f_{*}\mathrm{Tor}^{\mathcal{O}_{Y}}_{i}(\mathcal{O}_{Z},f^{*}\mathcal{O}_{W} ))_{\xi}=(\mathrm{Tor}^{\mathcal{O}_{X}}_{i}(f_{*}\mathcal{O}_{Z},\mathcal{O} _{W}))_{\xi}. \tag{1}\]
On the other hand, by Proposition 5.9 and Proposition 4.29, we have
\[f_{*}([Z]\cdot f^{*}[W]) =\sum_{i}(-1)^{i}f_{*}[\mathrm{Tor}^{\mathcal{O}_{Y}}_{i}(\mathcal{O}_{Z},f^{*}\mathcal{O}_{W})]_{r+s-\dim_{K}X}\] \[=\sum_{i}(-1)^{i}[f_{*}\mathrm{Tor}^{\mathcal{O}_{Y}}_{i}(\mathcal{O}_{Z},f^{*}\mathcal{O}_{W})]_{r+s-\dim_{K}X},\]
\[f_{*}[Z]\cdot[W] =[f_{*}\mathcal{O}_{Z}]\cdot[W]\] \[=\sum_{i}(-1)^{i}[\mathrm{Tor}^{\mathcal{O}_{X}}_{i}(f_{*}\mathcal{O}_{Z},\mathcal{O}_{W})]_{r+s-\dim_{K}X}.\]
Then \(f_{*}([Z]\cdot f^{*}[W])=f_{*}[Z]\cdot[W]\) by Eq. (1).
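**Example**.: _(A direct consequence.) If, in the setting of Theorem 6.2, \(f\) is moreover finite and surjective of degree \(d\) with \(X,Y\) integral, then taking \(\alpha=[Y]\) gives_
\[f_{*}f^{*}\beta=f_{*}\big([Y]\cdot f^{*}\beta\big)=f_{*}[Y]\cdot\beta=d\,\beta\]
_for every \(\beta\in Z^{*}(X)\); here \([Y]\cdot f^{*}\beta=f^{*}\beta\) since \(\mathcal{O}_{Y}\) admits no higher \(\operatorname{Tor}\)._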
## 7 GAGA
It is natural to expect that our definitions of cycles, flat pull-backs, proper push-forwards and intersection products for algebraic varieties coincide with the ones in the intersection theory of algebraic varieties.
**Proposition 7.1**.: _Let \(X\) be an algebraic variety over \(K\). Then we have an isomorphism \(Z^{*}(X)\simeq Z^{*}(X^{\mathrm{an}}),\ \ [Y]\mapsto[Y^{\mathrm{an}}]\). For a cycle \(\alpha\in Z^{*}(X)\), we will denote its image in \(Z^{*}(X^{\mathrm{an}})\) by \(\alpha^{\mathrm{an}}\). Moreover, the following properties hold._
1. _For any affinoid domain_ \(V\) _contained in some affine open subset of_ \(X^{\mathrm{an}}\)_, the following diagram commutes:_ _where_ \(\mathcal{V}=\operatorname{Spec}(\mathcal{O}_{X^{\mathrm{an}}}(V))\)_._
2. _Let_ \(\alpha,\beta\in Z^{*}(X)\)_. Then_ \(\alpha=\beta\in Z^{*}(X)\) _(or_ \(\alpha^{\rm an}=\beta^{\rm an}\in Z^{*}(X^{\rm an})\)_) if and only if_ \(i^{*}\alpha=i^{*}\beta\in Z^{*}(\mathcal{V})\) _for any affinoid domain_ \(V\) _contained in some affine open subset of_ \(X^{\rm an}\)_, where_ \(\mathcal{V}={\rm Spec}(\mathcal{O}_{X^{\rm an}}(V))\) _and_ \(i:\mathcal{V}\to X\) _is the canonical morphism._
Proof.: The map is obviously injective. It suffices to show that every integral closed subspace \(Z\) of \(X^{\rm an}\) is algebraic. If \(X\) is proper over \(K\), by the GAGA result, see [2, Proposition 3.4.11], we know that \(Z\) is algebraic. In the general case, by Nagata's compactification theorem, there is a proper variety \(\overline{X}\) over \(K\) such that \(X\subset\overline{X}\) is an open immersion. We take the Zariski-closure \(\overline{Z}\) of \(Z\) in \(\overline{X}^{\rm an}\), which is algebraic, i.e. there is an integral subvariety \(T\subset\overline{X}\) such that \(T^{\rm an}=\overline{Z}\). We claim that \((T\cap X)^{\rm an}=Z\). By the construction of analytification, we have \((T\cap X)^{\rm an}=T^{\rm an}\cap X^{\rm an}\). We also have \(\overline{Z}\cap X^{\rm an}=Z\). Then \(T^{\rm an}=\overline{Z}\) implies that \((T\cap X)^{\rm an}=Z\).
(1) The diagram follows directly from the definition of \([Y^{\rm an}]\) and Remark 4.8 (1).
(2) This is from the isomorphism \(Z^{*}(X)\simeq Z^{*}(X^{\rm an})\), the commutative diagram in (1) and Lemma 4.5.
**Remark 7.2**.:
1. _We have a surjection_ \({\rm CH}^{*}(X)\twoheadrightarrow A^{*}(X^{\rm an})\)_._
**Proposition 7.3**.: _Let \(\varphi:Y\to X\) be a morphism of algebraic varieties over \(K\). Then the following hold._
1. _Let_ \(\mathcal{F}\) _be a coherent sheaf on_ \(X\)_. Then_ \([\mathcal{F}]^{\rm an}=[\mathcal{F}^{\rm an}]\)_._
2. _We have a canonical homomorphism_ \({\rm Div}(X)\to{\rm Div}(X^{\rm an}),\ \ D\mapsto D^{\rm an}\) _such that for any_ \(D\in{\rm Div}(X)\)_, we have_ \([D]^{\rm an}=[D^{\rm an}]\)_._
3. _If_ \(\varphi\) _is flat and_ \(\alpha\in Z^{*}(X)\)_, then_ \((\varphi^{*}(\alpha))^{\rm an}=(\varphi^{\rm an})^{*}(\alpha^{\rm an})\)_._
4. _If_ \(\varphi\) _is proper and_ \(\beta\in Z^{*}(Y)\)_, then_ \((\varphi_{*}(\beta))^{\rm an}=(\varphi^{\rm an})_{*}(\beta^{\rm an})\)_._
5. _Let_ \(\alpha,\beta\in Z^{*}(X)\)_. Then_ \(\alpha,\beta\) _intersect properly if and only if_ \(\alpha^{\rm an},\beta^{\rm an}\in Z^{*}(X^{\rm an})\) _intersect properly, and in this case, we have_ \((\alpha\cdot\beta)^{\rm an}=\alpha^{\rm an}\cdot\beta^{\rm an}\)_._
Proof.: (1) Let \(V=\mathcal{M}(B)\subset X^{\rm an}\) be an affinoid domain contained in some affine open subsets of \(X^{\rm an}\). Then we have a canonical morphism \(\varphi:{\rm Spec}(A)\to X\) which is flat by [7, THEOREM 3.3]. It is sufficient to show that \([\mathcal{F}]^{\rm an}|_{V}=[\mathcal{F}^{\rm an}]|_{V}\). By the commutative diagram in (1), we have \([\mathcal{F}]^{\rm an}|_{V}=[\varphi^{*}\mathcal{F}]\); by Remark 4.8 (1), we have \([\mathcal{F}^{\rm an}]|_{V}=[\mathcal{F}^{\rm an}|_{V}]=[\varphi^{*}\mathcal{F}]\). So our claim holds.
(2) The homomorphism is given by the fact that \(\mathcal{V}\to X\) is flat for any affinoid domain \(V=\mathcal{M}(A)\subset X^{\rm an}\) contained in some affine open subset of \(X^{\rm an}\), where \(\mathcal{V}={\rm Spec}(A)\). Then the compatibility over such affinoid domains induces a divisor on \(X^{\rm an}\). The equality can be proved as in (1).
(3) We take any affinoid domains \(V=\mathcal{M}(A)\subset X^{\rm an}\) and \(W=\mathcal{M}(B)\subset Y^{\rm an}\) such that \(\varphi^{\rm an}(W)\subset V\) and \(V\), \(W\) are contained in some affine open subsets of \(X^{\rm an}\), \(Y^{\rm an}\) respectively. Let \(\mathcal{V}={\rm Spec}(A)\), \(\mathcal{W}={\rm Spec}(B)\). We have the following commutative diagram
Then
\[(\varphi^{*}(\alpha))^{\rm an}|_{W}=j^{*}\varphi^{*}(\alpha)=\widetilde{ \varphi}^{*}i^{*}(\alpha)=(\varphi^{\rm an}|_{W})^{*}(\alpha^{\rm an}|_{V})=( \varphi^{\rm an})^{*}(\alpha^{\rm an})|_{W}.\]
Here we identify along the canonical isomorphisms \(Z^{*}(V)\simeq Z^{*}(\mathcal{V})\) and \(Z^{*}(W)\simeq Z^{*}(\mathcal{W})\). By Lemma 4.5, (3) follows.
(4) Since \(\varphi\) is proper, we have that \(\varphi^{\rm an}\) is proper. We may assume that \(\beta\) is prime; moreover, assume that \(X,Y\) are integral, \(\beta=[Y]\), and \(\varphi\) is finite and surjective. Hence we can assume that \(X={\rm Spec}(A)\) and \(Y={\rm Spec}(B)\) are affine. Let \(V=\mathcal{M}(A^{\prime})\subset X^{\rm an}\) be an affinoid domain, and
\(U=(\varphi^{\operatorname{an}})^{-1}(V)=\mathcal{M}(A^{\prime}\otimes_{A}B)\). Notice that \(\operatorname{Frac}(B)=B\otimes_{A}\operatorname{Frac}(A)\). We consider the following diagram
Notice that \(\operatorname{Frac}(A)\to\operatorname{Frac}(B)\) is finite, so \(\operatorname{Frac}(A)\otimes_{A}A^{\prime}\to\operatorname{Frac}(B)\otimes_{A }A^{\prime}\) is finite and flat. We have that
\[[\operatorname{Frac}(B):\operatorname{Frac}(A)]=\sum_{\mathfrak{q},\varphi(\mathfrak{q})=\mathfrak{p}}[(\operatorname{Frac}(B)\otimes_{A}A^{\prime})_{\mathfrak{q}}:(\operatorname{Frac}(A)\otimes_{A}A^{\prime})_{\mathfrak{p}}]\]
where \(\mathfrak{p}\) is a minimal prime of \(\operatorname{Frac}(A)\otimes_{A}A^{\prime}\), \(\mathfrak{q}\) runs through the minimal primes of \(\operatorname{Frac}(B)\otimes_{A}A^{\prime}\) lying over \(\mathfrak{p}\), and we view \(\varphi:\operatorname{Spec}(\operatorname{Frac}(B)\otimes_{A}A^{\prime})\to\operatorname{Spec}(\operatorname{Frac}(A)\otimes_{A}A^{\prime})\). The right-hand side is exactly \(\deg(Y^{\operatorname{an}}/X^{\operatorname{an}})\) defined in Lemma 4.23, so (4) holds.
(5) We can assume that \(\alpha,\beta\) are prime. Since flat pull-backs preserve proper intersection, by Lemma 5.3, we know that \(\alpha,\beta\) intersect properly if and only if \(\alpha^{\operatorname{an}},\beta^{\operatorname{an}}\in Z^{*}(X^{\operatorname{an}})\) intersect properly. The proof of the equality is similar to the proof of (3).
## 8 The category of finite correspondences
In this section, we will define the additive category \(\operatorname{Cor}_{K}\) of finite correspondences of \(K\)-analytic spaces. We will follow the notation in [1] and the idea in [13, Lecture 1].
For the \(K\)-analytic spaces in this section, we always mean separated, quasi-paracompact, strictly \(K\)-analytic spaces; the category of such spaces is exactly the category of separated, quasi-paracompact rigid \(K\)-analytic spaces by [3, Theorem 1.6.1].
A \(K\)-analytic space is said to be quasi-smooth if it is geometrically regular at each point, see [8, Corollary 5.3.5]. In particular, a quasi-smooth space is regular.
**Definition 8.1**.: _Let \(X\) be a quasi-smooth, connected \(K\)-analytic space, and \(Y\) any \(K\)-analytic space. An_ **elementary correspondence** _from \(X\) to \(Y\) is an irreducible closed subset \(W\) of \(X\times Y\) whose associated integral subspace is finite and surjective over \(X\)._
_By an elementary correspondence from a quasi-smooth non-connected \(K\)-analytic space \(X\) to \(Y\), we mean an elementary correspondence from a connected component of \(X\) to \(Y\)._
_The group \(\operatorname{Cor}_{K}(X,Y)\) is the free abelian group generated by the elementary correspondences from \(X\) to \(Y\). The elements of \(\operatorname{Cor}_{K}(X,Y)\) are called_ **finite correspondences**_._
**Remark 8.2**.:
1. _If_ \(X\) _is a quasi-smooth_ \(K\)_-analytic space, one important example of an elementary correspondence from_ \(X\) _to_ \(Y\) _is the graph_ \(\Gamma_{f}\) _of a morphism_ \(f:X\to Y\)_. If_ \(X\) _is not connected, then_ \(\Gamma_{f}\) _is a finite correspondence from_ \(X\) _to_ \(Y\)_. Notice that_ \(\Gamma_{f}\) _is closed in_ \(X\times Y\) _since_ \(Y\) _is separated and_ \(\Gamma_{f}\) _is a section of_ \(X\times Y\to X\)_._
2. _If_ \(X\) _is not connected and_ \(X=\coprod X_{i}\) _is the decomposition into its connected components, we have_ \(\operatorname{Cor}_{K}(X,Y)=\bigoplus\limits_{i}\operatorname{Cor}_{K}(X_{i},Y)\)_._
3. _Every closed subspace_ \(Z\) _of_ \(X\times Y\) _which is finite and surjective over_ \(X\) _determines a finite correspondence_ \([Z]\) _from_ \(X\) _to_ \(Y\)_._
Proof.: We only consider the case where \(X\) is connected. We can write \([Z]=\sum\limits_{i}n_{i}[Z_{i}]\), where the \(Z_{i}\) are the irreducible components of \(Z\) such that \(Z_{i}\to X\) is surjective, and \(n_{i}\) is the geometric multiplicity of \(Z_{i}\) in \(Z\).
To define the composition of morphisms in the category \(\operatorname{Cor}_{K}\), we need the following lemmas.
**Lemma 8.3**.: _Let \(f:T\to T^{\prime}\) be a morphism of \(K\)-analytic spaces over another \(K\)-analytic space \(S\). Let \(W\) be an irreducible Zariski-closed subset of \(T\) which is finite and surjective over \(S\). Then \(f(W)\) is irreducible, Zariski-closed in \(T^{\prime}\) and finite, surjective over \(S\)._
Proof.: Since \(T^{\prime}\to S\) is separated and \(W\to S\) is finite, hence proper by [2, Corollary 3.3.8], we know that \(W\to T^{\prime}\) is proper, see [5, Proposition 9.6.4]. So \(f(W)\) is irreducible and Zariski-closed in \(T^{\prime}\).
We replace \(T,T^{\prime}\) by \(W,f(W)\) respectively, so we assume that \(T\) is finite and surjective over \(S\), and that \(f\) is surjective onto \(T^{\prime}\). By [2, Corollary 3.3.8], it remains to show that \(T^{\prime}\) is proper over \(S\). Obviously \(T^{\prime}\to S\) is quasi-compact since \(T\to T^{\prime}\) is surjective and \(T\to S\) is quasi-compact. By [2, Proposition 2.5.8 (iii)], we have
\[T=\operatorname{Int}(T/S)=\operatorname{Int}(T/T^{\prime})\cap f^{-1}( \operatorname{Int}(T^{\prime}/S))=f^{-1}(\operatorname{Int}(T^{\prime}/S)),\]
this implies that \(\operatorname{Int}(T^{\prime}/S)=T^{\prime}\), i.e. \(\partial(T^{\prime}/S)=\emptyset\). So \(T^{\prime}\) is proper over \(S\).
**Lemma 8.4**.: _Let \(Z\) be an integral \(K\)-analytic space, finite and surjective over a normal \(K\)-analytic space \(S\). Then for every morphism \(S^{\prime}\to S\) with \(S^{\prime}\) connected (resp. irreducible), every connected (resp. irreducible) component of \(Z\times_{S}S^{\prime}\) is finite and surjective over \(S^{\prime}\)._
Proof.: This is in fact an algebraic result from [15, Proposition 2.17]. We can assume that \(S=\mathcal{M}(A),Z=\mathcal{M}(B)\) and \(S^{\prime}=\mathcal{M}(A^{\prime})\) are affinoid. Since \(B\) is finite over \(A\), we have \(B^{\prime}:=B\widehat{\otimes}_{A}A^{\prime}=B\otimes_{A}A^{\prime}\).
By [15, Proposition 2.17 (3)], we know that \(\operatorname{Spec}(B)\to\operatorname{Spec}(A)\) is universally equidimensional, hence universally open. Then \(\operatorname{Spec}(B^{\prime})\to\operatorname{Spec}(A^{\prime})\) is open. For every connected component \(T=\mathcal{M}(C)\) of \(\mathcal{M}(B^{\prime})\), the morphism \(\operatorname{Spec}(C)\to\operatorname{Spec}(A^{\prime})\) is open. So \(\mathcal{M}(C)\to\mathcal{M}(A^{\prime})\) has image that is closed and Zariski-open, which is exactly \(\mathcal{M}(A^{\prime})\) since it is connected.
For the irreducible case, since \(\operatorname{Spec}(B^{\prime})\to\operatorname{Spec}(A^{\prime})\) is equidimensional, the image of each irreducible component \(\operatorname{Spec}(C)\) of \(\operatorname{Spec}(B^{\prime})\) is \(\operatorname{Spec}(A^{\prime})\). Since the image of \(\mathcal{M}(C)\) is a Zariski-closed subspace of \(\mathcal{M}(A^{\prime})\), it must be \(\mathcal{M}(A^{\prime})\).
**Lemma 8.5**.: _Let \(X,Y,Z\) be \(K\)-analytic spaces. Let \(V\subset X\times Y\) and \(W\subset Y\times Z\) be integral closed subspaces which are finite and surjective over \(X\) and \(Y\) respectively. Assume that \(Y\) is normal. Then \(V\times Z\) and \(X\times W\) intersect properly in \(X\times Y\times Z\), and each component of the push-forward of the cycle \([V\times Z]\cdot[X\times W]\) on \(X\times Z\) is finite and surjective over \(X\)._
Proof.: Notice that \(V\times_{Y}W\hookrightarrow X\times Y\times_{Y}Y\times Z\simeq X\times Y\times Z\) is the intersection of \(V\times Z\) and \(X\times W\) in \(X\times Y\times Z\), see the explanation in the remark. Then we have the following diagram
By Lemma 8.4, each component of \(V\times_{Y}W\) is finite and surjective over \(V\), so it is also finite and surjective over \(X\), and it is of dimension \(\dim X\). This implies that \(V\times Z\) and \(X\times W\) intersect properly in \(X\times Y\times Z\). By Lemma 8.3, the image of each component of \(V\times_{Y}W\) in \(X\times Z\) is finite and surjective over \(X\).
**Definition 8.6**.: _Let \(\operatorname{Cor}_{K}\) be the category defined as follows:_
* _Objects: the quasi-smooth_ \(K\)_-analytic spaces;_
* _Morphisms: the finite correspondences_ \(\operatorname{Cor}_{K}(X,Y)\)_._
_Given \(V\in\operatorname{Cor}_{K}(X,Y),W\in\operatorname{Cor}_{K}(Y,Z)\), we define \(W\circ V\) as the push-forward of \([V\times Z]\cdot[X\times W]\) along the projection \(X\times Y\times Z\to X\times Z\), which is an element in \(\operatorname{Cor}_{K}(X,Z)\)._
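**Example**.: _(A consistency check, following [13, Lecture 1].) For morphisms \(f:X\to Y\) and \(g:Y\to Z\) of quasi-smooth \(K\)-analytic spaces, the composition of graphs recovers the usual composition:_
\[\Gamma_{g}\circ\Gamma_{f}=\Gamma_{g\circ f}\in\operatorname{Cor}_{K}(X,Z),\]
_so that \(X\mapsto X\), \(f\mapsto\Gamma_{f}\) defines a functor from \(\mathrm{QSm}_{K}\) to \(\operatorname{Cor}_{K}\)._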
**Remark 8.7**.:
* _The composition is associative and bilinear, and the diagonal_ \(\Delta_{X}\) _is the identity for a quasi-smooth_ \(K\)_-analytic space_ \(X\)_._
Proof.: This is from Proposition 4.28 and Theorem 6.2, see the proof of [9, Proposition 16.1.1] for the details.
2. _It is not hard to show that there is a faithful functor from the category_ \(\mathrm{QSm}_{K}\) _of quasi-smooth_ \(K\)_-analytic spaces to_ \(\operatorname{Cor}_{K}\)_, sending a morphism_ \(f\) _to its graph_ \(\Gamma_{f}\)_._
3. _By_ _[_1_, Proposition 2.2.35]_ _and a little extra work, we can see that our definition of_ \(\operatorname{Cor}_{K}\) _coincides with_ _[_1_, Definition 2.2.29]__._
Following the idea in [4], we can define higher Chow groups \(\mathrm{CH}^{n}(X,s)\) for quasi-smooth \(K\)-analytic spaces. By the GAGA principle, such a definition coincides with the one for algebraic varieties. On the other hand, higher Chow groups are also defined in [1, Introduction generale] using motives of analytic spaces. It is natural to expect that there is a close connection between these two, and that higher Chow groups have properties similar to those in the case of algebraic varieties.
## Acknowledgements
The author would like to thank his host professor, Yigeng Zhao, for his encouragement, support and valuable suggestions. He would also like to thank Antoine Ducros, Walter Gubler and Michael Temkin for their patience in answering questions during his study of Berkovich spaces. This research is supported by a postdoctoral research grant.
|
2309.17360 | Qubit Gate Operations in Elliptically Trapped Polariton Condensates | We consider bosonic condensates of exciton-polaritons optically confined in
elliptical traps. A superposition of two non-degenerated \textit{p}-type states
of the condensate oriented along the two main axes of the trap is represented
by a point on a Bloch sphere, being considered as an optically tunable qubit.
We describe a set of universal single-qubit gates resulting in a controllable
shift of the Bloch vector by means of an auxiliary laser beam. Moreover, we
consider interaction mechanisms between two neighboring traps that enable
designing two-qubit operations such as CPHASE, \textit{i}SWAP, and CNOT gates.
Both the single- and two-qubit gates are analyzed in the presence of error
sources in the context of polariton traps, such as pure dephasing and
spontaneous relaxation mechanisms, leading to a fidelity reduction of the final
qubit states and quantum concurrence, as well as the increase of Von Neumann
entropy. We also discuss the applicability of our qubit proposal in the context
of DiVincenzo's criteria for the realization of local quantum computing
processes. Altogether, the developed set of quantum operations would pave the
way to the realization of a variety of quantum algorithms in a planar
microcavity with a set of optically induced elliptical traps. | Luciano S. Ricco, Ivan A. Shelykh, Alexey Kavokin | 2023-09-29T16:04:47Z | http://arxiv.org/abs/2309.17360v1 | # Qubit Gate Operations in Elliptically Trapped Polariton Condensates
###### Abstract
We consider bosonic condensates of exciton-polaritons optically confined in elliptical traps. A superposition of two non-degenerated \(p\)-type states of the condensate oriented along the two main axes of the trap is represented by a point on a Bloch sphere, being considered as an optically tunable qubit. We describe a set of universal single-qubit gates resulting in a controllable shift of the Bloch vector by means of an auxiliary laser beam. Moreover, we consider interaction mechanisms between two neighboring traps that enable designing two-qubit operations such as CPHASE, \(i\)SWAP, and CNOT gates. Both the single- and two-qubit gates are analyzed in the presence of error sources in the context of polariton traps, such as pure dephasing and spontaneous relaxation mechanisms, leading to a fidelity reduction of the final qubit states and quantum concurrence, as well as the increase of Von Neumann entropy. We also discuss the applicability of our qubit proposal in the context of DiVincenzo's criteria for the realization of local quantum computing processes. Altogether, the developed set of quantum operations would pave the way to the realization of a variety of quantum algorithms in a planar microcavity with a set of optically induced elliptical traps.
## I Introduction
The placement of solid-state exciton-polariton systems, hereafter simply polaritons, in the quantum computing race remains questionable to date [1; 2] despite the growing quality of patterned optical cavities and available active materials therein [3; 4], steadily advancing techniques in potential landscape engineering [5], and optical control to minimize decoherence processes [6]. In inorganic III-V semiconductor microcavities, significant cross-phase modulation [7], squeezing [8], blockade effects [9; 10] and interactions [11] at the single-polariton level have already been demonstrated due to the large interaction strengths between polaritons, owing to the generous size of their underlying Wannier-Mott exciton component. Nowadays, quantum computing proposals using polaritons can be divided into two categories: single-particle [12; 13; 14; 15] and macroscopic-field [16; 17; 18; 19] strategies. Here, we are concerned with the latter, based on nonequilibrium polariton Bose-Einstein condensates.
Polaritons are hybrid particles arising in the strong coupling regime between matter (excitons) and light (confined photons) [20]. They possess extremely light effective mass, large inter-particle interaction strengths, and can be reversibly adjusted through all-optical techniques. Importantly, information about the polariton state is encoded in the emitted cavity light which can be measured through standard optical techniques. Being bosonic quasiparticles, polaritons can be stimulated into a macroscopically coherent state which lies at the interface between nonequilibrium Bose-Einstein condensates and polariton lasers [21]. In the mean-field picture, a condensate of polaritons can be conveniently described by a single macroscopic wavefunction \(\Psi(\mathbf{r},t)\)[20].
Because polariton condensates are driven-dissipative objects, with particles being generated from an external laser excitation and losses naturally occurring through the cavity mirrors, they can possess equilibrium points that do not coincide with the many-body system ground state in thermodynamic equilibrium. In particular, they can populate and stabilize into the excited state manifolds of their transverse trapping configuration which includes micropillars [22; 23; 24; 25; 26], patterned mesas [27; 28], cavities with metallic deposition [29], and optically generated potentials [30; 31; 32; 33; 34; 35; 36]. In particular, these optically induced potentials can be flexibly designed with the use of spatial light modulators (SLMs). Their shape may be varied on demand from one experiment to another, permitting the realization of polariton _XY_[37; 38] or Ising simulators [39; 40].
The concept of using polaritons as macroscopic quantum states for continuous-variable quantum computation [41] was recently visited by Xue _et al._[19] using the superposition of co-localized and non-degenerate ring-shaped polariton condensates of opposite circulation. These polariton qubits are similar to their superconducting counterpart, the flux qubits. However, instead of circulating basis states, polariton condensates might be more conveniently described in terms of \(p\)-states, corresponding to their spatial dipolar distribution along the two main axes of the transverse trap, i.e., \(p_{x}\) and \(p_{y}\). In the context of optical traps, the equal linear combination of \(p_{x}\) and \(p_{y}\) states corresponds to polariton condensates with integer orbital angular momentum (OAM), whereas the \(p\)-state wavefunctions themselves are real and characterized by odd parity. In an ideal round trap potential, these \(p_{x}\) and \(p_{y}\) modes are degenerate; however, any small geometric ellipticity induces an energy splitting between them [30; 42; 43].
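Explicitly, with a standard phase convention (recalled here for concreteness), the equal-weight combinations
\[|\psi_{\pm 1}\rangle=\frac{1}{\sqrt{2}}\big(|p_{x}\rangle\pm i\,|p_{y}\rangle\big)\]
carry OAM \(\ell=\pm 1\), while \(|p_{x}\rangle\) and \(|p_{y}\rangle\) themselves are the real-valued, odd-parity dipolar modes.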
For the purpose of the realization of digital quantum computers based on polariton qubits [1], one should aim at maximizing the coherence times of each individual qubit and coupled arrays of qubits. Interestingly, the dynamics of polariton condensates are characterized by a range of time scales, and it is a non-trivial question of which of them would be responsible for the decoherence of polariton qubits. The shortest timescale is given by the single polariton lifetime that is dependent on the quality factor of the cavity and can reach hundreds of ps in planar cavities [44] and grated waveguides utilizing photonic states protected from the continuum [45]. The coherence time of a polariton laser condensate, measured by time-resolved interferometry measurements, can be two orders of magnitude longer (_i.e._, several ns), because of the stimulated scattering of polaritons that stabilizes their final state [46; 47; 48]. Moreover, the spatial coherence of an optically trapped continuous wave (CW)-driven polariton condensate has been measured to be practically uniform within the trap [48; 34], underlining the condensate's quality as a macroscopic spatially coherent object. The ultimate upper bound is the duration of the laser excitation sustaining the condensates, which can be in the range of milliseconds before sample heating becomes a problem.
Here, we consider a macroscopic polariton-based qubit composed of non-degenerate \(|p_{x,y}\rangle\)-states confined within an optically induced two-dimensional trap, as illustrated in Fig. 1. Specifically, these basis states comprise the \(|p_{x}\rangle\)- and \(|p_{y}\rangle\)-modes, corresponding to the spatial dipolar distribution along the two principal axes of the trap. The energy difference between these states is determined by the ellipticity of the trap, which can be adjusted using an SLM to shape the laser responsible for the generation of the trap potential [49; 34]. By manipulating the trapped polariton condensate with an auxiliary resonant laser beam, we establish a theoretical framework for a universal set of single-qubit operations. These operations allow for precise control over the qubit state vector within the Bloch sphere using purely optical means. Additionally, we explore a set of two-qubit gates, enabled by distinct coupling mechanisms between the \(|p_{x}\rangle\)- and \(|p_{y}\rangle\)-states of neighboring traps. These interactions can be selectively blocked through nonresonant laser control pulses that create an effective potential barrier between the traps [50]. In particular, by tuning the parameters of the auxiliary laser beams, their operational duration, and the interaction between the traps, we define essential two-qubit operations for quantum computing processes. These include fundamental gates such as the \(i\)SWAP, CPHASE, and CNOT operations, with the latter two falling into the category of _entangling gates_ [51]. Furthermore, we assess the fidelity of the output state for representative quantum gates, e.g., Hadamard and CPHASE, along with key metrics such as the Von Neumann entropy and quantum concurrence. These analyses take into account the presence of intrinsic error sources, namely pure dephasing and spontaneous relaxation, which can affect the performance of polariton-based qubits. Lastly, we briefly examine how our proposal of qubits based on optically trapped polariton condensates aligns with each of the five DiVincenzo criteria [52; 53] for the practical implementation of local quantum computing processes.
The polariton condensate qubit system explored here presents some notable advantages compared to qubits based on split-ring polariton condensates proposed in Ref. [19]. One key advantage is the ability to achieve direct optical control over the energy splitting between the two polariton states in the system by reprogramming the SLM to adjust the eccentricity of the optical trap [49]. This control directly impacts the period of Rabi oscillations and, consequently, the characteristic time required to execute a single-qubit quantum operation [36]. We also highlight that the present scheme differs significantly from the proposal of Ref. [18], which relies on a single-polariton nonlinearity and the polariton blockade effect, giving rise to a qubit basis built upon quantum fluctuations on top of polariton condensates within semiconductor micropillars. Moreover, the macroscopic polariton qubit considered here offers the unique capability to optically switch on/off the interaction between neighboring traps, or equivalently, between the two qubits. This feature provides a natural
Figure 1: (a) Sketch of the proposed polariton condensate qubit system. Two polariton condensates are formed at distinct regions of a microcavity plane, within an elliptical trap in the presence of a non-resonant continuous wave (CW) laser pump with a hexagonal shape. These trapped condensates host superfluid circulating current states whose direction is defined by opposite OAM, as represented by the red circular arrows in each trap. The linear combination of clockwise and anticlockwise OAM states defines the orthogonal \(|p_{x}\rangle\)- and \(|p_{y}\rangle\)-modes in the optically generated trap; these are the basis states of a two-level qubit in each polariton condensate. Both the initial qubit states and qubit gates for an individual trap can be tuned by the corresponding auxiliary laser beams on resonance with the microcavity photonic modes. Moreover, the coupling between two qubits can be switched on and off by a control laser pulse that creates a potential barrier between the two traps (not shown). (b) Spatial representation of the \(p\)-modes, corresponding to the qubit basis states. The right panel illustrates the energy splitting \(\Delta\varepsilon\) between the modes, which can be tuned by stretching out the example hexagonal trap.
advantage compared to superconducting-based qubits, where the interaction cannot be completely excluded. In contrast, one main remaining concern is error correction for polaritons, which would need at least some form of active feedback mechanism that fixes errors at GHz polariton rates, whereas most SLM technologies operate at kHz frequencies, with a few exotic solutions pushing the GHz limit [54]. Still, the unprecedentedly rapid development of polariton computing might enable one to circumvent this shortcoming by repeating the computation for a sufficient amount of time in order to accumulate the statistics of results, allowing one to filter out the correct solution.
## II Model Hamiltonian and quantum gate operations
### Single-qubit gate operations
We start by defining canonical single-qubit operations [51] on a single trapped condensate and later move on to defining quantum gate operations between two trapped polariton condensates (i.e., a two-qubit system). Single-qubit gates are mathematically defined as unitary rotations of the qubit state around a given axis of the Bloch sphere [53], which necessitates a suitable basis of quantum states to work with. A good basis might rely on the symmetric and antisymmetric polariton levels [16], the natural two-component spin structure of polaritons [17], or counterrotating superfluid currents [19; 36; 55]. Instead, we define our basis using the orthogonal \(|p_{x}\rangle\) and \(|p_{y}\rangle\) states of an optically generated trap [56; 33] whose energy levels can be continuously split, \(\Delta\varepsilon=E_{x}-E_{y}\), by simply adjusting the nonresonant excitation beam profile into an elliptical annular shape. Our choice of basis is motivated by the recent demonstrations of spatially coupled polariton condensate vortices [40; 57] whose coherent superposition of clockwise \(|\circlearrowright\rangle\) and anticlockwise \(|\circlearrowleft\rangle\) OAM defines the trap's dipolar states,
\[\begin{split}|p_{x}\rangle&=\frac{1}{\sqrt{2}}\left(|\circlearrowright\rangle+|\circlearrowleft\rangle\right),\\ |p_{y}\rangle&=\frac{1}{\sqrt{2}}\left(|\circlearrowright\rangle-|\circlearrowleft\rangle\right),\end{split} \tag{1}\]
which are used as building blocks to define a qubit basis, i.e., \(|p_{x}\rangle\equiv|0\rangle\) and \(|p_{y}\rangle\equiv|1\rangle\). We assume that the system is pumped only slightly above the condensation threshold in order to avoid fragmentation of the condensate across multiple trap modes and power-induced collapse into the \(|s\rangle\) ground state [34]. The convenient feature of this basis is the spatial structure of the basis states which permits resonant excitation of different kinds of superpositions of \(|p_{x}\rangle\) and \(|p_{y}\rangle\)[58; 27; 42; 57]. Within the two-level qubit subspace \(\{|0\rangle,|1\rangle\}\), setting \(\hbar=1\), the effective single-qubit Hamiltonian can be written:
\[\hat{\mathcal{H}}=\mathcal{P}_{x}\hat{\sigma}_{x}+\mathcal{P}_{y}\hat{\sigma }_{y}+\frac{\Delta\varepsilon}{2}\hat{\sigma}_{z}, \tag{2}\]
The operators \(\hat{\sigma}_{x,y,z}\) are the standard Pauli matrices, and can be written in terms of the qubit basis as \(\hat{\sigma}_{z}=|0\rangle\langle 0|-|1\rangle\langle 1|\), \(\hat{\sigma}_{x}=|0\rangle\langle 1|+|1\rangle\langle 0|\) and \(\hat{\sigma}_{y}=-\iota(|0\rangle\langle 1|-|1\rangle\langle 0|)\), with \(|0\rangle=\left(1\;0\right)^{T}\) and \(|1\rangle=\left(0\;1\right)^{T}\). Thus, the \(\mathcal{P}_{x},\mathcal{P}_{y}\) parameters are responsible for changing the coupling between the \(|0\rangle\) (\(|p_{x}\rangle\)) and \(|1\rangle\) (\(|p_{y}\rangle\)) states, while \(\Delta\varepsilon\) is the energy splitting.
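As a minimal numerical sketch (not part of the original analysis, with placeholder parameter values and a function name of our own choosing), the Hamiltonian of Eq. (2) can be assembled and checked for Hermiticity with NumPy:

```python
import numpy as np

# Pauli matrices in the {|0>, |1>} = {|p_x>, |p_y>} qubit basis
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def single_qubit_hamiltonian(Px, Py, d_eps):
    """Eq. (2): H = Px*sigma_x + Py*sigma_y + (d_eps/2)*sigma_z (hbar = 1)."""
    return Px * sx + Py * sy + 0.5 * d_eps * sz

H = single_qubit_hamiltonian(Px=0.5, Py=0.1, d_eps=1.0)
assert np.allclose(H, H.conj().T)  # the Hamiltonian must be Hermitian
```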
Control over the parameters \((\mathcal{P}_{x},\mathcal{P}_{y},\Delta\varepsilon)\) comes from a suite of SLM techniques that change the shape and form of the nonresonant pump inducing the optical trap, and subsequently the coupling between the \(|p_{x}\rangle\) and \(|p_{y}\rangle\) states, as demonstrated in Ref. [36]. The last term is proportional to the eccentricity of the trap, which sets the splitting along the minor and major axes [34; 30]. The second term comes from adjusting the in-plane orientation of the trap [49]; e.g., a \(45^{\circ}\) rotation couples \(|p_{x}\rangle\) and \(|p_{y}\rangle\) to form their diagonal and antidiagonal counterparts. The first term couples the \(|p_{x}\rangle\) and \(|p_{y}\rangle\) states to form circulating currents, which can be achieved by setting the trap into rotational motion [60; 59]. The condensate population can also be tuned gradually from the excited state to the ground state, i.e., sweeping the polar angle of the Bloch sphere, with pump power [34]. From here on, we will refer to these control inputs as nonresonant _auxiliary laser_ inputs, as indicated in Fig. 1.
Following the definition expressed in Eq. (1), the single qubit Hamiltonian of Eq. (2) can be written in its corresponding OAM basis \(\{|\circlearrowright\rangle,|\circlearrowright\rangle\}\) as follows:
\[\hat{\mathcal{H}}=\mathcal{P}_{x}\left(|\circlearrowright\rangle\langle\circlearrowright|-|\circlearrowleft\rangle\langle\circlearrowleft|\right)+\left[\left(\frac{\Delta\varepsilon}{2}+\iota\mathcal{P}_{y}\right)|\circlearrowright\rangle\langle\circlearrowleft|+\text{h.c.}\right]. \tag{3}\]
In the OAM subspace, we can notice that the \(x\)-component of the auxiliary laser shifts the energy splitting of the clockwise and anticlockwise OAM states. Simultaneously, both the \(z\)-component and \(y\)-component modulate the coupling between OAM states with opposing directions when confined within the same trap. In particular, Barrat _et al._[36] achieve this manipulation of the interactions between counter-circulating polariton trap states by using an auxiliary laser in the form of an off-centered Gaussian "bump".
For an operational time \(\tau\) of the auxiliary laser beam, the temporal evolution under the single-qubit Hamiltonian of Eq. (2) is given by the unitary operator \(\hat{\mathcal{U}}(\tau)=e^{-\iota\hat{\mathcal{H}}\tau}\) [61], so that the final qubit state becomes \(|\psi\rangle=\hat{\mathcal{U}}(\tau)|\psi_{0}\rangle\) for a given initial qubit state \(|\psi_{0}\rangle\). To explore the effect of such unitary time evolution, the single-qubit Hamiltonian is expressed in a Bloch sphere representation [51; 53] by considering a parameterized unit vector \(\hat{\mathbf{n}}\in\mathbb{R}^{3}\), called the Bloch vector, written in spherical coordinates: the phase \(\theta\) of the laser beam plays the role of the azimuthal angle of the Bloch sphere (\(0\leq\theta<2\pi\)), while \(\phi\) is the polar angle (\(0\leq\phi\leq\pi\)). In this way, \(\hat{\mathbf{n}}=(\cos\theta\sin\phi,\sin\theta\sin\phi,\cos\phi)\), so that
\[\hat{\mathbf{n}}=\frac{\mathbf{\mathcal{P}}}{|\mathbf{\mathcal{P}}|}, \tag{4}\]
where \(\boldsymbol{\mathcal{P}}=\left(\mathcal{P}_{0}\cos\theta,\mathcal{P}_{0}\sin\theta,\frac{\Delta\varepsilon}{2}\right)\), with the norm \(|\boldsymbol{\mathcal{P}}|\equiv\mathcal{P}=\sqrt{\mathcal{P}_{0}^{2}+\frac{\Delta\varepsilon^{2}}{4}}\). From this parameterization, \(\phi=\arccos\left(\frac{\Delta\varepsilon}{2\mathcal{P}}\right)\). The single-qubit Hamiltonian can then be rewritten as,
\[\hat{\mathcal{H}}=\mathcal{P}\mathbf{\sigma}\cdot\hat{\mathbf{n}}, \tag{5}\]
with the corresponding unitary operator given by [18]:
\[\hat{\mathcal{U}}(\mathcal{P}\tau)=\begin{bmatrix}\cos(\mathcal{P}\tau)-\iota\cos\phi\sin(\mathcal{P}\tau)&-\iota e^{-\iota\theta}\sin\phi\sin(\mathcal{P}\tau)\\ -\iota e^{\iota\theta}\sin\phi\sin(\mathcal{P}\tau)&\cos(\mathcal{P}\tau)+\iota\cos\phi\sin(\mathcal{P}\tau)\end{bmatrix}. \tag{6}\]
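A quick consistency check, again a sketch with arbitrary test values of our choosing: the closed-form propagator of Eq. (6) should coincide with the matrix exponential \(e^{-\iota\hat{\mathcal{H}}\tau}\) built from Eq. (5).

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U_closed_form(P, tau, theta, phi):
    """Closed-form propagator of Eq. (6)."""
    c, s = np.cos(P * tau), np.sin(P * tau)
    return np.array(
        [[c - 1j*np.cos(phi)*s, -1j*np.exp(-1j*theta)*np.sin(phi)*s],
         [-1j*np.exp(1j*theta)*np.sin(phi)*s, c + 1j*np.cos(phi)*s]])

P, tau, theta, phi = 1.3, 0.7, 0.4, 1.1   # arbitrary test values
n_hat = np.array([np.cos(theta)*np.sin(phi),
                  np.sin(theta)*np.sin(phi),
                  np.cos(phi)])
H = P * (n_hat[0]*sx + n_hat[1]*sy + n_hat[2]*sz)   # Eq. (5)
assert np.allclose(U_closed_form(P, tau, theta, phi), expm(-1j * H * tau))
```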
The unitary operator in Eq. (6) represents distinct single-qubit gates by tuning the auxiliary laser beam parameters during a time interval \(\tau\). Let us consider the following cases, for instance:
\[\hat{\mathcal{U}}\left(\tau=\frac{\pi}{2\mathcal{P}},\theta=0,\phi=\frac{\pi} {2}\right)=\begin{bmatrix}0&-\iota\\ -\iota&0\end{bmatrix}\equiv e^{-\iota\frac{\pi}{2}}\hat{X}_{\pi}, \tag{7}\]
\[\hat{\mathcal{U}}\left(\tau=\frac{\pi}{2\mathcal{P}},\theta=\phi=\frac{\pi} {2}\right)=e^{-\iota\frac{\pi}{2}}\begin{bmatrix}0&-\iota\\ \iota&0\end{bmatrix}\equiv e^{-\iota\frac{\pi}{2}}\hat{Y}_{\pi}, \tag{8}\]
and
\[\hat{\mathcal{U}}\left(\tau=\frac{\pi}{2\mathcal{P}},\ \theta\ \text{arbitrary},\ \phi=\pi\right)=\begin{bmatrix}\iota&0\\ 0&-\iota\end{bmatrix}\equiv e^{\iota\frac{\pi}{2}}\hat{Z}_{\pi}, \tag{9}\]
where \(\hat{X}_{\pi}\), \(\hat{Y}_{\pi}\) and \(\hat{Z}_{\pi}\) are known as Pauli gates [51; 53], which rotate the qubit state by \(\pi\) radians around the \(x\)-, \(y\)- and \(z\)-axis, respectively. Thus, by manipulating the auxiliary laser beam parameters, within a specified operational time \(\tau\), we can effectively implement all the Pauli gates. These single-qubit operations belong to a universal quantum gate set denoted as \(\mathcal{G}_{0}=\{\hat{X}_{\varphi},\hat{Y}_{\varphi},\hat{Z}_{\varphi},\text{Ph}_{\varphi},\text{CNOT}\}\) [51]. The CNOT operation, a two-qubit gate, will be defined in the subsequent section.
In the context of quantum computing operations, another crucial single-qubit gate is the so-called Hadamard gate \(\hat{H}\)[51; 53]. The Hadamard gate is responsible for generating an equal superposition of states, forming the foundation of qubit basis manipulation. In the framework of our proposal, we can derive this gate from the unitary operator defined in Eq. (6), as follows:
\[\hat{\mathcal{U}}\left(\tau=\frac{\pi}{2\mathcal{P}},\theta=0,\phi=\frac{\pi} {4}\right)=\frac{-\iota}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\equiv e^{-\iota\frac{\pi}{2}}\hat{H}, \tag{10}\]
which represents the standard Hadamard gate, accompanied by an overall phase gate \(\text{Ph}_{-\frac{\pi}{2}}=e^{-\iota\frac{\pi}{2}}\mathds{1}\) acting on the qubit state.
To illustrate the practical effect of the Hadamard gate as defined in Eq. (10), let us assume that the elliptical trap is initially set to favor the state \(|p_{x}\rangle\equiv|0\rangle\). Consequently, we have:
\[e^{-\iota\frac{\pi}{2}}\hat{H}|0\rangle=\frac{e^{-\iota\frac{\pi}{2}}}{\sqrt{2}}\left(|0\rangle+|1\rangle\right)\equiv\text{Ph}_{-\frac{\pi}{2}}|\circlearrowright\rangle. \tag{11}\]
It is evident that the Hadamard gate operation on the initial state \(|0\rangle\) results in a clockwise OAM state [see Eq. (1)]. Similarly, if the trap's initial state is set to \(|p_{y}\rangle\equiv|1\rangle\), after applying Eq. (10), the final single-qubit state will be the anticlockwise OAM state \(|\circlearrowleft\rangle\). This demonstrates that the Hadamard gate operation manipulates the direction of OAM states within the trap, depending on the qubit initial state.
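The Hadamard case can be verified in the same spirit; the sketch below, evaluated at the Eq. (10) parameters (with the illustrative choice \(\mathcal{P}=1\)), confirms both the gate identity and the mapping \(|0\rangle\rightarrow|\circlearrowright\rangle\) of Eq. (11).

```python
import numpy as np

P, tau, theta, phi = 1.0, np.pi / 2, 0.0, np.pi / 4   # Eq. (10) parameters
c, s = np.cos(P * tau), np.sin(P * tau)
U = np.array([[c - 1j*np.cos(phi)*s, -1j*np.exp(-1j*theta)*np.sin(phi)*s],
              [-1j*np.exp(1j*theta)*np.sin(phi)*s, c + 1j*np.cos(phi)*s]])

H_gate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(U, np.exp(-1j * np.pi / 2) * H_gate)   # Eq. (10)

ket0 = np.array([1.0, 0.0], dtype=complex)
cw = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)     # clockwise OAM state
assert np.allclose(U @ ket0, np.exp(-1j * np.pi / 2) * cw)  # Eq. (11)
```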
### Two-qubit gate operations
We next introduce a two-qubit Hamiltonian, given by:
\[\hat{\mathcal{H}}_{12}=\sum_{j=1,2}\left(\mathcal{P}_{x}^{(j)}\hat{\sigma}_{x }^{(j)}+\mathcal{P}_{y}^{(j)}\hat{\sigma}_{y}^{(j)}+\frac{\Delta\varepsilon_{j }}{2}\hat{\sigma}_{z}^{(j)}\right)+\hat{\mathcal{H}}_{\text{int}}, \tag{12}\]
where \(\hat{\mathcal{H}}_{\text{int}}\) accounts for the interaction between the two qubits, or equivalently, between the macroscopic dipolar modes of distinct traps. The operators for each qubit are expanded within their respective subspaces, denoted as \(\hat{\sigma}_{x,y,z}^{(1)}=\hat{\sigma}_{x,y,z}\otimes\mathds{1}\) and \(\hat{\sigma}_{x,y,z}^{(2)}=\mathds{1}\otimes\hat{\sigma}_{x,y,z}\), where \(\mathds{1}\) represents a \(2\times 2\) identity matrix.
The Hamiltonian (12) indicates that, in addition to the parameters of the auxiliary laser beam, two-qubit gate operations are influenced by the interaction between the individual qubits, specifically, the interaction between neighboring traps. Typically, this interaction is represented as \(\hat{\mathcal{H}}_{\text{int}}=\sum_{k}J_{k}(\hat{\sigma}_{k}\otimes\hat{\sigma}_{k})\), where \(k=x,y,z\), and \(J_{k}\) represents the strength of the qubit-qubit interaction. Within our proposal, the coupling between adjacent qubits can be adjusted using a control laser pulse, which creates an all-optical potential barrier between the elliptical traps [40], or through the use of an acousto-optic modulator [36].
For instance, for an Ising-type interaction in the qubit basis, \(J_{x}=J_{y}=0\) and \(J_{z}=J_{12}\), which leads to the following interacting Hamiltonian:
\[\hat{\mathcal{H}}_{\text{int}}=J_{12}(\hat{\sigma}_{z}^{(1)}\cdot\hat{\sigma}_ {z}^{(2)})=J_{12}(\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}). \tag{13}\]
By inserting this interaction into Eq. (12), and tuning the system parameters so that \(\Delta\varepsilon_{1}=\Delta\varepsilon_{2}=-2J_{12}\) and \(\mathcal{P}_{x}^{(j)},\mathcal{P}_{y}^{(j)}\ll J_{12}\) (weak auxiliary laser beams), the two-qubit Hamiltonian reduces to:
\[\hat{\mathcal{H}}_{12}=J_{12}\left(\hat{\sigma}_{z}\otimes\hat{\sigma}_{z}- \hat{\sigma}_{z}\otimes\mathds{1}-\mathds{1}\otimes\hat{\sigma}_{z}\right). \tag{14}\]
The unitary operator that describes the time-evolution of Eq. (14) for a given time duration \(\tau\) of the auxiliary laser beams reads:
\[\hat{\mathcal{U}}_{12}(J_{12},\tau)=e^{\iota J_{12}\tau}\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&e^{-4\iota J_{12}\tau}\end{bmatrix}. \tag{15}\]
By setting \(J_{12}\tau=\frac{\pi}{4}\) for the neighboring traps, this unitary operator simplifies to:
\[\hat{\mathcal{U}}_{12}\left(\frac{\pi}{4}\right)=e^{\iota\frac{\pi}{4}}\,\text{CPHASE}, \tag{16}\]
where the term CPHASE represents a two-qubit gate operation. This operation applies a \(\hat{\sigma}_{z}\) operation to the target qubit exclusively when the control qubit is in the state \(|1\rangle\) [53]. The CPHASE gate falls under the category of _entangling gates_, as does the CNOT gate. This categorization arises from their capability to transform separable input states into entangled output states. Moreover, the application of a CPHASE gate in conjunction with two Hadamard gates [Eq. (10)] generates a CNOT gate, i.e., \(\text{CNOT}=(\mathds{1}\otimes\hat{H})\,\text{CPHASE}\,(\mathds{1}\otimes\hat{H})\) [51].
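A short numerical check (illustrative, with the arbitrary normalization \(J_{12}=1\)) confirms that the Ising evolution of Eq. (14) at \(J_{12}\tau=\pi/4\) equals CPHASE up to the global phase \(e^{\iota\pi/4}\), and that sandwiching CPHASE between Hadamards on the target qubit yields the standard CNOT.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
J12, tau = 1.0, np.pi / 4

# Reduced two-qubit Ising Hamiltonian, Eq. (14)
H12 = J12 * (np.kron(sz, sz) - np.kron(sz, I2) - np.kron(I2, sz))
U12 = expm(-1j * H12 * tau)

CPHASE = np.diag([1, 1, 1, -1]).astype(complex)
assert np.allclose(U12, np.exp(1j * np.pi / 4) * CPHASE)   # Eq. (16)

H_gate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.kron(I2, H_gate) @ CPHASE @ np.kron(I2, H_gate)
target = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
assert np.allclose(CNOT, target)   # CNOT = (1 x H) CPHASE (1 x H)
```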
As an illustrative example of applying a CPHASE gate to generate an entangled two-qubit state, let us begin with the initial state \(|\psi_{0}\rangle\), where the optical traps are prepared with opposite OAM states, i.e., \(|\psi_{0}\rangle=|\circlearrowright\rangle_{\text{C}}\otimes|\circlearrowleft\rangle_{\text{T}}=|\circlearrowright\rangle_{\text{C}}|\circlearrowleft\rangle_{\text{T}}\), with the subscripts C and T denoting the control and target qubits, respectively. Upon applying the CPHASE operation defined in Eq. (16) to \(|\psi_{0}\rangle\), we obtain the following result:
\[\hat{\mathcal{U}}_{12}\left(\frac{\pi}{4}\right)|\psi_{0}\rangle=\frac{e^{\iota\frac{\pi}{4}}}{2}\left(|0\rangle_{\text{C}}|0\rangle_{\text{T}}-|0\rangle_{\text{C}}|1\rangle_{\text{T}}+|1\rangle_{\text{C}}|0\rangle_{\text{T}}+|1\rangle_{\text{C}}|1\rangle_{\text{T}}\right)\neq(\ldots)_{\text{C}}(\ldots)_{\text{T}}, \tag{17}\]
in which the inequality indicates that the final two-qubit state cannot be factored into individual qubit subspaces, indicating the presence of entanglement between the qubits [53; 61]. We will see later that this entangled state leads to a maximal value of quantum concurrence.
Another kind of two-qubit operation that can be implemented in our elliptically trapped polariton condensates, by tuning the parameters of the auxiliary laser beams and the interaction between the traps, is the so-called \(i\)SWAP gate [51]. This gate specifically requires an \(XY\)-type interaction between the qubits, i.e., \(J_{x}=J_{y}=J_{12}\) and \(J_{z}=0\), which corresponds to,
\[\hat{\mathcal{H}}_{\text{int}}=J_{12}(\hat{\sigma}_{x}\otimes\hat{\sigma}_{x} +\hat{\sigma}_{y}\otimes\hat{\sigma}_{y}). \tag{18}\]
By considering a trap of zero eccentricity so that \(\Delta\varepsilon_{j}=0\), and also weak auxiliary laser beam \(\mathcal{P}_{x}^{(j)},\mathcal{P}_{y}^{(j)}\ll J_{12}\), the total two-qubit Hamiltonian of Eq. (12) is reduced only to the Hamiltonian describing the \(XY\)-interaction in the qubit basis, cf. Eq. (18), with the corresponding unitary time-evolution operator:
\[\hat{\mathcal{U}}_{12}(J_{12},\tau)=\begin{bmatrix}1&0&0&0\\ 0&\cos(2J_{12}\tau)&-\iota\sin(2J_{12}\tau)&0\\ 0&-\iota\sin(2J_{12}\tau)&\cos(2J_{12}\tau)&0\\ 0&0&0&1\end{bmatrix}. \tag{19}\]
For \(J_{12}\tau=\frac{\pi}{4}\), the unitary operator above corresponds exactly to the \(i\)SWAP gate as follows:
\[\hat{\mathcal{U}}_{12}\left(\frac{\pi}{4}\right)=\begin{bmatrix}1&0&0&0\\ 0&0&-\iota&0\\ 0&-\iota&0&0\\ 0&0&0&1\end{bmatrix}\equiv i\text{SWAP}. \tag{20}\]
The \(i\)SWAP gate operation performs a state swap on the two-qubit system while introducing a phase difference of \(\pi/2\). In practical terms, this means that in an illustrative scenario where the initial state is represented as \(|\psi_{0}\rangle=|p_{x}\rangle_{\text{C}}|p_{y}\rangle_{\text{T}}\equiv|0\rangle_{\text{C}}|1\rangle_{\text{T}}\), the application of the \(i\)SWAP gate, as defined in Eq. (20), transforms it into \(e^{-\iota\frac{\pi}{2}}|p_{y}\rangle_{\text{C}}|p_{x}\rangle_{\text{T}}\equiv e^{-\iota\frac{\pi}{2}}|1\rangle_{\text{C}}|0\rangle_{\text{T}}\) as the final state of the two-qubit system. Notice that both the CPHASE and \(i\)SWAP gates, as previously defined, require the condition \(J_{12}\tau=\frac{\pi}{4}\), along with the presence of weak auxiliary laser beams. However, the distinction between performing the CPHASE and the \(i\)SWAP operation hinges on the adjustability of the ellipticity parameter \(\Delta\varepsilon_{j}\).
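Analogously, the \(XY\) evolution of Eq. (18) at \(J_{12}\tau=\pi/4\) can be checked to reproduce Eq. (20) exactly (a sketch, again with the arbitrary choice \(J_{12}=1\)):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
J12, tau = 1.0, np.pi / 4

# XY-type interaction Hamiltonian, Eq. (18)
H_int = J12 * (np.kron(sx, sx) + np.kron(sy, sy))
U12 = expm(-1j * H_int * tau)

iswap_gate = np.array([[1, 0, 0, 0],
                       [0, 0, -1j, 0],
                       [0, -1j, 0, 0],
                       [0, 0, 0, 1]])   # Eq. (20)
assert np.allclose(U12, iswap_gate)
```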
The CNOT gate, which plays a key role in entangling distinct qubit states [51; 53], can also be implemented within our proposal. Unlike the previously defined CPHASE and \(i\)SWAP gates, the CNOT operation requires individual control over the laser beam parameters for each trap. To demonstrate the CNOT gate implementation in the proposed device, we first initialize the two-qubit system with a CPHASE gate, allowing us to establish a two-qubit state basis comprising \(\{|0\rangle_{\text{C}}|0\rangle_{\text{T}},|0\rangle_{\text{C}}|1\rangle_{\text{T}},|1\rangle_{\text{C}}|0\rangle_{\text{T}},|1\rangle_{\text{C}}|1\rangle_{\text{T}}\}\). Subsequently, considering this basis, the unitary operator responsible for performing individual unitary operations \(\hat{\mathcal{U}}_{j}(\mathcal{P}_{j},\tau_{j})\) on the \(j\)th qubit, as defined by Eq. (6), is expressed as follows:
\[\hat{\mathcal{U}}_{1|2}(\mathcal{P}_{1},\tau_{1};\mathcal{P}_{2},\tau_{2})= \begin{bmatrix}\hat{\mathcal{U}}_{1}(\mathcal{P}_{1},\tau_{1})&0\\ 0&\hat{\mathcal{U}}_{2}(\mathcal{P}_{2},\tau_{2})\end{bmatrix}, \tag{21}\]
where \(\mathcal{P}_{1,2}\) and \(\tau_{1,2}\) are the parameterized norm [Eq. (4)] and time duration of the auxiliary laser beam in trap 1 and 2, respectively. Setting \(\theta=0\) and using distinct laser operational times \(\mathcal{P}_{1}\tau_{1}=\pi\) and \(\mathcal{P}_{2}\tau_{2}=\frac{\pi}{2}\), the operator of Eq. (21) is reduced to:
\[\hat{\mathcal{U}}_{1|2}\left(\pi,\frac{\pi}{2}\right)=\begin{bmatrix}-1&0&0&0\\ 0&-1&0&0\\ 0&0&-\iota\cos\phi_{2}&-\iota\sin\phi_{2}\\ 0&0&-\iota\sin\phi_{2}&\iota\cos\phi_{2}\end{bmatrix}, \tag{22}\]
which is equivalent to a \(-\)CNOT gate for \(\phi_{2}=\frac{\pi}{2}\), with a phase of \(\frac{\pi}{2}\) on the second qubit, i.e.:
\[\hat{\mathcal{U}}_{1|2}\left(\pi,\frac{\pi}{2}\right)=-\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&\iota\\ 0&0&\iota&0\end{bmatrix}. \tag{23}\]
A standard CNOT operation flips the target qubit if and only if the control qubit is in the \(|1\rangle\) state [53]. To illustrate this in our system, consider that the first trap (C-qubit) is configured to be in the \(|\circlearrowright\rangle\) state, while the second trap (T-qubit) is appropriately excited to be in the \(|p_{y}\rangle\equiv|1\rangle\) state. This sets up an initial state of \(|\psi_{0}\rangle=|\circlearrowright\rangle_{\text{C}}|1\rangle_{\text{T}}\). When we apply the CPHASE operation, as defined in Eq. (16), to this two-qubit state, it transforms into the following state:
\[|\psi\rangle=\frac{e^{\iota\frac{\pi}{4}}}{\sqrt{2}}\left(|0\rangle_{\text{C}}|1\rangle_{\text{T}}-|1\rangle_{\text{C}}|1\rangle_{\text{T}}\right)=e^{\iota\frac{\pi}{4}}|\circlearrowleft\rangle_{\text{C}}|1\rangle_{\text{T}}. \tag{24}\]
Notice that this final composite state of two qubits comprises separable states, thereby indicating an absence of entanglement. Now, by applying a CNOT gate [Eq. (23)] on \(|\psi\rangle\), one gets:
\[\hat{\mathcal{U}}_{1|2}|\psi\rangle=-\frac{e^{\iota\frac{\pi}{4}}}{\sqrt{2}}\left(|0\rangle_{\text{C}}|1\rangle_{\text{T}}-\iota|1\rangle_{\text{C}}|0\rangle_{\text{T}}\right)\neq-e^{\iota\frac{\pi}{4}}\left(\ldots\right)_{\text{C}}\left(\ldots\right)_{\text{T}}, \tag{25}\]
which therefore leads to an entangled final state due to its nonseparability.
## III Qubit error mechanisms and quantum measurements
A critical milestone in achieving large-scale quantum computing is the successful experimental implementation of fault-tolerant quantum logical operations [51, 62]. When dealing with a two-level system as a qubit, it becomes imperative to execute a series of gate operations while preserving the coherence between the qubit's basis states. In this context, the primary sources of quantum errors leading to qubit decoherence are pure dephasing and spontaneous relaxation from the excited state [19, 53, 62]. Both these mechanisms compromise the operational fidelity of quantum computing processes [62] and reduce the entanglement between qubit states [63].
To quantify the detrimental effects of spontaneous relaxation and pure dephasing, we numerically solve the Lindblad Master Equation (LME) for the density matrix operator, denoted as \(\hat{\rho}=|\psi\rangle\langle\psi|\), or simply the density operator, with \(|\psi\rangle\) representing the quantum state of the system. The LME is expressed as follows [64]:
\[\frac{d\hat{\rho}}{d\tau}=-\iota[\hat{\mathcal{H}},\hat{\rho}]+\gamma_{r}\mathcal{L}[\hat{\sigma}_{-}]\hat{\rho}+\gamma_{d}\mathcal{L}[\hat{\sigma}_{z}]\hat{\rho}, \tag{26}\]
where \(\hat{\mathcal{H}}\) is either the single-qubit [Eq. (5)] or two-qubit Hamiltonian [Eq. (12)], \(\mathcal{L}[\hat{A}]\hat{\rho}=\hat{A}\hat{\rho}\hat{A}^{\dagger}-\frac{1}{2}\{\hat{A}^{\dagger}\hat{A},\hat{\rho}\}\) and \(\hat{\sigma}_{-}=(\hat{\sigma}_{x}-\iota\hat{\sigma}_{y})/2\) is the standard lowering operator. Notice that the operators \(\hat{\sigma}_{-}\) and \(\hat{\sigma}_{z}\) will be expanded in a \(4\times 4\) subspace for the two-qubit system, following the definition of Sec. II.2. The first term on the right-hand side of Eq. (26) accounts for the coherent dynamics of the density operator, while the last two govern the spontaneous relaxation and pure dephasing, with rates \(\gamma_{r}\) and \(\gamma_{d}\), respectively. As a result of the LME approach, we obtain the final density matrix operator after evolving over a given time interval \(\Delta t:0\rightarrow\tau\), i.e., the laser operational time defined in the unitary operator of Eq. (6), in either the presence or absence of dephasing and relaxation mechanisms.
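Since the numerics in this work rely on QuTiP, a minimal sketch of how Eq. (26) can be integrated is shown below; the rates and pump parameters are illustrative placeholders, and taking \(|0\rangle\) as the state toward which relaxation proceeds is our convention choice for the example, not something fixed by the model.

```python
import numpy as np
from qutip import basis, destroy, mesolve, sigmax, sigmay, sigmaz

# Single-qubit Hamiltonian of Eq. (2) with Px = P0*cos(theta), Py = P0*sin(theta)
P0, d_eps, theta = 1.0, 0.5, 0.0
H = (P0 * np.cos(theta) * sigmax()
     + P0 * np.sin(theta) * sigmay()
     + 0.5 * d_eps * sigmaz())

gamma_r, gamma_d = 0.2, 0.2
c_ops = [np.sqrt(gamma_r) * destroy(2),   # spontaneous relaxation, L[sigma_-]
         np.sqrt(gamma_d) * sigmaz()]     # pure dephasing, L[sigma_z]

psi0 = (basis(2, 0) + basis(2, 1)).unit()
tlist = np.linspace(0.0, np.pi / 2, 51)   # up to the gate time for P = 1
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmax(), sigmay(), sigmaz()])
x, y, z = (result.expect[k][-1] for k in range(3))
print("final Bloch vector:", (x, y, z), "norm:", np.sqrt(x*x + y*y + z*z))
```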
The phenomenon of spontaneous relaxation corresponds to the transition of the qubit's upper energy state to its lowest state. This type of error primarily impacts the populations of each qubit state, thus manifesting itself in the diagonal elements of the density operator. Conversely, the pure dephasing mechanism is responsible for the degradation of coherent information between the qubit states, resulting in a reduction of the off-diagonal elements of the density operator. In the context of polariton condensates, both relaxation and pure dephasing effects emerge from the interplay between polaritons within the condensate state and their surrounding environment of noncondensed particles [47, 48], such as the scattering of polaritons with the reservoir of hot excitons.
Decoherence mechanisms due to interaction with the system environment impact all quantum operations within a universal quantum computer [51], i.e., initialization, quantum gate operation, measurement, and memory, which is realized through the integration of numerous qubits and distinct quantum gates. While tackling the LME for the many-body scenario remains complicated [62, 65, 66], it is worth noting that every quantum operation can be decomposed into a series of single and two-qubit gates. This allows us to simplify the analysis of quantum errors in gate operations introduced by decoherence phenomena into a single or two-body problem.
For a single-qubit gate, cf. Sec. II.1, a final Bloch vector \(\hat{\mathbf{u}}=(x,y,z)\) can be directly extracted from the resulting density matrix [53], so that \(x=2\text{Re}(\hat{\rho}_{01})\), \(y=2\text{Im}(\hat{\rho}_{10})\) and \(z=\hat{\rho}_{00}-\hat{\rho}_{11}\), where \(\hat{\rho}_{ij}\) are the elements \((i,j)\) of the density matrix operator given by Eq. (26). In this way, we can compare the initial and final qubit states projected onto the Bloch sphere after the dynamical evolution for a given single-qubit gate, both in the presence and absence of quantum errors introduced by the interaction with the surrounding environment and described by Eq. (26).
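A few lines suffice to implement this extraction; the example density matrix is ours and serves only to illustrate that a mixed state yields \(|\hat{\mathbf{u}}|<1\).

```python
import numpy as np

def bloch_vector(rho):
    """x = 2 Re(rho_01), y = 2 Im(rho_10), z = rho_00 - rho_11."""
    rho = np.asarray(rho, dtype=complex)
    return np.array([2.0 * rho[0, 1].real,
                     2.0 * rho[1, 0].imag,
                     (rho[0, 0] - rho[1, 1]).real])

rho_example = np.array([[0.75, 0.25], [0.25, 0.25]])  # a valid mixed state
u = bloch_vector(rho_example)
print(u, np.linalg.norm(u))   # norm < 1 signals a mixed state
```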
Also, in the context of single-qubit operations, a valuable metric for quantifying the impact of decoherence processes within the trap's environment on the final qubit state, as described by Eq.(26), is the Von-Neumann entropy [53, 63]. This entropy is defined as:
\[S(\hat{\rho})=-\text{Tr}(\hat{\rho}\log_{2}\hat{\rho}). \tag{27}\]
When the qubit system is in a perfectly pure state, the Von-Neumann entropy equals zero. The maximum value of the Von-Neumann entropy, denoted as \(S(\hat{\rho})=\log_{2}(d)\), is reached when the qubit is in a completely mixed state, with \(d\) representing the system's dimensionality [53]. For the single-qubit operations defined in Sec. II.1, the maximum Von-Neumann entropy is \(S_{\text{max}}(\hat{\rho})=\log_{2}(2)=1\).
Another figure of merit, relevant in the presence of quantum error sources, is the so-called fidelity \(F\in[0,1]\), which compares a decoherent quantum system with its corresponding ideal coherent case (\(\gamma_{r}=\gamma_{d}=0\)) and is computed as follows [12; 62]:
\[F(\hat{\rho}_{\text{ideal}},\hat{\rho}(t))=\text{Tr}\left(\sqrt{\sqrt{\hat{ \rho}_{\text{ideal}}}\hat{\rho}(t)\sqrt{\hat{\rho}_{\text{ideal}}}}\right). \tag{28}\]
Here, \(\hat{\rho}_{\text{ideal}}\) is the density matrix of the pure qubit state and \(\hat{\rho}(t)\) is obtained from Eq. (26) in the presence of the decoherence rates \(\gamma_{r}\) and \(\gamma_{d}\). A perfect fidelity, \(F(\hat{\rho}_{\text{ideal}},\hat{\rho}(t))=1\), is only achieved when \(\hat{\rho}(t)=\hat{\rho}_{\text{ideal}}\).
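Both quantities are available as QuTiP helpers; in the sketch below (with an artificially mixed test state of our choosing), `fidelity` implements exactly the trace expression of Eq. (28), and `entropy_vn` with base 2 matches the normalization \(S_{\text{max}}=1\) for a single qubit.

```python
from qutip import basis, entropy_vn, fidelity, ket2dm, qeye

rho_ideal = ket2dm((basis(2, 0) + basis(2, 1)).unit())   # pure target state
rho_noisy = 0.8 * rho_ideal + 0.2 * qeye(2) / 2          # partially mixed state

print("fidelity:", fidelity(rho_ideal, rho_noisy))   # Tr sqrt(sqrt(r) s sqrt(r))
print("entropy :", entropy_vn(rho_noisy, base=2))    # 0 (pure) ... 1 (fully mixed)
```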
In cases where a two-qubit gate operation results in an entangled state, such as with CNOT and CPHASE operations [51], it is essential to assess the degree of entanglement in the output state and how it is influenced by dephasing and relaxation mechanisms. This can be achieved by examining the quantum concurrence of the density operator [11; 63; 67], defined as follows:
\[\mathcal{C}(\hat{\rho})=\text{max}(0,\lambda_{1}-\lambda_{2}-\lambda_{3}- \lambda_{4}), \tag{29}\]
where \(\lambda_{i}\) are the eigenvalues, arranged in decreasing order, of the matrix \(\hat{R}=\sqrt{\sqrt{\hat{\rho}}\,\tilde{\rho}\,\sqrt{\hat{\rho}}}\), with \(\tilde{\rho}=\hat{\Sigma}\hat{\rho}^{*}\hat{\Sigma}\) being the "spin-flipped" density operator and \(\hat{\Sigma}=\hat{\sigma}_{y}\otimes\hat{\sigma}_{y}\). The concurrence ranges between 0 and 1, with a value of 1 indicating a maximally entangled two-qubit state. Conversely, a concurrence value of 0 characterizes separable states, indicating a complete lack of entanglement between the qubits.
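Eq. (29) can be implemented directly from this definition (QuTiP also ships `qutip.concurrence` for the same quantity); the Bell-like test state below, chosen by us for illustration, should return \(\mathcal{C}=1\).

```python
import numpy as np

def concurrence(rho):
    """Concurrence of Eq. (29) for a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    Sigma = np.kron(sy, sy)
    rho_tilde = Sigma @ rho.conj() @ Sigma   # "spin-flipped" density operator
    # The lambda_i are the square roots of the eigenvalues of rho * rho_tilde.
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]                 # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4, dtype=complex)
bell[0], bell[3] = 1 / np.sqrt(2), 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell.conj())
print(concurrence(rho_bell))                         # ~1.0, maximal entanglement
```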
## IV Results and discussion
We start our analysis by addressing the impact of both pure dephasing and spontaneous relaxation on the dynamics of a single-qubit Hadamard gate operation, as defined in Eq. (10). The solution of the LME [Eq. (26)], as well as the calculations of fidelity [Eq. (28)], Von-Neumann entropy [Eq. (27)] and quantum concurrence [Eq. (29)], were performed numerically employing the _QuTiP package, a Quantum Toolbox in Python, version 4.7.1_ [68; 69].
As stated by the unitary operator of Eq. (10), to perform a Hadamard gate the phase of the auxiliary laser beam is \(\theta=0\), with an operating time \(\tau=\frac{\pi}{2\mathcal{P}}\) and \(\phi=\frac{\pi}{4}\). This last condition implies a non-zero laser beam-qubit detuning, since \(\cos\phi=\Delta\varepsilon/2\mathcal{P}\). As can be noticed, the parameters are in units of \(\mathcal{P}\), which may vary from one experimental system to another depending on the parameters of the microcavity, the photon-exciton detuning, and the exciton oscillator strength.
Fig. 2(a) shows the evolution of the initial qubit state \(|\psi_{0}\rangle=|0\rangle\), or equivalently \(|p_{x}\rangle\), projected onto a Bloch sphere when submitted to a Hadamard operation, as defined in Eq. (10), in the absence of any quantum error source (\(\gamma_{r}=\gamma_{d}=0\)). It can be noticed that the action of the Hadamard gate on the qubit basis state \(|0\rangle\) creates an equal symmetric superposition of the two basis states, \((|0\rangle+|1\rangle)/\sqrt{2}\equiv|\circlearrowright\rangle\), up to the phase of \(-\frac{\pi}{2}\) shown in Eq. (11). The resulting qubit state is represented by a unit vector (\(|\hat{\mathbf{u}}|=1\)) residing within the equatorial plane of the Bloch sphere, thereby defining a pure state.
The contrasting case of Fig. 2(a) is depicted in the corresponding panel (b), where the pure dephasing mechanism as an error source is considered, specifically for \(\gamma_{d}=0.2\mathcal{P}\). A direct comparison between Figs. 2(a) and (b) reveals the effects of pure dephasing on the final qubit state following the application of the Hadamard gate. Upon the completion of the laser beam operation within the time interval \(\tau\), the final state diverges from the vortex mode \(|\circlearrowright\rangle\) and is now characterized by a vector state \(\hat{\mathbf{u}}\) located within the Bloch sphere, with \(|\hat{\mathbf{u}}|\approx 0.70\). This reduction in the norm of the Bloch vector indicates that the final qubit state is a mixed state. This mixing is a consequence of the presence of the pure dephasing mechanism, which suppresses the off-diagonal elements of the density operator.
The application of the Hadamard gate to the single-qubit state \(|1\rangle\equiv|p_{y}\rangle\) is depicted in Fig. 2(c), in which we can verify that the final state is \(|\circlearrowleft\rangle\), i.e., an anticlockwise current mode, as expected for the Hadamard operation. When the pure relaxation of the qubit initial state is considered instead, with \(\gamma_{r}=0.2\mathcal{P}\), Fig. 2(d) shows that the final qubit state vector after \(\tau\) does not coincide with \(|\circlearrowleft\rangle\) and suffers a small reduction of its norm (\(|\hat{\mathbf{u}}|\approx 0.96\)), indicating some degree of mixture.
To enhance our comprehension of the impacts stemming from pure relaxation and dephasing, as depicted in the Bloch sphere (Fig. 2) for the Hadamard gate, in Fig. 3 we delve
Figure 2: Evolution of the initial single-qubit state projected as a unit vector onto the Bloch sphere, under the application of the Hadamard gate defined in Eq. (10), considering that the auxiliary laser beam was applied for a period of time \(\tau=\frac{\pi}{2\mathcal{P}}\). Panels (a) and (b) depict the dynamics of the initial qubit state \(\ket{0}\equiv\ket{p_{x}}\) for a Hadamard gate, in the absence and presence of pure dephasing, respectively. The same is shown in panels (c) and (d), but with \(\ket{1}\equiv\ket{p_{y}}\) as the single-qubit initial state, now in the absence and presence of spontaneous relaxation, respectively. The lateral color bar indicates the corresponding timescale for each Bloch vector, from the initial time \(\tau=0\) to the final time \(\frac{\pi}{2\mathcal{P}}\).
Figure 3: Single-qubit state fidelity [Eq. (28)] and Von-Neumann entropy for a Hadamard gate operation [Eq. (10)] as a function of either pure dephasing [panels (a)-(b)] or spontaneous relaxation [panels (c)-(d)] rates \(\gamma_{d}\) and \(\gamma_{r}\), respectively, considering the qubit initial state \(\ket{1}\), with the \(x\)-axis on a logarithmic scale. Both rates vary between \(0\) and \(0.4\mathcal{P}\).
into the fidelity and Von-Neumann entropy of the resulting single-qubit state, as outlined in Eqs. (28) and (27). In Fig. 3(a), we observe a notable exponential reduction in fidelity as the pure dephasing rate \(\gamma_{d}\) increases. Simultaneously, the Von-Neumann entropy exhibits a corresponding increase, beginning at the value of \(0\) when \(\gamma_{d}=0\), indicating a completely pure state. As \(\gamma_{d}\) progresses to \(0.4\mathcal{P}\), the Von-Neumann entropy approaches \(1\), indicating the formation of an almost entirely mixed state. This strong enhancement of the Von-Neumann entropy is in agreement with the pure dephasing mechanism, which mixes the basis states of the qubit.
Figs. 3(c) and (d) exhibit a similar pattern to their counterparts in panels (a) and (b) regarding fidelity and Von-Neumann entropy, but as functions of the pure spontaneous relaxation rate \(\gamma_{r}\). It is worth noting that while the impact of pure relaxation on the final qubit state appears to be less severe when compared to pure dephasing, it is crucial to emphasize that in the case of pure relaxation, the qubit exchanges energy with its surrounding environment, leading to a complete loss of information, characterizing an irreversible process [62].
Our analysis now shifts towards the operation of a two-qubit gate, specifically examining the impact of pure dephasing and spontaneous relaxation in the CPHASE operation, as defined by Eq. (16). As previously mentioned, the CPHASE gate falls into the category of _entangling gates_, as its application leads to a final state that cannot be decomposed into individual single-qubit states. This characteristic makes the CPHASE gate particularly suitable for investigating quantum concurrence [see Eq. (29)].
In Fig. 4 we explore how the two-qubit state fidelity and concurrence change under the application of a CPHASE gate when the spontaneous relaxation and dephasing mechanisms are accounted for equally in both neighboring traps. The two-qubit initial state is prepared in the same state employed in Eq. (17), which leads to an entangled state after the application of the CPHASE gate as defined in Eq. (16). Fig. 4(a) and (b) show both the fidelity of the final qubit state and the quantum concurrence as a function of pure dephasing. In addition to the expected reduction of the fidelity in Fig. 4(a) when compared to the ideal case (\(\gamma_{d}=\gamma_{r}=0\)), in Fig. 4(b) we observe a progressive degradation of the entangled output state, indicated by the exponential decline in concurrence. It is worth noting that when \(\gamma_{d}=0\), the entanglement between the qubit states reaches its maximum, with \(C(\hat{\rho})=1\). However, as the pure dephasing rate increases, the off-diagonal (coherence) elements of the density operator undergo significant reduction, leading to a nearly complete loss of entanglement in the system. This is evident in the upper limit case of \(\gamma_{d}=0.4J_{12}\), where \(C(\hat{\rho})\approx 0.16\), corresponding to a final state fidelity of approximately \(0.76\), as depicted in Fig. 4(a).
Fig. 4(c) and (d) also show the fidelity and concurrence of the final entangled two-qubit state, but for the case in which the CPHASE gate operation is subjected to a spontaneous relaxation mechanism. As the spontaneous relaxation rate \(\gamma_{r}\) increases, both fidelity and concurrence noticeably decrease. However, it is noteworthy that for the highest value of \(\gamma_{r}\) considered, both the output state fidelity and concurrence surpass those observed in the case of pure dephasing, as depicted in Figs. 4(a)-(b). Specifically, for \(\gamma_{r}=0.4J_{12}\), the fidelity reaches approximately \(0.90\), while \(C(\hat{\rho})\approx 0.66\). This significant difference between the effects of pure dephasing [Figs. 4(a)-(b)] and spontaneous relaxation [Figs. 4(c)-(d)] arises from the distinct impact that each mechanism has on the density operator. Pure dephasing primarily reduces the off-diagonal elements of the density operator, which encode information about quantum coherence
Figure 4: Two-qubit state fidelity [Eq. (28)] and quantum concurrence [Eq. (29)] for a CPHASE gate operation as a function of pure dephasing [panels (a)-(b)] or spontaneous relaxation [panels (c)-(d)] rates \(\gamma_{d}\) and \(\gamma_{r}\), respectively, with the \(x\)-axis on a logarithmic scale. The two-qubit initial state is set up as \(\ket{\psi_{0}}=|\circlearrowright\rangle_{\text{C}}|\circlearrowleft\rangle_{\text{T}}\), resulting in an entangled final state, as shown in Eq. (17). Panel (e) maps the behavior of the concurrence as a function of both \(\gamma_{d}\) and \(\gamma_{r}\) for the same CPHASE gate applied to \(\ket{\psi_{0}}\).
between distinct qubits. In contrast, spontaneous relaxation predominantly affects the diagonal elements (populations) of the density operator, leaving the off-diagonal elements relatively unchanged.
To provide a comprehensive overview of the concurrence behavior in the presence of both pure dephasing and spontaneous relaxation during the application of the CPHASE gate, we present a concurrence colormap in Fig. 4(e). This panel reveals an optimal range for both rates where entanglement between the two qubits is almost entirely preserved, specifically \(0.90\leq\mathcal{C}(\hat{\rho})\leq 1\), for \(\gamma_{d}\leq 0.03J_{12}\) and \(\gamma_{r}\leq 0.1J_{12}\). This region is enclosed by the first white dashed line in Fig. 4(e).
## V Conclusions and outlook
In this work, we explore the possibility of employing elliptically trapped polariton condensates as quantum logic gates. The single-qubit basis states \(|0\rangle\) and \(|1\rangle\) in each condensate are defined as the \(|p_{x}\rangle\) and \(|p_{y}\rangle\) states, respectively. These states are associated with the spatial dipolar distributions of the polariton density along the orthogonal axes of the trap, while their energy splitting can be adjusted through the geometrical ellipticity of the trap via SLMs. Distinct linear combinations of these \(p\)-states describe polariton condensates with integer OAM, which carry clockwise and anticlockwise superfluid current modes.
By introducing an auxiliary laser beam in each trap, we demonstrate the feasibility of implementing a versatile set of universal single-qubit operations, including Pauli and Hadamard gates. Furthermore, by exploring different types of interaction between the two traps, we unveil the potential for executing two-qubit gate operations such as CPHASE and CNOT gates. These two-qubit gates fall into the crucial category of entangling gates, which are fundamental for quantum computation.
We also investigate how quantum error sources, common in polariton condensates such as pure dephasing and spontaneous relaxation from the qubit's excited state, impact the final state of the proposed two-qubit system, particularly for Hadamard and CPHASE gate operations. To address this, we numerically compute the corresponding density operator via the Master Equation approach. Subsequently, we assess key performance metrics, including the fidelity of the final state, Von-Neumann entropy, and quantum concurrence.
In the context of local quantum computing operations, it is important to ensure that any proposal for the practical implementation of a quantum computer satisfies DiVincenzo's criteria [52], which are, _ipsis litteris_, the following: _(i) A scalable physical system with well-characterized qubits; (ii) The ability to initialize the state of the qubits to a simple fiducial state, such as \(|000\ldots\rangle\); (iii) Long relevant decoherence times, much longer than the gate operation time; (iv) A universal set of quantum gates and (v) A qubit-specific measurement capability_. Below, we briefly discuss each of these criteria within our proposal, with the exception of criterion _(iv)_, which constitutes the main finding of the current work. This concise discussion aims to outline potential avenues for realizing the proposed optically trapped polariton condensate qubits.
Criterion _(i)_ means, at first, that a quantum computer must contain several quantum bits, which, in the context of our system, implies an optical device with a collection of coupled traps of polariton condensates. Furthermore, each qubit (trap) must have its physical parameters well characterized, such as the qubit Hamiltonian, the presence of couplings between distinct qubits, and the interaction with external fields, which can be used for initializing the qubit state and performing quantum logic operations. Considering both these features, we draw attention to the work of Alyatkin _et al._ [40], who experimentally explored a triangular lattice of all-optically driven trapped polariton condensates with integer OAM \(l=\pm 1\), carrying vortex (\(l=+1\)) and antivortex (\(l=-1\)) states with an Ising-type interaction between them. This system could be extended to encompass several traps of polariton condensates, effectively creating a multiple-qubit setup, with optical tunability for each individual component, as described by Eq. (2). Moreover, the Ising-type interaction between trapped condensates at distinct lattice sites favors the implementation of a CPHASE gate, cf. Eq. (16).
The capability to set the initial qubit states, as indicated by criterion _(ii)_, arises from the obvious prerequisite that input states must be precisely defined before implementing any quantum computing operation. Furthermore, the specific need to initialize the system in the global ground state \(|000\ldots\rangle\), equivalent to \(|p_{x}p_{x}p_{x}\ldots\rangle\) within our proposal, is required by an effective quantum error correction implementation, which demands a constant and pristine source of qubits in a low-entropy state [52]. In the context of the device sketched in Fig. 1, the direction of a polariton vortex can be controlled by means of an ultra-short (120 fs) off-resonant optical control pulse [70], for instance, in which the final direction of the vortex depends on the power and duration of this pulse. Trapped polariton condensates can also be effectively controlled by external rotating potentials, as evidenced by recent experimental findings [59; 60] and theoretically described by Yulin _et al._ [71]. Furthermore, we draw attention to the possibility of experimentally tuning the coupling between neighboring polariton traps within a two-dimensional network through all-optical methods, as detailed in Ref. [50]. This result holds significant implications for the realization of two-qubit quantum gates, as detailed in Section II.2.
Regarding point _(iii)_, the decoherence time plays a pivotal role in the dynamics of any quantum system coupled to its surrounding environment, since it indicates for how long the quantum behavior is preserved before succumbing to the classical one. This means that the decoherence time \(\tau_{\text{coh}}\) for a quantum gate operation should be long enough to ensure that the quantum coherence of the system is preserved, i.e., it should be longer than the time needed to perform a quantum operation. In our analysis of single- and two-qubit gates, the "clock time" for each corresponding operation is given by \(\tau\). Specifically, a single operation should take \(\tau\sim 10^{-5}\)-\(10^{-4}\,\tau_{\text{coh}}\) [52] in order to preserve the quantum nature of the computing process and also ensure the performance of error-correction mechanisms. In the scenario of polariton condensates, the decoherence time can reach several ns [46; 47; 48], which is related to the time-dependent
fluctuations of the overall phase of the condensate wave function. However, the spatial coherence of an optically trapped polariton condensate driven by a CW pump has been reported as practically uniform [24; 34; 48]. Notably, in an experiment conducted by Sedov _et al._ [24], no traces of any loss of spatial coherence were detected throughout the entire duration of the optical experiment, suggesting a decoherence time on the order of milliseconds (\(\tau_{\mathrm{coh}}\sim\) ms).
Finally, as addressed by criterion _(v)_, the outcomes of a quantum computing process must be read out. In other words, the probability outcomes associated with states of the system, encoded by the density operator, must be experimentally accessed. In the framework of our proposal, the information of the final two-qubit states can be detected by implementing a protocol inspired by the work of Mair _et al._ [72], originally devised for characterizing photons with entangled OAM. As a first step, the light emitted by each of the traps of polariton condensates would pass through an SLM, where a predefined function is applied to manipulate the spatial phase distribution of the light. These preset functions would correspond to either the circular current states or the dipolar \(p\)-states, oriented along the major and minor axes of the corresponding elliptical trap. Subsequently, once the light has passed through the SLM, it is directed into a single-mode optical fiber. This optical fiber operates as a filter, allowing only the Gaussian mode to propagate while suppressing all other modes. At the termination point of each optical fiber, a photodetector is positioned and calibrated to generate a binary signal. This binary signal is designed to indicate a value of 1 when the intensity of the Gaussian mode surpasses a predefined critical threshold, and it registers 0 when the intensity falls below this threshold. Then, a coincidence counter compares the signals coming from both photodetectors. If both record the value of 1, the counter registers 1; otherwise, it registers 0. By considering the four possible preset functions for each of the two SLMs, we obtain 16 independent measurements of the states of the two qubits projected along the \(z\) or \(x\) axes of the corresponding Bloch spheres. By accumulating statistics over thousands of measurements, a high-fidelity measurement matrix of the system is obtained [53; 73]. This matrix can then be converted into the corresponding density operator, which encodes the probabilities of the system states.
###### Acknowledgements.
L.S.R. and I.A.S. acknowledge the support from the Icelandic Research Fund (Rannis), grant No. 163082-051 and Project Hybrid Polaritonics. A.K. acknowledges the support of the Russian Foundation for Basic Research (Grant No. 19-52-12032). We acknowledge Aleksey Fedorov, Helgi Sigurdsson, and Boris Altshuler for fruitful discussions. We also acknowledge Roman Cherbunin for designing the read-out scheme of a two-qubit polariton gate.
# Understanding ice and water film formation on soil particles by combining DFT and Casimir-Lifshitz forces

M. Boström, S. Kuthe, S. Carretero-Palacios, V. Esteso, Y. Li, I. Brevik, H. R. Gopidi, O. I. Malyi, B. Glaser, C. Persson

arXiv:2309.08240v1, 2023-09-15, http://arxiv.org/abs/2309.08240v1
###### Abstract
Thin films of ice and water on soil particles play crucial roles in environmental and technological processes. Understanding the fundamental physical mechanisms underlying their formation is essential for advancing scientific knowledge and engineering practices. Herein, we focus on the role of the Casimir-Lifshitz force, also referred to as dispersion force, in the formation and behavior of thin films of ice and water on soil particles at 273.16 K, arising from quantum fluctuations of the electromagnetic field and depending on the dielectric properties of interacting materials. We employ the first-principles density functional theory (DFT) to compute the dielectric functions for two model materials, CaCO\({}_{3}\) and Al\({}_{2}\)O\({}_{3}\), essential constituents in various soils. These dielectric functions are used with the Kramers-Kronig relationship and different extrapolations to calculate the frequency-dependent quantities required for determining forces and free energies. Moreover, we assess the accuracy of the optical data based on the DFT to model dispersion forces effectively, such as those between soil particles. Our findings reveal that moisture can accumulate into almost micron-sized water layers on the surface of calcite (soil) particles, significantly impacting the average dielectric properties of soil particles. This research highlights the relevance of DFT-based data for understanding thin film formation in soil particles and offers valuable insights for environmental and engineering applications.
## I Introduction
Ice and water, omnipresent in nature, play pivotal roles in an array of environmental and technological phenomena, as evidenced by multiple studies [1; 2; 3]. Therefore, comprehending the primary physical principles that dictate the formation of thin ice and water films is crucial for numerous scientific pursuits and engineering applications. One specific example lies within civil engineering, where thin films of ice and water on soil particles bear significant implications [4]. They influence the construction and maintenance of critical infrastructure, including building foundations, roads, and bridges. Gaining insights into the generation and characteristics of these films enables engineers to design structures with enhanced resistance to frost heave and thaw settlement damage. Likewise, soil is a complex and dynamic system, and understanding its behavior at a fundamental level can lead to new insights and discoveries in geology, chemistry, and physics. Comprehension of the formation of thin films of ice and water on soil particles is also critical for predicting and mitigating the impact of climate change on soil ecosystems. As temperatures fluctuate, the formation and melting of ice and water films affect the availability of nutrients and water to plants, the stability of soil structure, and, overall, the health of soil micro-organisms [5]. In addition, knowledge of the formation of thin films of ice and water on soil particles is important for advancing our understanding of basic questions such as frost heave [6] and, more generally, the physical and chemical properties of soil. In recent years, there has been growing interest in the role of the Casimir-Lifshitz force in the formation and behavior of thin films of ice and water in diverse (astro-)geological systems, covering ice-seeding particles in clouds [7; 8] and the potential involvement of insulating gas hydrate caps [9] in facilitating the persistence of liquid water on celestial bodies like the moon Enceladus [10]. This force, which arises from quantum fluctuations of the electromagnetic (EM) field and is also called the dispersion force, strongly depends on the dielectric properties
of the interacting materials, amongst other parameters.
Motivated by the above, herein, we investigate the necessity of accurate dielectric functions derived from density functional theory (DFT), which can provide more information about the optical response of materials than standard experimental measurements, for reliable modeling of dispersion forces between soil particles. Specifically, we present the imaginary part of the dielectric functions (related to the dissipative properties of the materials) for CaCO\({}_{3}\) and Al\({}_{2}\)O\({}_{3}\), vital components found in diverse soil compositions.[11; 12] These are then used with a Kramers-Kronig relationship and different extrapolations to calculate the real-valued dielectric function evaluated at imaginary frequencies, which facilitates the computation. This latter quantity is used to calculate forces and free energies. Our main objective is to determine how well-established low and medium-energy optical spectra from DFT can be combined with high-energy extrapolations, aiming to confirm the validity of previous conclusions based on the comparison between experimental optical data and theoretical forces.[13] Remarkably, our findings indicate that the calculated interaction energies remain largely unaffected by the specific approach employed for the low and high-energy extrapolations in a few significant scenarios.
Our DFT-based predictions indicate a dielectric constant of 8.7 for calcite, which aligns closely with the previously measured static dielectric constant range of 8 to 9.[14] However, our current research reveals a significant phenomenon: the accumulation of moisture, in the form of water molecules, in micron-sized layers on the surface of calcite particles found in soil. This accumulation profoundly impacts the average dielectric properties of soil particles. Notably, existing models[14] utilized to estimate water content in soils rely on a mineral static dielectric constant value of 5 as an input parameter. This poses a potential problem since calcite is a primary constituent in various soils. Consequently, accurate modeling of soil dielectric properties necessitates a comprehensive understanding of these properties for constituent materials such as calcite, quartz, water, and others.[14] Given the significant impact of calcite on various soil compositions, addressing this issue becomes imperative.
## II The semi-classical theory for Lifshitz interactions
### Some initial considerations
The semi-classical theory of intermolecular forces follows from the realization that much of the quantum electrodynamics formalism [15; 16] can be derived via Maxwell's equations with boundary conditions, and the subsequent assignment to each quantized EM mode of a zero-point energy at zero temperature (or, at finite temperatures, the free energy). Previously, it was believed that the complex Lifshitz theory [16] required knowledge of the dielectric function over the entire spectrum to calculate dispersion forces in layered structures. However, van Kampen, Nijboer, and Schram [17] made progress in simplifying the theory by demonstrating the derivation of non-retarded interactions from a semi-classical approach. In 1969, Parsegian and Ninham [13] further advanced this work, leading to numerous Lifshitz and Casimir interaction calculations. Although outdated in light of subsequent publications, Parsegian and Ninham's pioneering paper [13] remains significant. Their breakthrough was recognizing that only partial knowledge of the optical spectrum of different materials is sufficient to understand the van der Waals-Lifshitz interaction between planar surfaces separated by intervening material. [18; 19; 20; 13] In what follows, we will explore similar concepts to examine how approximations in DFT-based material properties present in soil particles relate to the accuracy of calculated Hamaker constants and Lifshitz interactions. Our findings confirm that different high-frequency extrapolations for evaluating DFT-derived dielectric functions are not crucial for obtaining accurate Lifshitz forces and Hamaker constants.
### Optical quantities and their interrelationships
The real (\(\varepsilon_{i}^{\prime}\)) and imaginary (\(\varepsilon_{i}^{\prime\prime}\)) parts of the dielectric function (for material \(i\) = 1, 2, and 3) are related via the well-known Kramers-Kronig relationships using Cauchy principal (\(P\)) value integration[21]
\[\varepsilon_{i}^{\prime}(\omega)=1+\frac{2}{\pi}P\int_{0}^{\infty}d\Omega \frac{\Omega\,\varepsilon_{i}^{\prime\prime}(\Omega)}{\Omega^{2}-\omega^{2}}. \tag{1}\]
We can also use the well-known relationship to the refractive index \(n_{i}\) and the extinction coefficient \(k_{i}\)[21]
\[\sqrt{\varepsilon_{i}^{\prime}(\omega)+i\varepsilon_{i}^{\prime\prime}( \omega)}=n_{i}(\omega)+ik_{i}(\omega), \tag{2}\]
which can be rewritten as
\[\varepsilon_{i}^{\prime}(\omega)=n_{i}(\omega)^{2}-k_{i}(\omega)^{2}, \tag{3}\]
and
\[\varepsilon_{i}^{\prime\prime}(\omega)=2n_{i}(\omega)k_{i}(\omega). \tag{4}\]
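For numerical work, these relations are straightforward to apply to tabulated optical data. The following is a minimal Python sketch (the function and variable names are ours, not from any specific package) converting measured \(n\) and \(k\) into \(\varepsilon^{\prime}\) and \(\varepsilon^{\prime\prime}\) via Eqs. (3) and (4):

```python
import numpy as np

def nk_to_eps(n, k):
    """Convert refractive index n and extinction coefficient k (arrays
    on a common frequency grid) into the real and imaginary parts of
    the dielectric function, Eqs. (3) and (4)."""
    eps_re = n**2 - k**2   # Eq. (3)
    eps_im = 2.0 * n * k   # Eq. (4)
    return eps_re, eps_im
```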
The Kramers-Kronig transformation requires a sufficiently wide frequency range to obtain accurate estimates for the complex-valued dielectric function. _Ab initio_ DFT modeling of the dielectric function has the advantage over utilizing experimental data of describing the response functions for much larger frequencies. Normally, they are calculated from 0 to \(\sim 1.5\times 10^{17}\,\mathrm{rad/s}\) (i.e., \(\sim 100\,\mathrm{eV}\)), while measurements are restricted to photon energies typically below some tens of eV and depend on the source used for the exciting beam. Moreover, the wider frequency range is also important for accurately calculating the Casimir-Lifshitz forces.
### The Ninham-Parsegian model for Lifshitz forces
We revisit the Ninham-Parsegian [13] model for the Lifshitz interaction, closely following the approach outlined in some remarkably lucid papers from the 1970s. [13; 18; 19] We aim to enhance our understanding of this model and its implications. At zero temperature, the non-retarded (\(NR\)) van der Waals-Casimir-Lifshitz interaction energy is simply the change in zero-point energies (\(\hbar\omega_{\lambda}/2\)) of the allowed quantized EM surface modes when two surfaces are at a finite distance \(d\) compared to when the surfaces are infinitely far apart,
\[E^{NR}(d)=\frac{\hbar}{2}\sum_{\lambda}\int\frac{d^{2}q}{(2\pi)^{2}}[\omega_{ \lambda}(d)-\omega_{\lambda}(\infty)], \tag{5}\]
where the zero-point energies are summed over the allowed (\(\lambda\)) modes and integrated over the different wavevectors (\(q\)). The EM surface modes arise from solving Maxwell's equations with appropriate boundary conditions. The dispersion equation to be solved to obtain the surface modes in the non-retarded regime (i.e., at short separations between the plates, where one can ignore the finite velocity of light) is, [18]
\[D(d,\omega)=1-\frac{[\varepsilon_{1}(\omega)-\varepsilon_{2}(\omega)][ \varepsilon_{3}(\omega)-\varepsilon_{2}(\omega)]}{[\varepsilon_{1}(\omega)+ \varepsilon_{2}(\omega)][\varepsilon_{3}(\omega)+\varepsilon_{2}(\omega)]}e^{ -2qd}=0 \tag{6}\]
with subscripts 1 and 3 representing the two interacting materials, through material 2.
Following the generalized argument theorem [18] (see the equations below), a much simplified formula for the intermolecular forces between surfaces can be obtained. Assuming an analytic function \(\Delta\) with zeros at \(\omega_{\lambda}(d)\) and poles at \(\omega_{\lambda}(\infty)\), complex analysis produces, [18]
\[\sum_{\lambda}\frac{\hbar}{2}[\omega_{\lambda}(d)-\omega_{\lambda}(\infty)]= \frac{1}{2\pi i}\oint_{C}\frac{\hbar\omega}{2}\frac{d\omega}{\Delta(d,\omega) }\frac{\partial\Delta(d,\omega)}{\partial\omega}. \tag{7}\]
Here, \(C\) is the closed path going down the imaginary axis and closing in the right-hand plane forming a semi-circle at which all quantities vanish. Doing a partial integration, we find, [18]
\[\frac{1}{2\pi i}\oint_{C}\frac{\hbar\omega}{2}\frac{d\omega}{\Delta(d,\omega) }\frac{\partial\Delta(d,\omega)}{\partial\omega}=\frac{\hbar}{4\pi}\int_{- \infty}^{\infty}d\xi\ln[\Delta(d,i\xi)], \tag{8}\]
where \(\xi\) is a variable of integration (which at finite temperatures goes over to the so-called Matsubara frequencies). By substituting the above expression into Eq. (5) we get,
\[E^{NR}(d)\approx\frac{\hbar}{4\pi^{2}}\int_{0}^{\infty}dqq\int_{0}^{\infty}d \xi\ln[1-\Delta_{12}^{NR}\Delta_{32}^{NR}e^{-2qd}], \tag{9}\]
where the non-retarded (NR) reflection coefficients are given as
\[\Delta_{ij}^{NR}=\frac{\varepsilon_{i}(i\xi)-\varepsilon_{j}(i\xi)}{ \varepsilon_{i}(i\xi)+\varepsilon_{j}(i\xi)}. \tag{10}\]
This can be generalized when the finite speed of light is accounted for by writing it as a sum of a transverse magnetic (TM) and a transverse electric (TE) contributions, [18]
\[E(d)\approx\frac{\hbar}{4\pi^{2}}\int_{0}^{\infty}dqq\int_{0}^{\infty}d\xi\{ \ln[G^{TM}]+\ln[G^{TE}]\}, \tag{11}\]
\[G^{TM/TE}=1-\Delta_{12}^{TM/TE}\Delta_{32}^{TM/TE}e^{-2\gamma_{2}d}, \tag{12}\]
\[\Delta_{ij}^{TM}=\frac{\gamma_{j}\varepsilon_{i}(i\xi)-\gamma_{i}\varepsilon_ {j}(i\xi)}{\gamma_{j}\varepsilon_{i}(i\xi)+\gamma_{i}\varepsilon_{j}(i\xi)}, \ \Delta_{ij}^{TE}=\frac{\gamma_{j}-\gamma_{i}}{\gamma_{j}+\gamma_{i}}, \tag{13}\]
where \(\gamma_{i}^{2}=q^{2}+\xi^{2}\varepsilon_{i}/c^{2}\), and \(c\) is the speed of light.
At finite temperature, \(T\), the zero point energy of each mode should be replaced with the Helmholtz free energy, [18]
\[F(\omega,T)=k_{B}T\ln[2\sinh(\hbar\omega/[2k_{B}T])]. \tag{14}\]
with \(k_{B}\) the Boltzmann constant. A derivative of the Helmholtz free energy expression, arising from a partial integration in the same way as in Eq. (8), provides a factor \(\coth[\hbar\omega/(2k_{B}T)]\). The coth factor has an infinite number of poles on the imaginary axis. As a consequence, zero and finite temperatures can be dealt with via a simple substitution, [18]
\[\frac{\hbar}{2\pi}\int_{0}^{\infty}d\xi\to k_{B}T\sum_{m=0}^{\infty}{}^{\prime },\xi\rightarrow\xi_{m}=2\pi k_{B}Tm/\hbar, \tag{15}\]
where the sum originally runs from minus infinity to plus infinity, leading to a factor of 1/2 for the \(m\)=0 term (indicated by the prime on the sum). The quantity related to forces expressed in Matsubara frequencies, \(\xi_{m}\), can be obtained directly from \(\varepsilon_{i}^{\prime}(\omega)\) and \(\varepsilon_{i}^{\prime\prime}(\omega)\), i.e. from the materials' optical properties, via the well-known Kramers-Kronig relationship [21]
\[\varepsilon_{i}(i\xi_{m})=1+\frac{2}{\pi}\int_{0}^{\infty}d\omega\frac{\omega \varepsilon_{i}^{\prime\prime}(\omega)}{\omega^{2}+\xi_{m}^{2}}. \tag{16}\]
This quantity is real-valued and decays smoothly towards one, leading to very simple calculations.
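As an illustration, Eq. (16) can be evaluated by simple quadrature over a tabulated \(\varepsilon^{\prime\prime}(\omega)\). The sketch below is our own illustrative code (the frequency grid must extend well beyond the largest \(\xi_{m}\) of interest); it also generates the Matsubara frequencies of Eq. (15):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant [J s]
KB = 1.380649e-23       # Boltzmann constant [J/K]

def matsubara_frequencies(T, m_max):
    """xi_m = 2*pi*kB*T*m/hbar for m = 0..m_max, Eq. (15)."""
    return 2.0 * np.pi * KB * T * np.arange(m_max + 1) / HBAR

def eps_imaginary_freq(omega, eps_im, xi):
    """Kramers-Kronig transform of tabulated eps''(omega) to the
    real-valued eps(i*xi) of Eq. (16), by trapezoidal quadrature;
    omega and xi are angular frequencies in rad/s."""
    xi = np.atleast_1d(xi)
    integrand = omega * eps_im / (omega**2 + xi[:, None]**2)
    return 1.0 + (2.0 / np.pi) * np.trapz(integrand, omega, axis=1)
```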
The leading non-retarded interaction energy (using Eq.(9)) is, [13]
\[E^{NR}(d)\approx\frac{-A}{12\pi d^{2}}, \tag{17}\]
with \(A\) a Hamaker constant for the system. We will use the finite temperature non-retarded expression for the Hamaker constant,
\[A=-6k_{B}T\sum_{m=0}^{\infty}{}^{\prime}\int_{0}^{\infty}dqq\ln[1-\Delta_{12}^ {NR}\Delta_{32}^{NR}e^{-2q}]. \tag{18}\]
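For concreteness, the Matsubara sum of Eq. (18) can be evaluated as in the sketch below (our own illustrative code; it takes \(\varepsilon(i\xi)\) for the three media as callables). Note that \(q\) here is the dimensionless product of wavevector and separation, and the \(m=0\) term carries a weight of 1/2:

```python
import numpy as np
from scipy.integrate import quad

KB = 1.380649e-23       # Boltzmann constant [J/K]
HBAR = 1.054571817e-34  # reduced Planck constant [J s]

def hamaker_constant(eps1, eps2, eps3, T, m_max=2000):
    """Non-retarded Hamaker constant of Eq. (18); eps1 and eps3 are the
    half-spaces and eps2 the intervening medium, each a callable
    returning eps(i*xi)."""
    A = 0.0
    for m in range(m_max + 1):
        xi = 2.0 * np.pi * KB * T * m / HBAR  # Eq. (15)
        d12 = (eps1(xi) - eps2(xi)) / (eps1(xi) + eps2(xi))
        d32 = (eps3(xi) - eps2(xi)) / (eps3(xi) + eps2(xi))
        val, _ = quad(lambda q: q * np.log(1.0 - d12 * d32 * np.exp(-2.0 * q)),
                      0.0, 30.0)  # integrand decays as exp(-2q)
        A += (0.5 if m == 0 else 1.0) * val
    return -6.0 * KB * T * A
```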
The measurements by Haydon and Taylor [22] of the interaction energy were used in the past [13] to estimate the Hamaker
constant for water surfaces separated by a bimolecular lipid film. The measured energy was \(-3.94\times 10^{-6}\,\mathrm{J/m^{2}}\) for a film of estimated thickness \(56\,\mathrm{\SIUnitSymbolAngstrom}\). To test the Lifshitz theory, one must model dielectric functions derived from optical data or, as has become common in recent years, from density functional theory (DFT). Within the Ninham and Parsegian model [13] the dielectric functions \(\varepsilon(\omega)\) of water, oil (resembling the lipid membrane), and many other materials can be modeled as,
\[\varepsilon(\omega)=1+\frac{c_{rot}}{1-i\omega/\omega_{rot}}+\sum_{j}\frac{c_{ j}}{1-(\omega/\omega_{j})^{2}+i\gamma_{j}\omega}, \tag{19}\]
where \(\omega_{j}\) are characteristic frequencies and \(c_{j}\) are proportional to the oscillator strengths. For calculations of the Hamaker constant, one requires the dielectric functions for imaginary frequencies,
\[\varepsilon(i\xi)=1+\frac{c_{rot}}{1+\xi/\omega_{rot}}+\sum_{j}\frac{c_{j}}{1+ (\xi/\omega_{j})^{2}}, \tag{20}\]
where the damping term (\(\gamma_{j}\)) on the imaginary frequency axis can usually be ignored since bandwidths are generally much smaller than the absorption frequencies. The rotational relaxation (\(\omega_{rot}\)) occurs at very low frequencies. In the far ultraviolet, both water and the oil film behave as a simple plasma with
\[\varepsilon(i\xi)=1+\frac{\omega_{P}^{2}}{\xi^{2}}\,, \tag{21}\]
with \(\omega_{P}^{2}=4\pi Ne^{2}/m\) the plasma frequency, where \(N\), \(e\), and \(m\) are the electron density, charge, and mass, respectively. Since water and oil have similar electron densities, [13] contributions from the ultraviolet frequency region and higher were ignored. The experimental Hamaker constant for water surfaces separated by a bimolecular lipid film was (within large error estimates) equal to \(A\sim 4.66\times 10^{-21}\,\mathrm{J}\). [13; 22] Testing different parameters [13] suggested that removing the ultraviolet contribution gave a Hamaker constant \(A\sim 3.9\times 10^{-21}\,\mathrm{J}\), while varying the refractive index of the oils led to \(A\sim 4.5-5.4\times 10^{-21}\,\mathrm{J}\).
Since the 1970s, numerous groups worldwide have conducted extensive comparisons between theory and experiments, analyzing the effect of the accuracy of the optical data on the calculation of dispersion forces. Despite this, the fundamental concepts remain largely unchanged. In the following discussion, we will demonstrate that for a few selected model examples, the treatment of the extrapolated high-frequency tail in DFT-based dielectric functions for imaginary frequencies is not as critical as initially anticipated when it comes to accurately describing Hamaker constants and Lifshitz interactions.
## III DFT modeling of the solids
The modeling of Al\({}_{2}\)O\({}_{3}\) and CaCO\({}_{3}\), essential components in diverse soil compositions, is performed within DFT and with the projector augmented wave (PAW) method using \(GW\)-type core potentials, as implemented in the Vienna Ab initio Simulation Package (VASP). [23] The valence configurations for the atoms are chosen as C: \(2s^{2}p^{2}\), O: \(2s^{2}p^{4}\), Al: \(2s^{2}p^{6}3s^{2}p^{1}\), and Ca: \(3s^{2}p^{6}4s^{2}\). As these compounds are wide-gap insulators, we employ as default the generalized gradient approximation with the revised exchange-correlation functional for solids (PBEsol), developed by Perdew et al.; [24] the band gap energy is corrected with a hybrid functional. The unit cells are described by ten-atom trigonal lattices, and the irreducible Brillouin zones are sampled by a \(6\times 6\times 6\)\(\mathbf{k}\)-mesh. A quasi-Newton (variable metric) algorithm is utilized for the structural relaxation with a cut-off energy of \(800\,\mathrm{eV}\), to an accuracy of \(10^{-4}\,\mathrm{eV/\AA}\) for the forces on all atoms. Thereafter, the charge density is generated with a \(600\,\mathrm{eV}\) cut-off energy, using the linear tetrahedron integration, and iterated in the electronic self-consistent loop to reach an energy accuracy of \(10^{-6}\,\mathrm{eV}\). The irreducible representations of the electronic eigenstates are determined by the open-source program Irvsp. [25]
From the electronic structure, the imaginary part \(\varepsilon^{\prime\prime}(\omega)\) of the macroscopic dielectric function is calculated. With the independent single-electron eigenfunctions, the response due to electronic transitions is described as the joint density-of-states modulated by the optical matrix elements. In the long-wavelength limit, the latter reads
\[\varepsilon^{\prime\prime\,ele}_{\alpha\alpha}(\omega)=\lim_{ \mathbf{q}\to 0}\frac{4\pi^{2}e^{2}}{V_{\Omega}q^{2}}\sum_{v,c,\mathbf{k}} \delta(\epsilon_{c,\mathbf{k}}-\epsilon_{v,\mathbf{k}}-\hbar\omega)\\ \times\langle u_{c,\mathbf{k}+\mathbf{e}_{\alpha}q}|u_{v,\mathbf{ k}}\rangle\langle u_{v,\mathbf{k}}|u_{c,\mathbf{k}+\mathbf{e}_{\alpha}q}\rangle\,, \tag{22}\]
in the three Cartesian directions \(\mathbf{e}_{\alpha}\). Here, \(V_{\Omega}\) is the unit-cell volume and \(u_{v/c}\) is the cell-periodic part of the valence (\(v\)) or conduction (\(c\)) state eigenfunction with the energy \(\epsilon_{v/c,\mathbf{k}}\). Local field effects are neglected. As the two compounds are insulators, we perform the \(\mathbf{k}\)-space summation by Blöchl's linear tetrahedron method. Since the accuracy of the calculation can strongly depend on the size of the \(\mathbf{k}\)-point grid, [26] we use a \(12\times 12\times 12\)\(\mathbf{k}\)-mesh, though the values of the low-frequency dielectric constants are sufficiently converged already for the charge density from the \(6\times 6\times 6\)\(\mathbf{k}\)-mesh.
Alumina and calcite are ionic compounds, and we, therefore, consider the local lattice dynamics. The vibrations associated with the longitudinal optical (LO) modes build up an electric field that screens the carriers. The dipole-active LO phonons and the corresponding transverse optical (TO) modes contribute to the dielectric response. In the long-wavelength limit, the phonon dispersion is approximated to be constant, and the ionic response is modeled as Lorentz oscillators.
\[\varepsilon^{\prime\prime\,ion}_{\alpha\alpha}(\omega)=\sum_{j}\frac{S_{j}\,\omega_{\mathrm{TO},j}^{2}\,\Gamma_{j}\,\omega}{(\omega_{\mathrm{TO},j}^{2}-\omega^{2})^{2}+\Gamma_{j}^{2}\omega^{2}}\,. \tag{23}\]
\(\Gamma_{j}\) is the damping and \(S_{j}\) is the oscillator strength of the \(j\)th mode in its vibration direction. We employ the
density functional perturbation theory to compute the Hessian matrix of the ionic displacements, incorporating the symmetry of the crystals.
The total imaginary part of the dielectric function is the summation of the two contributions. The corresponding dielectric response function for the real part of \(\varepsilon^{\prime}(\omega)\) is obtained from the Kramers-Kronig relation. For the average response functions, we take the arithmetic mean of the three Cartesian directions.
## IV Results
### Crystalline structures and dielectric response functions of alumina and calcite
Both Al\({}_{2}\)O\({}_{3}\) and CaCO\({}_{3}\) crystallize in the space group structure R\(\overline{3}c\) (\(D_{3d}^{6}\); No. 167), based on the ditrigonal-scalenohedral point group with rhombohedral Bravais lattices. Since the accuracy of split-off energies in the electronic structure can depend on bond lengths and bond angles, [27] we relax the crystalline structures with four different exchange-correlation functionals; see Table 1, where the two lattice constants describe the hexagonal lattices.
As expected, the local density approximation (LDA) overbinds by about 1%, while the regular generalized gradient approximation (PBE) underbinds by about 1%. Both the revised PBE for solids (PBEsol) and the hybrid functional (HSE with 30% Hartree-Fock exchange) agree very well with the experimental data. Since we want to compute the electronic transitions on a dense \(\mathbf{k}\)-mesh and to energetically very high states, we choose to use the PBEsol functional. Lattice parameters for the rhombohedral lattices are \(a=5.137\,\mathrm{\SIUnitSymbolAngstrom}\) and \(\gamma=55.34\,\mathrm{\SIUnitSymbolDegree}\) for Al\({}_{2}\)O\({}_{3}\) and \(a=6.307\,\mathrm{\SIUnitSymbolAngstrom}\) and \(\gamma=46.57\,\mathrm{\SIUnitSymbolDegree}\) for CaCO\({}_{3}\), with the PBEsol potential. Although the two oxides crystallize in the same space group symmetry, they have rather different crystalline structures (Supplemental Material (SM)[38]), which mainly depends on the cation sizes and valence configurations. For alumina, each O atom binds to two Al atoms with the bond length \(1.86\,\mathrm{\SIUnitSymbolAngstrom}\) and to two other Al atoms with the bond length \(1.97\,\mathrm{\SIUnitSymbolAngstrom}\). For calcite, each O atom has a bond to one C atom with the bond length \(1.29\,\mathrm{\SIUnitSymbolAngstrom}\) and to two Ca atoms with the bond length \(2.34\,\mathrm{\SIUnitSymbolAngstrom}\).
The underestimated gap energy for the PBE potential is adjusted by a constant energy shift of the conduction bands so that the \(\Gamma\)-point gap corresponds to that of HSE. The two compounds are wide-gap insulators, and we do not expect any valence-conduction band hybridization that could otherwise affect the band dispersion, [39] and thereby also the transition probability. The differences in the bond character are reflected in the electronic structures. Al\({}_{2}\)O\({}_{3}\) is an insulator with a direct gap at the \(\Gamma\)-point, and we estimate the gap energy to \(E_{g}^{dir}\approx 8.7\,\mathrm{eV}\). CaCO\({}_{3}\), on the other hand, has an indirect gap of \(E_{g}^{ind}\approx 7.3\,\mathrm{eV}\), located close to the \(\mathbf{k}\)-point (\(\underline{1},\underline{2},0\)), for which the direct gap is \(E_{g}^{dir}\approx 7.4\,\mathrm{eV}\).
Further, both compounds have the same single-group irreducible representations at the \(\Gamma\)-point for the energetically lowest conduction state (a singly degenerate \(\Gamma_{1}^{+}\)) and the topmost valence state (a singly degenerate \(\Gamma_{2}^{-}\)). However, while the second highest valence state in alumina is a doubly degenerate state (\(\Gamma_{3}^{-}\)) only \(0.04\,\mathrm{eV}\) below the topmost valence state, the corresponding state in calcite is singly degenerate (\(\Gamma_{2}^{+}\)), \(0.48\,\mathrm{eV}\) below the topmost \(\Gamma\)-point valence state. The irreducible representations for three symmetry points are presented in the SM. [38] Calcite has a flatter conduction band dispersion, and one could expect a stronger onset of the electronic dielectric response for this compound.
The dielectric response functions for alumina and calcite are presented in Fig. 1. The response due to electronic transitions contributes above the direct-gap energy on the eV scale, while the lattice dynamics contributes on the 0.1-eV scale; here, below \(0.2\,\mathrm{eV}\). We find that the PAW potential for Al with the electronic valence configuration \(2s^{2}p^{6}3s^{2}p^{1}\) easily yields incorrect vibrational frequencies, and we therefore instead use the corresponding potential with the valence configuration \(3s^{2}p^{1}\). As expected, CaCO\({}_{3}\) has a strong electronic response right above \(9\,\mathrm{eV}\), while Al\({}_{2}\)O\({}_{3}\) has a smoother increase of the response up to the energy \(14\,\mathrm{eV}\). Since CaCO\({}_{3}\) contains both a lighter and a heavier cation, it is natural that the compound has vibrations that are both lower and higher
\begin{table}
\begin{tabular}{c c c c c c} & LDA & PBE & PBEsol & HSE & Expt. \\ \hline Al\({}_{2}\)O\({}_{3}\) & & & & & \\ \(a\) [Å] & 4.728 & 4.809 & 4.777 & 4.742 & 4.7657 [28] \\ & & & & & 4.761 [29] \\ & & & & & 4.7597 [30] \\ \(c\) [Å] & 12.884 & 13.122 & 13.018 & 12.950 & 13.010 [28] \\ & & & & & 13.011 [29] \\ & & & & & 12.993 [30] \\ \(E_{g,\Gamma}^{dir}\) [eV] & 6.45 & 5.85 & 6.02 & 8.7 & 8.8 [31] \\ & & & & & 9.1 [32] \\ CaCO\({}_{3}\) & & & & & \\ \(a\) [Å] & 4.939 & 5.039 & 4.990 & 4.990 & 4.988 [33] \\ & & & & & 4.9889 [34] \\ \(c\) [Å] & 16.302 & 17.225 & 16.841 & 17.097 & 17.068 [33] \\ & & & & & 17.064 [34] \\ \(E_{g,\Gamma}^{dir}\) [eV] & 5.67 & 5.63 & 5.70 & 8.0 & 5.65-6.35 [35] \\ & & & & & 6.9-7.7 [37] \\ \end{tabular}
\end{table}
Table 1: Lattice constants \(a\) and \(c\) of alumina and calcite describing the hexagonal structures. The direct band-gap energy \(E_{g,\Gamma}^{dir}\) refers to the \(\Gamma\)-point.
in frequency than those of Al\({}_{2}\)O\({}_{3}\). From the figure, one can observe that the two compounds have rather different dielectric functions, both in the regime of the vibrational contribution and for higher frequencies where the electronic transitions contribute. In the SM, [38] we present the dielectric functions in more detail for both CaCO\({}_{3}\) and Al\({}_{2}\)O\({}_{3}\), showing that there is only a moderate difference between the components in the perpendicular and the parallel directions. That is also obvious from the dielectric constants (Table 2). Both compounds have high-frequency constants from the electronic contribution that are close to 3. The optical vibration modes contribute to the static dielectric constant; this contribution is somewhat larger for Al\({}_{2}\)O\({}_{3}\) (\(\sim\) 7.0) than for CaCO\({}_{3}\) (\(\sim\) 6.0). One can notice that the main difference is that Al\({}_{2}\)O\({}_{3}\) has a much larger response in the parallel direction. Overall, there is good agreement with the experimental findings, [40] and the average static dielectric constant is calculated to be 10.0 in Al\({}_{2}\)O\({}_{3}\) and 8.7 in CaCO\({}_{3}\).
### Sensitivity of DFT-based Hamaker constants for selected calcite systems
Before we set out to exploit the calculated dielectric functions at imaginary frequencies, we first explore how sensitive the results are to the low and high-frequency extrapolation methods.
The dielectric response for the Matsubara frequencies \(\varepsilon(i\xi_{m})\) is obtained from the Kramers-Kronig relation as in Eq. 16. The calculated electronic structure from DFT includes transition energies up to about 250 eV, much higher than the energies reached by standard experimental measurements with ellipsometers (typically, well below 10 eV). Transitions at higher energies have only a small excitation probability. However, in order to include high-energy transitions, we extrapolate the high-frequency tail of \(\varepsilon(i\xi_{m})\) with a \(1/\xi_{m}^{2}\) behaviour up to about 10 keV. With that extrapolation we can analyze the accuracy of the calculations of the Hamaker constants with respect to the number of Matsubara frequencies. Moreover, while the default calculations include semicore states down to 100 eV below the valence band maximum, we have also generated spectra of the dielectric functions with no semicore states for the cations Al and Ca, i.e., \(3s^{2}p^{1}\) and \(3p^{6}4s^{2}\) valence configurations, respectively. Those spectra are denoted by "no SC".
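A minimal sketch of such a \(1/\xi^{2}\) tail extension follows (our own illustrative code; the prefactor is fixed by continuity at the last computed point, cf. the plasma form of Eq. (21)):

```python
import numpy as np

def extend_with_plasma_tail(xi, eps_ixi, xi_max):
    """Extend a computed eps(i*xi) curve up to xi_max assuming
    eps(i*xi) ~ 1 + B/xi**2, with B fixed by continuity at xi[-1]."""
    B = (eps_ixi[-1] - 1.0) * xi[-1] ** 2
    xi_tail = np.geomspace(xi[-1], xi_max, 200)[1:]
    return (np.concatenate([xi, xi_tail]),
            np.concatenate([eps_ixi, 1.0 + B / xi_tail ** 2]))
```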
To analyze the importance of including the optical phonons for small Matsubara frequencies (typically, \(m<\) 4), we have generated dielectric functions for which the vibrational contribution is completely neglected (those spectra are denoted by "no vib") but where the static dielectric constant is included for the \(m=0\) term (denoted by "no vib with \(\varepsilon_{0}\)").
Parsegian and Ninham's study [13] in 1969 provided clear evidence that having partial knowledge of the optical spectra could sometimes be enough to make reasonable estimates of Hamaker constants and calculate the corresponding force. Assuming accurate model calculations using DFT, the extrapolation schemes mentioned earlier in this discussion yield very similar Hamaker constants for the calcite-ice-vapor system, with an accuracy of approximately 10%. According to the findings presented in Table 3, the predicted values for the calcite
Figure 1: (Color online) Dielectric function of Al\({}_{2}\)O\({}_{3}\) and CaCO\({}_{3}\). Left panel describes the vibrational contribution and the right panel describes the electronic transitions.
(1)-ice (2)-vapor (3) configuration are approximately A\({}_{123}\sim-3.39\times 10^{-20}\) J and A\({}_{123;0}\sim 0.26\times 10^{-20}\) J, where A\({}_{123;0}\) is the contribution from the zeroth Matsubara term of A\({}_{123}\).
Significant variations in precision are observed when calculating the Hamaker interaction for calcite-vapor-calcite at different temperatures, as indicated in Table 4. At lower temperatures, a greater number of Matsubara terms is required to cover the necessary upper-frequency range for achieving comparable accuracy. Utilizing only 500 Matsubara terms could result in a substantial 25% error in the calculated Hamaker constant. In Table 4, the "default", "noSC", and "no vib with \(\varepsilon_{0}\)" approximations all yield identical zero frequency Hamaker constants (as shown in the first row of Table 4). However, when vibrations are disregarded, the zero frequency Hamaker constant experiences a decrease of 0.045, 0.185, and 0.749\(\times 10^{-20}\) J at temperatures of 70 K, 370 K, and 1500 K, respectively. It is crucial to highlight, however, that when considering different material combinations, such as cases where the dielectric functions of the materials intersect at certain frequencies or for the large separation behavior of gapped metals, [41] the results become more reliant on the chosen approximations.
### Parameterised \(\varepsilon(i\xi)\) for optimised data sets for calcite and alumina
To enable simple use of the calculated dielectric functions to study, for example, Casimir-Lifshitz interactions, we present parametrized average dielectric functions (see Table 5 for parameters) using a 14-mode oscillator model, [42] exploiting Eq. 20 but without any rotational relaxation. To be explicit, we use the following model,
\[\varepsilon(i\xi)=1+\sum_{j}\frac{C_{j}}{1+(\xi/\omega_{j})^{2}}. \tag{24}\]
Here \(\omega_{j}\) are the characteristic frequencies (given in eV in Table 5) and \(C_{j}\) are proportional to the oscillator strengths. For ice and cold water (\(T=273.16\) K) we use parameterised dielectric functions given in the literature.[7; 43; 4]
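A short Python sketch evaluating Eq. (24) with the CaCO\({}_{3}\) parameters of Table 5 follows (our own illustrative code). As a sanity check, \(\varepsilon(0)=1+\sum_{j}C_{j}\approx 8.8\), consistent with the static dielectric constant of about 8.7 reported above:

```python
import numpy as np

def eps_oscillator(xi_eV, C, w_eV):
    """Eq. (24): eps(i*xi) = 1 + sum_j C_j / (1 + (xi/omega_j)^2),
    with xi and omega_j both given in eV."""
    xi = np.atleast_1d(xi_eV)[:, None]
    return 1.0 + np.sum(C / (1.0 + (xi / w_eV) ** 2), axis=1)

# CaCO3 parameters from Table 5
C_caco3 = np.array([0.0879, 2.9661, 2.6176, 0.4434, 0.0023, 0.8039,
                    0.6241, 0.2317, 0.0, 0.0221, 0.0, 0.0, 0.0, 0.0005])
w_caco3 = np.array([0.0038, 0.0127, 0.035, 0.1696, 1.6088, 10.3375,
                    18.9299, 34.3621, 71.2998, 81.1893, 84.7809,
                    114.126, 124.3659, 241.5762])

print(eps_oscillator(0.0, C_caco3, w_caco3))  # ~8.8 (static constant)
```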
### Casimir-Lifshitz force near alumina and calcite surfaces
The relationship between the retarded (distance-dependent) Hamaker constant and the retarded free energy, denoted as \(A^{ret}(d)\) and \(F(d,T)\) respectively, can be expressed as \(A^{ret}(d)=-12\pi d^{2}\times F(d,T)\). This connection is illustrated in Figure 2 for various material combinations, including alumina-vacuum-alumina, calcite-vacuum-calcite, alumina-water-vapor, and calcite-water-vapor. The first two cases unequivocally confirm the well-known phenomenon where the interaction between identical surfaces is attractive and influenced by the material properties of the surfaces. Moreover, in situations involving the interface between a solid and a region with water vapor, there is a possibility of a short-range repulsion transitioning into a long-range attraction, which enables the formation of thin water films. In the latter two cases, where water serves as an intermediate layer, theoretical analysis suggests that in the presence of moisture (water vapour), a thin layer of water can indeed form on the outer surface of calcite, such as on soil particles. The corresponding free energies for these cases are presented in Figure 3. Interestingly, the systems exhibit energy minima for finite-sized water layers, further supporting the notion of water formation at these interfaces. At short separations, specifically where retardation can be neglected (i.e., the finite velocity of light can be treated as infinite), the product of reflection coefficients is proportional to
\[\frac{(\varepsilon_{1}-\varepsilon_{2})(\varepsilon_{3}-\varepsilon_{2})}{( \varepsilon_{1}+\varepsilon_{2})(\varepsilon_{3}+\varepsilon_{2})}. \tag{25}\]
This observation suggests that in cases where the dominating frequency range exhibits dielectric functions which
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline model & \(m_{max}\) & 70 K & 370 K & 1500 K \\ \hline & & \(A_{123}\) (\(10^{-20}\)J) & \(A_{123}\) (\(10^{-20}\)J) & \(A_{123}\) (\(10^{-20}\)J) \\ \hline default & 0 & 0.0505 & 0.267 & 1.082 \\ & 500 & 10.441 & 14.316 & 14.970 \\ & 1000 & 13.051 & 14.430 & 14.972 \\ & 1500 & 13.802 & 14.442 & 14.972 \\ & 2000 & 14.078 & 14.445 & 14.972 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The Hamaker constants at different temperatures using Eq. 18, \(A_{123}\) (\(10^{-20}\)J), for the three-layer configuration CaCO\({}_{3}\) (1)-vacuum (2)-CaCO\({}_{3}\) (3), using different cut-off Matsubara numbers (\(m_{max}\)) for the default dielectric function. The case with \(m_{max}\)=0 corresponds to the zero-frequency Hamaker constant.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline model & \(m_{max}\) & \(A_{123}\) (\(10^{-20}\)J) & \(A_{123;0}\) (\(10^{-20}\)J) \\ \hline default & 250 & -3.088 & 0.260 \\ & 500 & -3.332 & 0.260 \\ & 1000 & -3.385 & 0.260 \\ & 1500 & -3.391 & 0.260 \\ & 2000 & -3.392 & 0.260 \\ no SC & 2000 & -3.459 & 0.260 \\ no vib & 2000 & -3.325 & 0.306 \\ no vib with \(\varepsilon_{0}\) & 2000 & -3.371 & 0.260 \\ \hline \end{tabular}
\end{table}
Table 3: The Hamaker constants at 273.16 K using Eq. 18, \(A_{123}\), and the contribution from the zeroth Matsubara term, \(A_{123;0}\), for the three-layer configuration CaCO\({}_{3}\) (1)-ice (2)-vapor (3), for different models of CaCO\({}_{3}\) and using different cut-off Matsubara numbers (\(m_{max}\)).
fulfill \(\varepsilon_{1}>\varepsilon_{2}>\varepsilon_{3}\), a repulsive interaction can occur. On the other hand, when the intermediate layer possesses a higher (or lower) dielectric function than both surrounding media within the dominant frequency range, an attractive force emerges. Analyzing the dielectric functions of water (which has an exceptionally high zero frequency dielectric constant) and calcite (whose dielectric function surpasses that of water at intermediate and high frequencies), we find that, through energy minimization, a Casimir-Lifshitz force can induce the formation of moisture on the surfaces of calcite (and alumina). This phenomenon, driven by Casimir-Lifshitz interactions, leads to the wetting of soil particles by water. In the subsequent subsection, we will further discuss how this Casimir-Lifshitz-induced water wetting affects the effective dielectric function of calcite soil particles.
### Application to water and ice formation on calcite surfaces
In a pioneering paper, Elbaum and Schick [44] predicted that a minimum in the dispersion free energy of a thin liquid water film growing on ice would provide an explanation for observed partial melting on ice surfaces near the triple point of water. Following improvements in the modeling of water and ice dielectric functions, [7; 8; 43] further works explored this idea. These studies include ice melting and ice formation on ice-nucleating particles in the atmosphere, [7; 8] the modeling of the anomalous stability of gas hydrates in ice-cold water, [9; 45] and ice formation/melting [43; 46] on cold water surfaces. Some additional effects of temperature and intermolecular forces on ice adhesion have been discussed by Emelyanenko _et al._ [47] An interesting idea to explore is how the accumulation of ice-cold water or ice from available water vapor outside a calcite surface occurs at the triple point of water. This phenomenon is predicted to lead to the formation of either a thin water or ice film, resulting in a reduction of the overall free energy.
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline \multicolumn{4}{c}{\(C_{j}\) and \(\omega_{j}\) (in eV) for different compounds} \\ \hline \multicolumn{2}{c}{Al\({}_{2}\)O\({}_{3}\)} & \multicolumn{2}{c}{CaCO\({}_{3}\)} \\ \hline modes (\(\omega_{j}\)) & coefficient (\(C_{j}\)) & modes (\(\omega_{j}\)) & coefficient (\(C_{j}\)) \\ \hline
0.0478 & 3.4263 & 0.0038 & 0.0879 \\
0.0684 & 3.5999 & 0.0127 & 2.9661 \\
1.1552 & 0.0015 & 0.035 & 2.6176 \\
13.0704 & 1.0213 & 0.1696 & 0.4434 \\
20.5561 & 0.8539 & 1.6088 & 0.0023 \\
48.8508 & 0.0929 & 10.3375 & 0.8039 \\
119.7988 & 0.0295 & 18.9299 & 0.6241 \\
1288.9534 & 0.0 & 34.3621 & 0.2317 \\
67441.835 & 0.0005 & 71.2998 & 0.0 \\
102566.9502 & 0.0 & 81.1893 & 0.0221 \\
407868.9726 & 0.0 & 84.7809 & 0.0 \\
889915.6853 & 0.0 & 114.126 & 0.0 \\
1723890.6517 & 0.0 & 124.3659 & 0.0 \\
3447781.2791 & 0.0 & 241.5762 & 0.0005 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Parametrization of the average dielectric function of continuous media, \(\varepsilon(i\xi)\), at imaginary frequencies for Al\({}_{2}\)O\({}_{3}\) and CaCO\({}_{3}\) as calculated with first-principles calculations. In this table frequencies are given in eV. The largest difference between fitted and calculated \(\varepsilon(i\xi)\) is about 0.08%.
Figure 2: (Color online) The retarded Hamaker constant, \(A^{ret}(d)=-F(d,T)\times 12\pi d^{2}\), for alumina-vacuum-alumina (red curve), calcite-vacuum-calcite (green curve), alumina-water-vapor (blue curve), and calcite-water-vapor (black curve). Temperature is 273.16 K, and other details are given in the text. The corresponding free energies in the region where blue and black curves cross over to positive values are studied in Fig. 3.
Additionally, we identify a previously overlooked correction to the effective dielectric function of soil particles associated with this process. Addressing this aspect is crucial for the advancement of soil science models. To investigate this phenomenon, we utilize the dielectric functions for ice and cold water proposed by Luengo-Marquez and MacDowell. [7] The prediction based on Casimir-Lifshitz theory is that the growth of almost micron-sized ice or water layers is favored. For each calcite model considered in Table 3, the combination calcite-ice-vapor results in an equilibrium ice layer with thickness \(\mathrm{d}_{2}^{eq}\). Utilizing the retarded finite-temperature Casimir-Lifshitz theory, the values for \(\mathrm{d}_{2}^{eq}\) are found to be 0.151 \(\mu\)m, 0.153 \(\mu\)m, 0.121 \(\mu\)m, and 0.141 \(\mu\)m for the "default" calculations, "default noSC", "no vib", and "no vib with \(\varepsilon_{0}\)", respectively. These estimated thicknesses of approximately 0.12 \(\mu\)m to 0.15 \(\mu\)m are not significantly influenced by the number of terms included in the Matsubara summation, as long as a minimum of 500 terms is considered. Using the best available models for the dielectric functions of calcite and cold water, [7] we find that above the freezing point of water, the Lifshitz interaction promotes the accumulation of water vapor on the calcite surface, leading to an \(\sim 0.14\,\mu\)m water film on the surface of soil particles. As an example, a calcite particle with a radius of \(\sim 1\,\mu\)m (neglecting curvature effects for demonstration purposes) could experience a volume increase of around 48% due to the presence of the wetting film. This phenomenon has implications for the effective dielectric function of soil, even in the absence of liquid water but in contact with water vapor. Notably, based on volume-averaged theory, the effective dielectric constant of a water-coated calcite particle would be approximately 34.8 for the specific example provided. Similarly, an ice coating with a thickness of approximately 0.15 \(\mu\)m would result in an effective dielectric constant of approximately 37.1. Figure 4 illustrates the estimated effective dielectric constants for calcite spheres coated with either ice or water as a function of the calcite particle's radius. Above the freezing point of water, water can adsorb onto soil particles, while at the triple point of water, the growth of ice, water, or a combination thereof depends on the initial conditions. It is noteworthy that even for a calcite particle with a radius of 100 \(\mu\)m, the effective dielectric constant is enhanced by approximately 4% compared to the dielectric constant of pure calcite.
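The volume-averaged estimate used above can be reproduced with a few lines of code. This is our own sketch; the static dielectric constants of cold water (~88) and ice (~92) are assumed values for illustration, and curvature and depolarization effects are neglected, as in the text:

```python
def eps_eff_coated_sphere(r_core, t_film, eps_core, eps_film):
    """Volume-averaged effective static dielectric constant of a
    sphere of radius r_core coated by a film of thickness t_film
    (any consistent length unit)."""
    f_film = 1.0 - (r_core / (r_core + t_film)) ** 3  # film volume fraction
    return f_film * eps_film + (1.0 - f_film) * eps_core

# 1 um calcite particle (eps ~ 8.7) with a 0.14 um water film:
print(eps_eff_coated_sphere(1.0, 0.14, 8.7, 88.0))  # ~34.5, cf. the quoted 34.8
```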
## V Conclusions
Lebedew, in 1894, likely pioneered the connection between intermolecular forces and radiation processes. [48; 49] The theory linking optics and forces was subsequently established by Lifshitz and colleagues in their seminal papers. [15; 16] Initially, incorporating optical data across a broad frequency range seemed challenging for achieving highly accurate force calculations. However, Parsegian and Ninham demonstrated that a few oscillator models for the dielectric function could yield reasonably good agreement between theory and experimental forces. [13; 18; 19; 50; 51] In our current study, we employed DFT to investigate the optical properties of two significant components, CaCO\({}_{3}\) and Al\({}_{2}\)O\({}_{3}\), commonly found in diverse soil compositions. Our main objective was to investigate the influence of accurately describing the optical properties of these components on the formation of thin layers of water and ice on soil particles. This investigation carries significant scientific and engineering implications related to soil dynamics and related phenomena. Intriguingly, our findings reveal that the extrapolations made for low and high frequencies have minimal impact
Figure 4: (Color online) The estimated effective dielectric constants for ice-coated calcite sphere (red dashed curve) and for water-coated calcite sphere (blue solid curve), both curves as functions of the bare calcite radius. Details are given in the text.
Figure 3: (Color online) The retarded free energy per unit area for two cases studied in Fig. 2 where energy minima are predicted: alumina-water-vapor (blue curve), and calcite-water-vapor (black curve). Temperature is 273.16 K, and other details are given in the text.
on the Hamaker constants and Lifshitz interactions. Importantly, all the extrapolations we investigated were reasonably accurate, as an inadequate extrapolation would lead to erroneous predictions for Casimir-Lifshitz forces and Hamaker constants, as evidenced by the divergent outcomes for the two materials. However, it is essential to acknowledge that the accuracy of these conclusions may vary for other systems, especially those characterized by crossings of different \(\varepsilon(i\xi)\) functions at specific frequencies [52, 44] or involving metallic components [53, 41]. This work, along with prior research, establishes a direct link between optics derived from DFT and dispersion forces and their associated energies. We analyze these interactions and their impact on the formation of water and ice layers on soil particles in contact with moisture. Our findings reveal the previously unrecognized significance of this phenomenon in assessing soil water content, as highlighted by Lebron _et al._ [14]. Furthermore, our predictions hold substantial implications for future models concerning frost heave [6], as well as related effects such as cement storage and degradation in moist environments. [3]
###### Acknowledgements.
The authors thank the "ENSEMBLE3 - Centre of Excellence for nanophotonics, advanced materials and novel crystal growth-based technologies" project (GA No. MAB/2020/14) carried out within the International Research Agendas programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund and the European Union's Horizon 2020 research and innovation programme Teaming for Excellence (GA. No. 857543) for support of this work. We acknowledge also the financial support from the European Union's Horizon 2020 research and innovation programme, grant agreements No. 869815 and No. 101058694 and the Research Council of Norway (Project No. 250346). We acknowledge access to high-performance computing resources via NAISS, provided by NSC and PDC. All DFT calculations were performed at KTH Royal Institute of Technology (Sweden). We finally acknowledge constructive discussions on related topics, over the last decade, with Dr. Kristian Berland.
|
2309.03315 | Robotic Table Tennis: A Case Study into a High Speed Learning System | We present a deep-dive into a real-world robotic learning system that, in
previous work, was shown to be capable of hundreds of table tennis rallies with
a human and has the ability to precisely return the ball to desired targets.
This system puts together a highly optimized perception subsystem, a high-speed
low-latency robot controller, a simulation paradigm that can prevent damage in
the real world and also train policies for zero-shot transfer, and automated
real world environment resets that enable autonomous training and evaluation on
physical robots. We complement a complete system description, including
numerous design decisions that are typically not widely disseminated, with a
collection of studies that clarify the importance of mitigating various sources
of latency, accounting for training and deployment distribution shifts,
robustness of the perception system, sensitivity to policy hyper-parameters,
and choice of action space. A video demonstrating the components of the system
and details of experimental results can be found at
https://youtu.be/uFcnWjB42I0. | David B. D'Ambrosio, Jonathan Abelian, Saminda Abeyruwan, Michael Ahn, Alex Bewley, Justin Boyd, Krzysztof Choromanski, Omar Cortes, Erwin Coumans, Tianli Ding, Wenbo Gao, Laura Graesser, Atil Iscen, Navdeep Jaitly, Deepali Jain, Juhana Kangaspunta, Satoshi Kataoka, Gus Kouretas, Yuheng Kuang, Nevena Lazic, Corey Lynch, Reza Mahjourian, Sherry Q. Moore, Thinh Nguyen, Ken Oslund, Barney J Reed, Krista Reymann, Pannag R. Sanketi, Anish Shankar, Pierre Sermanet, Vikas Sindhwani, Avi Singh, Vincent Vanhoucke, Grace Vesom, Peng Xu | 2023-09-06T18:56:20Z | http://arxiv.org/abs/2309.03315v1 | # Robotic Table Tennis: A Case Study
###### Abstract
We present a deep-dive into a real-world robotic learning system that, in previous work, was shown to be capable of hundreds of table tennis rallies with a human and has the ability to precisely return the ball to desired targets. This system puts together a highly optimized perception subsystem, a high-speed low-latency robot controller, a simulation paradigm that can prevent damage in the real world and also train policies for zero-shot transfer, and automated real world environment resets that enable autonomous training and evaluation on physical robots. We complement a complete system description, including numerous design decisions that are typically not widely disseminated, with a collection of studies that clarify the importance of mitigating various sources of latency, accounting for training and deployment distribution shifts, robustness of the perception system, sensitivity to policy hyper-parameters, and choice of action space. A video demonstrating the components of the system and details of experimental results can be found at [https://youtu.be/uFcnWjB42I0](https://youtu.be/uFcnWjB42I0).
Footnote 1: Corresponding emails: {bewley, ddambro, lauragraesser, psanketi}@google.com.
## I Introduction
There are some tasks that are infeasible for a robot to perform unless it moves and reacts quickly. Industrial robots can execute pre-programmed motions at blindingly fast speeds, but planning, adapting, and learning while executing a task at high speed can push a robotic system to its limits and introduce complex safety and coordination challenges that may not show up in less demanding environments. Yet many vital tasks, particularly those that involve interacting with humans in real time, necessitate such a _high-speed robotic system_.
The goal of this paper is to describe such a system and the process behind its creation. Building any robotic system is a complex and multifaceted challenge, but nuanced design decisions are not often widely disseminated. Our hope is that this paper can help researchers who are starting out in high-speed robotic learning and serve as a discussion point for those already active in the area.
We focus on a robotic table tennis system that has shown promise in playing with humans (340 hit cooperative rallies) [2] and targeted ball returns (competitive with amateur humans) [20]. This platform provides an excellent case study in system design because it includes multiple trade-offs and desiderata -- e.g. perception latency v.s. accuracy, ease of use v.s. performance, high speed, human interactivity, support for multiple learning methods -- and is able to produce strong real world performance. This paper discusses the design decisions that went into the creation of the system and empirically validates many of them through analyses of key components.
This work explores all aspects of the system, how they relate to and inform one another, and highlights several important contributions including: (1) a highly optimized perception subsystem capable of running at 125Hz, (2) an example of high-speed, low latency control with industrial robots, (3) a simulation paradigm that can prevent damage in the real world while performing agile tasks and also train policies for zero-shot transfer using a variety of learning approaches, (4) a common interface for simulation and real world deployment, (5) an automatic physical environment reset system for table tennis that enables training and evaluation for long periods without human intervention, and (6) a research-friendly modular design that allows customization and component swapping. A summary of widely applicable lessons can be found in Section V and a video of the system in operation and experimental results can be found at [https://youtu.be/uFcnWjB42I0](https://youtu.be/uFcnWjB42I0).
## II Table Tennis System
Table tennis is easy to pick up for humans, but poses interesting challenges for a robotic system. Amateurs hit the ball at up to 9m/s, with professionals tripling that. Thus, the robot must be able to move, sense, and react quickly just to make contact, let alone replicate the precise hits needed for high-level play.
The components of this system are numerous with many interactions (Figure 2). Therefore, a major design focus was on modularity to enable testing and swapping. At a high level, the hardware components (cameras + vision stack, robot, ball thrower) are controlled through C++ and communicate state to the environment through a custom message passing system called Fluxworks. The various components not only send policy-related information this way (e.g. where the ball is, the position of the robot) but also synchronize the state of the system (e.g. the robot has faulted or a new episode has started). Note that this process is simplified in simulation where all state information is centralized. Information from the components determines the state of the game (in the Referee) and input to the policy. The policy then produces actions which feed into the low-level controllers while the game state drives the system as a whole (e.g. the episode is over). All logging (Appendix M), including videos, is handled with Fluxworks, which utilizes highly optimized protocol buffer communication.
The rest of this section describes the components in the system and their dependencies and interactions.
### _Physical Robots_
The player in this system consists of two industrial robots that work together: an ABB 6DOF arm and a Festo 2DOF linear actuator, creating an 8DOF system (Figure 1). The two robots complement each other: the gantry is able to cover large distances quickly, maneuvering the arm into an appropriate position where it can make fine adjustments and hit the ball in a controlled manner with the arm. The choice of industrial robots was deliberate, to focus on the machine learning challenges of the problem and for high reliability. However one major limitation of working with off-the-shelf industrial systems is that they may contain proprietary, "closed-box" software that must be contended with. For example, the ABB arm runs an additional safety layer that instantly stops the robot when it _thinks_ something bad will happen. It took careful effort to work within these constraints because the robot was operating near its limits. See Appendix C for details.
For the ABB arms, either an ABB IRB 120T or an ABB IRB 1100-4/0.58 is used, the latter being a faster version with a
Fig. 2: Overview of the components for running simulated and real environments. The diagram on the left shows how the various software components fit to form the environment: in simulation, everything runs in a single process, but the real environment splits the work among several. The diagram on the right shows the components of the real hardware system. A custom MPI manages communication between the parts and logging of all data.
different joint structure. Both are capable of fast (joints rotate up to 420 or 600 degrees/s), repeatable (to within 0.01mm) motions and allow a high control frequency. The arm's end effector is an 18.8cm 3D-printed extension attached to a standard table tennis paddle that has had its handle removed (Figure 1 right). While the ABB arms are not perfect analogs to human arms, they can impart significant force and spin on the ball.
Taking inspiration from professional table tennis where play can extend well to the side of and away from the table, the Festo gantries range in size from \(2\times 2\)m to \(4\times 2\)m, despite the table tennis table being 1.525m wide. This extra range gives the robot more options for returning the ball. The gantries can move up to 2 m/s in both axes. Most other robotic table tennis systems (discussed in Section IV-B) opt for a fixed-position arm, but the inclusion of a gantry means the robot is able to reach more of the table space and has more freedom to adopt general policies. The downside is that the gantry complicates the system by adding two degrees of freedom, leading to an overdetermined system, whilst also imparting additional lateral forces on the robot arm that must be accounted for.
### _Communication, Safety, and Control_
The ABB robot accepts position and velocity target commands and provides joint feedback at 248Hz via the Externally Guided Motion (EGM) [1] interface. The Festo gantry is controlled through a Modbus [90] interface at approximately 125Hz. See Appendix C for full communication details.
Safety is a critical component of controlling robots. While the robot should be hitting the ball, collision with anything else in the environment should be avoided. To solve this problem, commands are filtered through a safety simulator before being sent to the robot (a simplified version of Section II-C). The simulator converts a velocity action generated by the control policy to a position and velocity command required by EGM at each timestep. Collisions in the simulator generate a repulsive force that pushes the robot away, resulting in a valid, safe command for the real robot. Objects in the safety simulator are dilated for an adequate safety margin and additional obstacles are added to block off the "danger zones" the robot should avoid.
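A hypothetical sketch of such a filter is shown below (Python for readability; the deployed filter runs in C++, and the function names and gain here are illustrative, not the actual implementation):

```python
def safe_command(q, dq_cmd, dt, in_collision, repulsion_dir, k_rep=1.0):
    """Convert a commanded joint velocity into a (position, velocity)
    pair, blending in a repulsive velocity whenever the simulated next
    configuration penetrates a dilated obstacle."""
    q_next = q + dq_cmd * dt
    if in_collision(q_next):
        dq_cmd = dq_cmd + k_rep * repulsion_dir(q_next)  # push away
        q_next = q + dq_cmd * dt
    return q_next, dq_cmd
```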
Low-level robot control can be extremely time-sensitive and is typically implemented in a lower-level language like C++ for performance. Python on the other hand is very useful for high-level machine learning implementations and rapid iteration but is not well suited to high speed robot control due to the Global Interpreter Lock (GIL) which severely hampers concurrency. This limitation can be mitigated through multiple Python processes, but is still not optimal for speed. Therefore this system adopts a hybrid approach where latency sensitive processes like control and perception are implemented in C++ while others are partitioned into several Python binaries (Figure 2). Having these components in Python allows researchers to iterate rapidly and not worry as much about low-level details. This separation also allows components to be easily swapped or tested.
### _Simulator_
The table tennis environment is simulated to facilitate sim-to-real training and prototyping for real robot training. PyBullet [19] is the physics engine and the environment interface conforms to the Gym API [12].
Figure 2 (left) gives an overview of the environment structure in simulation and compares it with the real world environment (see Section II-E). There are five conceptual components; (1) the physics simulation and ball dynamics model which together model the dynamics of the robot and ball, (2) the StateMachine which uses ball contact information from the physics simulation and tracks the semantic state of the game (e.g. the ball just bounced on the opponent's side of the table, the player hit the ball), (3) the RewardManager which loads a configurable set of rewards and outputs the reward per step, (4) the DoneManager which loads a configurable set of done conditions (e.g. ball leaves play area, robot collision with non-ball object) and outputs if the episode is done per step, and (5) the Observation class which configurably formats the environment observation per step.
The main advantage of this design is that it isolates components so they are easy to build and iterate on. For example, the StateMachine makes it easy to extend the environment to more complex tasks. New tasks are defined by implementing a new state machine in a config file. The StateMachine also makes it easier to determine the episode termination condition and some rewards (e.g. for hitting the ball). Note that whilst related, it is not the same as the transition function of the MDP; the StateMachine is less granular and changes at a lower frequency. Another example is the RewardManager. It is common practice in robot learning when training using the reinforcement learning paradigm to experiment frequently with the reward function. To facilitate this, reward components and their weights are specified in a config file taken in by the RewardManager, which calculates and sums each component. This makes it straightforward to change rewards and easy to define new components.
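A minimal sketch of this pattern is given below (illustrative Python; the class and config names are ours, not the system's actual API):

```python
class RewardManager:
    """Sums weighted reward components loaded from a config that maps
    component names to (function, weight) pairs."""

    def __init__(self, config):
        self.components = config  # e.g. {"hit_ball": (hit_fn, 1.0), ...}

    def reward(self, env_state):
        return sum(w * fn(env_state) for fn, w in self.components.values())
```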
#### II-C1 Latency modeling
Latency is a major source of the sim-to-real gap in robotics [91]. To mitigate this issue, and inspired by Tan et al. [91], latency is modelled in the simulation as follows. During inference, the history of observations and corresponding timestamps is stored and linearly interpolated to produce an observation with a desired latency. In contrast to [91], which uses a single latency range sampled uniformly for the whole observation, the latency of five main components -- ball observation (i.e. the latency of the ball perception system), ABB observation, Festo observation, ABB action, and Festo action -- is modeled as a Gaussian distribution, with a distinct distribution used for each component. The mean and standard deviation per component were measured empirically on the physical system through instrumentation that logs timestamps throughout the software stack (see Table I). In simulation, at the beginning of each episode a latency value is sampled per component and the observation components are interpolated to those latency values per step. Similarly, action latency is implemented by storing the raw actions produced by the policy in a buffer and linearly interpolating the action sent to the robot to the desired latency.

TABLE I: Latency distribution values.

| Component | \(\mu\) (ms) | \(\sigma\) (ms) |
| --- | --- | --- |
| Ball observation | 40 | 8.2 |
| ABB observation | 29 | 8.2 |
| Festo observation | 33 | 9.0 |
| ABB action | 71 | 5.7 |
| Festo action | 64.5 | 11.5 |
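The sketch below illustrates this per-component latency model. The means and standard deviations follow Table I, while the helper names and exact interpolation scheme are assumptions for illustration.

```python
import numpy as np

# Per-component latency distributions in ms (mean, std), from Table I.
LATENCY_DISTRIBUTIONS = {
    "ball_obs": (40.0, 8.2), "abb_obs": (29.0, 8.2), "festo_obs": (33.0, 9.0),
    "abb_act": (71.0, 5.7), "festo_act": (64.5, 11.5),
}

def sample_episode_latencies(rng: np.random.Generator) -> dict:
    """Sample one latency per component at the start of each episode."""
    return {name: max(0.0, rng.normal(mu, sigma))
            for name, (mu, sigma) in LATENCY_DISTRIBUTIONS.items()}

def delayed_component(history, t_now_ms, latency_ms):
    """Linearly interpolate a component's observation history to a delayed time.

    history: list of (timestamp_ms, np.ndarray) pairs, oldest first.
    """
    t_query = t_now_ms - latency_ms
    times = np.array([t for t, _ in history])
    values = np.stack([v for _, v in history])
    # Interpolate each observation dimension independently.
    return np.array([np.interp(t_query, times, values[:, d])
                     for d in range(values.shape[1])])
```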
#### II-C2 Ball distributions, observation noise, and domain randomization
A table tennis player must be able to return balls with many different incoming trajectories and angular velocities. That is, they experience different _ball distributions_. Ball dynamics and distributions are implemented following [2]. Each episode, initial ball conditions are sampled from a parameterized distribution which is specified in a config. To account for real world jitter, random noise is added to the ball observation. Domain randomization [77, 15, 41, 75] is also supported for many physical parameters. The paddle and table restitution coefficients are randomized by default.
For more details on the simulator see Appendix D.
### _Perception System_
Table tennis is a highly dynamic sport (an amateur-speed ball crosses the table in 0.4 seconds), requiring extremely fast reaction times and precise motor control when hitting the ball. A vision system with low latency and high precision is therefore required. It is also not possible to instrument (e.g. with LEDs) or paint the ball for active tracking, since table tennis balls are very sensitive to variations in weight or texture, so a passive vision system must be employed.
A custom vision pipeline that is fast, accurate, and passive is designed to provide 3D ball positions. It consists of three main components: 1) 2D ball detection across two stereo cameras, 2) triangulation to recover the 3D ball position, and 3) a sequential decision making process which manages trajectory creation, filtering, and termination. The remainder of this section provides details on the hardware and these components.
#### II-D1 Camera Hardware, Synchronization and Setup
For image capture the system employs a pair of Ximea MQ013CG-ON cameras that have a hardwired synchronization cable and are connected to the host computer via USB3 active optical cables. Camera lenses are firmly locked and focused. Synchronization timestamps are used to match images downstream. Many different cameras were tried, but these offered high frame rates (they can run at 125FPS at a resolution of 1280x1024) with an extremely low latency of 388\(\mu\)s. Other cameras were capable of higher FPS, but at the cost of more latency, which is not acceptable in this high-speed domain. To achieve the desired performance the cameras use a global shutter with a short (4ms) exposure time and return only the raw, unprocessed Bayer pattern.
The ball is small and moves fast, so capturing it accurately is a challenge. Ideally the cameras would be as close to the action as possible, but in a dual camera setup, each needs to view the entire play area. Additionally, putting sensitively calibrated cameras in the path of fast moving balls is not ideal. Instead, the cameras are mounted roughly 2m above the play area on each side of the table and are equipped with Fujinon FE185C086HA-1 "fisheye" lenses that expand the view to the full play area, including the gantries. While capturing more of the environment, the fisheye lens distortion introduces challenges in calibration and additional uncertainty in triangulation.
The direct linear transform (DLT) method [35] for binocular stereo vision estimates a 3D position from these image locations in the table's coordinate frame. However, the problem of non-uniform and non-zero mean bias known as triangulation bias [23] must be considered in optimizing camera placement. Two stereo camera configurations are considered, two overhead cameras viewing the scene from: 1) the same side of the table and 2) opposite sides. Simulation is used to quantify triangulation bias across these configurations and decouple triangulation from potential errors in calibration. Quantifying this bias for common ball positions (see Figure 3) indicates that positioning the cameras on opposite table sides results in a significant reduction in the overall triangulation bias. Furthermore, this configuration also benefits from a larger baseline between the cameras for reducing estimation variance [25].
#### II-D2 Ball Detection
The core of the perception system lies in ball detection. The system uses a temporal convolutional architecture to process each camera's video stream independently and provides information about the ball location and velocity for the downstream triangulation and filtering (see Figure 4). Operating on raw Bayer images with temporal convolutions allows each video stream to be processed independently and efficiently, improving the latency and accuracy of ball detection. The output structure takes inspiration from CenterNet [99, 100] by producing per-location predictions that include: a ball score corresponding to the likelihood of the ball center being at that location, a 2D local offset to accommodate sub-pixel resolution, and a 2D estimate of the ball velocity in pixels.

Fig. 3: Quantification of triangulation bias over the length of the playing area (y-position) at a height of 250mm above the center line. The more orthogonal viewpoints offered by placing cameras on opposite sides of the table lead to an order of magnitude reduction in triangulation bias.
Direct Processing of Bayer Images: The detection network takes the raw Bayer pattern image [7] as input directly from the high-speed camera after cropping to the play area at a resolution of \(512\times 1024\). By skipping Bayer-to-RGB conversion, 1ms (or 15% of the time between images) of conversion-induced latency per camera is avoided, and the data transferred from camera to host to accelerator is reduced by \(\frac{2}{3}\), further reducing latency. In contrast to other models utilizing Bayer images [14], no loss in performance was found using the raw format, largely because special attention was given to the structure of the \(2\times 2\) Bayer pattern, ensuring the first convolution layer also has a stride of \(2\times 2\). This alignment means that the individual weights of the first layer are only responsible for a single color across all positions of the convolution operation. The immediate striding also benefits wall-clock time by down-sampling the input to a quarter of the original size. The alignment with the Bayer pattern is also extended to any crop operations during training, as discussed later in this section.
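The paper does not name the deep learning framework, so the following PyTorch snippet is only a sketch of the Bayer alignment idea: with kernel and stride both \(2\times 2\), each weight of the first layer always lands on the same color of the mosaic, and the input is immediately down-sampled. The channel count is an assumption.

```python
import torch
import torch.nn as nn

# First layer aligned to the 2x2 Bayer mosaic: with kernel and stride both
# 2x2 (and no padding), every kernel application covers exactly one Bayer
# cell, so each weight always sees the same color channel. The stride also
# immediately down-samples the input to a quarter of its original size.
first_conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=2, stride=2)

bayer = torch.randn(1, 1, 512, 1024)  # raw Bayer frame, cropped to the play area
features = first_conv(bayer)          # -> shape (1, 16, 256, 512)
```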
Detector Backbone with Buffered Temporal Convolutions: A custom deep-learning based ball detector is used to learn the right combination of color, shape, and motion for identifying the ball in play. Its architecture falls into the category of a convolutional neural network (CNN) with a compact size of only 27k parameters spread over five spatial convolutional layers and two temporal convolutions to capture motion features. Compared to related architectures such as ConvLSTM [85], this fully convolutional approach restricts the temporal influence of the predictions to a finite temporal window, allowing for greater interpretability and fault diagnosis. Full details of the architecture are provided in Appendix E.
Temporal convolutional operations are employed to capture motion as a visual cue for detecting the ball in play and the direction of motion. In contrast to the typical implementation that requires a window of frames to be presented at each timestep, the implementation in this system only requires a single frame to be presented to the CNN for each timestep during inference. This change minimises data transfer from the host device to the accelerator running the CNN operations, a critical throughput bottleneck. The temporal layer maintains a buffer that stores the input features for the next timestep, as in Khandelwal et al. [49].
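A minimal sketch of such a buffered temporal convolution is shown below (PyTorch assumed; layer shapes are illustrative). Only the newest frame is presented per step, while earlier features are reused from an internal buffer.

```python
import torch
import torch.nn as nn

class BufferedTemporalConv(nn.Module):
    """Temporal convolution over the last k feature maps, fed one frame at a
    time. Past features are kept in an internal buffer, so only the newest
    frame crosses the host -> accelerator boundary during streaming inference."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        # 1x1 spatial kernel; mixes the k buffered timesteps per location.
        self.conv = nn.Conv3d(channels, channels, kernel_size=(k, 1, 1))
        self.buffer = None  # shape (B, C, k, H, W)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W), features for a single timestep.
        frame = x.unsqueeze(2)
        if self.buffer is None:
            # Pad the buffer with copies of the first frame at startup.
            self.buffer = frame.repeat(1, 1, self.k, 1, 1)
        else:
            self.buffer = torch.cat([self.buffer[:, :, 1:], frame], dim=2)
        return self.conv(self.buffer).squeeze(2)  # -> (B, C, H, W)
```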
Training the Detector Model: To train the detection model, a dataset of 2.3M small temporal patches was selected to match the receptive field of the architecture (\(64\times 64\) pixels and \(n\) frames). The patches are selected from frames with a labeled ball position, where a single _positive patch_ is defined as being centered on the ball position in the current frame, with the temporal dimension filled with the same spatial position but spanning \([t-n+1,t]\). Similarly, a _negative patch_ is selected from the same frame at a random location which does not overlap with the positive patch. Examples of positive and negative patches are provided in the Appendix. Special consideration is taken to align the patch selection with the Bayer pattern by rounding the patch location to the nearest even number. This local patch based training has several benefits; it 1) reduces the training time by 50\(\times\), 2) helps generalization across different parts of the image as the model is unable to rely on global statistics of ball positions, 3) offers a more fine-grained selection of training data for non-trivial cases, e.g. when another ball is still moving in the scene, and similarly 4) allows for hard negative mining [89] on sequences where it is known that no ball exists in play.
For each patch the separate outputs each have a corresponding loss. First, the ball score is optimized using the standard binary cross-entropy loss for both positive and negative patches. For positive patches only, the local offset is optimized using the mean-squared error loss using the relative position between the corresponding pixel coordinate and the ball center in the current frame. The velocity prediction is similarly optimized, instead using the relative position of the ball in next frame to the current frame as the regression target.
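In code, the per-patch objective could look like the following sketch (PyTorch assumed; equal loss weights are an assumption, as the paper does not state the weighting).

```python
import torch
import torch.nn.functional as F

def detector_loss(score_logits, offset_pred, velocity_pred, is_positive,
                  offset_target, velocity_target):
    """Per-patch training loss: BCE on the ball score for all patches, and MSE
    on the local offset and velocity regressed only for positive patches."""
    labels = is_positive.float()
    score_loss = F.binary_cross_entropy_with_logits(score_logits, labels)

    pos = is_positive.bool()
    if pos.any():
        offset_loss = F.mse_loss(offset_pred[pos], offset_target[pos])
        velocity_loss = F.mse_loss(velocity_pred[pos], velocity_target[pos])
    else:
        offset_loss = velocity_loss = score_logits.new_zeros(())
    return score_loss + offset_loss + velocity_loss
```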
#### II-D3 3D Tracking
To have a consistent representation that is invariant to camera viewpoint, the ball is represented in 3D in the table's coordinate frame. If the maximum score in each image is above a learnt threshold, the current and next image positions, constructed from the local offset and velocity predictions, are triangulated using DLT [35]. This yields the 3D position and 3D velocity of the ball in the table frame. Finally, these observations are provided to a recursive Kalman filter [46] to refine the estimated ball state before its 3D position is sent to the robot policy.
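As an illustration, the triangulation step could be realized with OpenCV's DLT-based `triangulatePoints`, as sketched below. This assumes pixel coordinates have already been undistorted from the fisheye lenses and that the projection matrices are expressed in the table frame; the 125 FPS frame interval is used to convert the one-frame displacement into a velocity.

```python
import cv2
import numpy as np

def triangulate_ball(P1, P2, uv_cam1, uv_cam2):
    """DLT triangulation of the ball center now and one frame ahead.

    P1, P2:  3x4 projection matrices (table frame, after fisheye undistortion).
    uv_cam1: (2, 2) array of [u, v] for [now, next] in camera 1, built from the
             detector's local offset and velocity outputs; uv_cam2 likewise.
    """
    pts_h = cv2.triangulatePoints(P1, P2,
                                  uv_cam1.T.astype(float),
                                  uv_cam2.T.astype(float))
    pts = (pts_h[:3] / pts_h[3]).T        # -> (2, 3): position now and next
    position = pts[0]
    velocity = (pts[1] - pts[0]) * 125.0  # frames are 1/125 s apart
    return position, velocity
```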
### _Running on the Real Robot_
As an analog to the simulated environment (Section II-C) there is an equivalent Gym environment for the real hardware. This environment must contend with an additional set of challenges that are either nonexistent or trivial in simulation: 1) continuous raw sensor observation at different frequencies that is subjected to jitter and real world noise, 2) determining the start of an episode, 3) monitoring environment state, 4) environment resets.
Fig. 4: Ball Detection. These synchronized images (cropped to approximately 50% normal size) show the temporal convolutional network detecting the ball (detected ball center in pixels) independently from cameras on both sides of the table. These detections are triangulated and used for 3D tracking.
#### II-E1 Observation generation
In the simulator, the state of every object is known and can be queried at fixed intervals. In contrast, the real environment receives sensor readings from different modalities at different frequencies (e.g. the ball, ABB, Festo) that may be inaccurate or arrive irregularly. To generate policy observations, the sensor observations, along with their timestamps, are buffered and interpolated or extrapolated to the environment step timestamp. To address noise and jitter, a bandpass filter is applied to the observation buffer before interpolation (see Appendix F). These observations are then converted according to the policy observation specification.
#### II-E2 Episode Starts
Simulators provide a direct function to reset the environment to a start state instantly. In the real world, the robot must be physically moved to a start state using controllers based on standard S-curve trajectory planning, either at the end of the episode or just after a paddle hit. The latter was shown to be beneficial in [2], allowing a human and robot to interact as fast as possible. An episode starts when a valid ball is thrown towards the robot. The real world must rely on vision to detect this event, which can be subject to spurious false positives, balls rolling on the table, bad ball throws, etc., all of which need to be taken into consideration. Therefore an episode is started only if a ball is detected incoming toward the robot from a predefined region of space.
#### II-E3 Referee
To interface with the Gym API, a process called _Referee_ generates the reward, done, and info signals using the StateMachine, RewardManager, and DoneManager defined in Section II-C. It receives raw sensor observations at different frequencies and updates a PyBullet instance. The observations are filtered (see Appendix F) and used to update the PyBullet state (position only). The Referee calculates different ball contact events (see Appendix D), compensates for false positives, and uses simple heuristics and closest-point thresholds to determine high-confidence ball contact detections, generating the events used by the previously mentioned components.
#### II-E4 Automatic system reset -- continuously introducing balls

An important aspect of a real world robotic system is environment reset. If each episode requires a lengthy reset process or human intervention, then progress will be slow. Human table tennis players also face this problem, and so-called "table tennis robots" are commercially available to shoot balls continuously and even in a variety of programmed ways. Almost all of these machines accomplish this task with a hopper of balls that introduces a ball to two or more rotating wheels, forcing it out at a desired speed and spin (see Figure 1 left). Unfortunately, while many of these devices are "programmable", none provide true APIs, relying instead on physical interfaces. Therefore, an off-the-shelf thrower was customized with a Pololu motor controller and an infrared sensor for detecting throws, allowing it to be controlled over USB. This setup allows balls to be introduced purely through software control.
However, the ball thrower is still limited by the hopper capacity. A system to automate the refill process was designed that exploits the light weight of table tennis balls by blowing air to return them to the hopper. A ceiling-mounted fan blows down to remove balls stuck on the table, which is surrounded by foamcore to direct the balls into carpeted pathways. At each corner of the path is a blower fan (typically meant for drying out carpet) that directs air across the floor. The balls circulate around the table until they reach a ramp that directs them to a tube that also uses air to transport them back into the hopper. When the thrower detects it hasn't shot a ball for a while, the fans turn on for 40 seconds, refilling the hopper so training or evaluation can continue indefinitely. See Appendix F for a diagram and the video at [https://youtu.be/uFcnWjB42I0](https://youtu.be/uFcnWjB42I0) for a demonstration.
One demonstration of the utility of this system is through the experiments in this paper. For example, the simulator parameter ablation studies (Section III-A) involved evaluating over 150 policies in 450+ independent evaluations on a physical robot with 22.5k+ balls thrown. All evaluations were conducted remotely and required onsite intervention just once\({}^{3}\).
Footnote 3: Some tape became unstuck and the balls escaped.
### _Design of Robot Policies_
Policies have been trained for this system using a variety of approaches. This section details the basic structure of these policies and any customization needed for specific methods.
#### II-F1 Policies
The policy input consists of a history of the past eight robot joint and ball states, and it outputs the desired robot state, typically a velocity for each of the eight joints (joint space policies). Many robot control frequencies, ranging from 20Hz to 100Hz, have been explored, but 100Hz is used for most experiments. Most policies are compact, represented as a three-layer, 1D, fully convolutional gated dilated CNN with \(\approx\)1k parameters, introduced in [26]. However, it is also possible to deploy larger policies. For example, a 13M-parameter policy consisting of two LSTM layers with a fully connected output layer has successfully controlled the robot at 60Hz [20].
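A sketch of such a compact gated dilated policy is given below (PyTorch assumed; the hidden width is an assumption). The dilations 1, 2, 4 with kernel size 2 give a receptive field of exactly the eight-step input history.

```python
import torch
import torch.nn as nn

class GatedDilatedPolicy(nn.Module):
    """Compact gated, dilated, fully convolutional policy (in the spirit of [26]).

    Input:  (B, obs_dim, 8) history of robot joint and ball states.
    Output: (B, 8) desired joint velocities.
    """

    def __init__(self, obs_dim: int, act_dim: int = 8, hidden: int = 16):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(obs_dim, 2 * hidden, kernel_size=2, dilation=1),
            nn.Conv1d(hidden, 2 * hidden, kernel_size=2, dilation=2),
            nn.Conv1d(hidden, 2 * hidden, kernel_size=2, dilation=4),
        ])
        self.head = nn.Conv1d(hidden, act_dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for conv in self.layers:
            a, b = conv(x).chunk(2, dim=1)
            x = torch.tanh(a) * torch.sigmoid(b)  # gated activation
        return self.head(x)[..., -1]              # action from the last timestep
```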
#### II-F2 Robot Policies in Task Space
Joint space policies lack the relation between joint movement and the task at hand. A more compact task space -- the pose of the robot end effector -- is especially beneficial in robotics, showing significant improvements in learning of locomotion and manipulation tasks [21, 60, 95, 57].
Standard task space control uses the Jacobian matrix to calculate joint torques or velocities given the target pose, target end effector velocities, joint angles, and joint velocities. This system employs a reduced (pitch invariant) version with 5 dimensions. Instead of commanding the full pose of the end effector, it commands the position in 3 dimensions and the surface normal of the paddle in 2 dimensions (roll and yaw). In contrast to the default joint space policies, which use velocity control, task space policies are position controlled, which has the added benefit of easily defining a bounding cube that the paddle should operate in. The robot state component of the observation space is also represented in task space, making policies independent of a robot's form factor and enabling transfer of learned policies across different robots (see Section III-D).
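As an illustration of the reduced task-space control described above, the sketch below maps a 5-dimensional task-space error to joint velocities via a Jacobian pseudoinverse. The gain, clipping limit, and function signature are assumptions, not the system's actual controller.

```python
import numpy as np

def task_space_to_joint_velocities(jacobian_5dof: np.ndarray,
                                   task_error: np.ndarray,
                                   kp: float = 5.0,
                                   max_joint_vel: float = 2.0) -> np.ndarray:
    """Map a 5-DOF task-space error to joint velocities.

    jacobian_5dof: (5, n_joints) reduced Jacobian: rows for paddle x, y, z
                   position and the two surface-normal angles (roll, yaw).
    task_error:    (5,) difference between commanded and current task state.
    """
    qdot = np.linalg.pinv(jacobian_5dof) @ (kp * task_error)
    # Clip for safety before handing the command to the safety simulator.
    return np.clip(qdot, -max_joint_vel, max_joint_vel)
```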
### _Blackbox Gradient Sensing (BGS)_
The design of the system allows for interaction with many different learning approaches, as long as they conform to the given APIs. The system supports training using a variety of methods including BGS [2] (evolutionary strategies), PPO [83] and SAC [33] (reinforcement learning), and GoalsEye (behavior cloning). The rest of the section describes BGS, since it is used as the training algorithm in all the system studies in this paper (see Section III).
BGS is an ES algorithm. This class of algorithms maximizes a smoothed version of the expected episode return, \(\mathcal{R}\), given by:
\[\mathcal{R}_{\sigma}(\theta)=\mathbb{E}_{\delta\sim\mathcal{N}(0,\mathbf{I}_{d})}[\mathcal{R}(\theta+\sigma\delta)] \tag{1}\]
where \(\sigma>0\) controls the precision of the smoothing, and \(\delta\) is a random normal perturbation vector with the same dimension as the policy parameters \(\theta\). \(\theta\) is perturbed by adding and subtracting \(N\) Gaussian perturbations \(\delta_{R_{i}}\), and the episode returns \(R_{i}^{+}\) and \(R_{i}^{-}\) are calculated for each direction. Assuming the perturbations \(\delta_{R_{i}}\) are rank ordered, with \(\delta_{R_{1}}\) being the top performing direction, the policy update can be expressed as:
\[\theta^{\prime}=\theta+\alpha\frac{1}{\sigma_{R}}\sum_{i=1}^{k}\left[\left(\frac{1}{m}\sum_{j=1}^{m}R_{i,j}^{+}-\frac{1}{m}\sum_{j=1}^{m}R_{i,j}^{-}\right)\delta_{R_{i}}\right] \tag{2}\]
where \(\alpha\) is the step size, \(\sigma_{R}\) is the standard deviation of the distinct rewards (positive and negative directions), \(N\) is the number of directions sampled per parameter update, and \(k\ (<N)\) is the number of top directions (elites). \(m\) is the number of repeats per direction, used to reduce variance in the reward estimate. \(R_{i,j}^{+}\) is the reward corresponding to the \(j\)-th repeat of the \(i\)-th direction in the positive direction; \(R_{i,j}^{-}\) is the same in the negative direction.
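A compact NumPy sketch of one such update is given below, combining Eq. (2) with the reward-differential elite ranking described in Section II-G1. The \(10^{-8}\) stabilizer on \(\sigma_{R}\) is an added assumption.

```python
import numpy as np

def bgs_update(theta, deltas, r_plus, r_minus, alpha, k):
    """One BGS parameter update.

    theta:  (dim,) current policy parameters.
    deltas: (N, dim) Gaussian perturbation directions.
    r_plus, r_minus: (N, m) episode returns per direction and repeat.
    """
    mean_plus, mean_minus = r_plus.mean(axis=1), r_minus.mean(axis=1)
    # Rank directions by reward curvature |R+ - R-| and keep the top-k elites.
    elites = np.argsort(-np.abs(mean_plus - mean_minus))[:k]
    # Std over the elites' distinct (positive and negative) mean rewards.
    sigma_r = np.concatenate([mean_plus[elites], mean_minus[elites]]).std() + 1e-8
    grad = ((mean_plus[elites] - mean_minus[elites])[:, None]
            * deltas[elites]).sum(axis=0)
    return theta + (alpha / sigma_r) * grad
```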
BGS is an improvement upon a popular ES algorithm ARS [59], with two major changes.
#### II-G1 Reward differential elite-choice
In ARS, rewards are ranked, yielding an ordering of directions based on the absolute rewards of either the positive or negative directions. BGS instead takes the absolute difference in rewards between the positive and negative directions and ranks the differences to yield an ordering over directions. ARS can be interpreted as ranking directions in absolute reward space, whereas BGS ranks directions according to reward curvature:
\[\text{ARS: sort }\delta_{R_{i}}\text{ by }\max\{R_{i}^{+},R_{i}^{-}\}. \tag{3}\]

\[\text{BGS: sort }\delta_{R_{i}}\text{ by }|R_{i}^{+}-R_{i}^{-}|. \tag{4}\]
#### II-G2 Orthogonal sampling
Orthogonal sampling [18] constructs the perturbations \(\delta_{R_{i}}\) in blocks, where each block consists of pairwise orthogonal samples. These samples still have Gaussian marginal distributions, matching those of the regular non-orthogonal variant. The feasibility of such a construction comes from the isotropic property of the Gaussian distribution (see [18] for details).
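One standard construction, sketched below, draws a square Gaussian matrix, orthogonalizes it with a QR decomposition, and rescales the rows by the norms of independent Gaussian vectors so the marginals remain Gaussian. This is a generic realization of the idea in [18], not the system's exact code.

```python
import numpy as np

def orthogonal_gaussian_perturbations(n_dirs: int, dim: int,
                                      rng: np.random.Generator) -> np.ndarray:
    """Sample perturbations in blocks of pairwise-orthogonal directions whose
    marginal distributions match i.i.d. Gaussians (cf. [18])."""
    blocks, remaining = [], n_dirs
    while remaining > 0:
        m = min(dim, remaining)
        g = rng.standard_normal((dim, dim))
        q, _ = np.linalg.qr(g)  # Q is orthogonal: its rows are orthonormal
        # Rescale rows by independent Gaussian norms to restore the
        # chi-distributed lengths of Gaussian vectors.
        norms = np.linalg.norm(rng.standard_normal((dim, dim)), axis=1)
        blocks.append((q * norms[:, None])[:m])
        remaining -= m
    return np.vstack(blocks)  # -> (n_dirs, dim)
```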
BGS policies are trained in simulation and transferred zero-shot to the physical hardware. An important note is that the BGS framework can also fine-tune policies on hardware through the real Gym API (Section II-E). Hyperparameters must be adjusted in this case to account for there being only one "worker" to gather samples.
## III System Studies
This section describes several experiments that explore and evaluate the importance of the various components of the system.
Except where noted, the experiments use a ball return task for training and testing. A ball is launched towards the robot such that it bounces on the robot's side of the table (a standard rule in table tennis). The robot must then hit the ball back over the net so it lands on the opposite side of the table. Although other work has applied this system to more complex tasks (e.g. cooperative human rallies [2]), a simpler task isolates the variables we are interested in from complications like variability and repeatability of humans.
For real robot evaluations, making contact with the ball is worth one point and landing on the opposing side is worth another point, for a maximum episode return of 2.0. A single evaluation is the average return over 50 episodes. Simulated training runs typically have additional reward shaping applied that change the maximum episode return to 4.0 (see Appendix D).
### _Effect of Simulation Parameters on Zero-Shot Transfer_
Our goal in this section is to assess the sensitivity of policy performance to environment parameters. We focus on the zero-shot sim-to-real performance of trained policies and hope that this analysis (presented in Figure 5) sheds some light on which aspects of similar systems need to be faithfully aligned with the real world and where error can be tolerated. For the effects on training quality see Appendix H.
#### III-A1 Evaluation methodology
For each test in this section, 10 models were trained in simulation using BGS (described in Section II-G) for 10,000 training iterations (equivalent to 60M environment episodes, or roughly 6B environment steps). In order to assess how different simulated training settings affect transfer independently of how they affect training quality, we only evaluate models that trained well in simulation (i.e., achieved more than 97.5% of the maximum possible return). The resulting set of policies was evaluated on the real setup for 3 \(\times\) 50 episodes.
#### III-A2 Modeling latency is crucial for good performance
The latency study presented in Figure 5 (top left) shows that policies are sensitive to latency. The baseline model (i.e. the model that uses latency values as measured on hardware) had significantly higher zero-shot transfer than any of the other latency values tested. The next best model had 50% of the baseline latency, achieving an average zero-shot transfer of 1.33 compared with 1.83 for the baseline. Zero-shot transfer scores for the other latency levels tested (0%, 20% and 150%) were very poor. Interestingly, some policies are lucky and transfer relatively well -- for example, one policy with 0% latency had an average score of 1.54. However, performance is highly inconsistent when simulated latency differs from the measured parameters.
#### III-A3 Anchoring ball distributions to the real world matters, but precision is not essential
The ball distribution study shown in Figure 5 (top right) indicates that policies are robust to variations in ball distributions provided the real world distribution (thrower) is contained within the training distribution. The medium and wide distributions were derived from the baseline distribution but are 25% and 100% larger, respectively (see Appendix H). The distribution derived from a different ball thrower (thrower 2) is also larger than the baseline thrower distribution but effectively contains it. In contrast, very small training distributions (tiny) or distributions which are disjoint from the baseline distribution in one or more components (velocity offset -- disjoint in y velocity) result in performance degradation.
#### III-A4 Policies are robust to observation noise provided it has zero mean
The observation noise study in Figure 5 (bottom left) revealed that policies have a high tolerance for zero-mean observation noise. Doubling the noise to +/- 8cm (4 ball diameters in total) or removing it altogether had a minor impact on performance. However, if the noise is biased, performance suffers substantially. Adding a 4cm (one ball diameter) bias to the default noise results in a 36% drop in reward (approximately an 80% drop in return rate).
#### III-A5 Policies are sensitive to physical parameters, which can have complex interactions with each other
The physical parameter ablations in Figure 5 (bottom right) reveal how sensitive policies are to all parameter values tested. Removing randomization from the table restitution coefficient (table: no R randomize) degrades performance by 14%. Increasing the ball restitution coefficient by just 2% reduces performance by 25%, whilst increasing the table restitution coefficient by 8% reduces performance by 36%.
This study also highlights a current limitation of the system. Setting key parameters in the simulator, such as the table and paddle restitution coefficients or the paddle mass, to values estimated following the process described in Appendix D led to worse performance than tuned values (see measured vs. tuned, and also Appendix H for all parameter values). We hypothesize this is because ball spin is not correctly modelled in the simulator and that the tuned values compensate for this for the particular ball distributions used in the real world. One challenge of a complex system with many interacting components is that multiple errors can compensate for each other, making them difficult to notice if performance does not suffer dramatically. It was only through conducting these studies that we became aware of the drop in performance from using measured values. In future work we plan to model spin and investigate whether this resolves the performance degradation from using measured values. For further discussion on this topic, see Appendix I.

Fig. 5: Effect of simulator parameters on zero-shot sim-to-real transfer. Policies are sensitive to latency and physical parameter values, yet surprisingly robust to ball observation noise and changes in the ball distribution. Charts show the mean (with 95% CIs) zero-shot sim-to-real transfer. 2.0 is a perfect score, with a policy returning all balls. R = restitution coefficient.

Fig. 6: Perception resilience studies. Reducing FPS and increasing latency have threshold points: system performance is stable until the robot can no longer react to the ball in time. Additional noise causes graceful degradation in performance, worsened by non-zero mean distributions (common in vision triangulation).
### _Perception Resilience Studies_
In this section we explore important factors in the perception system and how they affect end-to-end performance of the entire system. Latency and accuracy are two major factors, and there is typically a tradeoff between them. A more accurate model may take longer to process, but for fast moving objects (like a table tennis ball) it may be better to have a less accurate result more quickly. Framerate also plays a role: if processing takes longer than frames are arriving, latency will increase over time and eventually require dropping frames to catch up.
For these experiments we select three high performing models from the baseline simulator parameter studies and test them on the real robot while modulating vision performance in the following ways: (1) reduce the framerate of the cameras, (2) increase latency by queuing observations and sending them to the policy at fixed intervals, and (3) reduce accuracy by injecting zero mean and non-zero mean noise to the ball position (over and above inherent noise in the system).
The results from these experiments can be seen in Figure 6. For both framerate and latency, the performance stays consistent with the baseline until there is a heavy dropoff at 50 FPS and 150ms respectively, at which point the robot likely no longer has sufficient time to react to the ball and swings too late, almost universally resulting in balls that hit the net instead of going over. There is a gentle decline in performance as noise increases, but the impact is much greater for non-zero mean noise: going from zero mean ([-4, 4] cm) noise to non-zero mean ([-1, 7] cm) is equivalent to doubling the zero mean noise ([-8, 8] cm). The interpolation of observations described in Section II-E likely serves as a buffer against low levels of zero mean noise. Qualitatively, the robot's behavior was jittery and unstable when moderate noise was introduced. Overall, the stable performance over moderate framerate and latency declines implies that designing around accuracy would be ideal for this task, although as trajectories become more varied and nuanced higher framerates may be necessary to capture their detailed behavior.
### _ES Training Studies_
BGS has been a consistent and reliable method for learning table tennis tasks on this system in simulation and fine-tuning in the real world. In this section we ablate the main components of BGS and compare it with a closely related method, ARS.
Figure 7 (top) presents a comparison of BGS and ARS on the default ball return task against a narrow ball distribution. For both methods we set the number of perturbations to 200, \(\sigma\) to 0.025, and the proportion of perturbations selected as elites to 30%. We roll out each perturbation for 15 episodes and average the reward to reduce reward variance due to stochasticity in the environment. We also apply the common approach of state normalization [82, 71]. Under these settings, the methods are comparable.
Next we consider a harder _ball targeting_ task where the objective for the policy is to return the ball to a precise (randomized per episode) location on the opponent's side of the table [20]. We further increase the difficulty by increasing the range of incoming balls, i.e. using a wider ball distribution, and by decreasing the number of perturbations to 50. Tuning the step size \(\alpha\) was crucial for successful policy training with ARS (Figure 7 bottom left). An un-tuned step-size may lead to extremely slow training or fast training with sub-optimal asymptotic performance.
Figure 7 (bottom right) shows the enhancements in training made by the BGS techniques, independently and collectively, compared to baseline ARS. Reward differential elite-choice and orthogonal sampling lead to faster convergence. As a result, BGS is the default ES algorithm for policy training.
### _Acting and Observing in Task Space_
The previous results use joint space for observations and actions. In this section we explore policies that operate in "task space" (see Section II-F2). Task space has several benefits: it is compact, interpretable, provides a bounding cube for the end effector as a safety mechanism, and aligns the robot action and observation spaces with ball observations. In our experiments we show that task space policies train faster and, more importantly, can be transferred to different robot morphologies.
Figure 8 (top left) compares training speed between joint space (JS), task space for actions -- TS(Act) -- and full task space policies (actions and observations) -- TS(Act&Obs). Both task space policies train faster than JS policies. We also assess task space policies on a harder (damped) environment\({}^{4}\). Now the robot needs to learn to swing and hit the ball harder. Figure 8 (top right) shows that task space policies learn to solve the task (albeit not perfectly) while joint space policies get stuck in a local maximum. For transfer performance of these policies see Appendix K.

Fig. 7: BGS ablation studies. (top) BGS and ARS perform comparably on the ball return task with a narrow ball distribution. (bottom) A harder environment, ball targeting with a larger ball distribution. (left) Step-size alpha has a very significant effect on training success. (right) Improvements with the reward differential elite-choice technique, orthogonal perturbation sampling, and their combination (BGS).
Footnote 4: Created by lowering the restitution coefficient of the paddle and ball, and increasing the linear damping of the ball.
One crucial benefit of operating in task space is robustness to different robots or morphologies. To demonstrate this, we first take the TS(Act&Obs) model trained in the damped environment and transfer it to the real robot (Figure 8 bottom). Performance is almost perfect, with a score of 1.9. Next we change the initial pose of the robot and freeze two of the arm joints. Policy performance is maintained under a pose change (ABB 120T & Modified Default Pose (MDP)) and only drops slightly when some joints are also frozen (ABB 120T & MDP + 2 Frozen Joints). We then evaluate the policy on a robot with a different morphology and ball distribution and see that performance drops substantially. However, a task space policy is easily adaptable to new settings without retraining by adding a residual to the actions to shift the paddle position; this is not possible when operating in joint space. Observing the robot showed that it was swinging too low and slightly off-angle, so adding a residual of 7cm above the table and 0.2 radians of roll nearly recovers the original policy performance (ABB 1100-4 & New Ball Dist & Manual Offset).
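Concretely, such a manual residual amounts to a fixed offset added to every task-space action, as in the snippet below (the action layout is an illustrative assumption).

```python
import numpy as np

# Task-space action layout assumed here: [x, y, z, roll, yaw].
# The residual raises the swing by 7cm and corrects the paddle roll by 0.2 rad.
RESIDUAL = np.array([0.0, 0.0, 0.07, 0.2, 0.0])

def adapted_action(policy_action: np.ndarray) -> np.ndarray:
    return policy_action + RESIDUAL
```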
### _Applying to a New Task: Catching_
While the system described above was designed for table tennis, it is general enough to be applied to other agile tasks. In this section, we apply it to a new task of catching a thrown ball and assess the effect of latency modelling, similar to the latency experiment from Section III-A.
We used a similar system setup with minor modifications: a single horizontal linear rail (instead of two) and a lacrosse head as the end effector. The software stack and agents are similar, with small differences: a simplified RewardManager and DoneManager, soft body modelling of the net in simulation, trajectory prediction inputs for agents, and handling of occlusions when the ball is close to the net. The BGS agents are similarly trained in a simulator before being transferred to the real hardware, where they are fine-tuned. Agents achieve a final catching success rate of 85-90%. For full details on the task see related work [84].
This task has a much larger variance in sim-to-real transfer due to the difficulty of accurately modelling net and ball capture dynamics. As in the table tennis study, agents were trained in simulation with latencies of 100%, 0%, 20%, 50%, and 150% of baseline latency. Experiments with lower latency (0%, 20%, and 50%) all transferred poorly, with catch rates between 0% and 10%. Curiously, baseline latency and 150% latency performed similarly, with one 150% run achieving the best zero-shot transfer ever: a score equaling policies fine-tuned on the real robot. This finding contradicts the results in the table tennis task, which prompted further investigation and revealed that the latency for this task was set incorrectly in the configuration file; the real value was much closer to the 150% value.
This revelation dovetails with the \(50\%\) latency table tennis results: a close latency can still give decent performance, but accurate values are better. As such, it may be useful to generally run ablation studies such as these to challenge assumptions about the system and potentially find bugs.
## IV Related Work
### _Agile Robotic Learning_
The space of agile robotic learning systems is varied. It includes autonomous vehicles such as cars [76, 79, 70, 9, 10], legged locomotion [73, 91, 32, 78, 86, 87, 4], as well as dynamic throwing [3, 52, 29, 98], catching [84], and hitting -- which is where table tennis fits.
Many of these systems face similar challenges -- environment resets, latency, safety, sim-to-real, perception, and system running speed as exemplified in strict inference and environment step time requirements.
The benefits of automatic resets have been demonstrated in quadrupedal systems [86, 87] and throwing [98]. To our knowledge, this system is the first table tennis learning system with automatic resets, enabling autonomous training and evaluation in the real world for hours without human intervention.
Latency is a well known problem in physical learning systems [91]. The system contributes to this area by extending [91], modeling multiple latencies in simulation, and by validating its importance through extensive experiments. Orthogonally, the system also includes observation interpolation on the physical system as a useful technique for increasing the robustness of deployed policies to latency variation (e.g. from jitter). We demonstrated empirically the robustness of policies to substantial injections of latency and hypothesize that the observation interpolation plays a crucial role in this.
Fig. 8: Training policies in task space in the baseline environment (top-left) and a harder damped environment (top-right). Training converges faster in task-space for both scenarios. (bottom) A task space policy trained in the damped environment is successfully transferred to different morphologies and a new robot.
Safety is another crucial element that becomes very important with fast moving robots. Trajectory planners [54] can avoid static obstacles, neural networks can check for collisions [48], safe RL can be used to restrict state spaces [97], or a system can learn from safe demonstrations [67, 68, 40]. In contrast, this system runs a parallel simulation during deployment as a safety layer. This is beneficial because the robot policy runs at a high frequency and there are several physical environments and robots; it enables (1) the definition of undesirable states and (2) preventing a physical robot from reaching them. To the best of our knowledge this is also a novel component of the system.
Learning controllers from scratch in the real world can be challenging for an agile robot due to sample inefficiency and dangers in policy exploration. Training first in a simulator and then deploying to the real robot [56, 75, 91] (i.e. sim-to-real) is an effective way to mitigate both issues, but persistent differences between simulated and real world environments can be difficult to overcome [42, 72].
Perception is crucial in helping robots adapt to changes in the environment [4, 96] and interact with relevant objects [98, 52]. When objects need to be tracked at high speed, such as in catching or hitting, it is typical to utilize methods such as motion-capture systems [65]; however, in table tennis the ball needs to adhere to strict standards that prevent instrumentation or altering of the ball properties. Passive vision approaches for detecting the location of a bright colored ball within a video frame from a stationary camera may seem trivial; however, image processing techniques [92] such as color thresholding, shape fitting [37], and background subtraction are problematic. When considering the typical video captured from the cameras, several factors in the scene render such approaches brittle. For example, the color of the natural light changes throughout the day. Even under fixed lighting, the video stream is captured at 125Hz, which is above the Nyquist frequency of the electricity powering fluorescent lights, resulting in images that flicker between frames. Additionally, there are typically several leftover balls from previous episodes around the scene which share the same color and shape as the ball in play. These distractors make data association more of a challenge for downstream tracking. Finally, extracting things that move is also a challenge when other basic visual cues are unreliable, because there is always a robot and/or a human moving in the scene. The perception component of the system in this paper uniquely combines all these visual cues by learning to detect the ball in an end-to-end fashion that is robust to visual ambiguities and provides both precise ball locations and velocity estimates.
Finally, prior work in robot learning varies by how much it focuses on the system compared with the problem being tackled. [22, 45, 47, 87, 66, 92, 56] are examples of works which dedicate substantial attention to the system. They provide valuable details and know-how about what mattered for a system to work in practice. This work is spiritually similar.
### _Robotic Table Tennis_
Robotic table tennis is a challenging, dynamic task [13] that has been a test bed for robotics research since the 1980s [8, 51, 34, 36, 66]. The current exemplar is the Omron robot [55]. Until recently, most methods tackled the problem by identifying a virtual hitting point for the racket [63, 64, 6, 69, 101, 39, 88, 58]. These methods depend on being able to predict the ball state at time \(t\) either from a ball dynamics model which may be parameterized [63, 64, 61, 62] or by learning to predict it [69, 101, 66]. Various methods can then generate robot joint trajectories given these target states [66, 63, 64, 61, 62, 67, 68, 40, 53, 92, 27]. More recently, Tebbe et al. [93] learned to predict the paddle target using reinforcement learning (RL).
Such approaches can be limited by their ability to predict and generate trajectories. An alternative line of research seeks to do away with hitting points and ball prediction models, instead focusing on high frequency control of a robot's joints using either RL [13, 101, 26] or learning from demonstrations [68, 17, 16]. Of these, Buchler et al. [13] is the most similar to the system in this paper. Similar to Buchler et al. [13], this system trains RL policies to control robot joints at high frequencies given ball and robot states as policy inputs. However Buchler et al. [13] uses hybrid sim and real training as well as a robot arm driven by pneumatic artificial muscles (PAMs), whilst this system uses a motor-driven arm. Motor-driven arms are a common choice and used by [17, 92, 93, 67].
## V Takeaways and Lessons Learned
Here we summarize lessons learned from the system that we hope are widely applicable to high-speed learning robotic systems beyond table tennis.
Choosing the right robots is important. The system started with a scaled down version of the current setup as a proof of concept and then graduated to full-scale, industrial robots (Appendix B). Industrial robots have many benefits such as low latency and high repeatability, but they can come with "closed-box" issues that must be worked through (Section II-B).
A safety simulator is a dynamic and customizable solution to constraining operations with high frequency control compared to high-level trajectory planners (Section II-B).
A configurable, modular, and multi-language (e.g. C++ and Python) system improves research and development velocity by making experimentation and testing easy for the researcher (Section II-B).
Latency modeling is critical for real world transfer performance as indicated by our experimental results. Other environmental factors may have varying effects that change based on the task (Section III-A). For example, ball spin is not accurately modeled in the ball return task, but can be critical when more nuanced actions are required.
Accurate environmental perception is also a key factor in transfer performance. In this system's case many factors were non-obvious to non-vision experts: camera placement, special calibration techniques, lens locks, etc. all resulted in better detection (Section II-D).
GPU data buffering, raw Bayer pattern detection, and patch based training substantially increase the performance of high frequency perception (Section II-D). Rather than using an off-the-shelf perception module, a purpose-built version allows levels of customization that may be required for high-speed tasks.
Interpolating and smoothing inputs (Section II-E) solves the problem of different devices running at different frequencies. It also guards against zero-mean noise and system latency variability, but is less effective against other types of noise.
Automatic resets and remote control increase system utilization and research velocity (Section II-E). The system originally required a human to manually collect balls and control the thrower. Now that the system can be run remotely and "indefinitely", significantly more data collection and training can occur.
ES algorithms like BGS (Section II-G) are a good starting point to explore the capabilities of a system, but they may also be a good option in general. BGS is still the most successful and reliable method applied in this system. Despite poor sample efficiency, ES methods are simple to implement, scalable, and robust optimizers that can even fine-tune real world performance.
Humans are highly variable and don't always follow instructions (on purpose or not). Systems that interact with them require significant accommodations to address this, to alleviate frustrations (e.g. time to reset), and to ensure valuable human time is not wasted.
### _Limitations and Future Work_
A guiding principle of the system has been not to solve everything at once. Starting with a simple task (e.g. hitting the ball) and then scaling up to more complex tasks (e.g. playing with a human) provides a path to progress that naturally prioritizes the inefficiencies to be addressed. For example, a long but clean environment reset was sufficient for learning ball return tasks, but needed optimization to be sufficiently responsive to a human.
The current system struggles with a few key features. More complex play requires understanding the spin of the ball, yet the system currently has no way to directly read spin, and spin is not even included in simulation training. While it is possible to determine spin optically (i.e. by tracking the motion of the logo on the ball), it would require significantly higher frame rates and resolutions than what is currently employed. Other approaches more suited to our setup include analyzing the trajectory of the ball (which the robot may be doing implicitly) or including the paddle/thrower pose in the observation, analogous to how many humans detect spin. Additionally, a model of the opponent could be learned, which would help if the opponent attempts to be deliberately deceptive, concealing or adding confusion to their hits.
The robot's range of motion is significant thanks to the inclusion of the gantry, but is still limited in a few key ways. Firstly, the safety simulator does not allow the paddle to go below the height of the table, preventing the robot from "scooping" low balls. This restriction prevents the robot from catching the arm between the table and gantry, which the safety sim was unable to prevent in testing. The robot is limited in side-to-side motion as well as how far forward over the table it can reach, so there may be balls that it physically cannot return. Finally, so far the robot has not made significant use of motion away from the table. We hope that training on more complex ball distributions will require the robot to make full use of the play space as professional humans do.
The sensitivity of policies also increases as the task becomes more complex. For example, slight jitter or latency in inference may be imperceptible for simple ball return tasks, but more complex tasks that require higher precision quickly revealed these gaps, requiring performance optimizations. Sim-to-real gaps are also an issue: hitting a ball can be done without taking spin into account, but controlling spin is essential for high-level rallying. Environmental parameters and ball spin both become more important, and incorporating domain randomization is a promising path toward integrating them in a robust manner. Additionally, when human opponents come into play, modeling them directly or indirectly makes it possible for the robot to move beyond purely reactive play and start incorporating strategic planning into the game.
## VI Conclusion
In this paper we have explored the components of a successful, real-world robotic table tennis system. We discussed the building blocks, trade-offs, and other design decisions that went into the system and justify them with several case studies. While we do not believe the system in this paper is the perfect solution to building a learning, high-speed robotic system, we hope that this deep-dive can serve as a reference to those who face similar problems and as a discussion point to those who have found alternative approaches.
## Acknowledgments
We would like to thank Arnab Bose, Laura Downs, and Morgan Worthington for their work on improving the vision calibration system and Barry Benight for his help with video storage and encoding. We would also like to thank Yi-Hua Edward Yang and Khem Holden for improvements to the ball thrower control stack. We are also very grateful to Chris Harris and Razvan Surdulescu for their overall guidance and supervision of supporting teams such as logging and visualization. Additional thanks go to Tomas Jackson for video and photography and Andy Zeng for a thorough review of the initial draft of this paper. Finally, we want to thank Huong Phan, who was the lab manager for the early stages of the project and got the project headed in the right direction.
|
2309.14669 | Symmetric teleparallel cosmology with boundary corrections | We investigate the geometrodynamical effects of introducing the boundary term
in symmetric teleparallel gravity. Specifically, we consider a homogeneous and
isotropic universe in $f\left( Q, B \right) $, where $Q$ is the non-metricity
scalar, and $B$ is the boundary term that relates the non-metricity and Ricci
scalars. For the connection in the coincidence gauge, we find that the field
equations are of fourth-order, and the fluid components introduced by the
boundary are attributed to a scalar field. In the coincidence gauge, the
cosmological field equations are equivalent to those of teleparallelism with a
boundary term. Nevertheless, for the connection defined in the non-coincidence
gauge, the geometrodynamical fluid consists of three scalar fields. We focus on
the special case of $f\left( Q, B \right) = Q + F\left( B \right) $ theory, and
we determine a new analytic cosmological solution that can explain the
late-time acceleration of the universe and provide a geometric mechanism for
the unification of dark energy with dark matter. | Andronikos Paliathanasis | 2023-09-26T04:48:01Z | http://arxiv.org/abs/2309.14669v1 | # Symmetric teleparallel cosmology with boundary corrections
###### Abstract
We investigate the geometrodynamical effects of introducing the boundary term in symmetric teleparallel gravity. Specifically, we consider a homogeneous and isotropic universe in \(f\left(Q,B\right)\), where \(Q\) is the non-metricity scalar, and \(B\) is the boundary term that relates the non-metricity and Ricci scalars. For the connection in the coincidence gauge, we find that the field equations are of fourth-order, and the fluid components introduced by the boundary are attributed to a scalar field. In the coincidence gauge, the cosmological field equations are equivalent to those of teleparallelism with a boundary term. Nevertheless, for the connection defined in the non-coincidence gauge, the geometrodynamical fluid consists of three scalar fields. We focus on the special case of \(f\left(Q,B\right)=Q+F\left(B\right)\) theory, and we determine a new analytic cosmological solution that can explain the late-time acceleration of the universe and provide a geometric mechanism for the unification of dark energy with dark matter.
Symmetric teleparallel; non-metricity gravity; non-coincidence gauge; scalar field description.
## I Introduction
General Relativity is a well-tested theory [1]; however, it faces challenges on cosmological scales [3; 4; 5]. Recent cosmological observations [6; 7; 8; 9; 10] suggest that the universe is currently experiencing an acceleration phase driven by an exotic matter source known as dark energy
[11; 12]. On the other hand, to address the homogeneity and flatness problems, it has been proposed that the universe underwent an inflationary epoch in its early stages driven by the inflaton field [13].
The nature and physical properties of the inflaton and dark energy remain subjects of debate among cosmologists. Cosmologists have proposed different models to explain cosmic acceleration, which can be broadly categorized into two groups. In the first category, a matter source with negative pressure is introduced into the field equations of General Relativity [14; 15; 16; 17; 18; 19]. In the second category of models, cosmologists focus on determining a new gravitational theory by modifying the gravitational Action Integral [20; 21; 22].
The consideration of quantum-gravitational effects in the one-loop approximation in gravity [23; 24] leads to the introduction of the \(R^{2}\) term in the gravitational Lagrangian. This quadratic theory of gravity [25; 26; 27] has been used as a mechanism to explain inflation [28; 29; 30]. Indeed, when the \(R^{2}\) term dominates, it leads to a de Sitter expansion [31]. The success of the \(R^{2}\) term in describing acceleration has given rise to a family of theories known as \(f(R)\)-gravity [32]. For a comprehensive review, we recommend referring to [33].
The symmetric teleparallel \(f(Q)\)-theory of gravity has garnered the attention of cosmologists [34; 35]. \(f(Q)\)-theory is defined within the framework of symmetric teleparallel theory [36; 37], where the physical space is described by the metric tensor. However, the fundamental connection in this theory is not the Levi-Civita connection but a symmetric and flat connection, leading to the definition of a nonzero non-metricity scalar \(Q\). When \(f(Q)\) is linear, the gravitational theory becomes equivalent to General Relativity, known as Symmetric Teleparallel General Relativity (STGR) [36]. In STGR, the Ricci scalar \(R\) and the non-metricity scalar \(Q\) differ only by a boundary term \(B\), which means that the variation of the boundary term is neglected, resulting in equivalent field equations. There is a plethora of studies in the literature on \(f(Q)\)-theory; for example, see [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48], and references therein.
In the process of constructing a gravitational theory, the boundary term \(B\) has been utilized to modify the gravitational Action Integral. Inspired by teleparallelism [49], the symmetric teleparallel theory with a boundary term, expressed as \(f(Q,B)\)-gravity, was introduced in [50; 51]. It was discovered that the introduction of the boundary term further modifies the field equations of the gravitational model. In this study, we aim to investigate the effects of the boundary term on cosmological dynamics.
Because the definition of the symmetric and flat connection is not unique, there can be multiple different connections that describe the same gravitational model. The corresponding non-metricity scalars differ by a boundary term, which means that they lead to the same field equations in STGR. However, this is not true in the case of a nonlinear Lagrangian. As a result, the definition of the connection affects the boundary term, implying that \(f(Q,B)\)-gravity depends on the definition of the connection.
Recently, in [52], it was proven that \(f(Q)\)-theory of gravity admits a minisuperspace description, and the theory is of fourth- or sixth-order, depending on the connection used. The higher-order derivatives can be attributed to a scalar field, and the gravitational Lagrangian can be written as that of a higher-dimensional second-order dynamical system. This approach is applied in this study in the context of \(f(Q,B)\)-gravity for a homogeneous and isotropic geometry. The structure of the paper is as follows.
In Section 2, we present the definition of symmetric teleparallel gravity and its generalization, the \(f(Q)\)-theory, where \(Q\) represents the non-metricity scalar. The extension of symmetric teleparallel theory with a boundary term \(B\) is discussed in Section 3. For the \(f(Q,B)\)-theory, we discuss when it is equivalent to teleparallel \(f(T,B_{T})\) theory and when it approaches the limit of \(f(R)\)-gravity. In Section 4, we present the gravitational field equations for a homogeneous and isotropic universe. Specifically, we employ Lagrange multipliers to demonstrate that the field equations in \(f(Q,B)\)-theory can be described within a minisuperspace framework. It's important to note that the definition of the symmetric and flat connection in a FLRW universe is not unique. Therefore, we provide the field equations for all four different families of connections. One novel aspect of the Lagrange multiplier approach is the introduction of scalar fields to account for the higher-order derivatives in the gravitational model. Consequently, \(f(Q,B)\)-cosmology is a fourth-order gravitational theory, equivalent to teleparallel \(f(T,B_{T})\)-cosmology, when the symmetric and flat connection is defined in the coincidence gauge. However, when the connection is defined in the non-coincidence gauge, the field equations become eighth-order and are described by three scalar fields.
Furthermore, in Section 5, we focus on the particular case of \(f(Q,B)=Q+F(B)\) cosmology, where the gravitational Action Integral is modified by nonlinear terms of the boundary. For this specific gravitational model, the order of the gravitational field equations is reduced by two in the non-coincidence gauge. Consequently, the geometric fluid is described by two scalar fields. In Section 6, we present a new analytic solution for symmetric teleparallel cosmology with a boundary term in the non-coincidence gauge. This new solution is capable of describing the \(\Lambda\)CDM universe at the present time and possesses a de Sitter attractor. Finally, in Section 7, we summarize our findings and draw our conclusions.
## II Symmetric Teleparallel Gravity
We examine a gravitational model described by the four-dimensional metric tensor \(g_{\mu\nu}\) and the covariant derivative \(\nabla_{\lambda}\), which is defined using the generic connection \(\Gamma^{\kappa}_{\mu\nu}\), such that the autoparallels are defined as [53]
\[\frac{d^{2}x^{\mu}}{ds^{2}}+\Gamma^{\mu}_{\kappa\nu}\frac{dx^{\kappa}}{ds} \frac{dx^{\nu}}{ds}=0.\]
Connection \(\Gamma^{\kappa}_{\mu\nu}\) determines the nature of the geometry.
For the general connection we can define the Riemann tensor
\[R^{\kappa}_{\;\lambda\mu\nu}=\frac{\partial\Gamma^{\kappa}_{\;\lambda\nu}}{ \partial x^{\mu}}-\frac{\partial\Gamma^{\kappa}_{\;\lambda\mu}}{\partial x^{ \nu}}+\Gamma^{\sigma}_{\;\lambda\nu}\Gamma^{\kappa}_{\;\mu\sigma}-\Gamma^{ \sigma}_{\;\lambda\mu}\Gamma^{\kappa}_{\;\nu\sigma}, \tag{1}\]
the torsion tensor
\[\mathrm{T}^{\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\;\mu\nu}-\Gamma^{\lambda}_{ \;\nu\mu}, \tag{2}\]
and the non-metricity tensor [54]
\[Q_{\lambda\mu\nu}=\nabla_{\lambda}g_{\mu\nu}=\frac{\partial g_{\mu\nu}}{ \partial x^{\lambda}}-\Gamma^{\sigma}_{\;\lambda\mu}g_{\sigma\nu}-\Gamma^{ \sigma}_{\;\lambda\nu}g_{\mu\sigma}.\]
In General Relativity, \(\Gamma^{\kappa}_{\mu\nu}\) is recognized as the Levi-Civita connection, denoted as \(\tilde{\Gamma}^{\kappa}_{\mu\nu}\). Consequently, in this framework, \(\mathrm{T}^{\lambda}_{\;\mu\nu}=0\) and \(Q_{\lambda\mu\nu}=0\). Therefore, the primary scalar in General Relativity is the Ricci scalar \(R\).
On the other hand, in the Teleparallel Equivalent of General Relativity (TEGR) [22], the connection \(\Gamma^{\kappa}_{\mu\nu}\) is replaced by the antisymmetric Weitzenböck connection, resulting in \(R^{\kappa}_{\;\lambda\mu\nu}=0\) and \(Q_{\lambda\mu\nu}=0\). In this context, the torsion scalar \(T\) takes on the role of the fundamental geometric object in teleparallel gravity.
In the theory under consideration, which is STGR, \(\Gamma^{\kappa}_{\mu\nu}\) possesses the property of being both flat and torsionless. This implies that \(R^{\kappa}_{\;\lambda\mu\nu}=0\) and \(\mathrm{T}^{\lambda}_{\;\mu\nu}=0\). Additionally, it inherits the symmetries of the metric tensor \(g_{\mu\nu}\). Thus, the non-metricity scalar \(Q\), defined as [36]
\[Q=Q_{\lambda\mu\nu}P^{\lambda\mu\nu} \tag{3}\]
is the fundamental geometric quantity of gravity.
Tensor \(P^{\lambda\mu\nu}\) is defined as [36]
\[P^{\lambda}_{\ \mu\nu}=-\frac{1}{4}Q^{\lambda}_{\ \mu\nu}+\frac{1}{2}Q^{\ \ \ \lambda}_{(\mu\ \nu)}+\frac{1}{4}\left(Q^{\lambda}-\bar{Q}^{\lambda} \right)g_{\mu\nu}-\frac{1}{4}\delta^{\lambda}_{\ (\mu}Q_{\nu)}, \tag{4}\]
which is written with the help of the traces\({}^{1}\)\(Q_{\mu}=Q^{\ \ \ \nu}_{\mu\nu}\) and \(\bar{Q}_{\mu}=Q^{\nu}_{\ \mu\nu}\).
Footnote 1: Parentheses in the indices denote symmetrization, that is, \(A_{(\mu\nu)}=\frac{1}{2}\left(A_{\mu\nu}+A_{\nu\mu}\right)\); and \(\delta^{\mu}_{\ \nu}\) is the Kronecker delta.
The Ricci scalar \(R\), corresponding to the Levi-Civita connection \(\tilde{\Gamma}^{\kappa}_{\mu\nu}\) of the metric tensor \(g_{\mu\nu}\), and the non-metricity scalar \(Q\), corresponding to a symmetric and flat connection \(\Gamma^{\kappa}_{\mu\nu}\), differ by a boundary term \(B\), defined as \(B=R-Q\).
The gravitational Action Integral of STGR reads [36]
\[\int d^{4}x\sqrt{-g}Q\simeq\int d^{4}x\sqrt{-g}R+\text{boundary terms}, \tag{5}\]
from which it follows that STGR is dynamically equivalent to General Relativity.
However, when nonlinear terms of the non-metricity scalar \(Q\) are introduced in the gravitational Action, as in \(f(Q)\)-gravity, this equivalence is lost: the resulting gravitational theory does not possess any dynamical equivalence with General Relativity or with its generalization, \(f(R)\)-gravity.
The Action Integral in symmetric teleparallel \(f(Q)\)-gravity is defined as [34; 35]
\[S_{f(Q)}=\int d^{4}x\sqrt{-g}f(Q). \tag{6}\]
The resulting field equations are
\[\frac{2}{\sqrt{-g}}\nabla_{\lambda}\left(\sqrt{-g}f_{,Q}P^{\lambda}_{\ \mu\nu}\right)-\frac{1}{2}f(Q)g_{\mu\nu}+f_{,Q}\left(P_{\mu\rho\sigma}Q^{\ \rho\sigma}_{\ \nu}-2Q_{\rho\sigma\mu}P^{\rho\sigma}_{\ \ \ \nu}\right)=0, \tag{7}\]
\[\nabla_{\mu}\nabla_{\nu}\left(\sqrt{-g}f_{,Q}P^{\mu\nu}_{\ \ \ \sigma}\right)=0. \tag{8}\]
Equations (7) represent the modified Einstein field equations in \(f(Q)\)-gravity, while equation (8) corresponds to the equation of motion for the connection. When equation (8) holds true for a specific connection at all times, that connection is referred to as the "coincidence gauge". However, if equation (8) is not always satisfied for a particular connection, then that connection is defined within the so-called "non-coincidence gauge", as discussed in [35].
## III Symmetric Teleparallel Boundary Gravity
Recently, a generalization of the \(f(Q)\) theory was introduced [50; 51] by incorporating a boundary term into the gravitational Action Integral. More precisely, this extended framework includes the gravitational Action Integral as follows
\[S_{f(Q,B)}=\int d^{4}x\sqrt{-g}f(Q,B) \tag{9}\]
where \(B=R-Q\).
In the notation of [51], the boundary term is denoted as \(C\). This gravitational theory is equivalent to General Relativity, when \(f\left(Q,B\right)\) is a linear function, that is, \(f\left(Q,B\right)=f_{1}Q+f_{2}B-2\Lambda\); the limit of \(f\left(Q\right)\) is recovered when \(f\left(Q,B\right)=f\left(Q\right)+f_{2}B\); while the fourth-order \(f\left(R\right)\) theory of gravity is recovered when \(f\left(Q,B\right)=f\left(Q+B\right)\).
The gravitational field equations which correspond to the Action Integral (9) are
\[0 = \frac{2}{\sqrt{-g}}\nabla_{\lambda}\left(\sqrt{-g}f_{,Q}P_{\ \mu\nu}^{\lambda}\right)-\frac{1}{2}f(Q,B)g_{\mu\nu}+f_{,Q}\left(P_{\mu\rho \sigma}Q_{\nu}^{\ \rho\sigma}-2Q_{\rho\sigma\mu}P^{\rho\sigma}_{\ \ \nu}\right) \tag{10}\] \[+ \left(\frac{B}{2}g_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}+g_{\mu\nu}g ^{\kappa\lambda}\nabla_{\kappa}\nabla_{\lambda}-2P^{\lambda}_{\ \ \ \mu\nu}\nabla_{\lambda}\right)f_{,B}\]
\(f\left(Q,B\right)\)-gravity has been inspired by the teleparallel boundary gravity \(f\left(T,B_{T}\right)\) [49], where now \(B_{T}\) is the boundary term that relates the Ricci scalar \(R\) and the torsion scalar \(T\) for the Weitzenböck connection [55]. In a similar way, \(f\left(T,B_{T}\right)\) recovers \(f\left(R\right)\)-gravity when \(f\left(T,B_{T}\right)=f\left(T+B_{T}\right)\) [49], while the limit of GR is recovered when \(f\left(T,B_{T}\right)=f_{1}T+f_{2}B_{T}-2\Lambda\); and \(f\left(T\right)\)-theory follows when \(f\left(T,B_{T}\right)=f\left(T\right)+f_{2}B_{T}\). There are many astrophysical and cosmological applications of \(f\left(T,B_{T}\right)\)-theory in the literature [56; 57; 58; 59; 60; 61]. From these results it is clear that the boundary \(B_{T}\) plays an important role in the geometric description of dark energy. Thus, the generalization of symmetric teleparallel theory seems natural.
\(f\left(T,B_{T}\right)\) theory of gravity is a fourth-order theory, similar to \(f\left(R\right)\)-theory, while when \(f\left(T,B_{T}\right)=f\left(T\right)+f_{2}B_{T}\) the resulting field equations are of second-order [62]. The order of symmetric teleparallel \(f\left(Q\right)\)-theory depends on the connection which is used for the definition of the nonmetricity scalar \(Q.\) For the connection defined in the coincidence gauge \(f\left(Q\right)\) is a second-order theory, while for a connection in the non-coincidence gauge \(f\left(Q\right)\)-theory is a sixth-order theory of gravity.
To comprehend the degrees of freedom within \(f(Q,B)\)-theory, we turn our attention to a cosmological model representing an isotropic and homogeneous universe. Employing the Lagrange multipliers method, we introduce scalar fields that account for the dynamical degrees of freedom within the \(f(Q,B)\)-theory. Consequently, our analysis reveals that the theory introduces either one or three scalar fields.
## IV Isotropic and homogeneous universe
The isotropic and homogeneous universe is described by the FLRW line element
\[ds^{2}=-N(t)^{2}dt^{2}+a(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}\left(d \theta^{2}+\sin^{2}\theta d\varphi^{2}\right)\right], \tag{11}\]
in which \(N\left(t\right)\) is the lapse function and \(a\left(t\right)\) is the scale factor, which denotes the radius of the universe. Hence, \(H=\frac{1}{N}\frac{\dot{a}}{a}\), where \(\dot{a}=\frac{da}{dt}\), is the Hubble function. Parameter \(k\) is the spatial curvature: for \(k=0\) the universe is spatially flat, \(k=+1\) corresponds to a closed FLRW geometry, and \(k=-1\) describes an open universe.
FLRW spacetime admits a six-dimensional Killing algebra consisting of the vector fields
\[\zeta_{1}=\sin\varphi\partial_{\theta}+\frac{\cos\varphi}{\tan\theta}\partial _{\varphi},\quad\zeta_{2}=-\cos\varphi\partial_{\theta}+\frac{\sin\varphi}{ \tan\theta}\partial_{\varphi},\quad\zeta_{3}=-\partial_{\varphi} \tag{12}\]
\[\begin{array}{l}\xi_{1}=\sqrt{1-kr^{2}}\sin\theta\cos\varphi \partial_{r}+\frac{\sqrt{1-kr^{2}}}{r}\cos\theta\cos\varphi\partial_{\theta} -\frac{\sqrt{1-kr^{2}}}{r}\frac{\sin\varphi}{\sin\theta}\partial_{\varphi}\\ \xi_{2}=\sqrt{1-kr^{2}}\sin\theta\sin\varphi\partial_{r}+\frac{\sqrt{1-kr^{2 }}}{r}\cos\theta\sin\varphi\partial_{\theta}+\frac{\sqrt{1-kr^{2}}}{r}\frac{ \cos\varphi}{\sin\theta}\partial_{\varphi}\\ \xi_{3}=\sqrt{1-kr^{2}}\cos\theta\partial_{r}-\frac{\sqrt{1-kr^{2}}}{r}\sin \theta\partial_{\varphi}.\end{array} \tag{13}\]
### Non-zero spatial curvature \(k\neq 0\)
For the FLRW geometry with \(k\neq 0\), there exists a unique connection defined in the non-coincidence gauge with non-zero components [63, 64]
\[\Gamma^{r}_{\;tr}=\Gamma^{r}_{\;rt}=\Gamma^{\theta}_{\;t\theta}= \Gamma^{\theta}_{\;\theta t}=\Gamma^{\varphi}_{\;t\varphi}=-\frac{k}{\gamma(t )},\quad\Gamma^{r}_{\;rr}=\frac{kr}{1-kr^{2}},\] \[\Gamma^{r}_{\;\theta\theta}=-r\left(1-kr^{2}\right),\quad\Gamma^ {r}_{\;\varphi\varphi}=-r\sin^{2}(\theta)\left(1-kr^{2}\right)\quad\Gamma^{ \theta}_{\;r\theta}=\Gamma^{\theta}_{\;\theta r}=\Gamma^{\varphi}_{\;r\varphi} =\Gamma^{\varphi}_{\;\varphi r}=\frac{1}{r}, \tag{14}\] \[\Gamma^{\theta}_{\;\varphi\varphi}=-\sin\theta\cos\theta,\quad \Gamma^{\varphi}_{\;\theta\varphi}=\Gamma^{\varphi}_{\;\varphi\theta}=\cot\theta,\]
and
\[\Gamma^{t}_{\;tt}=-\frac{k+\dot{\gamma}(t)}{\gamma(t)},\quad\Gamma^{t}_{\;rr }=\frac{\gamma(t)}{1-kr^{2}}\quad\Gamma^{t}_{\;\theta\theta}=\gamma(t)r^{2}, \quad\Gamma^{t}_{\;\varphi\varphi}=\gamma(t)r^{2}\sin^{2}(\theta).\]
For this connection, the non-metricity scalar is calculated as
\[Q_{k}=-6\left(H^{2}-\frac{k}{a^{2}}\right)+\frac{3}{a^{3}N}\left(aN\gamma-k \frac{a^{3}}{\gamma N}\right)^{\cdot}. \tag{15}\]
From the Levi-Civita connection of the spacetime (11) we calculate the Ricci scalar
\[R_{k}=6\left(2H^{2}+\frac{k}{a^{2}}\right)+\frac{6}{N}\dot{H}. \tag{16}\]
Consequently, the boundary term \(B=R-Q\)[51] is
\[B_{k}=R_{k}-Q_{k}=3\left(6H^{2}+\frac{2}{N}\dot{H}-\frac{1}{a^{3}N}\left(aN \gamma-k\frac{a^{3}}{\gamma N}\right)^{\cdot}\right). \tag{17}\]
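As a quick consistency check (ours, not part of the original derivation), the following sympy sketch verifies symbolically that the expressions (15), (16), and (17) indeed satisfy \(B_{k}=R_{k}-Q_{k}\).

```python
import sympy as sp

t, k = sp.symbols('t k')
a = sp.Function('a', positive=True)(t)
N = sp.Function('N', positive=True)(t)
g = sp.Function('gamma')(t)

H = sp.diff(a, t)/(N*a)                  # Hubble function H = (1/N) (da/dt)/a
D = sp.diff(a*N*g - k*a**3/(g*N), t)     # the common total-derivative term

Q_k = -6*(H**2 - k/a**2) + 3/(a**3*N)*D               # equation (15)
R_k = 6*(2*H**2 + k/a**2) + 6/N*sp.diff(H, t)         # equation (16)
B_k = 3*(6*H**2 + 2/N*sp.diff(H, t) - 1/(a**3*N)*D)   # equation (17)

print(sp.simplify(R_k - Q_k - B_k))      # -> 0, so B_k = R_k - Q_k
```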
In order to derive the gravitational field equations we apply the mathematical manipulation introduced in [52] and we introduce the scalar field \(\Psi\) such that \(\gamma=\frac{1}{\dot{\Psi}}\).
We introduce in (9) the Lagrange multipliers \(\lambda_{1}\) and \(\lambda_{2}\) such that
\[S_{f(Q,B)}=\int d^{4}x\sqrt{-g}\left(f(Q,B)\ -\lambda_{1}\left(Q-Q_{k}\right)- \lambda_{2}\left(B-B_{k}\right)\right). \tag{18}\]
Variation with respect to the non-metricity scalar \(Q\) and the boundary term \(B\) gives \(\lambda_{1}=f_{,Q}\) and \(\lambda_{2}=f_{,B}\).
Thus, substituting back into (18), it follows that
\[S_{f(Q,B)}=\int dt\left(Na^{3}\left(f-f_{,Q}Q-f_{,B}B\right)+Na^{3}f_{,Q}Q_{k}+Na^{3 }f_{,B}B_{k}\right). \tag{19}\]
Hence, integration by parts gives
\[\int dt\left(Na^{3}f_{,Q}Q_{k}\right)=\int dt\left(-6Na^{3}f_{,Q}\left(H^{2}- \frac{k}{a^{2}}\right)+3f_{,Q}\left(a\frac{N}{\dot{\Psi}}-k\frac{a^{3}}{N}\dot {\Psi}\right)^{\cdot}\right),\]
\[\int dt\left(Na^{3}f_{,B}B_{k}\right)=\int dt\left(18Na^{3}f_{,B}H^{2}+6a^{3}f _{,B}\dot{H}-3f_{,B}\left(aN\gamma-k\frac{a^{3}}{\gamma N}\right)^{\cdot} \right).\]
Then
\[S_{f(Q,B)}=\int dt\left(\begin{array}{c}Na^{3}\left(f-f_{,Q}Q-f_{,B}B\right)-6 \left(f_{,Q}-3f_{,B}\right)\left(Na^{3}H^{2}\right)\\ +6Na^{3}f_{,Q}\frac{k}{a^{2}}+6f_{,B}a^{3}\dot{H}+3\left(f_{,Q}-f_{,B}\right) \left(a\frac{N}{\dot{\Psi}}-k\frac{a^{3}}{N}\dot{\Psi}\right)^{\cdot}\end{array} \right).\]
It follows
\[\int 6f_{,B}a^{3}\dot{H}dt=\int\left(-18Nf_{,B}a^{3}H^{2}-6\dot{f}_{,B}a^{3}H \right)dt,\]
\[\int 3\left(f_{,Q}-f_{,B}\right)\left(a\frac{N}{\dot{\Psi}}-k\frac{a^{3}}{N} \dot{\Psi}\right)^{\cdot}dt=\int-3\left(\dot{f}_{,Q}-\dot{f}_{,B}\right) \left(a\frac{N}{\dot{\Psi}}-k\frac{a^{3}}{N}\dot{\Psi}\right)dt.\]
The minisuperspace Lagrangian is
\[L\left(N,a,\dot{a},Q,\dot{Q},B,\dot{B},\Psi,\dot{\Psi}\right)=- \frac{6}{N}f_{,Q}a\dot{a}^{2}+6Naf_{,Q}k-\frac{6}{N}a^{2}\dot{f}_{,B}\dot{a}\\ -3\left(\dot{f}_{,Q}-\dot{f}_{,B}\right)\left(a\frac{N}{\dot{\Psi}}-k \frac{a^{3}}{N}\dot{\Psi}\right)+Na^{3}\left(f-f_{,Q}Q-f_{,B}B\right) \tag{20}\]
or equivalently
\[L\left(N,a,\dot{a},\phi,\dot{\phi},\zeta,\dot{\zeta},\Psi,\dot{\Psi}\right)=- \frac{6}{N}\phi a\dot{a}^{2}+6Na\phi k-\frac{6}{N}a^{2}\dot{a}\dot{\zeta}-3 \left(\dot{\phi}-\dot{\zeta}\right)\left(a\frac{N}{\dot{\Psi}}-k\frac{a^{3}}{ N}\dot{\Psi}\right)+Na^{3}V\left(\phi,\zeta\right), \tag{21}\]
in which \(\phi=f_{,Q}\), \(\zeta=f_{,B}\), and \(V\left(\phi,\zeta\right)=f-f_{,Q}Q-f_{,B}B\).
In the four-dimensional space \(\left\{a,\phi,\zeta,\Psi\right\}\), we calculate \(\left|\frac{\partial^{2}L}{\partial\dot{q}^{i}\partial\dot{q}^{j}}\right|=\frac{324a^{6}}{N^{4}\dot{\Psi}^{4}}(N^{2}+ka^{2}\dot{\Psi}^{2})^{2}\neq 0\). This implies that the field equations are of eighth-order and are described by the three scalar fields \(\phi,\zeta,\Psi\). The Lagrangian function (21) represents a singular dynamical system in which
variation with respect to the lapse function \(N\) yields the modified Friedmann equation.
Moreover, variation with respect to the dynamical variables \(\left\{a,\phi,\zeta,\Psi\right\}\) gives four second-order differential equations.
For a constant lapse function, i.e. \(N=1\), the cosmological field equations are
\[0=3\phi H^{2}+3\phi\frac{k}{a^{2}}+3H\dot{\zeta}-\frac{3}{2}\left(\dot{\phi}- \dot{\zeta}\right)\left(\frac{1}{a^{2}\dot{\Psi}}+k\dot{\Psi}\right)+\frac{1}{ 2}V\left(\phi,\zeta\right), \tag{22}\]
\[0=\dot{H}+\frac{3}{2}H^{2}+\frac{k}{2a^{2}}+\frac{V}{4\phi}+H\frac{\dot{\phi}} {\phi}+\frac{\dot{\zeta}-\dot{\phi}}{\phi}\left(\frac{1}{4a^{2}\dot{\Psi}}- \frac{3k}{4}\dot{\Psi}\right)+\frac{1}{2}\ddot{\zeta}, \tag{23}\]
\[0=3\left(1+ka^{2}\dot{\Psi}^{2}\right)\frac{\ddot{\Psi}}{\Psi}-\left(3H+6k \dot{\Psi}-a^{2}\dot{\Psi}\left(6H^{2}+9kH\dot{\Psi}-V_{,\phi}\right)\right), \tag{24}\]
\[0=6\dot{H}+9H\left(2H+k\dot{\Psi}\right)+3k\ddot{\Psi}+V_{,\zeta}-\frac{3}{a^{ 2}\dot{\Psi}^{2}}\left(H\dot{\Psi}-\ddot{\Psi}\right), \tag{25}\]
\[0=3a\dot{\Psi}\left(1+a^{2}k\dot{\Psi}^{2}\right)\left(\ddot{\zeta}-\ddot{ \phi}\right)+a\left(\dot{\zeta}-\dot{\phi}\right)\left(\dot{\Psi}H\left(1+3a^ {2}k\dot{\Psi}^{2}\right)-2\ddot{\Psi}\right). \tag{26}\]
The modified Friedmann's equations (22), (23) can be written in the equivalent form
\[3\left(H^{2}+\frac{k}{a^{2}}\right)=G_{eff}\rho_{f(Q,B)}^{\Gamma _{k}}, \tag{27}\] \[-2\dot{H}-3H^{2}-\frac{k}{a^{2}}=G_{eff}p_{f(Q,B)}^{\Gamma_{k}}, \tag{28}\]
in which \(\rho_{f(Q,B)}\) and \(p_{f(Q,B)}\) are the components of the geometric fluid that follows from the nonlinear \(f\left(Q,B\right)\)-theory, defined as
\[\rho_{f(Q,B)}^{\Gamma_{k}}=-\left(3H\dot{\zeta}-\frac{3}{2}\left( \dot{\phi}-\dot{\zeta}\right)\left(\frac{1}{a^{2}\dot{\Psi}}+k\dot{\Psi} \right)+\frac{1}{2}V\left(\phi,\zeta\right)\right), \tag{29}\] \[p_{f(Q,B)}^{\Gamma_{k}}=\frac{V}{2}+2H\dot{\phi}+\frac{\dot{ \zeta}-\dot{\phi}}{2}\left(\frac{1}{4a^{2}\dot{\Psi}}-\frac{3k}{4}\dot{\Psi} \right). \tag{30}\]
and
\[G_{eff}=\frac{1}{\phi}. \tag{31}\]
We remark that scalar field \(\phi\) is defined in the Jordan frame.
### Spatially flat case \(k=0\)
For the spatially flat case, \(k=0\), it has been found that there exist three families of connections. The common non-zero coefficients of the three connections are
\[\Gamma_{\theta\theta}^{r}=-r\text{, }\Gamma_{\varphi\varphi}^{r}=-r\sin^{2}\theta\]
\[\Gamma_{\varphi\varphi}^{\theta}=-\sin\theta\cos\theta\text{, }\Gamma_{\theta \varphi}^{\varphi}=\Gamma_{\varphi\theta}^{\varphi}=\cot\theta\]
\[\Gamma_{r\theta}^{\theta}=\Gamma_{\theta_{r}}^{\theta}=\Gamma_{r\varphi}^{ \varphi}=\Gamma_{\varphi r}^{\varphi}=\frac{1}{r}\]
while the additional components for connections \(\Gamma_{1},\text{ }\Gamma_{2}\) and \(\Gamma_{3}\) are [63, 64]
\[\Gamma_{1}:\Gamma_{tt}^{t}=\gamma(t),\]
\[\Gamma_{2}:\Gamma_{tt}^{t}=\frac{\dot{\gamma}(t)}{\gamma(t)}+\gamma(t),\quad \Gamma_{tr}^{r}=\Gamma_{rt}^{r}=\Gamma_{t\theta}^{\theta}=\Gamma_{\theta t}^{ \theta}=\Gamma_{t\varphi}^{\varphi}=\Gamma_{\varphi t}^{\varphi}=\gamma(t),\]
and
\[\Gamma_{3}:\Gamma_{tt}^{t}=-\frac{\dot{\gamma}(t)}{\gamma(t)},\quad\Gamma_{rr}^{t}= \gamma(t),\quad\Gamma_{\theta\theta}^{t}=\gamma(t)r^{2},\quad\Gamma_{\varphi \varphi}^{t}=\gamma(t)r^{2}\sin^{2}\theta.\]
The non-metricity scalars for each connection are calculated as
\[Q_{1}\left(\Gamma_{1}\right)=-6H^{2} \tag{32}\]
\[Q_{2}\left(\Gamma_{2}\right)=-6H^{2}+\frac{3}{a^{3}N}\left(\frac{a^{3}\gamma} {N}\right)^{\cdot} \tag{33}\]
and
\[Q_{3}\left(\Gamma_{3}\right)=-6H^{2}+\frac{3}{a^{3}N}\left(aN\gamma\right)^{ \cdot}. \tag{34}\]
Connection \(\Gamma_{1}\) is the one defined in the coincidence gauge, while connections \(\Gamma_{2}\) and \(\Gamma_{3}\) are defined in the non-coincidence gauge. We observe that connection \(\Gamma_{k}\) in the limit \(k=0\), reduces to that of \(\Gamma_{3}\), that is, \(\Gamma_{k}\left(k\to 0\right)=\Gamma_{3}\) and \(Q_{k}\left(k\to 0\right)=Q_{3}\left(\Gamma_{3}\right)\).
We proceed with the derivation of the minisuperspace Lagrangian and the field equations for each family of connections in \(f\left(Q,B\right)\)-theory of gravity.
#### 4.2.1 Connection \(\Gamma_{1}\)
For the connection \(\Gamma_{1}\), using the Ricci scalar (16) for the spatially flat FLRW geometry, we derive the boundary term
\[B_{1}=B\left(\Gamma_{1}\right)=3\left(6H^{2}+\frac{2}{N}\dot{H}\right). \tag{35}\]
For the coincidence gauge, the scalars \(Q_{1}\) and \(B_{1}\) have the same functional form as the torsion scalar \(T\) and the boundary term \(B_{T}\). As a result, \(f\left(Q,B\right)\)-gravity for the connection \(\Gamma_{1}\) is equivalent to the teleparallel \(f\left(T,B_{T}\right)\)-gravity.
The Lagrangian of the field equations is
\[L\left(N,a,\dot{a},Q,B,\dot{B}\right)=-\frac{6}{N}f_{,Q}a\dot{a}^{2}-\frac{6}{N }a^{2}\dot{f}_{,B}\dot{a}+Na^{3}\left(f-f_{,Q}Q-f_{,B}B\right) \tag{36}\]
or equivalently in scalar field description
\[L\left(N,a,\dot{a},\phi,\zeta,\dot{\zeta}\right)=-\frac{6}{N}\phi a\dot{a}^{2} -\frac{6}{N}a^{2}\dot{a}\dot{\zeta}+Na^{3}V\left(\phi,\zeta\right). \tag{37}\]
where, similarly as before, \(\phi=f_{,Q}\), \(\zeta=f_{,B}\), and \(V\left(\phi,\zeta\right)=f-f_{,Q}Q-f_{,B}B\).
For the Lagrangian (37), in the three-dimensional space \(\left\{a,\phi,\zeta\right\}\), we derive \(\left|\frac{\partial^{2}L}{\partial\dot{q}^{i}\partial\dot{q}^{j}}\right|=0\). Hence, the field equations are of fourth-order.
We select the constant lapse function \(N=1\), and we derive the field equations
\[0=6H\left(\phi H+\dot{\zeta}\right)+V\left(\phi,\zeta\right), \tag{38}\]
\[0=2\phi\left(2\dot{H}+3H^{2}\right)+4H\dot{\phi}+2\ddot{\zeta}+V\left(\phi, \zeta\right), \tag{39}\]
\[0=6H^{2}-V_{,\phi}, \tag{40}\]
\[0=\dot{H}+3H^{2}+\frac{1}{6}V_{,\zeta}. \tag{41}\]
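As a consistency check (ours, not from the paper), the following sympy sketch recovers equations (40) and (41) from the point-like Lagrangian (37) with \(N=1\) via the Euler-Lagrange equations.

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)
phi, zeta = sp.Function('phi')(t), sp.Function('zeta')(t)
V = sp.Function('V')(phi, zeta)

# Point-like Lagrangian (37) with N = 1
L = -6*phi*a*sp.diff(a, t)**2 - 6*a**2*sp.diff(a, t)*sp.diff(zeta, t) + a**3*V

def euler_lagrange(L, q):
    # dL/dq - d/dt (dL/d(q_dot))
    return sp.diff(L, q) - sp.diff(sp.diff(L, sp.diff(q, t)), t)

H = sp.diff(a, t)/a
# Variation with respect to phi reproduces (40): 6 H^2 = V_phi
print(sp.simplify(euler_lagrange(L, phi)/a**3 - (sp.diff(V, phi) - 6*H**2)))  # -> 0
# Variation with respect to zeta reproduces (41): 0 = Hdot + 3 H^2 + V_zeta/6
print(sp.simplify(euler_lagrange(L, zeta)/(6*a**3)
                  - (sp.diff(H, t) + 3*H**2 + sp.diff(V, zeta)/6)))           # -> 0
```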
Modified Friedmann's equations are written in the equivalent form
\[3H^{2} =G_{eff}\rho_{f(Q,B)}^{\Gamma_{1}},\] \[-2\dot{H}-3H^{2} =G_{eff}p_{f(Q,B)}^{\Gamma_{1}},\]
with energy density \(\rho_{f(Q,B)}^{\Gamma_{1}}\) and pressure \(p_{f(Q,B)}^{\Gamma_{1}}\) for the effective fluid
\[\rho_{f(Q,B)}^{\Gamma_{1}} =-\left(3\dot{\zeta}H+\frac{1}{2}V\left(\phi,\zeta\right)\right), \tag{42}\] \[p_{f(Q,B)}^{\Gamma_{1}} =\left(2H\dot{\phi}+\ddot{\zeta}+\frac{1}{2}V\left(\phi,\zeta \right)\right). \tag{43}\]
and \(G_{eff}=\frac{1}{\phi}\).
#### 4.2.2 Connection \(\Gamma_{2}\)
For the non-coincidence connection \(\Gamma_{2}\), the boundary term follows as
\[B_{2}=B\left(\Gamma_{2}\right)=3\left(6H^{2}+\frac{2}{N}\dot{H}-\frac{1}{a^{3 }N}\left(\frac{a^{3}\gamma}{N}\right)^{\cdot}\right). \tag{44}\]
Hence, by introducing Lagrange multipliers as we did for the generic connection \(\Gamma_{k}\), we determine the point-like Lagrangian
\[L\left(N,a,\dot{a},Q,\dot{Q},B,\dot{B},\psi,\dot{\psi}\right)=-\frac{6}{N}f_{,Q}a\dot{a}^{2}-\frac{6}{N}a^{2}\dot{f}_{,B}\dot{a}-3\left(\dot{f}_{,Q}-\dot{f }_{,B}\right)\frac{a^{3}\dot{\psi}}{N}+Na^{3}\left(f\left(Q,B\right)-f_{,Q}Q-f_{,B}B\right), \tag{45}\]
or equivalently
\[L\left(N,a,\dot{a},\phi,\dot{\phi},\zeta,\dot{\zeta},\psi,\dot{\psi}\right)=- \frac{6}{N}\phi a\dot{a}^{2}-\frac{6}{N}a^{2}\dot{a}\dot{\zeta}-3\left(\dot{ \phi}-\dot{\zeta}\right)\frac{a^{3}\dot{\psi}}{N}+Na^{3}V\left(\phi,\zeta \right), \tag{46}\]
in which \(\gamma=\dot{\psi}\), \(\phi=f_{,Q}\), \(\zeta=f_{,B}\), and \(V\left(\phi,\zeta\right)=f-f_{,Q}Q-f_{,B}B\).
The field equations are of eighth-order, described by three scalar fields. For \(N=1\), the equations of motion for the scale factor and the three scalar fields are
\[0=6H\left(\phi H+\dot{\zeta}\right)+3\left(\dot{\phi}-\dot{\zeta}\right)\dot{ \psi}+V\left(\phi,\zeta\right), \tag{47}\]
\[0=2\phi\left(2\dot{H}+3H^{2}\right)+4H\dot{\phi}-3\left(\dot{\phi}-\dot{\zeta} \right)\dot{\psi}+2\ddot{\zeta}+V\left(\phi,\zeta\right), \tag{48}\]
\[0=3\ddot{\psi}+9H\dot{\psi}-6H^{2}+V_{,\phi} \tag{49}\]
\[0=6\dot{H}+18H^{2}-9H\dot{\psi}-3\ddot{\psi}+V_{,\zeta}\]
\[0=\ddot{\phi}-\ddot{\zeta}+3H\left(\dot{\phi}-\dot{\zeta}\right). \tag{50}\]
Hence, the effective geometric fluid has the following energy density and pressure components:
\[\rho_{f(Q,B)}^{\Gamma_{2}}=-\left(3\dot{\zeta}H+\frac{3}{2}\left( \dot{\phi}-\dot{\zeta}\right)\dot{\psi}+\frac{1}{2}V\left(\phi,\zeta\right) \right), \tag{51}\] \[p_{f(Q,B)}^{\Gamma_{2}}=\left(2H\dot{\phi}-\frac{3}{2}\left(\dot {\phi}-\dot{\zeta}\right)\dot{\psi}+\ddot{\zeta}+\frac{1}{2}V\left(\phi,\zeta \right)\right), \tag{52}\]
such that
\[3H^{2}=G_{eff}\rho_{f(Q,B)}^{\Gamma_{2}},\] \[-2\dot{H}-3H^{2}=G_{eff}p_{f(Q,B)}^{\Gamma_{2}},\]
and \(G_{eff}=\frac{1}{\phi}\).
#### 4.2.3 Connection \(\Gamma_{3}\)
We set \(k=0\) in (21), thus, the minisuperspace Lagrangian function is
\[L\left(N,a,\dot{a},\phi,\dot{\phi},\zeta,\dot{\zeta},\Psi,\dot{\Psi}\right)=- \frac{6}{N}\phi a\dot{a}^{2}-\frac{6}{N}a^{2}\dot{a}\dot{\zeta}-3Na\frac{ \left(\dot{\phi}-\dot{\zeta}\right)}{\dot{\Psi}}+Na^{3}V\left(\phi,\zeta \right), \tag{53}\]
in which \(\phi=f_{,Q}\), \(\zeta=f_{,B}\), and \(V\left(\phi,\zeta\right)=f-f_{,Q}Q-f_{,B}B\), and the field equations are
\[0=3\phi H^{2}+3H\dot{\zeta}-\frac{3}{2}\left(\dot{\phi}-\dot{\zeta}\right) \left(\frac{1}{a^{2}\dot{\Psi}}\right)+\frac{1}{2}V\left(\phi,\zeta\right), \tag{54}\]
\[0=\dot{H}+\frac{3}{2}H^{2}+\frac{V}{4\phi}+H\frac{\dot{\phi}}{\phi}+\frac{\dot {\zeta}-\dot{\phi}}{\phi}\left(\frac{1}{4a^{2}\dot{\Psi}}\right)+\frac{1}{2} \ddot{\zeta}, \tag{55}\]
\[0 = 3\frac{\ddot{\Psi}}{\Psi}-\left(3H-a^{2}\dot{\Psi}\left(6H^{2}-V_{,\phi} \right)\right), \tag{56}\] \[0 = 6\dot{H}+18H^{2}+V_{,\zeta}-\frac{3}{a^{2}\dot{\Psi}^{2}}\left(H \dot{\Psi}-\ddot{\Psi}\right), \tag{57}\] \[0 = 3a\dot{\Psi}\left(\ddot{\zeta}-\ddot{\phi}\right)+a\left(\dot{ \zeta}-\dot{\phi}\right)\left(\dot{\Psi}H-2\ddot{\Psi}\right). \tag{58}\]
We conclude that the field equations are of eighth-order. Furthermore, the geometric fluid has the energy density and pressure given by expressions (29) and (30).
## V \(f\left(Q,B\right)=Q+F\left(B\right)\)-Cosmology
Based on the above analysis, we observe that \(f\left(Q,B\right)\)-theory introduces a varying gravitational parameter \(G_{eff}=\frac{1}{\phi}\), \(\phi=f_{,Q}\). Of special interest are the \(f\left(Q,B\right)=Q+F\left(B\right)\) models, in which \(\phi\) is always constant and hence \(G_{eff}=const\). In this theory, the nonlinear terms of the Action Integral correspond to boundary corrections. This approach has been previously explored in teleparallel \(f(T,B_{T})\)-gravity, yielding numerous interesting results. Specifically, \(f(T,B_{T})=T+F(B_{T})\) has the potential to explain both the late- and early-time acceleration phases of the universe [60].
In the \(f(Q,B)=Q+F(B)\) theory, the number of field equations is reduced by one, as \(\phi\) is not a dynamic parameter but a constant, i.e., \(\phi=1\). Below, we provide the sets of field equations in \(f(Q,B)=Q+F(B)\)-theory for the four different families of connections.
### Connection \(\Gamma_{1}\)
For the connection \(\Gamma_{1}\) defined in the coincidence gauge, the minisuperspace Lagrangian is
\[L\left(N,a,\dot{a},\zeta,\dot{\zeta}\right)=-\frac{6}{N}a\dot{a}^{2}- \frac{6}{N}a^{2}\dot{a}\dot{\zeta}+Na^{3}V\left(\zeta\right). \tag{59}\]
where \(\zeta=f_{,B}\) and \(V\left(\zeta\right)=F-BF_{,B}\).
Thus, for \(N=1\), the field equations are
\[3H^{2} = \rho_{f(Q,B)}^{\Gamma_{1}}, \tag{60}\] \[-2\dot{H}-3H^{2} = p_{f(Q,B)}^{\Gamma_{1}}, \tag{61}\]
in which
\[\rho_{f(Q,B)}^{\Gamma_{1}} =-\left(3\dot{\zeta}H+\frac{1}{2}V\left(\zeta\right)\right), \tag{62}\] \[p_{f(Q,B)}^{\Gamma_{1}} =\left(\ddot{\zeta}+\frac{1}{2}V\left(\zeta\right)\right), \tag{63}\]
and the scalar field \(\zeta\) satisfies the equation of motion (41). Because the theory is equivalent to the teleparallel \(f\left(T,B_{T}\right)=T+F\left(B_{T}\right)\) model, the results of the latter theory are valid for the symmetric teleparallel theory with boundary term.
### Connection \(\Gamma_{2}\)
For connection \(\Gamma_{2}\) defined in the non-coincidence gauge, the minisuperspace Lagrangian reads
\[L\left(N,a,\dot{a},\zeta,\dot{\zeta},\psi,\dot{\psi}\right)=-\frac{6}{N}a \dot{a}^{2}-\frac{6}{N}a^{2}\dot{a}\dot{\zeta}+3a^{3}\frac{\dot{\zeta}\dot{ \psi}}{N}+Na^{3}V\left(\zeta\right), \tag{64}\]
in which \(\gamma=\dot{\psi}\), \(\zeta=f_{,B}\), and \(V\left(\zeta\right)=F-BF_{,B}\).
For \(N=1\), modified Friedmann's equations are
\[3H^{2} =\rho_{f(Q,B)}^{\Gamma_{2}} \tag{65}\] \[-2\dot{H}-3H^{2} =p_{f(Q,B)}^{\Gamma_{2}}, \tag{66}\]
with
\[\rho_{f(Q,B)}^{\Gamma_{2}} =-\left(3\dot{\zeta}H-\frac{3}{2}\dot{\zeta}\dot{\psi}+\frac{1}{2 }V\left(\zeta\right)\right), \tag{67}\] \[p_{f(Q,B)}^{\Gamma_{2}} =\left(\frac{3}{2}\dot{\zeta}\dot{\psi}+\ddot{\zeta}+\frac{1}{2} V\left(\zeta\right)\right), \tag{68}\]
where the scalar fields \(\zeta\) and \(\psi\) satisfy the field equations
\[0 =6\dot{H}+18H^{2}-9H\dot{\psi}-3\ddot{\psi}+V_{,\zeta} \tag{69}\] \[0 =\ddot{\zeta}+3H\dot{\zeta}. \tag{70}\]
### Connection \(\Gamma_{3}\)
For the connection \(\Gamma_{3}\) the minisuperspace Lagrangian becomes
\[L\left(N,a,\dot{a},\zeta,\dot{\zeta},\Psi,\dot{\Psi}\right)=-\frac{6}{N}a \dot{a}^{2}-\frac{6}{N}a^{2}\dot{a}\dot{\zeta}+3Na\frac{\dot{\zeta}}{\dot{\Psi }}+Na^{3}V\left(\zeta\right). \tag{71}\]
with \(\gamma=\frac{1}{\dot{\Psi}}\), \(\zeta=f_{,B}\), and \(V\left(\zeta\right)=F-BF_{,B}\).
Furthermore, for \(N=1\), the gravitational field equations are
\[3H^{2} = \rho_{f(Q,B)}^{\Gamma_{3}} \tag{72}\] \[-2\dot{H}-3H^{2} = p_{f(Q,B)}^{\Gamma_{3}}, \tag{73}\]
where
\[\rho_{f(Q,B)}^{\Gamma_{3}} = -\left(3H\dot{\zeta}+\frac{3}{2}\frac{\dot{\zeta}}{a^{2}\dot{ \Psi}}+\frac{1}{2}V\left(\zeta\right)\right), \tag{74}\] \[p_{f(Q,B)}^{\Gamma_{3}} = \frac{V}{2}+\frac{\dot{\zeta}}{8a^{2}\dot{\Psi}}. \tag{75}\]
The constraint equation (72) then reads explicitly
\[0=3H^{2}+3H\dot{\zeta}+\frac{3}{2}\frac{\dot{\zeta}}{a^{2}\dot{\Psi}}+\frac{1 }{2}V\left(\zeta\right), \tag{76}\]
where the scalar fields \(\zeta\) and \(\Psi\) satisfy the equations of motion
\[0=6\dot{H}+18H^{2}+V_{,\zeta}-\frac{3}{a^{2}\dot{\Psi}^{2}}\left(H\dot{\Psi}- \ddot{\Psi}\right), \tag{77}\]
\[0=3\dot{\Psi}\ddot{\zeta}+\dot{\zeta}\left(\dot{\Psi}H-2\ddot{\Psi}\right). \tag{78}\]
### Connection \(\Gamma_{k}\)
Finally, for the fourth connection \(\Gamma_{k}\) where curvature is nonzero, the minisuperspace Lagrangian is written
\[L\left(N,a,\dot{a},\zeta,\dot{\zeta},\Psi,\dot{\Psi}\right)=-\frac{6}{N}a \dot{a}^{2}+6Nak-\frac{6}{N}a^{2}\dot{a}\dot{\zeta}+3\dot{\zeta}\left(a\frac{N }{\dot{\Psi}}-k\frac{a^{3}}{N}\dot{\Psi}\right)+Na^{3}V\left(\zeta\right), \tag{79}\]
in which \(\gamma=\frac{1}{\dot{\Psi}}\), \(\zeta=f_{,B}\), and \(V\left(\zeta\right)=F-BF_{,B}\).
The gravitational field equations in the presence of curvature are
\[3\left(H^{2}+\frac{k}{a^{2}}\right) = G_{eff}\rho_{f\left(Q,B\right)}^{\Gamma_{k}}, \tag{80}\] \[-2\dot{H}-3H^{2}-\frac{k}{a^{2}} = G_{eff}p_{f\left(Q,B\right)}^{\Gamma_{k}}. \tag{81}\]
The energy density and pressure components of the geometric fluid are defined as
\[\rho_{f\left(Q,B\right)}^{\Gamma_{k}} = -\left(3H\dot{\zeta}+\frac{3}{2}\dot{\zeta}\left(\frac{1}{a^{2} \dot{\Psi}}+k\dot{\Psi}\right)+\frac{1}{2}V\left(\zeta\right)\right), \tag{82}\] \[p_{f\left(Q,B\right)}^{\Gamma_{k}} = \frac{V}{2}+\frac{\dot{\zeta}}{2}\left(\frac{1}{4a^{2}\dot{\Psi}} -\frac{3k}{4}\dot{\Psi}\right). \tag{83}\]
For the scalar fields we derive the equations of motion
\[0=6\dot{H}+9H\left(2H+k\dot{\Psi}\right)+3k\ddot{\Psi}+V_{,\zeta}-\frac{3}{a^ {2}\dot{\Psi}^{2}}\left(H\dot{\Psi}-\ddot{\Psi}\right), \tag{84}\]
\[0=3a\dot{\Psi}\left(1+a^{2}k\dot{\Psi}^{2}\right)\ddot{\zeta}+a\dot{\zeta} \left(\dot{\Psi}H\left(1+3a^{2}k\dot{\Psi}^{2}\right)-2\ddot{\Psi}\right). \tag{85}\]
From the above results we remark that in \(f\left(Q,B\right)=Q+F\left(B\right)\) gravity the field equations are of fourth-order in the coincidence gauge and of sixth-order in the non-coincidence gauge.
## VI New solution in the non-coincidence gauge
We focus on the field equations (65)-(70) for the connection \(\Gamma_{2}\) in the case of \(f\left(Q,B\right)=Q+F\left(B\right)\) theory. From equation (70) we construct the conservation law
\[I_{0}=a^{3}\dot{\zeta}. \tag{86}\]
For the exponential potential \(V\left(\zeta\right)=V_{0}e^{\lambda\zeta}\), i.e. \(F\left(B\right)=-\frac{B}{\lambda}\ln\left(-\frac{B}{\lambda V_{0}}\right)- \frac{B}{\lambda}\), we are able to write the second conservation law
\[I_{1}=a^{2}\left(2\left(\lambda-3\right)\dot{a}+a\left(\lambda\dot{\zeta}-3 \dot{\psi}\right)\right). \tag{87}\]
In order to determine the latter conservation law, we applied the method of variational symmetries, which has been widely used in modified theories of gravity; for more details, we refer the reader to [65; 66] and references therein.
With the use of the two conservation laws and of the constraint equation (65), the second Friedmann equation reads
\[2a^{5}\ddot{a}-2a^{2}\dot{a}\left(I_{0}\lambda+a^{2}\dot{a}\right)-I_{0}\left(I_ {0}\lambda-I_{1}\right)=0. \tag{88}\]
For \(I_{0}\lambda-I_{1}=0\), we are able to determine the closed-form solution
\[a\left(t\right)=\left(a_{1}e^{a_{0}t}+\frac{I_{0}\lambda}{a_{0}}\right)^{ \frac{1}{3}}. \tag{89}\]
Consequently, the Hubble function is derived as
\[H\left(a\right)=\sqrt{\left(\frac{a_{0}}{3}\right)^{2}-\frac{2a_{0}I_{0} \lambda}{9}a^{-3}+\frac{\left(I_{0}\lambda\right)^{2}}{9}a^{-6}}. \tag{90}\]
This analytic solution describes a universe with a cosmological constant, dark matter, and a stiff fluid. Indeed, when the term \(\frac{\left(I_{0}\lambda\right)^{2}}{9}a^{-6}\) is neglected, i.e. \(\frac{\left(I_{0}\lambda\right)^{2}}{9}a^{-6}\to 0\), the limit of the \(\Lambda\)CDM universe is recovered. Furthermore, we calculate the deceleration parameter
\[q\left(a\right)=2-\frac{3a_{0}a^{3}}{a_{0}a^{3}-I_{0}\lambda},\]
from which it follows that for large values of \(a\), \(q\left(a\right)\rightarrow-1\); that is, the de Sitter universe is recovered. Acceleration occurs when \(2<\frac{3a_{0}a^{3}}{a_{0}a^{3}-I_{0}\lambda}\).
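The following sympy sketch (ours, not from the paper) verifies the closed-form solution: it recomputes \(H\) and the deceleration parameter directly from the scale factor (89) and confirms (90) and the expression for \(q(a)\).

```python
import sympy as sp

t, a0, a1, I0, lam = sp.symbols('t a_0 a_1 I_0 lambda', positive=True)
a = (a1*sp.exp(a0*t) + I0*lam/a0)**sp.Rational(1, 3)    # the scale factor (89)

H = sp.simplify(a.diff(t)/a)                            # Hubble function H = adot/a
print(sp.simplify(H**2 - ((a0/3)**2 - sp.Rational(2, 9)*a0*I0*lam/a**3
                          + (I0*lam)**2/(9*a**6))))     # -> 0, i.e. equation (90)
q = sp.simplify(-1 - H.diff(t)/H**2)                    # deceleration parameter
print(sp.simplify(q - (2 - 3*a0*a**3/(a0*a**3 - I0*lam))))  # -> 0
```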
In order to solve equation (88), we apply the Lie symmetry analysis [65]. We find that equation (88) is invariant under the action of the elements of a two-dimensional Lie algebra consisting of the vector fields \(X_{1}=\partial_{t}\) and \(X_{2}=3t\partial_{t}+a\partial_{a}\). The application of the Lie invariants of \(X_{2}\) indicates the existence of the exact solution \(a\left(t\right)=\bar{a}_{0}t^{\frac{1}{3}}\) with the constraint equation \(2\bar{a}_{0}^{3}\left(\bar{a}_{0}^{3}+\lambda I_{0}\right)=3I_{0}\left(I_{0} \lambda-I_{1}\right)\).
On the other hand, the application of the Lie symmetry vector \(X_{1}\) provides the reduced equation
\[\frac{dA}{da}=\frac{I_{0}\left(I_{1}-I_{0}\lambda\right)}{2a^{5}}A^{3}-\frac{ I_{0}\lambda}{a^{3}}A^{2}-\frac{1}{a}A\text{, }A\left(t\right)=\frac{1}{\dot{a}}\text{.} \tag{91}\]
This is an Abel type equation.
In Figs. 1 and 2 we present the qualitative evolution of the deceleration parameter \(q=-1-\frac{\dot{H}}{H^{2}}\) and of the function \(\gamma=\dot{\psi}\), as given by the numerical solution of equation (88). We observe that the de Sitter universe is a future attractor for the cosmological model, and in the limit of the de Sitter universe the connection \(\Gamma_{2}\) takes the form of \(\Gamma_{1}\).
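For reproducibility, a minimal numerical sketch (ours; the parameter values and initial conditions are illustrative assumptions only) that integrates equation (88) and exhibits the late-time behavior \(q\rightarrow-1\) of Fig. 1 is given below.

```python
import numpy as np
from scipy.integrate import solve_ivp

I0, lam, I1 = 1.0, 1.0, 2.0                      # assumed illustrative values
c = I0*(I0*lam - I1)

def rhs(t, y):
    a, adot = y
    # Solve equation (88) for the second derivative of the scale factor
    addot = (2*a**2*adot*(I0*lam + a**2*adot) + c) / (2*a**5)
    return [adot, addot]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.8], rtol=1e-10, dense_output=True)
ts = np.linspace(0.0, 10.0, 400)
a, adot = sol.sol(ts)
addot = np.array([rhs(ti, (ai, adi))[1] for ti, ai, adi in zip(ts, a, adot)])
H = adot/a
q = -1.0 - (addot/a - H**2)/H**2                 # q = -1 - Hdot/H^2
print(q[0], q[-1])                               # q tends to -1 (de Sitter)
```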
## VII Conclusions
We conducted a study on FLRW cosmology within the framework of symmetric teleparallel theory, considering boundary corrections in the gravitational Lagrangian. In this context, and for the four different families of connections, we determined the minisuperspace description of the field equations. To achieve this, we introduced Lagrange multipliers, enabling us to express the higher-order derivatives of the field equations in terms of scalar fields. As a result, we were able to recast the cosmological field equations into the equivalent form of multi-scalar field cosmology. In the case of connections defined in the non-coincidence gauge, \(f(Q,B)\)-gravity is characterized by three scalar fields. However, in the limiting case of the \(f(Q,B)=Q+F(B)\) model, the field equations are described by two scalar fields. Conversely, for the connection defined in the coincidence gauge, there exists only one scalar field, and the field equations are of fourth-order. It's worth noting that \(f(Q)\)-gravity introduces two scalar fields when the connection is defined in the non-coincidence gauge.
Figure 1: Qualitative evolution of the effective deceleration parameter \(q=-1-\frac{\dot{H}}{H^{2}}\) as given by the numerical solution of equation (88). The plots are for different values of the free parameters.
This scalar field description and the derivation of the minisuperspace representation are crucial for further investigations into the dynamic evolution of physical variables within the theory. Moreover, the minisuperspace Lagrangian can be employed to establish the Hamiltonian formalism of the model and derive the Wheeler-DeWitt equation of quantum cosmology.
To illustrate the practical application of the minisuperspace description, we employed the method of variational symmetries and successfully determined an integrable cosmological model. We were able to express the analytic solution in terms of the Abel equation. This particular cosmological model not only accounts for cosmic acceleration but also includes a dark matter component in the Hubble function.
These results suggest that \(f(Q,B)\)-theory holds promise as a viable cosmological framework. However, one notable implication is the significant increase in degrees of freedom introduced by this theory. Therefore, the new scalar fields must be capable of describing a wide range of cosmological phenomena. In future work, we plan to investigate whether boundary correction terms in symmetric teleparallel theory can resolve cosmological tensions and whether the theory can provide explanations for eras in the cosmological history beyond late-time acceleration.
Figure 2: Qualitative evolution of the function \(\gamma=\dot{\psi}\) as given by the numerical solution of equation (88). The plots are for different values of the free parameters. We observe that as the solution reaches the de Sitter universe, the connection \(\Gamma_{2}\) reaches the limit of \(\Gamma_{1}\).
**Data Availability Statements:** Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
###### Acknowledgements.
The author thanks the support of Vicerrectoria de Investigacion y Desarrollo Tecnologico (Vridt) at Universidad Catolica del Norte through Nucleo de Investigacion Geometria Diferencial y Aplicaciones, Resolucion Vridt No - 098/2022.
|
2303.18008 | Capacity of Finite-State Channels with Delayed Feedback | In this paper, we investigate the capacity of finite-state channels (FSCs) in
presence of delayed feedback. We show that the capacity of a FSC with delayed
feedback can be computed as that of a new FSC with instantaneous feedback and
an extended state. Consequently, graph-based methods to obtain computable upper
and lower bounds on the delayed feedback capacity of unifilar FSCs are
proposed. Based on these methods, we establish that the capacity of the
trapdoor channel with delayed feedback of two time instances is given by
$\log_2(3/2)$. In addition, we derive an analytical upper bound on the delayed
feedback capacity of the binary symmetric channel with a no consecutive ones
input constraint. This bound also serves as a novel upper bound on its
non-feedback capacity, which outperforms all previously known bounds. Lastly,
we demonstrate that feedback does improve the capacity of the dicode erasure
channel. | Bashar Huleihel, Oron Sabag, Haim H. Permuter, Victoria Kostina | 2023-03-31T12:28:18Z | http://arxiv.org/abs/2303.18008v2 | # Capacity of Finite-State Channels with Delayed Feedback
###### Abstract
In this paper, we investigate the capacity of finite-state channels (FSCs) in presence of delayed feedback. We show that the capacity of a FSC with delayed feedback can be computed as that of a new FSC with instantaneous feedback and an extended state. Consequently, graph-based methods to obtain computable upper and lower bounds on the delayed feedback capacity of unifilar FSCs are proposed. Based on these methods, we establish that the capacity of the trapdoor channel with delayed feedback of two time instances is given by \(\log_{2}\left(\frac{3}{2}\right)\). In addition, we derive an analytical upper bound on the delayed feedback capacity of the binary symmetric channel with a no consecutive ones input constraint. This bound also serves as a novel upper bound on its non-feedback capacity, which outperforms all previously known bounds. Lastly, we demonstrate that feedback does improve the capacity of the dicode erasure channel.
## I Introduction
A finite-state channel (FSC) is a widely used statistical model for a channel with memory [2, 3, 4]. The memory of this channel is represented by an underlying channel state that takes values from a finite set. This model has been used in many practical applications, including wireless communication [5, 6, 7], molecular communication [8, 9], and magnetic recording [10]. An example of its versatility is the ability to model a memoryless channel with an input constraint by introducing a finite-state machine that tracks the forbidden constraint, and a sink state whose capacity is zero and is reached when the constraint is violated. Generally speaking, the capacity formula of a FSC, whether or not feedback is allowed, is given by a multi-letter expression which is hard to evaluate. The main focus of this paper is on an important class of FSCs, known as unifilar FSCs. For these channels, the new channel state is determined by a time-invariant function of the previous channel state, the current channel input, and the current channel output.
The capacity of a unifilar FSC with instantaneous feedback has been broadly investigated in the literature, where instantaneous feedback refers to the case in which, at time \(t\), the encoder has access to the channel outputs up to time \(t-1\). This has resulted in several powerful methodologies that have been employed to derive simple capacity expressions and optimal coding schemes for well-known instances of unifilar FSCs with feedback [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. We mention here three essential works that will be utilized in the current paper: in [21, 22], for a given \(Q\)_-graph_\({}^{1}\), single-letter upper and lower bounds on the feedback capacity of unifilar FSCs were introduced, as well as a methodology to evaluate the bounds; in [23], an alternative methodology to derive computable capacity upper bounds was proposed. In particular, in [23] it was shown that the dual capacity upper bound can be formulated as a simple Markov decision process (MDP) with the MDP states, actions, and disturbances taking values within finite sets. The main advantage of the duality-based upper bound compared to the single-letter \(Q\)-graph upper bound is the simplicity of deriving analytical upper bounds. However, the duality-based bounds require the specification of a \(Q\)-graph and a corresponding test distribution, while in the case of the \(Q\)-graph single-letter bounds, only
the \(Q\)-graph itself needs to be determined. Finding the optimal auxiliary parameters for the test distribution to yield tight bounds can be a challenging task. Nevertheless, in our current work, we are able to identify both the \(Q\)-graph and its test distribution so that the duality technique is utilized to derive analytical upper bounds.
In this paper, we investigate the delayed feedback capacity of FSCs. This means that, in the case of a delay of \(d\) time instances, the encoder has access to the channel outputs up to time \(t-d\), in contrast to the standard feedback definition of access up to time \(t-1\) (see Fig. 1). Our objective is two-fold: first, to investigate the capacity of FSCs in various feedback scenarios; and second, to derive upper bounds on the feedforward capacity (i.e., the capacity without feedback), as it is a known fact that feedback can only increase the feedforward capacity. While studying the delayed feedback capacity is an important task on its own, it also helps us achieve our second objective.
### _Main Contributions_
Our first contribution is to show that a unifilar FSC with delayed feedback can be transformed into a unifilar FSC with instantaneous feedback to which the methodologies from [21, 22, 23] apply:
* We show that the capacity of a general FSC with delayed feedback can be computed as the capacity of a transformed FSC with instantaneous feedback. We define the new channel state as \(\hat{S}_{t-1}=\left(S_{t-d},X_{t-d+1}^{t-1}\right)\), the new channel output as \(\hat{Y}_{t}=Y_{t-d+1}\), and leave the channel input unchanged, i.e., \(\hat{X}_{t}=X_{t}\). We prove that this new channel is an FSC and that its capacity with instantaneous feedback is equal to the capacity of the original FSC with delayed feedback of \(d\) time instances. A code sketch of this state extension is given at the end of this subsection.
* We demonstrate that if the original channel is a unifilar FSC, then the transformed FSC is a unifilar FSC as well.
We investigate the delayed feedback capacity of several important FSCs, and provide novel results on both their delayed feedback capacity and feedforward capacity:
* Despite the extensive research efforts [24, 25, 26, 27, 12] dedicated to the trapdoor channel [29], see Fig. 2, its feedforward capacity has remained an open problem for over sixty years. In [12], it was shown that the feedback capacity of the trapdoor channel is equal to \(\mathrm{C}_{1}^{\mathrm{fb}}=\log_{2}\left(\frac{1+\sqrt{5}}{2}\right)\approx 0.6942\). In this paper, we consider the trapdoor channel with delayed feedback of \(d=2\) time instances, and show that the capacity in this scenario is equal to \(\mathrm{C}_{2}^{\mathrm{fb}}=\log_{2}\left(\frac{3}{2}\right)\approx 0.5850\). Compared to the feedback capacity, this value is much closer to the lower bound on the feedforward capacity of \(0.572\)[26]. Further, by investigating a greater delay of the feedback, we provide a new upper bound on its feedforward capacity, which is approximately equal to \(0.5765\).
* We study the capacity of the binary symmetric channel (BSC) in the case where the input sequence is not allowed to contain two consecutive ones. The feedforward capacity in this scenario is still unknown [30, 31, 19, 32, 33]. We derive an analytical upper bound on its capacity with delayed feedback of \(2\) time instances that also serves as a novel upper bound on the feedforward capacity.
Fig. 1: Finite-state channel with delayed feedback of \(d\) time instances.
* We demonstrate that the feedback capacity of the dicode erasure channel (DEC) [34, 35], see Fig. 3, is not equal to its feedforward capacity, by providing a new upper bound on its feedforward capacity that lies slightly below the feedback capacity.
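To make the state extension concrete, the following minimal Python sketch (ours, not part of the paper's proofs; it assumes \(d=2\) and the binary trapdoor channel of Fig. 2) builds the transformed kernel \(P(\hat{s}_{t},\hat{y}_{t}|\hat{x}_{t},\hat{s}_{t-1})\) with \(\hat{s}_{t-1}=(s_{t-2},x_{t-1})\) and \(\hat{y}_{t}=y_{t-1}\).

```python
from itertools import product

def trapdoor(s, x):
    """Return {(s_next, y): prob} for the trapdoor kernel P(s+, y | x, s)."""
    if x == s:
        return {(s, x): 1.0}
    # the output is s (ball s leaves, x stays) or x (ball x leaves, s stays)
    return {(x, s): 0.5, (s, x): 0.5}

def extended_kernel():
    """New kernel P(s_hat+, y_hat | x_hat, s_hat) with s_hat = (s, x_prev)."""
    kernel = {}
    for (s, x_prev), x in product(product([0, 1], repeat=2), [0, 1]):
        # y_hat = Y_{t-1} is generated by feeding x_prev into state s;
        # the next extended state is (S_{t-1}, X_t) = (s_next, x).
        for (s_next, y), p in trapdoor(s, x_prev).items():
            kernel[((s_next, x), y), (x, (s, x_prev))] = p
    return kernel

for (out, inp), p in sorted(extended_kernel().items()):
    print(f"P(s_hat+={out[0]}, y_hat={out[1]} | x_hat={inp[0]}, s_hat={inp[1]}) = {p}")
```

The printout makes explicit that the pair \((\hat{s}_{t},\hat{y}_{t})\) is generated from \((\hat{x}_{t},\hat{s}_{t-1})\) alone, which is exactly the FSC property for the transformed channel.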
### _Organization_
The remainder of this paper is organized as follows. Section II introduces the notation and the model definition. Section III introduces computable upper and lower bounds on the capacity of unifilar FSCs with instantaneous feedback. Section IV presents our demonstration of the fact that the delayed feedback capacity problem can be reduced into an instantaneous feedback capacity problem, by appropriately reformulating the channel. Subsequently, we also introduce computable upper and lower bounds on the delayed feedback capacity of unifilar FSCs. Section V presents our main results regarding the capacity of the trapdoor channel. Section VI provides novel results concerning the feedforward capacities of the input-constrained BSC and the DEC, by investigating their delayed feedback capacity. Finally, our conclusions appear in Section VII. To preserve the flow of the presentation, some of the proofs are given in the appendices.
## II Notation and Preliminaries
In this section, we introduce the notation, the model definition, and our MDP framework.
### _Notation_
Throughout this paper, random variables will be denoted by capital letters and their realizations will be denoted by lower-case letters, e.g. \(X\) and \(x\), respectively. Calligraphic letters denote sets, e.g. \(\mathcal{X}\). We use the notation \(X^{n}\) to denote the random vector \((X_{1},X_{2},\ldots,X_{n})\) and \(x^{n}\) to denote the realization of such a random vector. The probability \(\Pr[X=x]\) is denoted by \(P_{X}(x)\). When the random variable is clear from
Fig. 3: The DEC. The inputs take values from the binary alphabet while the outputs take values in \(\mathcal{Y}=\{-1,0,1,?\}\). Given an input \(x_{t}\), the output of the DEC is \(y_{t}=x_{t}-x_{t-1}\) with probability \(1-p\), or \(y_{t}=?\) with probability \(p\), where \(p\in[0,1]\) is the channel parameter. The channel state is the previous input, i.e. \(s_{t-1}=x_{t-1}\).
Fig. 2: The trapdoor channel. The channel can be viewed as a box in which at time \(t\) a labelled ball \(s_{t-1}\) (channel state) lies. Then, a new ball \(x_{t}\) (channel input) is inserted into the box, and the channel output \(y_{t}\) is chosen with equal probability as either \(s_{t-1}\) or \(x_{t}\). The remaining ball in the box (either \(s_{t-1}\) or \(x_{t}\)) is now called \(s_{t}\) and serves as the channel state for the next time-instance.
the context, we write it in shorthand as \(P(x)\). For a real number \(\alpha\in[0,1]\), we define \(\bar{\alpha}=1-\alpha\). We use the convention that \(0\log 0=0\).
The directed information between \(X^{n}\) and \(Y^{n}\) is defined as
\[I(X^{n}\to Y^{n})=\sum_{i=1}^{n}I(X^{i};Y_{i}|Y^{i-1}).\]
The probability mass function of \(X^{n}\)_causally conditioned_ on \(Y^{n-d}\) is defined as
\[P(x^{n}\|y^{n-d})=\prod_{i=1}^{n}P(x_{i}|x^{i-1},y^{i-d}).\]
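As an illustration of these multi-letter quantities, the following minimal numpy sketch (ours, not from the paper) evaluates \(I(X^{2}\to Y^{2})=I(X_{1};Y_{1})+I(X_{1},X_{2};Y_{2}|Y_{1})\) for an assumed joint distribution \(P(x_{1},x_{2},y_{1},y_{2})\).

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((2, 2, 2, 2)); P /= P.sum()      # P[x1, x2, y1, y2], assumed joint pmf

def mi(Pab):
    """Mutual information I(A;B) in bits from a joint pmf table Pab[a, b]."""
    Pa, Pb = Pab.sum(1, keepdims=True), Pab.sum(0, keepdims=True)
    mask = Pab > 0
    return float((Pab[mask] * np.log2(Pab[mask] / (Pa @ Pb)[mask])).sum())

term1 = mi(P.sum(axis=(1, 3)))                  # I(X1; Y1) from the (x1, y1) marginal
term2 = 0.0
for y1 in range(2):
    Py1 = P[:, :, y1, :].sum()
    if Py1 > 0:                                 # conditional MI, weighted by P(y1)
        Pc = (P[:, :, y1, :] / Py1).reshape(4, 2)   # joint of ((x1,x2), y2) given y1
        term2 += Py1 * mi(Pc)
print("I(X^2 -> Y^2) =", term1 + term2, "bits")
```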
We denote by \(\mathrm{C}\), \(\mathrm{C}_{1}^{\mathrm{fb}}\), and \(\mathrm{C}_{\mathrm{d}}^{\mathrm{fb}}\), the feedforward capacity (i.e. no feedback), the feedback capacity (the capacity with instantaneous feedback, i.e. \(d=1\) in Fig. 1), and the \(d\) time instances delayed feedback capacity, respectively.
### _Finite-state Channels_
A FSC is defined statistically by a time-invariant transition probability kernel, \(P_{S^{+},Y|X,S}\), where \(X\), \(Y\), \(S\), \(S^{+}\) denote the channel input, output, and state before and after one transmission, respectively. The cardinalities \(\mathcal{X},\mathcal{Y},\mathcal{S}\) are assumed to be finite. Formally, given a message \(m\), the channel has the following property:
\[P(s_{t},y_{t}|x^{t},y^{t-1},s^{t-1},m)=P_{S^{+},Y|X,S}(s_{t},y_{t}|x_{t},s_{t-1 }). \tag{1}\]
A unifilar FSC has the additional property that the state evolution is given by a time-invariant function, \(f(\cdot)\), such that \(s_{t}=f(s_{t-1},x_{t},y_{t})\).
As shown in the theorem below, the feedback capacity of a strongly connected2 FSC is given by a multi-letter expression that cannot be computed directly.
Footnote 2: A FSC is strongly connected if for any states \(s,s^{\prime}\in\mathcal{S}\), there exist an integer \(T\) and an input distribution \(\{P_{X_{t}|S_{t-1}}\}_{t=1}^{T}\) such that \(\sum_{t=1}^{T}P_{S_{t}|S_{0}}(s|s^{\prime})>0\).
**Theorem 1** ([12], Th. 3).: _The feedback capacity of a strongly connected FSC is_
\[\mathrm{C}_{1}^{\mathrm{fb}}=\lim_{n\to\infty}\frac{1}{n}\max_{P(x^{n}\|y^{n -1})}I(X^{n}\to Y^{n}). \tag{2}\]
In this paper, we consider a communication setting with delayed feedback as depicted in Fig. 1. The encoder has access to the message \(M\) and the channel outputs delayed by \(d\) time instances. That is, the encoder outputs \(x_{t}\) as a function of \(M\) and the channel outputs up to time \(t-d\), where \(d\geq 1\) is a finite integer. This setting captures the conventional instantaneous feedback case when \(d=1\). An interesting problem beyond the scope of the current paper is to consider a delay parameter \(d\) that can scale with the communication block length. The channel input \(x_{t}\) then goes through a FSC, and the resulting output \(y_{t}\) enters the decoder. The encoder then receives the feedback sample with a delay of \(d\) time instances. When the feedback has a delay of \(d\) time instances, the maximization over the directed information in Theorem 1 is performed over \(P(x^{n}\|y^{n-d})\) instead of over \(P(x^{n}\|y^{n-1})\)[36].
### _MDP Framework_
MDPs provide a mathematical framework for modeling decision-making problems in which the outcomes of actions are uncertain and depend on the current state of the system. We consider an MDP problem with a state space \(\mathcal{Z}\), an action space \(\mathcal{U}\), and a disturbance space \(\mathcal{W}\). The initial state \(z_{0}\in\mathcal{Z}\) is randomly drawn from a distribution \(P_{Z}\). At each time step \(t\), the system is in a state \(z_{t-1}\in\mathcal{Z}\), the decision-maker selects an action \(u_{t}\in\mathcal{U}\), and a disturbance \(w_{t}\in\mathcal{W}\) is drawn according to a conditional distribution \(P_{w}(\cdot|z_{t-1},u_{t})\). The state \(z_{t}\) then evolves according to a transition function \(F:\mathcal{Z}\times\mathcal{U}\times\mathcal{W}\rightarrow\mathcal{Z}\), i.e., \(z_{t}=F(z_{t-1},u_{t},w_{t})\).
The decision-maker selects the action \(u_{t}\) according to a function \(\mu_{t}\), which maps histories \(h_{t}=(z_{0},w_{0},\ldots,w_{t-1})\) onto actions, i.e. \(u_{t}=\mu_{t}(h_{t})\). Given a policy \(\pi=\{\mu_{1},\mu_{2},...\}\) and a bounded reward function \(g:\mathcal{Z}\times\mathcal{U}\rightarrow\mathbb{R}\), the goal is to maximize the average reward over an infinite time horizon. The average reward achieved by policy \(\pi\) is defined as
\[\rho_{\pi}=\liminf_{n\rightarrow\infty}\frac{1}{n}\mathbb{E}_{\pi}\left[\sum _{t=0}^{n-1}g\left(Z_{t},\mu_{t+1}(h_{t+1})\right)\right].\]
The optimal average reward is denoted by \(\rho^{*}\) and is achieved by the policy that maximizes the expected sum of rewards over time, i.e., \(\rho^{*}=\sup_{\pi}\rho_{\pi}\).
The following theorem presents the Bellman equation in the context of the formulation defined above. The Bellman equation provides a sufficient condition for determining whether a given average reward is optimal.
**Theorem 2** (Bellman equation, [37]).: _If a scalar \(\rho\in\mathbb{R}\) and a bounded function \(h:\mathcal{Z}\rightarrow\mathbb{R}\) satisfy_
\[\rho+h(z)=\sup_{u\in\mathcal{U}}\left(g\left(z,u\right)+\int P_{w}(dw|z,u)h \left(F\left(z,u,w\right)\right)\right),\ \ \forall z\in\mathcal{Z}\]
_then \(\rho=\rho^{*}\)._
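For intuition, when \(\mathcal{Z}\) and \(\mathcal{U}\) are small and finite, the Bellman equation of Theorem 2 can be solved numerically by relative value iteration. The sketch below (ours; the transition and reward tables are illustrative random placeholders) iterates \(h\leftarrow Th-(Th)(z_{\mathrm{ref}})\) until the average reward \(\rho\) stabilizes.

```python
import numpy as np

rng = np.random.default_rng(1)
nZ, nU = 4, 2
P = rng.random((nZ, nU, nZ)); P /= P.sum(axis=2, keepdims=True)  # P[z, u, z']
g = rng.random((nZ, nU))                                         # rewards g[z, u]

h = np.zeros(nZ)
rho = 0.0
for _ in range(2000):
    Th = (g + P @ h).max(axis=1)   # T h(z) = max_u [ g(z,u) + sum_z' P(z'|z,u) h(z') ]
    rho = Th[0]                    # normalize at a reference state z_ref = 0
    h_new = Th - rho
    if np.max(np.abs(h_new - h)) < 1e-12:
        break
    h = h_new
print("optimal average reward rho* ~", rho)
```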
## III Bounds on Feedback Capacity
In this section we introduce computable bounds on the feedback capacity of unifilar FSCs. These bounds were introduced for unifilar FSCs with instantaneous feedback in [21, 23]. In Section IV, we demonstrate that they can be extended to the case of delayed feedback as well.
### _The \(Q\)-graph Bounds_
We begin by introducing an auxiliary tool known as the \(Q\)_-graph_. For a fixed \(Q\)-graph, we then present the single-letter upper and lower bounds that were established in [21]. The \(Q\)-graph is a directed, connected, and labeled graph, for which each of its nodes have \(|\mathcal{Y}|\) outgoing edges with distinct labels from the channel output alphabet. Given an initial node, an output sequence, \(y^{t}\), is mapped onto a unique node by walking along the labeled edges. An example of a \(Q\)-graph is provided in Fig. 4. The induced mapping is denoted by \(\Phi_{t}:\mathcal{Y}^{t}\rightarrow\mathcal{Q}\), which can be presented alternatively as a function \(\phi:\mathcal{Q}\times\mathcal{Y}\rightarrow\mathcal{Q}\). Namely, a new graph node can be computed as a time-invariant function of the previous node and a channel output.
**Remark 1**.: A special case of a \(Q\)-graph is a _\(k\)th-order Markov \(Q\)-graph_, which is defined on the set of nodes \(\mathcal{Q}=\mathcal{Y}^{k}\); for each node \(q=(y_{1},y_{2},\ldots,y_{k})\), the outgoing edge labeled \(y\in\mathcal{Y}\) goes to the node \((y_{2},\ldots,y_{k},y)\). For instance, Fig. 4 shows a Markov \(Q\)-graph with \(\mathcal{Y}=\{0,1\}\) and \(k=1\).
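As a concrete illustration (ours, not from the paper), the following snippet constructs the \(k\)th-order Markov \(Q\)-graph of Remark 1 and walks an output sequence along its labeled edges.

```python
from itertools import product

def markov_q_graph(Y, k):
    """Nodes are output k-tuples; phi shifts in the newest output symbol."""
    nodes = list(product(Y, repeat=k))
    phi = {(q, y): q[1:] + (y,) for q in nodes for y in Y}
    return nodes, phi

nodes, phi = markov_q_graph(Y=(0, 1), k=1)
print(nodes)                    # [(0,), (1,)] -- the Q-graph of Fig. 4

# Mapping an output sequence onto a node by walking the labeled edges:
q = (0,)
for y in (1, 1, 0):
    q = phi[(q, y)]
print(q)                        # (0,)
```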
For a fixed FSC and a given \(Q\)-graph, we construct the \((S,Q)\)-graph, an additional directed graph that combines the information in the \(Q\)-graph with the evolution of the channel states. Specifically, each node in the \(Q\)-graph is split into \(|\mathcal{S}|\) new nodes, represented by pairs \((s,q)\in\mathcal{S}\times\mathcal{Q}\). Then, an edge labeled \((x,y)\) from node \((s,q)\) to node \((s^{+},q^{+})\) exists if and only if \(s^{+}=f(s,x,y)\), \(q^{+}=\phi(q,y)\), and \(P(y|x,s)>0\). The pair of functions \((f,\phi)\) is given by the channel state transition and the fixed \(Q\)-graph. For any choice of input distribution \(P_{X|S,Q}\), the transition probabilities on the edges of the \((S,Q)\)-graph are computed as
\[P(s^{+},q^{+}|s,q) =\sum_{x,y}P(x,y,s^{+},q^{+}|s,q)\] \[\stackrel{{(a)}}{{=}}\sum_{x,y}P(x|s,q)P(y|x,s) \mathbbm{1}\left\{q^{+}=\phi(q,y)\right\}\mathbbm{1}\left\{s^{+}=f(s,x,y)\right\}, \tag{3}\]
where \((a)\) follows by the channel law and by the fact that \(q^{+}\) is a deterministic function of \((q,y)\). We define the notation \(\mathcal{P}_{\pi}\) as the set of input distributions \(P_{X|S,Q}\) that induce a unique stationary distribution on \((S,Q)\), namely, their corresponding \((S,Q)\)-graph is irreducible and aperiodic.
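Equation (3) translates directly into code: given the channel law \(P(y|x,s)\), the state update \(f\), the \(Q\)-graph \(\phi\), and an input distribution \(P(x|s,q)\), the transition matrix of the \((S,Q)\) chain is a sum over \((x,y)\). The sketch below is generic; all arguments are assumed to be supplied by the user, and the stationary distribution is obtained by the power method, which is valid when the chain is irreducible and aperiodic:

```python
import numpy as np

def sq_transition(P_y_xs, f, phi, P_x_sq, nS, nQ, nX, nY):
    """Transition matrix T[(s,q),(s+,q+)] of the (S,Q)-graph, per Eq. (3).
    P_y_xs[y,x,s] is the channel law, f(s,x,y) the state update,
    phi(q,y) the Q-graph function, P_x_sq[x,s,q] the input distribution."""
    T = np.zeros((nS * nQ, nS * nQ))
    for s, q, x, y in np.ndindex(nS, nQ, nX, nY):
        sp, qp = f(s, x, y), phi(q, y)
        T[s * nQ + q, sp * nQ + qp] += P_x_sq[x, s, q] * P_y_xs[y, x, s]
    return T

def stationary(T):
    """Stationary distribution of an irreducible aperiodic chain (power method)."""
    pi = np.ones(T.shape[0]) / T.shape[0]
    for _ in range(10_000):
        pi = pi @ T
    return pi
```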
In the following theorem we introduce the upper bound.
**Theorem 3**.: _[_21_, Theorem \(2\)]_ _The feedback capacity of a strongly connected unifilar FSC, where the initial state is available to both the encoder and the decoder, is bounded by_
\[\mathrm{C}_{1}^{\mathrm{fb}}\leq\sup_{P_{X|S,Q}\in\mathcal{P}_{\pi}}I(X,S;Y|Q), \tag{4}\]
_for all \(Q\)-graphs for which the \((S,Q)\)-graph has a single and aperiodic closed communicating class. The joint distribution is \(P_{Y,X,S,Q}=P_{Y|X,S}P_{X|S,Q}\pi_{S,Q}\), where \(\pi_{S,Q}\) is the stationary distribution of the \((S,Q)\)-graph._
We proceed to describe the lower bound. Let us first define a property called the _BCJR-invariant input_. An input distribution \(P_{X|S,Q}\) is said to be an _aperiodic input_ if its \((S,Q)\)-graph is aperiodic, and an aperiodic input distribution is said to be _BCJR-invariant_ if the Markov chain \(S^{+}-Q^{+}-(Q,Y)\) holds. A simple verification of the Markov chain is given by the following equation:
\[\pi(s^{+}|q^{+})=\frac{\sum_{x,s}\pi(s|q)P(x|s,q)P(y|x,s)\mathbbm{1}_{\left\{ s^{+}=f(x,y,s)\right\}}}{\sum_{x^{\prime},s^{\prime}}\pi(s^{\prime}|q)P(x^{ \prime}|s^{\prime},q)P(y|x^{\prime},s^{\prime})}, \tag{5}\]
which needs to hold for all \((s^{+},q,y)\) and \(q^{+}=\phi(q,y)\).
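Condition (5) is mechanical to check: for every \((s^{+},q,y)\) one compares \(\pi(s^{+}|q^{+})\) with the one-step BCJR update of \(\pi(\cdot|q)\). A sketch, reusing the generic arguments of the previous snippet:

```python
import numpy as np

def is_bcjr_invariant(pi_s_q, P_x_sq, P_y_xs, f, phi, nS, nQ, nX, nY, tol=1e-9):
    """Verify Eq. (5): pi(s+|q+) equals the one-step BCJR posterior update,
    for all (s+, q, y) with q+ = phi(q, y). pi_s_q[s,q] = pi(s|q)."""
    for q, y in np.ndindex(nQ, nY):
        qp = phi(q, y)
        num = np.zeros(nS)                     # unnormalized posterior over s+
        for s, x in np.ndindex(nS, nX):
            num[f(s, x, y)] += pi_s_q[s, q] * P_x_sq[x, s, q] * P_y_xs[y, x, s]
        if num.sum() == 0:                     # output y impossible from node q
            continue
        if np.max(np.abs(num / num.sum() - pi_s_q[:, qp])) > tol:
            return False
    return True
```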
Now that we have defined a BCJR-invariant input distribution, the lower bound can be introduced.
**Theorem 4**.: _[_21_, Theorem \(3\)]_ _If the initial state \(s_{0}\) is available to both the encoder and the decoder, then the feedback capacity of a strongly connected unifilar FSC is bounded by_
\[\mathrm{C}_{1}^{\mathrm{fb}}\geq I(X,S;Y|Q), \tag{6}\]
_for all aperiodic inputs \(P_{X|S,Q}\in\mathcal{P}_{\pi}\) that are BCJR-invariant, and for all irreducible \(Q\)-graphs with \(q_{0}\) such that \((s_{0},q_{0})\) lies in an aperiodic closed communicating class._
Henceforth, we refer to a pair of a \(Q\)-graph and an input distribution \(P_{X|S,Q}\) that satisfies the BCJR-invariant property as a _graph-based encoder_.
**Remark 2**.: The upper bound in Theorem 3 can be formulated as a convex optimization [22]. As a result, for a fixed \(Q\)-graph, the upper bound can be efficiently evaluated. On the other hand, the lower bound optimization results in a non-convex optimization problem, but it still has the advantage that any feasible point (i.e. BCJR-invariant input distribution) induces a graph-based encoder. It was shown in [22] that any graph-based encoder implies a simple coding scheme that achieves the lower bound. The scheme is based on an extension of posterior matching [38] to channels with memory [19, 22].
**Remark 3**.: The selection of the \(Q\)-graph significantly impacts the performance of the bounds. To identify a \(Q\)-graph that will result in capacity-achieving bounds, we suggest conducting an exhaustive search over all possible \(Q\)-graphs, as explained in detail in [22]. While such an exploration can become computationally expensive with increasing \(Q\)-graph size, it is often sufficient to consider a small cardinality graph to obtain tight or nearly tight bounds. Another approach is to evaluate the performance of Markov \(Q\)-graphs, which often provide good bounds. A Matlab implementation of the optimization problems, including the \(Q\)-graph search methods, is available in [39].
### _Upper Bounds via Duality_
Here we present computable upper bounds on the capacity of unifilar FSCs from [23], that are based on the dual capacity upper bound [40]. For the sake of clarity, we first introduce the dual capacity upper bound for a discrete memoryless channel. Specifically, for a memoryless channel, \(P_{Y|X}\), and for any choice of a test distribution, \(T_{Y}\), on the channel output alphabet, the dual capacity upper bound states that
\[\mathrm{C}\leq\max_{x\in\mathcal{X}}D\left(P_{Y|X=x}\|T_{Y}\right). \tag{7}\]
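As a sanity check of (7), for a BSC(\(p\)) with the uniform test distribution the bound evaluates to \(1-H_{2}(p)\), which is the true capacity, since the uniform output distribution is optimal for the BSC. A minimal numerical sketch:

```python
import numpy as np

def dual_bound_dmc(P_y_x, T_y):
    """max_x D(P(.|x) || T) for a DMC, Eq. (7). P_y_x[x,y], T_y[y]."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P_y_x > 0, P_y_x * np.log2(P_y_x / T_y), 0.0)
    return terms.sum(axis=1).max()

p = 0.1
bsc = np.array([[1 - p, p], [p, 1 - p]])
print(dual_bound_dmc(bsc, np.array([0.5, 0.5])))   # = 1 - H2(0.1) ~ 0.531
```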
The choice of the test distribution is crucial since it directly affects the performance of the bound. If the test distribution is equal to the optimal output distribution, then the upper bound is tight. For FSCs, the dual upper bound depends on a test distribution, \(T_{Y^{n}}\), with memory. In [23, 28], test distributions that are structured on a \(Q\)-graph were proposed, that is, the following equality holds:
\[T_{Y^{n}}(y^{n})=\prod_{t=1}^{n}T_{Y|Q}(y_{t}|q_{t-1}), \tag{8}\]
where \(q_{t-1}=\Phi(y^{t-1})\). We refer to such test distributions as _graph-based test distributions_. The use of graph-based test distributions yielded the result in the theorem below.
**Theorem 5** (Computable upper bounds).: _[_23_, Theorem 4]_ _For any graph-based test distribution \(T_{Y|Q}\), the feedback capacity of a strongly connected unifilar FSC is bounded by_
\[\mathrm{C}_{1}^{\mathrm{fb}}\leq\lim_{n\rightarrow\infty}\max_{f(x^{n}\|y^{n- 1})}\min_{s_{0},q_{0}}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[D\left(P_{Y|X,S }(\cdot|x_{i},S_{i-1})\bigg{\|}T_{Y|Q}(\cdot|Q_{i-1})\right)\right], \tag{9}\]
_where \(f(x^{n}\|y^{n-1})\) stands for causal conditioning of deterministic functions, i.e._
\[f(x^{n}\|y^{n-1})=\prod_{i}\mathbbm{1}\{x_{i}=f_{i}(x^{i-1},y^{i-1})\}.\]
_Additionally, the upper bound in (9) defines an infinite-horizon average reward MDP that is presented in Table I._
The following theorem is a simplification of the Bellman equation in Theorem 2 for the case of the MDP formulation in Table I.
**Theorem 6** (Bellman equation).: _If there exists a scalar \(\rho\in\mathbb{R}\) and a bounded function \(h:\mathcal{S}\times\mathcal{Q}\rightarrow\mathbb{R}\) such that_
\[\rho+h(s,q)=\max_{x\in\mathcal{X}}\left(D\left(P_{Y|X,S}(\cdot|x,s)\big{\|}T_{Y| Q}(\cdot|q)\right)+\sum_{y\in\mathcal{Y}}P(y|x,s)h\left(f(s,x,y),\phi(q,y) \right)\right), \tag{10}\]
_for all \((s,q)\), then \(\rho=\rho^{*}\)._
Following Theorem 6, it is sufficient to solve the Bellman equation associated with the MDP problem in Table I to show that the feedback capacity of a given FSC is upper bounded by the induced average reward of the MDP.
**Remark 4**.: The \(Q\)-graph upper bound, presented in Section III-A, provides an efficient approach for evaluating numerical upper bounds since it relies on a convex optimization problem [28]. However, obtaining analytical bounds using this methodology may be tedious as the KKT conditions need to be verified to derive the bounds. Therefore, to obtain analytical bounds (not just numerical ones), we prefer using the duality-based technique presented in this section. In particular, note that the MDP formulation in Table I consists of finite sets of MDP states, actions, and disturbances. Thus, given an optimal policy, we only need to solve a finite set of linear equations to derive a conjectured solution for \((\rho^{*},h(\cdot))\).
## IV Computable Bounds On Delayed Feedback Capacity
In this section we start with a general result, in which the delayed feedback capacity of any FSC can be computed as the instantaneous feedback capacity of a transformed FSC. By utilizing this reduction, we derive computable upper and lower bounds on the delayed feedback capacity of unifilar FSCs that are a straightforward extension of the \(Q\)-graph bounds from Section III-A. These new bounds are introduced in Section IV-B.
### _Delayed Feedback Capacity as Instantaneous Feedback Capacity_
For a finite integer \(d\geq 1\) and for any FSC given by a transition kernel \(P_{Y,S^{+}|X,S}\), we define the following transformation:
* The channel state is \(\tilde{S}_{t}\triangleq\left(S_{t-d+1},X_{t-d+2}^{t}\right)\).
* The channel output is \(\tilde{Y}_{t}\triangleq Y_{t-d+1}\).
* The channel input remains the same, i.e. \(\tilde{X}_{t}\triangleq X_{t}\).
First, we will show that the above transformation defines a new FSC with a transition kernel \(P_{\tilde{Y},\tilde{S}^{+}|\tilde{X},\tilde{S}}^{d}\), where the superscript \(d\) emphasizes the dependence of the transformation on the delay \(d\). That is, the new channel follows the time-invariant Markov property of FSCs in (1). Second, we show in the following theorem the relation between the channel capacity of the original FSC and that of its transformation.
**Theorem 7**.: _The capacity of a FSC \(P_{Y,S^{+}|X,S}\) with delayed feedback of \(d\) time instances is equal to the instantaneous feedback capacity of the FSC \(P_{\tilde{Y},\tilde{S}^{+}|\tilde{X},\tilde{S}}^{d}\). Furthermore, if the original FSC is unifilar, then the new transformed FSC is unifilar as well._
It is important to note that the cardinality of the new channel state is \(|\tilde{\mathcal{S}}|=|\mathcal{S}|\cdot|\mathcal{X}|^{d-1}\), while the cardinality of the channel state in the original channel is \(|\mathcal{S}|\). That is, we pay the price of having a larger channel state space as a result of the delay.
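The transformation itself is mechanical and can be implemented generically. The sketch below (function names are ours) enumerates the transformed state space \(\tilde{\mathcal{S}}=\mathcal{S}\times\mathcal{X}^{d-1}\) and builds the new unifilar update \(\tilde{f}\) from the original \(f\):

```python
from itertools import product

def transform(states, inputs, f, d):
    """Delayed-feedback reduction of Section IV-A for a unifilar FSC.
    New states are tuples (s, x_{t-d+2},...,x_t) of size |S|*|X|^(d-1);
    returns the new state list and the unifilar update f_tilde."""
    new_states = list(product(states, *([inputs] * (d - 1))))

    def f_tilde(s_til, x, y):
        # y here is the delayed output y_{t-d+1}, paired with past[0]
        s, past = s_til[0], s_til[1:]
        return (f(s, past[0], y),) + past[1:] + (x,)

    return new_states, f_tilde

# Example with the trapdoor update s_t = x ^ y ^ s and delay d = 2:
sts, ft = transform([0, 1], [0, 1], lambda s, x, y: s ^ x ^ y, d=2)
print(len(sts), ft((0, 1), x=0, y=1))   # 4 states; next state (0^1^1, 0)=(0,0)
```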
Proof of Theorem 7.: Given a FSC, \(P_{Y,S^{+}|X,S}\), define the new channel state as \(\tilde{S}_{t-1}=\left(S_{t-d},X_{t-d+1}^{t-1}\right)\), let the channel output be \(\tilde{Y}_{t}=Y_{t-d+1}\), and let the channel input remain the same, i.e. \(\tilde{X}_{t}=X_{t}\). In the following, we show that the new channel is a FSC such that its capacity with instantaneous feedback is equal to the capacity of the original channel with delayed feedback of \(d\) time instances.
* Conditioned on the previous channel state \(\tilde{S}_{t-1}\), the channel input \(\tilde{X}_{t}\), and the channel output \(\tilde{Y}_{t}\), the new channel state \(\tilde{S}_{t}\) is independent of any other previous states, inputs, and outputs. That is, \[P(\tilde{s}_{t}|\tilde{x}^{t},\tilde{y}^{t},\tilde{s}^{t-1})=P(\tilde{s}_{t}|\tilde{s}_{t-1},\tilde{x}_{t},\tilde{y}_{t}).\] (11) This equation holds due to the Markov chain \((S_{t-d+1},X_{t-d+2}^{t})-(S_{t-d},X_{t-d+1}^{t},Y_{t-d+1})-(X^{t-d},Y^{t-d})\), which follows directly by the Markov chain property of the original channel. In particular, since \(\tilde{S}_{t}=(S_{t-d+1},X_{t-d+2}^{t})\), and since \((\tilde{S}_{t-1},\tilde{X}_{t},\tilde{Y}_{t})\) includes \((S_{t-d},X_{t-d+1}^{t},Y_{t-d+1})\), (11) holds. In addition, we show below that, if the original channel is a unifilar FSC, then the unifilar property also holds for the new induced channel. That is, we show that the new channel state \(\tilde{s}_{t}\) is a time-invariant function of \(\tilde{s}_{t-1}\), \(\tilde{x}_{t}\), and \(\tilde{y}_{t}\): \[\tilde{s}_{t} =\left(s_{t-d+1},x_{t-d+2}^{t}\right)\] \[=\left(f\left(s_{t-d},x_{t-d+1},y_{t-d+1}\right),x_{t-d+2}^{t}\right)\] \[\triangleq\tilde{f}\left(\tilde{s}_{t-1},\tilde{x}_{t},\tilde{y}_{t}\right),\] where, clearly, \(\tilde{f}:\tilde{\mathcal{X}}\times\tilde{\mathcal{Y}}\times\tilde{\mathcal{S}}\rightarrow\tilde{\mathcal{S}}\) is a time-invariant function of \((\tilde{s}_{t-1},\tilde{x}_{t},\tilde{y}_{t})\). Thus, the unifilar property holds in this case.
* Conditioned on the previous channel state \(\tilde{S}_{t-1}\) and the channel input \(\tilde{X}_{t}\), the channel output \(\tilde{Y}_{t}\) is independent of any other previous states, inputs, and outputs. Specifically, note that \(\tilde{s}_{t-1}\) includes the pair \((s_{t-d},x_{t-d+1})\), and therefore it implies that \[P(\tilde{y}_{t}|\tilde{x}^{t},\tilde{y}^{t-1},\tilde{s}^{t-1})=P( \tilde{y}_{t}|\tilde{x}_{t},\tilde{s}_{t-1}),\] due to the fact that the redefined channel output \(\tilde{y}_{t}\) is the original channel output at time \((t-d+1)\).
* The initial state \(\tilde{S}_{0}\) is known both to the encoder and to the decoder, as required. As shown above, the redefined channel is a FSC. Additionally, at each time-step \(t\), the encoder knows all previous channel outputs \(\tilde{y}^{t-1}\), as required in the case of instantaneous feedback.
We note that following our proposed transformation, the decoder does not have access to the last \(d\) channel outputs. Thus, the resulting capacity of the transformed channel is a lower bound on the delayed feedback capacity of the original channel. However, this reduction in the block length has a vanishing effect on the limiting capacity as \(d\) is assumed to be finite while \(n\) grows large. Now, given an input sequence, the corresponding outputs of the redefined FSC are drawn according to the statistics of the original channel model. Furthermore, maximizing over \(P(\tilde{x}^{n}\|\tilde{y}^{n-1})\) is equivalent to maximizing over \(P(x^{n}\|y^{n-d})\). Therefore, we can deduce that the capacity of the original channel with delayed feedback can be computed as
\[\text{C}_{\text{d}}^{\text{fb}}=\lim_{n\rightarrow\infty}\max_{P( \tilde{x}^{n}\|\tilde{y}^{n-1})}\frac{1}{n}I(\tilde{X}^{n}\rightarrow\tilde{Y} ^{n}), \tag{12}\]
while reformulating the channel model as described above.
A similar formulation appeared in [41, 42], but only for the case where the channel state \(s_{t}\) is a deterministic function of \(s_{t-1}\) and \(x_{t}\). Here, we presented a general formulation that holds for any FSC. The trapdoor channel, for instance, does not fall into the framework of [41] and [42] since the channel state depends on the channel outputs as well.
**Remark 5**.: Following our formulation, it is interesting to observe that the channel output \(\tilde{Y}_{t}\) is independent of \(\tilde{X}_{t}\) conditioned on \(\tilde{S}_{t-1}\). In other words, the channel output solely depends on the channel state and not on the channel input. However, the choice of the channel input \(\tilde{x}_{t}\) is still of significant importance since it directly affects the evolution of the next channel state.
### \(Q\)_-graph Bounds on Delayed Feedback Capacity_
In Section III we introduced two powerful methodologies to compute upper and lower bounds on the capacity of unifilar FSCs with instantaneous feedback. Based on these approaches, we establish computable upper and lower bounds on the capacity of the unifilar FSC with delayed feedback. Specifically, following Theorem 7, since the delayed feedback capacity of a unifilar FSC can be computed as the capacity of a new unifilar FSC with instantaneous feedback, the computable bounds from Section III can be directly adapted for the case of delayed feedback, just by redefining the channel and then applying the bounds on the new unifilar FSC. We emphasize that a delay of at least two time instances is assumed here. Otherwise, we have the standard instantaneous feedback scenario. In the following theorem, we present the \(Q\)-graph lower bound for the case of delayed feedback.
**Theorem 8**.: _If the initial state \(\tilde{s}_{0}\) is available to both the encoder and the decoder, then the \(d\) time instances delayed feedback capacity of a strongly connected unifilar FSC is bounded by_
\[\mathrm{C}_{\mathrm{d}}^{\mathrm{fb}}\geq I(\tilde{S};\tilde{Y}|Q), \tag{13}\]
_where \(\tilde{X}\), \(\tilde{Y}\), \(\tilde{S}\) are the new channel input, output, and state, respectively (as defined in Section IV-A). The bound holds only for aperiodic inputs \(P_{\tilde{X}|\tilde{S},Q}\in\mathcal{P}_{\pi}\) that are BCJR-invariant, and for all irreducible \(Q\)-graphs with \(q_{0}\) such that \((\tilde{s}_{0},q_{0})\) lies in an aperiodic closed communicating class._
Following Remark 2, it is evident that any graph-based encoder (comprising a \(Q\)-graph and a BCJR-invariant input distribution) for the delayed feedback capacity problem provides a lower bound (Theorem 8) and a coding scheme that achieves this lower bound. The construction details of the coding scheme are precisely given in [19, 22], where the new transformed channel is considered for the construction. Further, the \(Q\)-graph upper bound for the case of delayed feedback is given in the theorem below.
**Theorem 9**.: _The \(d\) time instances delayed feedback capacity of a strongly connected unifilar FSC, where the initial state is available to both the encoder and the decoder, is bounded by_
\[\mathrm{C}_{\mathrm{d}}^{\mathrm{fb}}\leq\sup_{P_{\tilde{X}|\tilde{S},Q}\in \mathcal{P}_{\pi}}I(\tilde{S};\tilde{Y}|Q), \tag{14}\]
_where \(\tilde{X}\), \(\tilde{Y}\), \(\tilde{S}\) are the new channel input, output, and state, respectively (as defined in Section IV-A). The bound holds for all \(Q\)-graphs for which the \((\tilde{S},Q)\)-graph has a single and aperiodic closed communicating class. The joint distribution is \(P_{\tilde{Y},\tilde{X},\tilde{S},Q}=P_{\tilde{Y}|\tilde{X},\tilde{S}}P_{ \tilde{X}|\tilde{S},Q}\pi_{\tilde{S},Q}\), where \(\pi_{\tilde{S},Q}\) is the stationary distribution of the \((\tilde{S},Q)\)-graph._
Proof of Theorems 9 and 8.: First, as shown in Section IV-A, after reformulating the FSC, we obtain an equivalent instantaneous feedback capacity problem. Since the new induced channel is a unifilar FSC, Theorems 3 and 4 can be directly applied on the new unifilar channel. Accordingly, it is only left to show
that it is sufficient to optimize \(I(\tilde{S};\tilde{Y}|Q)\) instead of \(I(\tilde{X},\tilde{S};\tilde{Y}|Q)\). The latter holds by the trivial Markov chain \(\tilde{Y}-\tilde{S}-\tilde{X}\). In particular, according to the new formulation, the new channel state already includes the new channel input.
## V Trapdoor Channel with Delayed Feedback
The trapdoor channel (Fig. 2) has had a long history in information theory since its introduction by David Blackwell in \(1961\)[29]. The channel has attracted much interest since its representation is very simple, yet its capacity computation is highly non-trivial. The channel can be viewed as a (causal) permutation channel since the weight of the input sequence is equal to the weight of the output sequence. This channel is also termed the _chemical channel_, which alludes to a physical system in which chemical concentrations are used to communicate [43]. A detailed discussion on the trapdoor channel can be found in Robert Ash's book [44] (which even features the channel on its cover).
The trapdoor channel is a unifilar FSC whose operation can be described as follows. At time \(t\), let \(x_{t}\in\{0,1\}\) be the channel input and \(s_{t-1}\in\{0,1\}\) be the previous channel state. The channel input, \(x_{t}\), is transmitted through the channel. The channel output, \(y_{t}\), is equal to the previous state \(s_{t-1}\) or to the input \(x_{t}\), each with probability \(1/2\). The new channel state is evaluated according to \(s_{t}=x_{t}\oplus y_{t}\oplus s_{t-1}\), where \(\oplus\) denotes the XOR operation.
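The channel operation is straightforward to simulate, which is useful for sanity-checking rate computations later on; a minimal sketch:

```python
import random

def trapdoor_step(s, x, rng=random):
    """One use of the trapdoor channel: emit s or x equiprobably;
    the new state is x XOR y XOR s (the ball left in the channel)."""
    y = rng.choice((s, x))
    return y, x ^ y ^ s

s = 0
for x in (1, 1, 0, 1):
    y, s = trapdoor_step(s, x)
    print(f"x={x} -> y={y}, new state={s}")
```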
Although the capacity of the trapdoor channel without feedback is unknown, it is known in two important variations of the original capacity problem. In [24, 25], it was shown that the zero-error capacity of the trapdoor channel is \(C_{0}=0.5\) bits per channel use. This provides a lower bound on the feedforward capacity which is known to be non-tight (e.g. [26]). The other variation is the feedback capacity, which is equal to \(\mathrm{C}_{1}^{\text{fb}}=\log_{2}\left(\frac{1+\sqrt{5}}{2}\right)\approx 0.6942\), as shown in [12]. It is also known that feedback _does increase_ the capacity for the trapdoor channel (e.g. [27, 28]).
The following theorem states our main result concerning the delayed feedback capacity of the trapdoor channel.
**Theorem 10**.: _The capacity of the trapdoor channel with delayed feedback of two time instances is_
\[\mathrm{C}_{2}^{\text{fb}}=\log_{2}\left(\frac{3}{2}\right).\]
The proof of Theorem 10 is given in Appendix A. This result settles the delayed feedback capacity, and we will now discuss several implications of our capacity result. The above capacity is approximately \(\mathrm{C}_{2}^{\text{fb}}\approx 0.5849\), while the instantaneous feedback capacity is approximately \(\mathrm{C}_{1}^{\text{fb}}\approx 0.6942\). The best lower bound to date on the feedforward capacity is \(\mathrm{C}\geq 0.572\)[26]. It is interesting to note that even a single additional time instance of delay leads to a sharp decrease in the capacity towards the feedforward capacity.
The delayed feedback capacity in Theorem 10 also serves as an upper bound on the feedforward capacity. Overall, the best bounds on the feedforward capacity are given by
\[0.572\leq C\leq 0.5849.\]
While the delayed feedback capacity is equal to the best upper bound on the feedforward capacity, it does not establish a new upper bound. In particular, a recent paper proposed using duality-based upper bounds on the feedforward capacity and established the same bound [28]. However, their bound is for the feedforward capacity only, and therefore we still need to show a converse proof for Theorem 10.
An interesting question is whether the delayed feedback capacity is, indeed, the feedforward capacity. Simulations of the delayed feedback capacity with a delay greater than two time instances suggest that this is not the case. In particular, by operational considerations, we have the following chain of inequalities:
\[C\leq\ldots\leq\mathrm{C}_{3}^{\text{fb}}\leq\mathrm{C}_{2}^{\text{fb}}\leq \mathrm{C}_{1}^{\text{fb}}. \tag{15}\]
As clarified, the upper bound in Theorem 9 can be formulated as a convex optimization problem, and its evaluation for a greater delay of the feedback gives
\[\mathrm{C}_{3}^{\mathrm{fb}}\leq 0.5782,\ \ \ \mathrm{C}_{4}^{\mathrm{fb}}\leq 0.5765. \tag{16}\]
Accordingly, these simulations suggest that the feedforward capacity satisfies \(C\leq 0.5765\). In other words, the delayed feedback capacity in Theorem 10 does not seem to be the feedforward capacity, which remains an open problem.
**Remark 6**.: The achievability proof of Theorem 10 is based on the \(Q\)-graph lower bound, which was presented in Section IV-B. That is, the lower bound was established by showing that a particular graph-based encoder, given by the \(Q\)-graph in (25) and the input distribution in (26), provides an achievable rate of \(\log_{2}(3/2)\). This graph-based encoder implies a simple coding scheme that, in our case, achieves the capacity. As explained in Remark 2, the scheme is based on the posterior matching principle, and the exact details regarding the construction of the coding scheme are given in [22].
**Remark 7**.: Following the formulation in Section IV-A, we present in Fig. 5 the trapdoor channel with delayed feedback of two time instances as a new unifilar FSC with instantaneous feedback. For the new FSC, it is interesting to note that the capacity of each individual channel (per state) is zero. Nevertheless, as we already demonstrated, the capacity of the overall FSC is not zero. This follows from the fact that, at each time \(t\), the output depends only on the previous channel state, and the choice of the current input only affects the evolution of the next channel state.
## VI Upper Bounds on Feedforward Capacity
In this section, we demonstrate that the investigation of the delayed feedback capacity plays an important role in deriving upper bounds on the feedforward capacity. Specifically, besides the trapdoor channel, we present here two additional FSCs for which we derive novel results concerning their feedforward capacity by investigating their delayed feedback capacity. It is important to note that as the delay \(d\) grows, the upper bounds on the feedforward capacity may improve. In practice, we show that even small values of \(d\geq 2\) lead to very good upper bounds when compared to corresponding lower bounds on the feedforward capacity. Thus, instantaneous feedback is significant in terms of capacity.
### _Input-Constrained BSC_
Regardless of whether feedback is allowed or not, memoryless channels have the same simple single-letter capacity expression [45]. When the inputs are constrained, however, the capacity problem is very challenging. The feedforward capacity in the presence of constrained inputs has been extensively investigated, e.g. [30, 32, 33, 46, 47], but is still given by a multi-letter expression.
Here, we consider the BSC with crossover probability \(p\), denoted by BSC(\(p\)), where the inputs are constrained to satisfy the \((1,\infty)\)-RLL constraint. Namely, the input sequence does not contain two consecutive ones. Even though this setting does not fall under the classical definition of a unifilar FSC, it is straightforward to incorporate input constraints by defining a dummy sink state, entered whenever the constraint is violated, from which no positive rate is achievable. For this setting, while the feedforward capacity is still open, the feedback capacity was established in [19], and it is known that feedback does increase the capacity [19, 31].

Fig. 5: The trapdoor channel with delayed feedback of two time instances as a new unifilar FSC with instantaneous feedback.
In the theorem below we present a novel result concerning the capacity of the BSC with a no consecutive ones input constraint.
**Theorem 11**.: _The capacity of the input-constrained BSC(\(p\)) with a \((1,\infty)\)-RLL constraint and delayed feedback of two time instances satisfies_
\[\mathrm{C}_{2}^{\mathrm{fb}}(p)\leq\min\log_{2}\left(\frac{p^{p}\bar{p}^{\bar{p}}a^{(p^{3}-3p^{2}+3p-1)}(\bar{b}\bar{c}d)^{(p^{3}-p^{2})}}{(\bar{a}bc)^{(p^{3}-2p^{2}+p)}\bar{d}^{p^{3}}}\right),\]
_where the minimum is over all \((a,b,c,d)\in[0,1]^{4}\) that satisfy:_
\[1 \leq\frac{a^{(4p^{3}-12p^{2}+11p-3)}(\bar{b}d)^{(4p^{3}-6p^{2}+2p)}\bar{c}^{(4p^{3}-4p^{2}+p)}}{(\bar{a}c)^{(4p^{3}-8p^{2}+5p-1)}b^{(4p^{3}-10p^{2}+6p-1)}\bar{d}^{(4p^{3}-2p^{2})}},\] \[1 \leq\frac{a^{(4p^{3}-10p^{2}+8p-2)}(\bar{b}d)^{(4p^{3}-4p^{2}+p)}\bar{c}^{(4p^{3}-2p^{2}-2p+1)}}{(\bar{a}c)^{(4p^{3}-6p^{2}+2p)}b^{(4p^{3}-8p^{2}+5p-1)}\bar{d}^{(4p^{3}-p)}}. \tag{17}\]
The proof of Theorem 11 is given in Appendix B.
In Fig. 6, the delayed feedback capacity upper bound in Theorem 11 is plotted along with the feedback capacity from [19], the best upper bound on the feedforward capacity from [31], a numerical upper bound on the delayed feedback capacity of two time instances, and a lower bound on the feedforward capacity obtained using the simulation method in [48]. Here as well, it is surprising to note from the plot that the difference between the capacity with instantaneous feedback and the capacity with an additional time-instance delay is quite significant. Further, although our analytical upper bound in Theorem 11 was introduced for the case of delayed feedback, it also serves as a novel upper bound on the feedforward capacity, and outperforms all previously known bounds. Moreover, our upper bound almost coincides with the lower bound on the feedforward capacity.

Fig. 6: Upper and lower bounds on the feedforward capacity of the input-constrained BSC(\(p\)). The top brown line is the feedback capacity from [19]. The blue line is the best known upper bound on the feedforward capacity from [31]. The black line represents our analytical upper bound on the delayed feedback capacity from Theorem 11. The red line represents a numerical upper bound on the delayed feedback capacity, which was evaluated using a third-order Markov \(Q\)-graph. Finally, the bounds are compared to a lower bound on the feedforward capacity (yellow line).
We would like to emphasize that our upper bound in Theorem 11 can be further improved. Specifically, the dashed red line in Fig. 6 was obtained by evaluating the upper bound in Theorem 9 with a third-order Markov \(Q\)-graph, which consists of eight nodes; its evolution function is given by the vector representation
\[\underline{\phi}(\underline{q},\tilde{y}=0)=[1,3,5,7,1,3,5,7]\] \[\underline{\phi}(\underline{q},\tilde{y}=1)=[2,4,6,8,2,4,6,8]. \tag{18}\]
This vector representation implies, for instance, \(\phi(q=1,\tilde{y}=0)=1\) and \(\phi(q=1,\tilde{y}=1)=2\). It can be noted from the figure that the induced upper bound provides an even tighter bound on the delayed feedback capacity of two time instances. However, the upper bound in Theorem 11 already achieves remarkable performance, and therefore we do not provide here an analytical expression for the additional bound.
### _The Dicode Erasure Channel_
The DEC was studied in [21, 28, 34, 35] and is a simplified version of the well-known dicode channel with additive white Gaussian noise. The operation of the DEC is illustrated in Fig. 3. The feedback capacity of the DEC was established in [21], and is given in the theorem below. However, in the absence of feedback the capacity is still unknown.
**Theorem 12** ([21], Th. 5).: _The feedback capacity of the DEC is_
\[\mathrm{C}_{1}^{\text{fb}}(p)=\max_{\epsilon\in[0,1]}(1-p)\frac{\epsilon+pH_{ 2}(\epsilon)}{p+(1-p)\epsilon} \tag{19}\]
_for any channel parameter \(p\in[0,1]\)._
In the following theorem, we derive an upper bound on the delayed feedback capacity of the DEC for \(p=0.5\). This bound serves as a novel upper bound on the feedforward capacity, and it also demonstrates that feedback does increase the capacity of the DEC for \(p=0.5\).
**Theorem 13**.: _The capacity of the DEC with delayed feedback of two time instances is upper bounded by_
\[\mathrm{C}_{2}^{\text{fb}}(0.5)\leq\max_{a\in(0,0.5)}\frac{1}{4}\cdot\log_{2} \left(\frac{2-3a}{(1-2a)\cdot\big{(}1+8a^{2}\bar{a}-3a-(1-4a\bar{a})\sqrt{1+4a ^{3}}\big{)}}\right).\]
The proof of Theorem 13 is given in Appendix C. The upper bound is derived by using a particular \(Q\)-graph with eight nodes, which is given within the proof of the theorem.
In Fig. 7, we present bounds on the feedforward capacity of the DEC. In particular, the red line is the feedback capacity from Theorem 12, which serves as a non-trivial upper bound. The black line is an achievable rate from [49], obtained by considering a first-order Markov input process. Finally, the blue line shows our upper bound on the two time instances delayed feedback capacity. This bound is a numerical evaluation of Theorem 9 with the same \(Q\)-graph that is used for the proof of Theorem 13. Accordingly, for \(p=0.5\), the plot provides a numerical evaluation of the analytical upper bound in Theorem 13. In [28], the authors derived an upper bound on the feedforward capacity, which turned out to be exactly equal to the feedback capacity. This fact raised the question of whether the feedback capacity equals the feedforward capacity, since the former is only negligibly different from the first-order Markov achievable rate. Our numerical upper bound improves on the feedback capacity for every \(p\in(0,1)\), which indicates that feedback increases the capacity of the DEC over the entire region of the erasure parameter.
## VII Conclusions
In this paper, we investigated the delayed feedback capacity of FSCs. It was shown that the capacity of a FSC with delayed feedback can be computed as that of a new FSC with instantaneous feedback. Accordingly, several graph-based methods for obtaining computable bounds on the capacity of unifilar FSCs, which were introduced for the case of instantaneous feedback, could be adapted to the case of delayed feedback as well. Using these bounds, we established that the capacity of the trapdoor channel with delayed feedback of two time instances is equal to \(\log_{2}\left(\frac{3}{2}\right)\). In addition, we derived an upper bound on the delayed feedback capacity of the input-constrained BSC, which also serves as a novel upper bound on its feedforward capacity. Finally, we demonstrated that feedback increases the capacity of the DEC.
## Appendix A Trapdoor Channel -- Proof of Capacity (Theorem 10)
Proof.: The proof of the capacity result in Theorem 10 consists of two parts. In Section A-A, we prove the converse, that is, \(\mathrm{C}_{2}^{\mathrm{fb}}\leq\log_{2}\left(\frac{3}{2}\right)\), and in Section A-B, we show a corresponding lower bound. The proof is based on the methodology in Section III. As clarified, the upper and lower bounds hold for instantaneous feedback, but using the formulation in Section IV-A, we are able to transform the delayed feedback capacity into a capacity problem with instantaneous feedback.
We begin with presenting the formulation of the trapdoor channel with delayed feedback as a new unifilar FSC with instantaneous feedback. The new FSC (see Fig. 5) is defined as follows: the channel state consists of the pair of the previous channel state and channel input, that is, \(\tilde{s}_{t-1}\triangleq(s_{t-2},x_{t-1})\). The channel input is \(\tilde{x}_{t}=x_{t}\) and the channel output is \(\tilde{y}_{t}=y_{t-1}\). If \(\tilde{s}_{t-1}=(0,1)\) or \(\tilde{s}_{t-1}=(1,0)\), then we have a BSC\((0.5)\). Otherwise, for any \(\tilde{x}_{t}\), if \(\tilde{s}_{t-1}=(0,0)\) then \(\tilde{y}_{t}=0\), and if \(\tilde{s}_{t-1}=(1,1)\) then \(\tilde{y}_{t}=1\).
Fig. 7: Upper and lower bound on the capacity of the DEC. The feedback bound is the feedback capacity from [21]. The delayed feedback bound (blue line) is our upper bound. The black line is an achievable rate on the feedforward capacity.
### _Upper Bound_
Here, we will show that \(\mathrm{C}_{2}^{\mathrm{fb}}\leq\log_{2}\left(\frac{3}{2}\right)\). The proof is based on fixing a particular graph-based test distribution, and then solving the MDP problem of the dual capacity upper bound (for additional details, see Section III-B). The MDP formulation is presented in Table I.
Consider the \(Q\)-graph in Fig. 4, which consists of two nodes, and the following graph-based test distribution:
\[T(\tilde{y}=0|\underline{q})=\left[\frac{2}{3},\frac{1}{3}\right]. \tag{20}\]
To present the solution for the Bellman equation, define the constant
\[\rho^{*}=\log_{2}\left(\frac{3}{2}\right), \tag{21}\]
and the value function
\[h(\tilde{s},q)=\begin{cases}1,&(\tilde{s}=(0,0),q=2)\text{ or }(\tilde{s}=(1,1),q=1),\\ 0,&\text{else}.\end{cases} \tag{22}\]
**Remark 8**.: The conjectured solution \((\rho^{*},h(\cdot))\) has been obtained by using the value iteration algorithm with the MDP defined in Table I. Specifically, applying the value iteration algorithm provides the optimal policy for any possible MDP state. Then, it is only left to solve a finite set of linear equations in order to derive closed-form expressions for \(\rho^{*}\) and \(h(\cdot)\).
We proceed to show that \(\rho^{*}\) in (21) and the value function in (22) solve the Bellman equation. This directly implies, by Theorem 5, that \(\mathrm{C}_{2}^{\mathrm{fb}}\leq\rho^{*}=\log_{2}(3/2)\). For the MDP state \((\tilde{s}=(0,0),q=1)\), the right-hand side of the Bellman equation is a maximum over \(\tilde{x}\) of
\[D\left(P_{\tilde{Y}|\tilde{X},\tilde{S}}(\cdot|\tilde{x},\tilde{ s})\Big{\|}T_{\tilde{Y}|Q}(\cdot|q)\right)+\sum_{\tilde{y}\in\mathcal{Y}}P( \tilde{y}|\tilde{x},\tilde{s})h\left(\tilde{f}(\tilde{s},\tilde{x},\tilde{y}),\phi(q,\tilde{y})\right)\] \[\overset{(a)}{=}\begin{cases}D\left([1,0]\left\|\left[\frac{2}{ 3},\frac{1}{3}\right]\right)+h((0,0),1),&\text{if }\tilde{x}=0,\\ D\left([1,0]\left\|\left[\frac{2}{3},\frac{1}{3}\right]\right)+h((0,1),1),& \text{if }\tilde{x}=1.\end{cases}\] \[\overset{(b)}{=}\log_{2}\left(\frac{3}{2}\right),\quad\text{for all } \tilde{x} \tag{23}\]
where \((a)\) follows by the channel definition and our choice of the test distribution, and \((b)\) follows since according to (22) we have \(h((0,0),1)=h((0,1),1)=0\), and also \(D\left([1,0]\left\|\left[\frac{2}{3},\frac{1}{3}\right]\right)=\log_{2}(3/2)\). Further, the left-hand side of the Bellman equation is \(\rho^{*}+h((0,0),1)\), which is equal to \(\log_{2}\left(\frac{3}{2}\right)\) as well. Therefore, we can conclude that the Bellman equation holds for \((\tilde{s}=(0,0),q=1)\).
For the MDP state \((\tilde{s}=(0,0),q=2)\), the right-hand side of the Bellman equation is a maximum over \(\tilde{x}\) of
\[D\left(P_{\tilde{Y}|\tilde{X},\tilde{S}}(\cdot|\tilde{x},\tilde{ s})\Big{\|}T_{\tilde{Y}|Q}(\cdot|q)\right)+\sum_{\tilde{y}\in\mathcal{Y}}P( \tilde{y}|\tilde{x},\tilde{s})h\left(\tilde{f}(\tilde{s},\tilde{x},\tilde{y}),\phi(q,\tilde{y})\right)\] \[=\begin{cases}D\left([1,0]\left\|\left[\frac{1}{3},\frac{2}{3} \right]\right)+h((0,0),1),&\text{if }\tilde{x}=0,\\ D\left([1,0]\left\|\left[\frac{1}{3},\frac{2}{3}\right]\right)+h((0,1),1),& \text{if }\tilde{x}=1.\end{cases} \tag{24}\]
Also here, in both cases of \(\tilde{x}\), the equation is equal to \(\log_{2}\left(3\right)\), while the left-hand side of the Bellman equation is \(\rho^{*}+h((0,0),2)=\log_{2}(3)\). Thus, the Bellman equation holds for this case too. The verification for the remaining MDP states can be done similarly.
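These verifications can also be automated. The sketch below checks the Bellman equation (10) for all eight MDP states at once, using the channel of Fig. 5, the \(Q\)-graph of Fig. 4, the test distribution (20), and the conjectured pair \((\rho^{*},h)\) from (21)-(22):

```python
import numpy as np

log2 = np.log2
rho = log2(1.5)                                   # conjectured rho*, Eq. (21)
T = {1: np.array([2/3, 1/3]), 2: np.array([1/3, 2/3])}   # test dist., Eq. (20)
phi = lambda q, y: 1 if y == 0 else 2             # the Q-graph of Fig. 4

def P_y(st):                                      # channel law of Fig. 5
    return np.array([1.0, 0.0]) if st == (0, 0) else \
           np.array([0.0, 1.0]) if st == (1, 1) else np.array([0.5, 0.5])

f = lambda st, x, y: (st[0] ^ st[1] ^ y, x)       # unifilar update of new FSC
h = lambda st, q: 1.0 if (st, q) in {((0, 0), 2), ((1, 1), 1)} else 0.0

for st in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for q in (1, 2):
        p = P_y(st)
        D = sum(p[y] * log2(p[y] / T[q][y]) for y in (0, 1) if p[y] > 0)
        rhs = max(D + sum(p[y] * h(f(st, x, y), phi(q, y)) for y in (0, 1))
                  for x in (0, 1))
        assert abs(rho + h(st, q) - rhs) < 1e-12, (st, q)
print("Bellman equation verified for all 8 MDP states; rho* =", rho)
```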
### _Lower Bound_
The lower bound is derived using Theorem 8 with a particular graph-based encoder that satisfies the BCJR-invariant property. We show that the achievable rate induced by the graph-based encoder is \(R=\log_{2}\left(\frac{3}{2}\right)\), and therefore \(\mathrm{C}_{2}^{\mathrm{fb}}\geq\log_{2}\left(\frac{3}{2}\right)\).
A graph-based encoder consists of a \(Q\)-graph and an input distribution \(P_{X|S,Q}\) that is BCJR-invariant. We choose a \(Q\)-graph consisting of four nodes, and its evolution function is given by the vector representation
\[\begin{split}&\underline{\phi}(\underline{q},\tilde{y}=0)=[1,3,1,3] \\ &\underline{\phi}(\underline{q},\tilde{y}=1)=[2,4,2,4].\end{split} \tag{25}\]
For the \(Q\)-graph in (25), we define the following input distribution:
\[P_{\tilde{X}|\tilde{S},Q}(0|\tilde{s},q)=\begin{array}{|c|c|c|c|} \hline&\tilde{s}=(0,0)&\tilde{s}=(0,1)&\tilde{s}=(1,0)&\tilde{s}=(1,1)\\ \hline q=1&2/3&1/3&1/3&0\\ \hline q=2&1&2/3&0&1/3\\ \hline q=3&2/3&1&1/3&0\\ \hline q=4&1&2/3&2/3&1/3\\ \hline\end{array} \tag{26}\]
According to (3), the Markov transition probability can now be computed as
\[P(\tilde{s}^{+},q^{+}|\tilde{s},q)=\sum_{\tilde{x},\tilde{y}}P(\tilde{x}| \tilde{s},q)P(\tilde{y}|\tilde{x},\tilde{s})\mathbb{1}_{\{q^{+}=\phi(q,\tilde{ y})\}}\mathbb{1}_{\{\tilde{s}^{+}=\tilde{f}(\tilde{s},\tilde{x},\tilde{y})\}}.\]
Consequently, standard computation of the stationary distribution \(\pi(\tilde{s},q)\) provides that
\[\pi_{\tilde{S},Q}(\tilde{s},q)=\begin{array}{|c|c|c|c|c|}\hline&\tilde{s}=(0,0)&\tilde{s}=(0,1)&\tilde{s}=(1,0)&\tilde{s}=(1,1)\\ \hline q=1&1/6&1/12&1/36&1/18\\ \hline q=2&1/36&1/18&0&1/12\\ \hline q=3&1/12&0&1/18&1/36\\ \hline q=4&1/18&1/36&1/12&1/6\\ \hline\end{array}.\]
We now verify that the proposed graph-based encoder satisfies the BCJR-invariant property in (5). Let us show this explicitly for the case where \((q,\tilde{y})=(1,1)\) and \(\tilde{s}^{+}=(0,0)\). Since \(\phi(1,1)=2\), the left-hand side of Eq. (5) is equal to \(\pi_{\tilde{S}|Q}((0,0)|2)\), while the right-hand side is equal to
\[\frac{\sum_{x,s}\pi_{\tilde{S}|Q}(s|1)P_{\tilde{X}|\tilde{S},Q}(x |s,1)P_{\tilde{Y}|\tilde{X},\tilde{S}}(1|x,s)\mathbb{1}_{\{(0,0)=\tilde{f}(x,1,s)\}}}{\sum_{x^{\prime},s^{\prime}}\pi_{\tilde{S}|Q}(s^{\prime}|1)P_{\tilde{ X}|\tilde{S},Q}(x^{\prime}|s^{\prime},1)P_{\tilde{Y}|\tilde{X},\tilde{S}}(1|x^{ \prime},s^{\prime})}\] \[=\frac{1}{6},\]
which, indeed, is equal to \(\pi_{\tilde{S}|Q}((0,0)|2)\), as required. The verification of the other cases can be done similarly.
Finally, the achievable rate of the graph-based encoder is
\[R =I(\tilde{S};\tilde{Y}|Q)\] \[=\sum_{q\in\mathcal{Q}}\pi_{Q}(q)\cdot I(\tilde{S};\tilde{Y}|Q=q)\] \[=\sum_{q\in\mathcal{Q}}\pi_{Q}(q)\cdot\left[H_{2}\left(\tilde{Y}| Q=q\right)-H_{2}(\tilde{Y}|\tilde{S},Q=q)\right]\] \[\stackrel{{(a)}}{{=}}\sum_{q\in\mathcal{Q}}\pi_{Q}(q) \cdot\left[H_{2}\left(\frac{2}{3}\right)-H_{2}(\tilde{Y}|\tilde{S},Q=q)\right]\]
\[=H_{2}\left(\frac{2}{3}\right)-\sum_{q\in\mathcal{Q}}\pi_{Q}(q)\cdot H _{2}(\tilde{Y}|\tilde{S},Q=q)\] \[=H_{2}\left(\frac{2}{3}\right)-\frac{1}{3}\] \[=\log_{2}\left(\frac{3}{2}\right),\]
where \((a)\) follows due to the fact that
\[P_{\tilde{Y}|Q}(0|q) =\sum_{\tilde{x},\tilde{s}}\pi(\tilde{s}|q)P(\tilde{x}|\tilde{s},q)P_{\tilde{Y}|\tilde{X},\tilde{S}}(0|\tilde{x},\tilde{s})\] \[=\begin{cases}2/3,&q=1\text{ or }q=3,\\ 1/3,&q=2\text{ or }q=4.\end{cases}\]
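The entire lower-bound computation can also be reproduced numerically: build the \((\tilde{S},Q)\) chain from the \(Q\)-graph (25) and the input distribution (26), compute its stationary distribution, and evaluate \(I(\tilde{S};\tilde{Y}|Q)\). A sketch, using our own indexing conventions:

```python
import numpy as np

S = [(0, 0), (0, 1), (1, 0), (1, 1)]              # new states (s_{t-2}, x_{t-1})
phi0, phi1 = [1, 3, 1, 3], [2, 4, 2, 4]           # Q-graph (25), 1-indexed
P0 = np.array([[2/3, 1/3, 1/3, 0.0],              # P(x=0 | s, q) from (26);
               [1.0, 2/3, 0.0, 1/3],              # rows q=1..4, columns follow S
               [2/3, 1.0, 1/3, 0.0],
               [1.0, 2/3, 2/3, 1/3]])

def P_y(st):                                      # channel law of Fig. 5
    return {(0, 0): [1.0, 0.0], (1, 1): [0.0, 1.0]}.get(st, [0.5, 0.5])

T = np.zeros((16, 16))                            # Markov chain on (s, q)
for i, st in enumerate(S):
    for q in range(4):
        for x in (0, 1):
            px = P0[q][i] if x == 0 else 1 - P0[q][i]
            for y in (0, 1):
                nxt = (st[0] ^ st[1] ^ y, x)      # f_tilde of the new FSC
                qn = (phi0 if y == 0 else phi1)[q] - 1
                T[i * 4 + q, S.index(nxt) * 4 + qn] += px * P_y(st)[y]

pi = np.linalg.matrix_power(T.T, 200) @ (np.ones(16) / 16)   # stationary dist.

H2 = lambda p: 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)
R = 0.0
for q in range(4):                                # R = I(S;Y|Q) per Theorem 8
    pq = pi[q::4].sum()
    py0 = sum(pi[i*4+q] * P_y(S[i])[0] for i in range(4)) / pq
    R += pq * (H2(py0) - sum(pi[i*4+q]/pq * H2(P_y(S[i])[0]) for i in range(4)))
print(R, np.log2(1.5))                            # both ~ 0.5850
```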
## Appendix B Input-Constrained BSC -- Proof of Theorem 11
Proof.: Here we provide the proof of Theorem 11 regarding the upper bound on the capacity of the \((1,\infty)\)-input constrained BSC(\(p\)). We begin with the formulation of the channel with delayed feedback of two time instances as a new unifilar FSC with instantaneous feedback. The channel state is defined as \(\tilde{s}_{t-1}\triangleq(x_{t-2},x_{t-1})\), the channel input is \(\tilde{x}_{t}=x_{t}\), and the channel output is \(\tilde{y}_{t}=y_{t-1}\). If \(\tilde{s}_{t-1}=(0,0)\) or \(\tilde{s}_{t-1}=(1,0)\), then \(\tilde{y}_{t}=0\) with probability \(1-p\) or \(\tilde{y}_{t}=1\) with probability \(p\). Otherwise, if \(\tilde{s}_{t-1}=(0,1)\), then \(\tilde{y}_{t}=0\) with probability \(p\) or \(\tilde{y}_{t}=1\) with probability \(1-p\). Due to the input constraint, if \(\tilde{s}_{t-1}=(0,1)\), then the transmitted input \(\tilde{x}_{t}\) must be zero.
For a particular graph-based test distribution, we solve the MDP problem of the dual capacity upper bound. Here too, consider the \(Q\)-graph in Eq. (25), and the following parameterized graph-based test distribution:
\[T(\tilde{y}=0|\underline{q})=\left[a,b,c,d\right], \tag{27}\]
where \((a,b,c,d)\in(0,1)^{4}\). Define the constant
\[\rho^{*}=\log_{2}\left(\frac{p^{p}\bar{p}^{\bar{p}}a^{(p^{3}-3p^{2}+3p-1)}(\bar{b}\bar{c}d)^{(p^{3}-p^{2})}}{(\bar{a}bc)^{(p^{3}-2p^{2}+p)}\bar{d}^{p^{3}}}\right). \tag{28}\]
Further, define the value function \(h(\tilde{s},q)\) as follows:
\[h((0,0),1) =h((1,0),1)=\log_{2}\left(\frac{\bar{c}^{p}d\bar{d}^{p}c^{1-2p}}{ \bar{a}^{2p}b^{p}a^{2-3p}}\cdot\left(\frac{\bar{a}bc\bar{d}}{a\bar{b}\bar{c}d }\right)^{p^{2}}\right)\] \[h((0,0),2) =h((1,0),2)=\log_{2}\left(\frac{d}{b}\cdot\left(\frac{b\bar{d}}{ \bar{b}d}\right)^{p}\right)\] \[h((0,0),3) =h((1,0),3)=\log_{2}\left(\frac{a^{2p-1}d\bar{d}^{p}}{(\bar{a}bc )^{p}}\cdot\left(\frac{\bar{a}bc\bar{d}}{a\bar{b}\bar{c}d}\right)^{p^{2}}\right)\] \[h((0,0),4) =h((1,0),4)=0\] \[h((0,1),1) =\log_{2}\left(\frac{a^{6p^{2}-6p+1}\bar{b}^{2p^{2}-p}\bar{c}^{2 p^{2}}d^{2p^{2}-p+1}\bar{d}^{p}}{\bar{a}^{4p^{2}-2p+1}b^{4p^{2}-3p+1}c^{4p^{2}-2p}} \cdot\left(\frac{\bar{a}bc\bar{d}}{a\bar{b}\bar{c}d}\right)^{2p^{3}}\right)\]
\[h((0,1),2) =\log_{2}\left(\frac{a^{5p^{2}-4p+1}(\bar{a}cd)^{p}(\bar{b}\bar{c}d\bar{d})^{p^{2}}}{\bar{b}^{1-p}(\bar{a}bc)^{3p^{2}}}\cdot\left(\frac{\bar{a}bc\bar{d}}{a\bar{b}\bar{c}d}\right)^{2p^{3}}\right)\] \[h((0,1),3) =\log_{2}\left(\frac{a^{6p^{2}-5p+1}\bar{b}^{2p^{2}-p}\bar{c}^{2p^{2}+p-1}d^{2p^{2}-p+1}\bar{d}^{p}}{\bar{a}^{4p^{2}-p}b^{4p^{2}-3p+1}c^{4p^{2}-p}}\cdot\left(\frac{\bar{a}bc\bar{d}}{a\bar{b}\bar{c}d}\right)^{2p^{3}}\right)\] \[h((0,1),4) =\log_{2}\left(\frac{a^{5p^{2}-4p+1}(\bar{b}\bar{c}d\bar{d})^{p^{2}}}{(\bar{a}bc)^{3p^{2}-p}\bar{d}^{1-p}}\cdot\left(\frac{\bar{a}bc\bar{d}}{a\bar{b}\bar{c}d}\right)^{2p^{3}}\right). \tag{29}\]
To complete the proof it is left to show that, under the constraints given in (17), the scalar \(\rho^{*}\) in (28) and the value function in (29) solve the Bellman equation. For the MDP state \((\tilde{s}=(0,0),q=1)\), the right-hand side of the Bellman equation is a maximum over \(\tilde{x}\) of
\[D\left(P_{\tilde{Y}|\tilde{X},\tilde{S}}(\cdot|\tilde{x},\tilde{ s})\Big{\|}T_{\tilde{Y}|Q}(\cdot|q)\right)+\sum_{\tilde{y}\in\mathcal{Y}}P( \tilde{y}|\tilde{x},\tilde{s})h\left(\tilde{f}(\tilde{s},\tilde{x},\tilde{y}),\phi(q,\tilde{y})\right)\] \[=\begin{cases}D\left([\bar{p},p]\right)\left\|[a,\bar{a}] \right)+\bar{p}\cdot h((0,0),1)+p\cdot h((0,0),2),&\text{if }\tilde{x}=0\\ D\left([\bar{p},p]\right)\left\|[a,\bar{a}]\right)+\bar{p}\cdot h((0,1),1)+p \cdot h((0,1),2),&\text{if }\tilde{x}=1.\end{cases} \tag{30}\]
Under the constraints given in (17), it can be verified that \(\tilde{x}=0\) attains the maximum in (30). Further, the left-hand side of the Bellman equation is \(\rho^{*}+h((0,0),1)\), which, after being simplified, is exactly equal to the right-hand side of the Bellman equation. Hence, the Bellman equation holds for the case that \((\tilde{s}=(0,0),q=1)\). The verification for the remaining MDP states is omitted here and follows similar calculations.
## Appendix C DEC -- Proof of Theorem 13
Proof.: Here we provide the proof of Theorem 13 regarding the upper bound on the capacity of the DEC. As before, we start with the formulation of the channel with delayed feedback of two time instances as a new unifilar FSC with instantaneous feedback. The channel state is defined as \(\tilde{s}_{t-1}\triangleq(x_{t-2},x_{t-1})\), the channel input is \(\tilde{x}_{t}=x_{t}\), and the channel output is \(\tilde{y}_{t}=y_{t-1}\). The output of the DEC is \(\tilde{y}_{t}=x_{t-1}-x_{t-2}\) with probability \(1-p\), or \(\tilde{y}_{t}=?\) with probability \(p\), where \(p\in[0,1]\) is the channel parameter.
Also here, for a particular graph-based test distribution, we solve the MDP problem of the dual capacity upper bound. Specifically, consider the following \(Q\)-graph:
\[\underline{\phi}(\underline{q},\tilde{y}=-1) =[1,1,1,1,1,1,1,1]\] \[\underline{\phi}(\underline{q},\tilde{y}=0) =[1,3,3,4,4,6,8,8]\] \[\underline{\phi}(\underline{q},\tilde{y}=1) =[6,6,6,6,6,6,6,6]\] \[\underline{\phi}(\underline{q},\tilde{y}=?) =[2,7,7,7,7,5,7,7]. \tag{31}\]
For \(a,\gamma_{1},\gamma_{2}\in(0,0.5)\) and the \(Q\)-graph in (31), consider the graph-based test distribution \(T_{Y|Q}(y|q)\) that is defined by the following table:
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \(q=1\) & \(q=2\) & \(q=3\) & \(q=4\) & \(q=5\) & \(q=6\) & \(q=7\) & \(q=8\) \\ \hline \(y=-1\) & \(0\) & \(\gamma_{2}/2\) & \(a/2\) & \(a/2\) & \(\gamma_{2}/2\) & \(0.5-\gamma_{1}\) & \(\gamma_{2}/2\) & \(a/2\) \\ \hline \(y=0\) & \(\gamma_{1}\) & \(0.5-\gamma_{2}\) & \(0.5-a\) & \(0.5-a\) & \(0.5-\gamma_{2}\) & \(\gamma_{1}\) & \(0.5-\gamma_{2}\) & \(0.5-a\) \\ \hline \(y=1\) & \(0.5-\gamma_{1}\) & \(\gamma_{2}/2\) & \(a/2\) & \(a/2\) & \(\gamma_{2}/2\) & \(0\) & \(\gamma_{2}/2\) & \(a/2\) \\ \hline \(y=?\) & \(0.5\) & \(0.5\) & \(0.5\) & \(0.5\) & \(0.5\) & \(0.5\) & \(0.5\) & \(0.5\) \\ \hline \end{tabular}
The proposed graph-based test distribution follows by first numerically optimizing over the test distribution. Then, we observed that the optimal test distribution can be represented by three parameters, which are denoted here as \(\gamma_{1},\gamma_{2}\), and \(a\).
Define the constant
\[\rho^{*}=\frac{1}{4}\log_{2}\left(\frac{2-3a}{(1-2a)\cdot\left(1+8a^{2}\bar{a}-3 a-(1-4a\bar{a})\sqrt{1+4a^{3}}\right)}\right). \tag{32}\]
Also, define \(h(\tilde{s},q)\) as follows:
\[h((0,0),1) =h((0,0),6)=\frac{1}{4}\log_{2}\left(\frac{a(1-2a)\gamma_{2}}{4(1 -2\gamma_{2})\gamma_{1}^{2}}\right)\] \[h((0,0),2) =h((0,0),5)=h((0,0),7)=\frac{1}{4}\log_{2}\left(\frac{4a\gamma_{1 }^{2}\gamma_{2}}{(1-2a)(1-2\gamma_{2})^{3}}\right)\] \[h((0,0),3) =h((0,0),4)=h((0,0),8)=\frac{1}{4}\log_{2}\left(\frac{4a\gamma_{1 }^{2}\gamma_{2}}{(1-2a)^{3}(1-2\gamma_{2})}\right)\] \[h((0,1),1) =\frac{1}{4}\log_{2}\left(\frac{a(1-2a^{2})}{(1-2\gamma_{1})^{3}}\right)\] \[h((0,1),2) =h((0,1),5)=h((0,1),7)=\frac{1}{4}\log_{2}\left(\frac{a(1-2a)^{2} }{\gamma_{2}^{2}(1-2\gamma_{1})}\right)\] \[h((0,1),3) =h((0,1),4)=h((0,1),8)=\frac{1}{4}\log_{2}\left(\frac{(1-2a)^{2} }{a(1-2\gamma_{1})}\right)\] \[h((0,1),6) =\frac{1}{4}\log_{2}\left(\frac{a(1-2a)^{2}}{1-2\gamma_{1}}\right)\] \[h((1,0),1) =\frac{1}{4}\log_{2}\left(\frac{a\gamma_{2}(1-2a)}{1-2\gamma_{2}}\right)\] \[h((1,0),2) =h((1,0),5)=h((1,0),7)=\frac{1}{4}\log_{2}\left(\frac{a(1-2a)}{ \gamma_{2}(1-2\gamma_{2})}\right)\] \[h((1,0),3) =h((1,0),4)=h((1,0),8)=\frac{1}{4}\log_{2}\left(\frac{a(1-2\gamma _{2})}{\gamma_{2}(1-2a)}\right)\] \[h((1,0),6) =\frac{1}{4}\log_{2}\left(\frac{a\gamma_{2}(1-2a)}{(1-2\gamma_{1 })^{2}(1-2\gamma_{2})}\right)\] \[h((1,1),1) =\log_{2}\left(\frac{1-2a}{2\gamma_{1}}\right)\] \[h((1,1),2) =h((1,1),5)=h((1,1),7)=\frac{1}{2}\log_{2}\left(\frac{1-2a}{1-2 \gamma_{2}}\right)\] \[h((1,1),3) =h((1,1),4)=h((1,1),8)=0\] \[h((1,1),6) =\frac{1}{4}\log_{2}\left(\frac{a(1-2a)^{2}}{4\gamma_{1}^{2}(1- 2\gamma_{1})}\right). \tag{33}\]
Let us assume that the optimal policy is given by
\[u^{*}(\tilde{s},q)=\begin{cases}1,&(\tilde{s}=(1,1),q=1),\\ 0,&\textit{else}.\end{cases} \tag{34}\]
The policy above was obtained by solving numerically the MDP problem using the value iteration algorithm. Assuming (34), it can be noted that the Bellman equation is based on a finite set of linear equations, and it can be verified that if
\[\gamma_{1} =\frac{1}{4a}\left((2-4a)\cdot\sqrt{a^{2}+0.25}+4a\bar{a}-1\right)\] \[\gamma_{2} =\frac{1}{4-6a}\left((4a^{2}-4a+1)\sqrt{1+4a^{2}}-8a^{2}\bar{a}+1 \right), \tag{35}\]
then the Bellman equation holds under our choice of \(\rho^{*}\) in (32) and the function \(h(\tilde{s},q)\) in (33). The verification follows from straightforward calculations, as we did in the previous sections, and therefore the details are omitted here. In Eq. (35), we write analytical expressions for \(\gamma_{1},\gamma_{2}\) as a function of \(a\). These expressions were derived by observing that, for particular MDP states, the optimal solution (in terms of the test distribution's parameters) is achieved when the right-hand side of the Bellman equation does not depend on the action. Namely, for particular MDP states we require that the right-hand side of the Bellman equation is equal for \(u=0\) and \(u=1\). Such a requirement results in linear equality constraints that are satisfied with \(\gamma_{1}\) and \(\gamma_{2}\) in (35).
|
2309.15577 | An Evaluation of ChatGPT-4's Qualitative Spatial Reasoning Capabilities in RCC-8 | Qualitative Spatial Reasoning (QSR) is a well-explored area of Commonsense Reasoning and has multiple applications ranging from Geographical Information Systems to Robotics and Computer Vision. Recently many claims have been made for the capabilities of Large Language Models (LLMs). In this paper we investigate the extent to which one particular LLM can perform classical qualitative spatial reasoning tasks on the mereotopological calculus, RCC-8. | Anthony G Cohn | 2023-09-27T11:23:15Z | http://arxiv.org/abs/2309.15577v1 | # An Evaluation of ChatGPT-4's Qualitative Spatial Reasoning Capabilities in RCC-8

###### Abstract
Qualitative Spatial Reasoning (QSR) is a well-explored area of Commonsense Reasoning and has multiple applications ranging from Geographical Information Systems to Robotics and Computer Vision. Recently many claims have been made for the capabilities of Large Language Models (LLMs). In this paper we investigate the extent to which one particular LLM can perform classical qualitative spatial reasoning tasks on the mereotopological calculus, RCC-8.
## Introduction
Qualitative Spatial Reasoning (QSR1) [14] is a well-developed field concerned with the representation of qualitative spatial information and reasoning with it. In natural language, spatial information is usually represented qualitatively (using prepositions such as _on, in, left of, part of, under, touching,..._) and many calculi have been developed to represent such information. There are calculi for mereological relations (such as RCC-5 [17]), mereotopological relations (such as RCC-8 [18, 19]), directions (such as OPRA [19]), and size [1], for example, as well as calculi combining two different aspects of spatial information, such as the Rectangle Algebra [10, 2], which can represent both mereotopological and directional information. What is common to all these calculi is that they consist of a set of _jointly exhaustive and pairwise disjoint_ (JEPD) _base_ relations. For example, RCC-8 contains eight JEPD _base_ relations, illustrated in 2D in Fig. 1.
Footnote 1: We may use QSR as shorthand for both Qualitative Spatial Reasoning and Qualitative Spatial Representation; context should usually make clear which is intended.
_Large Language Models_ (LLMs) [14, 15], such as ChatGPT-4 [16] are a recent example of so called _Foundation Models_ which have been trained on very large textual corpora in order to generate text in response to a prompt. This is not the place to survey this burgeoning field, but we note that many claims have been made for the power and apparent intelligent behaviour that these models can display. In particular their performance on some benchmarks may lead one to believe that they possess, at least to some degree, the ability to perform commonsense reasoning. Spatial reasoning is usually regarded as one core aspect of common sense so it is natural to ask whether LLMs can reason about qualitative spatial information. This is the question that we address here.
In earlier work [14] we used extended dialogues with an LLM to try to map the boundaries of spatial commonsense in some LLMs, addressing a variety of spatial challenges, and examining not only the response given but also the explanation/justification of the response; however, we did not specifically focus on existing QSRs, though some questions were asked which do correspond to particular reasoning steps in an existing QSR. Here we focus on one specific QSR, ask to what extent an LLM can perform reasoning in that calculus, and conduct a more exhaustive evaluation, examining the ability to perform compositions between relations and to reason about the conceptual neighbourhood diagram of the calculus. Weaknesses in the reasoning powers of LLMs have previously been noted (e.g. [11]), so one might not expect LLMs to perform well in this regard. On the other hand, there are a large number of papers about QSR in the literature and these are likely to have formed part of the training corpus of an LLM, and thus might facilitate correctly responding to prompts - though the information concerning the actual reasoning steps is often given in tables (in particular _composition tables_ - see below) and thus might be hard for LLM training procedures to process well.

Figure 1: The eight relations of the RCC-8 calculus illustrated in 2D.
There are now many LLMs in the literature. Some of these are open source and are explicit about the training corpus; others are closed and give no specific information about the training, or the precise corpus, such as the GPT family of LLMs. Nevertheless, since we observed previously [1] that ChatGPT-4 and GPT4 were the most performant for spatial reasoning, we use ChatGPT-4 as the LLM with which we perform our experiments. In each of the experiments below, an initial prompt gave the problem setting and the task to be performed. Subsequent prompts in the conversation probed about one specific inference (e.g. one cell in a composition table). Each experiment was a separate conversation and the version number of ChatGPT-4 was given as "May 24" (presumably, the version of May 24 2023). The conversations can be found at the following location: [https://tinyurl.com/qr23sup](https://tinyurl.com/qr23sup).
## 6 Compositional Reasoning
The most researched form of reasoning with Qualitative Spatial Relations is that of composition: i.e. given two facts R1(\(x,y\)) and R2(\(y,z\)), what relations are possible between \(x\) and \(z\)? In general more than one base relation is possible, so the answer may take the form of a disjunction of relations, as can be seen in the composition table for RCC-8 in Fig. 2. Given the ubiquity of RCC-8 in the QSR literature, in this paper we focus on the abilities of an LLM to reason with RCC-8.
### Experiment 1: Compositional Reasoning in RCC-8
In this experiment the standard names of the eight relations were given as well as a sentence stating that the relations were part of the RCC-8 calculus:
_Consider the following set of eight pairwise disjoint and mutually exhaustive binary spatial relations._
_These relations form part of the well known RCC-8 qualitative spatial reasoning calculus. DC(x,y) means that \(x\) and \(y\) are disconnected and share no spatial parts. EC(x,y) means that \(x\) and \(y\) touch at a boundary but do not share any interior parts. PO(x,y) means that \(x\) and \(y\) share a spatial part, but neither is part of the other. TPP(x,y) means that \(x\) is part of \(y\) and touches y's boundary. NTPP(x,y) means that \(x\) is part of \(y\) but does not touch y's boundary. TPPi(x,y) is the same as TPP(y,x). NTPPi(x,y) is the same as NTPP(y,x). Finally, EQ(x,y) means that \(x\) and \(y\) are coincident. I will now ask you a series of questions about these relations. There may be more than one possible relation, in which case name all of the possible answers. If all eight relations are possible, just say ALL._
As an example of a prompt and ChatGPT-4's response, consider the first one:
**Prompt:**_If DC(x,y) and DC(y,z) then what are the possible relationships between \(x\) and \(z\)._ **Response:**_If DC(x,y) and DC(y,z), then \(x\) and \(z\) can either be disconnected (DC) or externally connected (EC). This is because the disconnection between \(x\) and \(y\), and \(y\) and \(z\), only guarantees that neither \(x\) nor \(z\) is connected to \(y\). It does not preclude the possibility that \(x\) and \(z\) themselves might be disconnected or touching at a boundary. However, the other relations like part-of (TPP, NTPP) and their inverses (TPPi, NTPPi), partial overlap (PO), and equality (EQ) are not possible because they would imply some level of connection between \(x\) and \(z\) through \(y\), which contradicts our initial conditions. So, the possible relationships between \(x\) and \(z\) are: DC(x,z): \(x\) and \(z\) are disconnected. EC(x,z): \(x\) and \(z\) are externally connected._
Notice that in fact the composition of DC with DC is the union of all eight RCC-8 relations, so the reasoning is incorrect here.
The results of the 7*7=49 prompts issued (compositions involving EQ were omitted since they should be trivial) are summarised in Fig. 3. There are a total of 85 relations correctly predicted (those in black) and 197 relations which are correctly not predicted (those absent from both the predicted and the true table). Thus there are a total of 282 correctly predicted (both present and missing) relations, i.e. 71.94% of the total possible (7*7*8 = 392). 11 of the cells (out of 49) are completely correct. This is clearly much better than chance but far from a correct composition table. Fig. 4 shows the absolute numbers and relative percentages of the 8 relations in the proportion that they are correctly or incorrectly predicted. As can be seen, DC is never missed as a correct prediction, and PO is never incorrectly predicted as present. DC has the highest accuracy, which is perhaps not surprising since DC might be regarded as intuitively easier to reason about (and this fits with the predictions about human preferred models in the section below). Including both correctly predicted and correctly missing relationships, all eight relations have broadly similar accuracies.
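These counts can be reproduced mechanically by comparing a predicted cell against the ground-truth cell, relation by relation. A minimal sketch of the counting scheme follows; the example cell is the DC/DC case discussed above, and the final line checks the overall percentage reported here.

```python
ALL = {"DC", "EC", "PO", "TPP", "NTPP", "TPPi", "NTPPi", "EQ"}

def score_cell(predicted: set, truth: set) -> tuple[int, int]:
    """Count correctly-present and correctly-absent relations in one cell."""
    correct_present = len(predicted & truth)
    correct_absent = len(ALL - (predicted | truth))
    return correct_present, correct_absent

# Illustrative cell: ChatGPT-4 predicted {DC, EC} for DC o DC; the truth is ALL.
present, absent = score_cell({"DC", "EC"}, ALL)
print(present, absent)  # 2 correctly present, 0 correctly absent

# Summed over all 49 cells this yields the totals reported above:
print((85 + 197) / (7 * 7 * 8))  # -> 0.7194...
```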
In order to test whether the result was influenced by prior knowledge of RCC-8 gained as part of its training, we also performed the same experiment, but with all the relation names prefixed by an X to disguise the connection to RCC-8. The prompt was the same as above except for the change of relation names and the omission of the second sentence. The results are given in Fig. 5, while Fig. 6 shows the absolute numbers and relative percentages of the 8 relations in the proportion that they are correctly or incorrectly predicted. As can be seen, DC again is never missed as a correct prediction, and EC is only missed twice; again PO is never incorrectly predicted as present. As before, DC, EC, and PO have the highest accuracies, along with EQ, but EQ is never predicted as present correctly, only incorrectly. The overall average of correctly predicted relations (present and missing) drops from 71.94% in the non-anonymised case above to 67.09%, so there is some loss of performance, though whether this is due to the anonymisation of the relations or the stochasticity of ChatGPT-4 is not clear.
### Experiment 2: Preferred Compositions in RCC-8
As noted above, in general a composition of two relations will yield more than one possible base relation, but it turns out that humans tend to have a "preferred" relation. For example, Ragni et al (2007) report on experiments performed on native German speakers and native Mongolian speakers for RCC-8. In their experiments the relations were described, but the human subjects were not allowed to draw possible configurations, so the setting is essentially equivalent to an LLM setting.
Given that humans may struggle to see all the possible relations2, determining whether there is agreement about the most preferred is a good question to ask. It turns out that there is good agreement in general across and within the two cultures, with the percentage of people agreeing on the same preferred relation ranging from 30% to 87.5% (a random choice would yield 12.5% on average since there are eight relations to choose from). (They did not query cases where the composition yields a unique relation, nor did they consider EQ as one of the two relations as this should be a trivial task.) This agreement is perhaps surprising since the two languages are linguistically very different. Ragni et al (2007) do report some differences though - for example, although both language speakers preferred DC whenever it was consistent, Mongolians preferred PO over NTPPi whereas for Germans the converse was true. Both cultures only chose EQ when composing a relation with its inverse (e.g. TPP with TPPi).
Footnote 2: The fact that some humans may struggle to compute the composition table does not stop it being a valid question to see if an LLM can determine the correct entries.
The theory of _preferred mental models_ (Knauff, Rauh, and Schlieder 1995) states that people construct the simplest (computationally cheapest) model consistent with the premises. Their experiments showed that humans prefer models with the smallest overlapping complexity, which explains the preference for DC noted above.
Given the difficulty reported in Experiment 1 of ChatGPT-4 correctly inferring all possible relations in a composition, asking this more specific query seems a natural task to try; this gives some measure of whether the cognitive preferences of humans correspond at all to the choices made by ChatGPT-4.

Figure 2: The RCC-8 Composition Table (Cohn et al. 1997)

Figure 3: The Composition Table for RCC-8 produced by ChatGPT-4. The entry in each cell uses the following coding: D (DC), E (EC), P (PO), T (TPP), N (NTPP), t (TPPi), n (NTPPi), Q (EQ). Black means that the relation is correctly predicted (85 times), red means that the relation is incorrectly predicted (61 times), blue means that the relation was incorrectly not predicted (49 times).

Figure 4: Relation statistics for the Composition Table for RCC-8 produced by ChatGPT-4. The upper chart shows the absolute number of relations, and the lower the relative percentage for each relation.
The initial prompt given to ChatGPT-4 was essentially the same as the initial prompt above when querying about composition, except for the final instruction to give only the single most preferred relationship (the full prompt is given below). In its responses, sometimes ChatGPT-4 gave a clear preference for one relation, for example:
**Prompt:**_If DC(x,y) and DC(y,z) then what is your preferred relationship between x and z?_ **Response:**_The relations DC(x,y) and DC(y,z) mean that region x is disconnected from region y and region y is disconnected from region z. These relationships don't provide any information about the spatial relationship between x and z. x and z could be disconnected, they could touch at a boundary, or they could partially overlap, among other possibilities. However, in absence of any other information and considering spatial continuity, I would say the most likely relation is DC(x, z), assuming x and z are also disconnected. Please note that this is just a statistical assumption; it's possible for x and z to have any of the other relations defined in RCC-8._
In other cases ChatGPT-4 justified its choice by saying it was the "cautious" choice, or the "safest choice". The humans in the [10] experiment were not asked to justify their choices, but the claim made by the authors of that paper noted above is that choices were based on computational/simplicity considerations - which differs from the reasons claimed here by ChatGPT-4.
The full initial prompt given to ChatGPT-4 was:
_Consider the following set of eight pairwise disjoint and mutually exhaustive binary spatial relations, between two one piece spatial 2D regions. These relations form part of the well known RCC-8 qualitative spatial reasoning calculus. DC(x,y) means that x and y are disconnected and share no spatial parts. EC(x,y) means that x and y touch at a boundary but do not share any interior parts. PO(x,y) means that x and y share a spatial part, but neither is part of the other. TPP(x,y) means that x is part of y and touches y's boundary. NTPP(x,y) means that x is part of y but does not touch y's boundary. TPPi (x,y) is the same as TPP(y,x). NTPPi(x,y) is the same as NTPP(y,x). Finally, EQ(x,y) means that x and y are coincident. I will now ask you a series of questions about these relations. It is possible that in some cases there could be more than one relation that might hold; in these cases just give me your single most preferred relationship._
As can be seen in Fig. 7, ChatGPT-4 only agreed with the average human on 20/49 (40.82%) of the compositions, though in a further three cases it agreed with one of the language groups (twice Mongolian, once German), giving a total of 23/49 (46.93%). In seven cases it actually predicted an impossible relation as its preferred composition3, and in all cases where the composition is unique it failed to note that uniqueness. In all the other cases, 18/49 (36.63%), its preferred relation was possible but not preferred by either language group or overall.
Footnote 3: Sometimes the humans in the [10] experiment also predicted an impossible relation as can be seen in Table 2 of their publication.
As can be seen, sometimes ChatGPT-4 made a logically inconsistent prediction, for example with EC/NTPPi; looking at ChatGPT-4's response in this case, we can see that this is because it seems to misunderstand the meaning of NTPPi:
_If EC(x,y) and NTPPi(y,z), this means that region x touches the boundary of region y but does not overlap with it, and region y is a part of region z but does not touch \(z\)'s boundary, with the perspective of \(y\) being contained in \(z\). In this scenario, it's likely that \(x\) and \(z\) are either disconnected (DC) or touch at the boundary (EC). Since \(y\), which is part of \(z\) but does not touch \(z\)'s boundary, is touching \(x\) at the boundary, there's a possibility for indirect spatial contact between \(x\) and \(z\). Without additional context, it's safer to assume that \(x\) and \(z\) touch at the boundary (EC). However, this is just an assumption, and the actual relationship between \(x\) and \(z\) could be different depending on the specific spatial arrangement._

Figure 5: The Composition Table for RCC-8 produced by ChatGPT-4 using anonymised relations. The entry in each cell uses the following coding: D (DC), E (EC), P (PO), T (TPP), N (NTPP), t (TPPi), n (NTPPi), Q (EQ). Black means that the relation is correctly predicted (95 times), red means that the relation is incorrectly predicted (57 times), blue means that the relation was incorrectly not predicted (72 times).

Figure 6: Relation statistics for the Composition Table for RCC-8 with anonymised relations produced by ChatGPT-4. The upper chart shows the absolute number of relations, and the lower the relative percentage for each relation.
In a different case, TPPi/TPPi, the initial understanding of the relations is correct, inferring that \(z\) is part of \(x\), but then ChatGPT-4 makes a wrong step in the final part of its reasoning, and claims that TPP\((x,z)\):
_If TPPi(x,y) and TPPi(y,z), this means that region \(y\) is a part of region \(x\) and touches \(x\)'s boundary, and region \(z\) is a part of region \(y\) and touches \(y\)'s boundary. Given these conditions, it's likely that \(z\) is a part of \(x\) and touches \(x\)'s boundary (TPP). This is because \(z\), which is a part of \(y\) and touches \(y\)'s boundary, is also indirectly a part of \(x\) through \(y\), which is a part of \(x\) and touches \(x\)'s boundary. Therefore, my preferred relationship between \(x\) and \(z\) would be TPP(x,z), assuming that \(z\) is a part of \(x\) and touches \(x\)'s boundary. Please note that this is an assumption and the actual relationship between \(x\) and \(z\) could be different depending on the specific spatial arrangement._
### Experiment 3: Conceptual Neighbourhood Reasoning in RCC-8

The final experiment probed ChatGPT-4's knowledge of the conceptual neighbourhood structure of RCC-8: given a relation between two regions a and b, which relations could hold immediately next as the regions continuously translate, deform, or resize. An example of a correct prediction is from EC:

_If EC(a,b) holds, which means that the two regions a and b are just touching at a boundary but do not share any interior parts, then the immediate next possible relations as the two regions translate or change their shape or size could be either DC(a,b) or PO(a,b). DC(a,b): This would occur if the regions move away from each other and are no longer in contact at all, becoming fully disconnected. PO(a,b): This would occur if one region begins to overlap with the other, but neither region is completely contained within the other. They would then be partially overlapping, sharing some but not all of their spatial parts. The transitions to these relations would involve either moving the regions apart or moving them further into each other. In the case of transition to PO(a,b), there could also be some deformation of the regions if they change shape to allow for the partial overlap._
An example of a faulty prediction is from PO:
_If PO(a,b) holds, indicating that regions a and b partially overlap but neither is fully contained within the other, the immediate next relations could be EC(a,b), DC(a,b), TPP(a,b), TPPi(a,b), or EQ(a,b). EC(a,b): This would occur if the overlapping parts of a and b are reduced to just boundary parts, meaning the regions are touching but no longer overlapping. DC(a,b): This would occur if regions a and b are moved or deformed such that they no longer overlap or touch at all. TPP(a,b) or TPPi(a,b): This would occur if one region becomes fully contained within the other, either by moving into it or shrinking. TPP(a,b) means a is part of b and touches b's boundary. TPPi(a,b) means b is part of a and touches a's boundary. EQ(a,b): This could occur if the regions are reshaped or resized such that they coincide exactly, becoming the same region in space. The transitions to these relations would involve either moving the regions, resizing them, or reshaping them in some way._
Here, ChatGPT-4 has mistakenly said that DC is a next relation, failing to notice that it has to pass through EC on the way.
The results are summarised in Fig. 9. As can be seen, the predictions are mostly correct, with just five errors: three where a link is falsely predicted and two where a link is missing. The two missing links are both from EQ (to NTPP and to NTPPi). Interestingly, the links in the reverse direction are correctly predicted, so ChatGPT-4 is not able to reason that a link in one direction implies the link in the other direction should also be present. Similarly, whilst the absence of a link from DC to PO is correctly predicted, the inverse case is not, and the same holds for PO to/from NTPP and NTPPi.
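Since conceptual neighbourhood links are symmetric (continuous change is reversible), the kind of asymmetry just noted can be detected mechanically. A minimal sketch follows, with an illustrative fragment of predicted links mimicking the missing EQ/NTPP case; this is not the evaluation code actually used.

```python
# A minimal symmetry check for a predicted conceptual neighbourhood graph.
# The fragment below is illustrative, mimicking the EQ asymmetry noted above.
predicted_links = {
    ("NTPP", "EQ"),   # predicted
    # ("EQ", "NTPP") is absent: the reverse link was not predicted
    ("EC", "DC"), ("DC", "EC"),
}

asymmetric = [(a, b) for (a, b) in predicted_links
              if (b, a) not in predicted_links]
print(asymmetric)  # -> [('NTPP', 'EQ')]
```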
In order to test whether the result was influenced by prior knowledge of RCC-8 gained as part of its training, we also performed the same experiment, but with all the relation names prefixed by an X to disguise the connection to RCC-8. The prompt was the same as above except for the change of relation names and the omission of the second sentence. The results are given in Fig. 10. There are 3 incorrectly predicted links, 3 missing links, 19 correctly predicted links and 31 correctly missing links, giving an accuracy of 50/56 (89.2%). This is slightly worse than the case above. There is one more missing link, but the missing links are all different in the two cases. Although there are the same number of wrong links, only one of these is in common (PO to DC). Overall the results are broadly similar, and the differences may be due to the stochastic nature of ChatGPT-4's responses, suggesting that either the disguise was not very effective, or that prior training did not really affect the responses and it was able to reason from 'first principles' (if not always correctly) in response to each prompt.
## 6 Concluding Remarks and Future Work
This investigation has supported the widely-held view that LLMs can struggle to do reasoning tasks4. In the case of Experiment 1, in which ChatGPT-4 was asked to compute the entire composition table for RCC-8, this is a non-trivial task even for humans, so it is perhaps not surprising that ChatGPT-4 did not achieve 100% accuracy - the scores of 71.94% (and 67.09% for the anonymised relations) are clearly much better than chance and do suggest a reasonable facility to perform such computations. A detailed analysis of the actual conversations in the supplementary material shows that sometimes ChatGPT-4 does appear able to do some interesting (qualitative) spatial reasoning, but often fails, sometimes making elementary mistakes. It also shows inconsistency in being able to reason correctly about a relation but not its inverse, and it sometimes confuses a relation with its inverse. It is possible that fine-tuning, explicit chain-of-thought prompting, or more carefully engineered prompts might improve performance; however, given the stochastic nature of LLMs, it seems unlikely that the results would be as good as logical reasoning (the experiment on preferred relations is of course not strictly a logical reasoning exercise, except for the requirement not to predict spatially impossible relations).

Figure 9: The Continuity Table for RCC-8 produced by ChatGPT-4. An 'x' means that the relation in that column is predicted as an immediate neighbour of the relation in that row. An empty box means that the relation is not predicted as an immediate neighbour. Green means that the prediction was correct and red that it was incorrect. The leading diagonal is white since a relation is not a next relation of itself.

Figure 10: The Continuity Table for RCC-8 produced by ChatGPT-4 using disguised relation names. The meaning of the colouring is the same as in Fig. 9.
There are a variety of avenues for further work which present themselves. Other calculi could be experimented with - for example the coarser calculus RCC-5, or calculi for reasoning about direction or size [12]. Other LLMs could be evaluated - though since new LLMs and new LLM versions are continually being released, this is a challenge with no definite stopping point. Tracking the change in performance of a particular LLM across releases would also be of interest - though in the case of closed LLMs such as ChatGPT-4, where the owners have the right to harvest user conversations and use them for future training, it will not be clear whether any improvement is the result of leakage from previous conversations or more general performance improvement5. It has already been observed [12] that different LLMs have different strengths - determining which LLMs are better at which spatial reasoning tasks would also be worthy of future investigation. The overall conclusion that LLMs in general struggle with more complex spatial reasoning tasks is likely to remain the case, at least for the foreseeable future. In the API version of GPT, different temperatures could be tried, and multiple runs with averages computed. Different prompts and prompting strategies could be tried, though arguably, since QSR has always been viewed as a form of commonsense reasoning, it should not be necessary to devise specific prompts to elicit commonsense behaviour.
Footnote 5: However, note that no feedback was given to ChatGPT-4 as to whether the proffered response was correct or not.
It is not clear how successful the anonymisation was - in one case I mistyped an X relation name and ChatGPT-4 was able to suggest the intended relation name, suggesting that it has the ability to dissect relation names; thus more sophisticated anonymisation might be tried. In earlier work [12] we had already done some limited experimentation asking an LLM to reason about spatial relations in a real-world context rather than the purely abstract setting used in the experiments in this paper - it would be interesting to conduct more extensive tests of LLMs doing compositional reasoning in a more realistic setting, and similarly for the continuity experiment.
Experiment 2 above already investigated how LLM performance compares to human performance to a limited extent, but further investigation would be worthwhile, including a head-to-head comparison rather than simply taking a result from the literature originally intended to investigate a different question. Another interesting avenue for further work will be to explore the use of multimodal FMs - when humans perform spatial reasoning tasks, including the challenge of building a composition table, it is natural to use pencil and paper to sketch diagrams and possible scenarios - investigating whether a multimodal FM with such abilities (including the ability to analyse its own drawings) performs better would be of great interest to the spatial reasoning community.
As mentioned above, another possible avenue of research is to investigate different prompting strategies, including k-shot [13], chain-of-thought [14] and tree-of-thought [14] strategies. Not doing so was deliberate in this paper, as I was interested in exploring how the "vanilla" LLM would perform. Whilst for specific downstream tasks fine-tuning or employing specific prompting strategies may be reasonable, there is an argument to be made that for commonsense reasoning this is not a reasonable strategy, since the task is a general one rather than a specific downstream task.
## Data statement
All the conversations with ChatGPT-4 that support the summary tables in this paper can be found at [http://tinyurl.com/qr23sup](http://tinyurl.com/qr23sup).
## Acknowledgments
This work was supported by: The Alan Turing Institute; the Economic and Social Research Council (ESRC) under grant ES/W003473/1; the Turing's Defence and Security programme through a partnership with the UK government in accordance with the framework agreement between GCHQ and The Alan Turing Institute.
|
2309.12218 | SR-PredictAO: Session-based Recommendation with High-Capability
Predictor Add-On | Session-based recommendation, aiming at making the prediction of the user's
next item click based on the information in a single session only, even in the
presence of some random user's behavior, is a complex problem. This complex
problem requires a high-capability model of predicting the user's next action.
Most (if not all) existing models follow the encoder-predictor paradigm where
all studies focus on how to optimize the encoder module extensively in the
paradigm, but they overlook how to optimize the predictor module. In this
paper, we discover the critical issue of the low-capability predictor module
among existing models. Motivated by this, we propose a novel framework called
*Session-based Recommendation with Predictor Add-On* (SR-PredictAO). In this
framework, we propose a high-capability predictor module which could alleviate
the effect of random user's behavior for prediction. It is worth mentioning
that this framework could be applied to any existing models, which could give
opportunities for further optimizing the framework. Extensive experiments on
two real-world benchmark datasets for three state-of-the-art models show that
*SR-PredictAO* out-performs the current state-of-the-art model by up to 2.9% in
HR@20 and 2.3% in MRR@20. More importantly, the improvement is consistent
across almost all the existing models on all datasets, and is statistically
significant, which could be regarded as a significant contribution in the
field. | Ruida Wang, Raymond Chi-Wing Wong, Weile Tan | 2023-09-20T14:59:15Z | http://arxiv.org/abs/2309.12218v2 | # SR-PredictAO: Session-based Recommendation with High-Capability Predictor Add-On
###### Abstract
Session-based recommendation, aiming at making the prediction of the user's next item click based on the information in a single session only, even in the presence of some random user's behavior, is a complex problem. This complex problem requires a high-capability model for predicting the user's next action. Most (if not all) existing models follow the encoder-predictor paradigm, where all studies focus on how to optimize the encoder module extensively in the paradigm, but they ignore how to optimize the predictor module. In this paper, we discover the critical issue of the low-capability predictor module among existing models. Motivated by this, we propose a novel framework called _Session-based Recommendation with Predictor Add-On_ (SR-PredictAO). In this framework, we propose a high-capability predictor module which could alleviate the effect of random user's behavior on prediction. It is worth mentioning that this framework could be applied to any existing model, which could give opportunities for further optimizing the framework. Extensive experiments on two real benchmark datasets for three state-of-the-art models show that _SR-PredictAO_ outperforms the current state-of-the-art model by up to 2.9% in HR@20 and 2.3% in MRR@20. More importantly, the improvement is consistent across almost all the existing models on all datasets, which could be regarded as a significant contribution to the field.
session-based recommendation, recommender system, neural decision forest, tree-based method
## I Introduction
Next-item recommender systems show their importance in the current age of e-commerce by accurately predicting the user's subsequent behavior. _Session-based recommendation_ is one recent hot topic in next-item recommendation. It differs from _general next-item recommendation systems_, which pay great attention to a specific group of existing users with a large number of historical behavior records to perform the next-item prediction. The _session-based recommendation_, as its name indicates, groups all the activities in the basic unit of the session and is based only on the information within a single session. The idea of session-based recommendation systems comes from [1], which shows that intra-session dependencies have a more significant impact than inter-session dependencies on the user's final decision on the next item to view. In particular, the user's next-item behavior is usually related to behaviors in the current session. For example, a user's behavior of buying phone accessories in one session has a relatively low connection to his/her action of buying clothes two days ago but has a strong relationship with his/her visit to a phone charger in the same session.
Due to its highly practical value in the field of modern commerce, session-based recommendation attracts researchers' interest. In recent years, most (if not all) proposed models followed the _encoder-predictor paradigm_, involving two components. The first component is the _session encoder module_, and the second component is the _predictor module_. The session encoder module transforms the input session (represented in the form of a sequence of items) into an \(n^{\prime}\)-dimensional vector called the _latent variable_, where \(n^{\prime}\) is a positive integer denoting a model parameter. The predictor module generates a probability distribution over all items that represents how likely each item is to be the next item. The paradigm is shown in Fig. 1 (a). Different existing models have different implementations of the encoder module. For example, in [2], the encoder module is a Gated GNN that captures complex transitions of items to obtain the latent variable, and in [3], the encoder module is a Star GNN that uses a star node, representing the whole session, and a Highway Network, handling the overfitting problem. The predictor modules of most (if not all) existing models are all _linear_ models. Similar encoder-predictor paradigms could also be found in [4, 5, 6, 7, 8, 9].
Although existing models following the current encoder-predictor paradigm perform well, there are still some issues for further enhancement. The first issue is that most (if not all) existing models have a _low-capability_ predictor module, which affects the prediction accuracy. Specifically, under the encoder-predictor paradigm, even though an advanced model in the encoder module constructs the latent variable (which could represent the latent intent of a user's purchase), another important part of the recommendation comes from the predictor module, which should somehow simulate the complicated decision process of a human's purchase. Unfortunately, most (if not all) existing models use linear models, which are low-capability models, limiting the prediction performance.
The second issue is that designing a _high-capability_ model is challenging when considering the overfitting problem. Specifically, one straightforward solution for the first issue is to design a high-capability model. However, it is well-known that an _extremely_ high-capability model suffers from the overfitting problem. How to design an _appropriately_ high-capability model thus needs detailed investigation.
The third issue is that there is _random user's behavior_ in the input session, which may affect the prediction performance. Specifically, when a user browses some items, s/he normally has a clear intention of what s/he wants to view, but sometimes s/he may browse some other items that have little relation to his/her original intention due to his/her curiosity about seeing other items in a session. We call this kind of user's behavior _random user's behavior_, which creates a challenge for prediction in existing models.
In this paper, we propose a novel framework called _Session-based Recommendation with Predictor Add-On (SR-PredictAO)_. Under _SR-PredictAO_, given an existing model called the _base model_ in this paper, we keep all existing modules of this model but augment it with two additional modules. The first additional module is the high-capability predictor module, which takes the latent variable as input and outputs the predicted probability distribution over all items being the next item in the session. Although we keep the original (low-capability) predictor module, we still include the new high-capability predictor module, which could capture the complex human decision process. The second additional module is module _Merger_, which takes the probability distributions over all items predicted by both the original predictor module and the new predictor module and outputs the final probability distribution over all items. This framework provides a lot of opportunities to researchers for optimization on how to specify these two modules, which is quite promising. The SR-PredictAO framework could be found in Fig. 1 (b), where the first augmented module is named _NDF-SR_ (which will be described next). It is worth mentioning that our framework _SR-PredictAO_ could be applied to all existing models following the encoder-predictor paradigm (with the two additional modules), which could further improve the prediction performance of all existing models.
In this paper, we propose a model called _Neural Decision Forest for Session-based Recommendation_ (NDF-SR) for the first high-capability predictor module. Specifically, NDF-SR involves two components. The first component is called the _random user's behavior alleviator_, which could minimize the effect of random user's behavior on the prediction process (addressing the third issue). The second component is called the _Neural Decision Forest_ (NDF) model, which is a high-capability model (addressing the first issue). It could be regarded as a _forest_ involving a number of _decision trees_, each constructed with the use of _neural_ network models. We also propose a pruning method in the NDF model to avoid the overfitting problem (addressing the second issue). Furthermore, in this paper, for the second _Merger_ module, we adopt a simple linear combination which combines the predicted distributions from the original predictor and the new predictor to obtain the final predicted probability distribution. In the following, for clarity, when we describe _SR-PredictAO_, we mean the framework adopting the above modules.
In summary, our contributions are shown as follows.
1. To the best of our knowledge, we are the first to find the important low-capability issue in the predictor module of most (if not all) existing models, which lowers their prediction accuracy.
2. To address this important issue, we propose a framework called _SR-PredictAO_ including the high-capability predictor module, where this module involves two components, namely the _random user's behavior alleviator_ (addressing the random user's behavior issue) and the _Neural Decision Forest_ (NDF) model (addressing the low-capability predictor issue). Moreover, we propose a pruning method in the NDF model to address the overfitting problem.
3. We conduct extensive experiments on two public benchmark datasets, namely _Yoochoose_ and _Diginetica_, for three state-of-the-art models. Experimental results show that _SR-PredictAO_ improves almost all state-of-the-art models on all datasets by up to 2.9% on HR@20 (one accuracy measurement) and up to 2.1% on MRR@20 (another accuracy measurement), which could set a new state-of-the-art in the literature. This improvement is _consistent_ across all datasets. By considering the consistency of improvement and the ease of applicability of our framework, we regard our contribution as a major improvement to the field of session-based recommendation systems.

Fig. 1: (a) The overview of the base model, (b) Framework _SR-PredictAO_. Given an input session \(S\), the encoder module generates the latent variable \(\mathbf{z}\). In (a), \(\mathbf{z}\) is passed to the base model predictor module to obtain the predicted probability distribution over all items. In (b), \(\mathbf{z}\) is passed to both the base model predictor module and the new predictor module (called _NDF-SR_) to obtain two predicted probability distributions over all items. Then, module _Merger_ combines the two distributions to output the final distribution.
## II Related Work
In this section, we give the related work about session-based recommendation (Section II-A) and neural decision forest (Section II-B).
### _Session-based recommendation_
We categorize existing studies about session-based recommendation into three categories: (1) conventional recommendation methods, (2) neural-network-based methods and (3) graph neural-network-based methods.
Due to the similarity between the _session-based recommendation_ (SR) problem and the traditional recommendation problem, conventional methods like Collaborative Filtering (CF) approaches [10, 11], nearest-neighbor approaches [12, 13] and Markov chain approaches [14] have been applied to the SR problem. However, due to the limited information in the session, they all perform poorly on the SR problem.
With the improvement of computation power and knowledge of _Neural Networks_ (NNs), many NN-based models, including RNN approaches [15], the transformer-based approach [16] and CNN-based approaches [17, 18], have been proposed. However, most of them do not perform well due to the limited information in the session.
In recent years, graph neural networks (GNNs) have become popular and have been shown to achieve state-of-the-art performance in many domains. Many recommendation systems [2, 3, 5, 6] also utilize GNNs due to their ability to model complex relationships among objects. In [2], Wu et al. apply gated graph neural networks (GGNNs) to capture the complex transitions of items, which results in accurate session representations. In [5], to solve information loss problems in GNN-based approaches for session-based recommendation, Chen et al. proposed a lossless encoding scheme, involving a dedicatedly designed aggregation layer and a shortcut graph attention layer. In [3], Pan et al. proposed Star Graph Neural Networks with Highway Networks (SGNN-HN) for session-based recommendation. In particular, the highway networks (HN) can select embeddings from item representations adaptively in order to prevent overfitting. However, all the aforementioned studies [2, 3, 5, 6] use a linear model, a low-capability model, as the predictor module (described in Section I).
### _Tree-based method_
The traditional tree-based method was proposed by Breiman in [19, 20]. Its outstanding performance in simulating the human decision process was studied by Quinlan et al. in [21]. The high capability of tree-based methods was shown by Mentch et al. [22]. With the rapid development of computation power and neural networks, a lot of effort has been made to combine classical tree-based methods with neural networks. In [23], Richmond et al. introduced _Convolutional Neural Networks_ (CNNs) as representation learners on a traditional random forest. Jancsary et al. in [24] introduced _regression tree fields_ for image restoration. To solve the problem that the traditional tree-based method cannot do backward propagation with other NN-based parts of the model, in [25], Kontschieder et al. constructed a uniform and end-to-end differentiable Deep Neural Decision Forest and applied it to some computer vision models. To the best of our knowledge, no existing study about session-based recommendation systems utilizes tree-based models incorporating backward propagation with the NN-based parts of the model. We are the first to propose this in the field of session-based recommendation systems.
## III Preliminaries
In this section, we introduce (1) problem definition (Section III-A), (2) some preliminary knowledge about a _base model_, an existing model, following the encoder-predictor paradigm (Section III-B) and (3) the traditional version of the tree-based method (Section III-C).
### _Problem Definition_
The session-based recommendation is a sub-field of the next-item recommendation only with the input from a specific session. Its goal is to predict the next item that a user will browse based on the current active session involving all previous items browsed. We denote by \(I=\{v_{1},v_{2},\cdots,v_{N}\}\) the universal set of items in the whole dataset, where \(N\) is the total number of items. A session, denoted by \(\mathbf{s}_{i}=[s_{i,1},s_{i,2},\cdots,s_{i,l_{i}}]\), is a time-ordered sequence of items, where \(i\) is a temporary index of the session, \(l_{i}\) denotes the length of \(\mathbf{s}_{i}\) and, for each \(t\in[1,l_{i}]\), \(s_{i,t}\in I\) is the item at time step \(t\) in the session. The goal of the session-based recommendation is to predict what the next item \(s_{i,l_{i}+1}\) is. A typical session-based recommendation system generates a probability distribution over all items predicted being the next item, i.e., \(\mathbb{P}(s_{i,l_{i}+1}|\mathbf{s}_{i})\).
### _Base Model_
The base model (following the encoder-predictor paradigm) is formulated as follows.
\[\mathbf{z}=f_{\textit{encode}}(\mathbf{s}|\Theta_{\textit{encode}}) \tag{1}\] \[\hat{\mathbf{y}}_{base}=g_{\textit{predict}}(\mathbf{z}|\Theta_{\textit{predict}}) \tag{2}\]
where (1) \(\mathbf{s}\) is the input session (represented in the form of a sequence of items), (2) \(\mathbf{z}\) is the latent variable generated by the encoder module of the model, (3) \(\hat{\mathbf{y}}_{base}\) denotes the probability distribution over all items predicted being the next item, (4) \(f_{encode}\) is the encoder module which takes the input session as input and outputs a latent variable (a vector in \(\mathbb{R}^{n^{\prime}}\)), (5) \(g_{predict}\) is the predictor module which takes the latent variable as input and outputs the probability distribution, and (6) \(\Theta_{encode}\) (\(\Theta_{predict}\)) is the parameter configuration of the encoder (predictor) module.
As described in Section I, different existing models have different implementations of the encoder modules. In the following, we describe the encoder module and the predictor module of a base model of some state-of-the-art models.
#### III-B1 Encoder Module
This section focuses on the most popular session encoding method in base models, the GNN encoder; however, our method can work with any kind of session encoder as long as it generates a latent variable. GNNs are _Neural Networks_ (NNs) that directly operate on a graph of the data. We are given a graph \(G=(V,E)\), where each node \(v_{i}\in V\) represents an item in \(\mathbf{s}\) (the session). Typically, \(v_{i}\) is associated with a node feature vector \(\mathbf{x}_{i}\), which is the input to the first layer of the GNN. \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) is obtained by looking up the item ID in the embedding matrix, defined as a trainable matrix \(\mathbf{A}\in\mathbb{R}^{N\times n}\), where \(n\) is the embedding dimensionality. Assume we have \(L\) layers of the GNN in total. The formula of the \(l\)-th (\(l\leqslant L\)) layer of the GNN can be represented as follows:
\[\mathbf{x}_{i}^{(l+1)}=f^{(l)}(\mathbf{x}_{i}^{(l)},\mathbf{a}_{i}^{(l)}) \tag{3}\] \[\mathbf{a}_{i}^{(l)}=agg^{(l)}(\{msg^{(l)}(\mathbf{x}_{i}^{(l)},\mathbf{x}_{ j}^{(l)})|(j,i)\in E_{in}(i)\}) \tag{4}\]
where \(\mathbf{x}_{i}^{(l)}\) is the embedding vector of node \(i\) in the \(l\)-th layer of the GNN, and \(E_{in}(i)\) is the set of incoming edges for node \(v_{i}\in V\). The message processing function \(f^{(l)}\) at the \(l\)-th layer generates the updated embedding of the target node based on its neighborhood. \(agg^{(l)}\) is the aggregation function that combines the information of different edges together, and \(msg^{(l)}\) is the message-extracting function that obtains information from the edge between \(\mathbf{x}_{i}^{(l)}\) and \(\mathbf{x}_{j}^{(l)}\). After \(L\) steps of message passing, the final representation for the latent variable is:
\[\mathbf{h}_{G}=f_{out}(\{\mathbf{x}_{i}^{(L)}|v_{i}\in V\}) \tag{5}\]
\(\mathbf{h}_{G}\) is the graph-level representation that we regard as the graph latent variable generated by the readout function \(f_{out}\).
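As a concrete (and deliberately simplified) instance of Equations (3)-(5), the following sketch implements one message-passing layer with sum aggregation and a mean readout in NumPy. All functional choices here (identity messages, sum aggregation, tanh update, mean readout) are illustrative assumptions; real session-based models use learned, gated variants.

```python
import numpy as np

def gnn_layer(X, edges):
    """One simplified message-passing step (Eqs. 3-4): sum aggregation of
    neighbour embeddings followed by an elementwise update. X: (|V|, n)."""
    A = np.zeros_like(X)
    for (j, i) in edges:          # message along each incoming edge (j -> i)
        A[i] += X[j]              # msg = identity, agg = sum (illustrative)
    return np.tanh(X + A)         # f = tanh of node state plus aggregate

def readout(X):
    """A mean readout (Eq. 5) producing the graph-level representation h_G."""
    return X.mean(axis=0)

X = np.random.randn(4, 8)                     # 4 items, embedding dim n = 8
edges = [(0, 1), (1, 2), (2, 3)]              # session transition graph
for _ in range(2):                            # L = 2 layers
    X = gnn_layer(X, edges)
h_G = readout(X)                              # latent summary of the session
```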
After the graph-level latent variable \(\mathbf{h}_{G}\) is obtained, most models add some additional information to obtain a better result. For example, [5] adds all results of the Embedding layer, the EOPA layer, and the SGAT layer (two special kinds of GNN layers mentioned in [5]) to the graph representation, and [3] formulates the final result by concatenating \(\mathbf{z}_{g}\) and \(\mathbf{z}_{r}\), which are the last item's representation and the combination of all the graphs' result representations from different levels, respectively. After considering all the required information of the base model, we define this vector as the latent variable \(\mathbf{z}\in\mathbb{R}^{n^{\prime}}\), where \(n^{\prime}\) is the dimensionality of the latent variable. This approach is used in almost all well-known session-recommendation models [2, 3, 5, 6].
#### III-B2 Predictor Module
After the encoder module outputs the latent variable, the predictor module takes this as input and performs the following steps.
1. The first step is to apply a prediction function (normally a linear model), which takes the latent variable as input and outputs an embedding called the _session embedding_ \(\mathbf{s}_{h}\in\mathbb{R}^{n}\), where \(n\) is the dimensionality of the session embedding, the same as the embedding dimension of \(\mathbf{A}\): \[\mathbf{s}_{h}=\text{Linear}(\mathbf{z})\] (6)
2. The second step is to obtain the _score vector_\(\mathbf{c}\in\mathbb{R}^{N}\) over all items predicted being the next item. \[\mathbf{c}=[c_{1},c_{2},\cdots,c_{N}]^{T}=\mathbf{A}\mathbf{s}_{h}\] (7) where \(c_{i}\in\mathbb{R}\) is a score of item \(v_{i}\) predicted being the next item for each \(i\in[1,N]\) and \(\mathbf{A}\in\mathbb{R}^{N\times n}\) is the item embedding matrix we used before.
3. The third step is to obtain the _probability vector_\(\hat{\mathbf{y}}_{base}\in\mathbb{R}^{N}\) over all items predicted being the next item by using the softmax function based on the score vector \(\mathbf{c}\). \[\hat{\mathbf{y}}_{base}=softmax(\mathbf{c})=\frac{\exp(\mathbf{c})}{\sum_{i\in[1,N]}\exp(c _{i})}\] (8)
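Putting Equations (6)-(8) together, the base predictor module is just a linear map followed by an inner product with the item embeddings and a softmax. A minimal NumPy sketch follows; the random weights are placeholders for the trained parameters.

```python
import numpy as np

def base_predictor(z, W, A):
    """Linear predictor module: latent z (n',) -> distribution over N items.
    W: (n, n') linear layer (Eq. 6); A: (N, n) item embedding matrix (Eq. 7)."""
    s_h = W @ z                       # session embedding (Eq. 6)
    c = A @ s_h                       # score per item (Eq. 7)
    e = np.exp(c - c.max())           # numerically stable softmax (Eq. 8)
    return e / e.sum()

n_prime, n, N = 16, 8, 100
y_hat = base_predictor(np.random.randn(n_prime),
                       np.random.randn(n, n_prime),
                       np.random.randn(N, n))
assert abs(y_hat.sum() - 1.0) < 1e-9  # a valid probability distribution
```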
### _Tree-based method_
From the mathematical point of view, the tree-based method is a way of generating a locally constant function, represented by a function \(tree:\mathbb{R}^{n^{\prime}}\rightarrow\mathbb{R}^{N}\), that divides the input space \(\mathbb{R}^{n^{\prime}}\) into many regions and gives each subregion a constant value in \(\mathbb{R}^{N}\). We can define the tree recursively by first defining the _tree-split_ function \(\varphi\):
\[\varphi(\mathbf{x})=\chi(\mathbf{x}\in S)\mathbf{c}_{l}+\chi(\mathbf{x}\notin S)\mathbf{c}_{r} \tag{9}\]
where \(S\subseteq\mathbb{R}^{n^{\prime}}\) is a subregion of the input space, and \(\chi(\mathbf{x}\in S)\) is the indicator function that returns 1 when \(\mathbf{x}\in S\) and 0 otherwise. \(\mathbf{c}_{l}\) and \(\mathbf{c}_{r}\) are defined as the left and right children of the tree-split. If \(\mathbf{c}_{l}\) or \(\mathbf{c}_{r}\) takes its value in \(\mathbb{R}^{N}\), where \(N\) is the dimension of the predicted result, then we say it is a _leaf node_; if not, it is an _internal node_ that is associated with another tree-split \(\varphi_{l/r}\). The tree function can be represented as \(tree(\mathbf{x})=\varphi_{root}(\mathbf{x})\), where \(\varphi_{root}\) is the tree-split function associated with the _root node_, the starting node of the tree. The maximum number of tree-splits from the root to a leaf node is defined as the _depth_.
For example, in Fig. 2, each node \(d_{i}\) (\(i\in[1,7]\)) is associated with a tree-split function \(\varphi_{i}\) with corresponding region \(S_{i}\). The node \(d_{1}\) is the root node (i.e., \(tree=\varphi_{1}\)), and the nodes \(d_{i}\) with \(i\neq 1\) are internal nodes. Each node \(\mathbf{\pi}_{j}\) (\(j\in[1,8]\)) is a leaf node that has its value \(\mathbf{\pi}_{j}\in\mathbb{R}^{N}\).
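The recursive definition above can be read directly as code. Below is a minimal sketch of a depth-2 locally constant tree function over \(\mathbb{R}^{2}\) with leaf values in \(\mathbb{R}^{N}\) (\(N=2\)); the axis-aligned threshold tests standing in for the regions \(S\) are an illustrative choice.

```python
import numpy as np

def tree(x):
    """A depth-2 locally constant tree over R^2 with leaf values in R^N (N=2).
    Each split tests membership in a half-space S (here: one coordinate)."""
    if x[0] < 0.0:                        # root split: S = {x : x_0 < 0}
        if x[1] < 0.0:                    # left internal split
            return np.array([1.0, 0.0])   # leaf value c in R^N
        return np.array([0.8, 0.2])
    if x[1] < 0.0:                        # right internal split
        return np.array([0.3, 0.7])
    return np.array([0.0, 1.0])

print(tree(np.array([-0.5, 0.4])))        # -> [0.8 0.2]
```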
## IV Framework SR-PredictAO
Framework _SR-PredictAO_ involves two modules, namely the high-capability predictor module (Section IV-A) and the Merger module (Section IV-B). The training process of _SR-PredictAO_ is presented in Section IV-C.
### _High-Capability Predictor Module_
We propose a model called _Neural Decision Forest for Session-based Recommendation_ (NDF-SR) for the high-capability predictor module. Specifically, NDF-SR involves two components. The first component is called the _random user's behavior alleviator_ (Section IV-A1) and the second component is called the _Neural Decision Forest_ (NDF) model (Section IV-A2). As described in Section I, we also propose a pruning method in the NDF model to avoid the overfitting problem. This pruning method could be found in the description of the second component.
#### IV-A1 Random User's Behavior Alleviator
The latent variable encoded by the base model from the items previously viewed in the session is normally heavily affected by random user's behavior. To solve this problem, we take the Empirical Bayes point of view [26]. Under Empirical Bayes, the observed data is not the underlying true value but a sample from a certain distribution around the truth. We design our alleviator based on this view.
Formally, suppose a batch \(\mathbf{Z}\in\mathbb{R}^{m\times n^{\prime}}\) of \(m\) latent variables, each with dimensionality \(n^{\prime}\), that we observe from the base model's encoder is:

\[\mathbf{Z}=\begin{bmatrix}\mathbf{z}_{1}^{T}\\ \mathbf{z}_{2}^{T}\\ \vdots\\ \mathbf{z}_{m}^{T}\end{bmatrix}=\begin{bmatrix}\mathbf{\xi}_{1}&\mathbf{\xi}_{2}&\cdots&\mathbf{\xi}_{n^{\prime}}\end{bmatrix}=\begin{bmatrix}z_{11}&z_{12}&\cdots&z_{1n^{\prime}}\\ z_{21}&z_{22}&\cdots&z_{2n^{\prime}}\\ \vdots&\vdots&\ddots&\vdots\\ z_{m1}&z_{m2}&\cdots&z_{mn^{\prime}}\end{bmatrix}\]
We denote \(\mathbf{z}_{j}\) to be the \(j\)-th row of \(\mathbf{Z}\) and also the latent variable of the \(j\)-th session in the batch for each \(j\in[1,m]\). We denote \(\mathbf{\xi}_{i}\) to be the \(i\)-th column of \(\mathbf{Z}\) for each \(i\in[1,n^{\prime}]\). \(\mathbf{Z}\) is not the underlying truth value for the latent variable but a sample from a distribution with the underlying truth value as its expected value. Suppose that \(\mathbf{\mu}\in\mathbb{R}^{m\times n^{\prime}}\) denotes the correspondence truth values as follows.
\[\mathbf{\mu}=\begin{bmatrix}\mu_{11}&\mu_{12}&\cdots&\mu_{1n^{\prime}}\\ \mu_{21}&\mu_{22}&\cdots&\mu_{2n^{\prime}}\\ \vdots&\vdots&\ddots&\vdots\\ \mu_{m1}&\mu_{m2}&\cdots&\mu_{mn^{\prime}}\end{bmatrix}=\begin{bmatrix}\mathbf{ \mu}_{1}^{T}\\ \mathbf{\mu}_{2}^{T}\\ \vdots\\ \mathbf{\mu}_{m}^{T}\end{bmatrix}\]
The Empirical Bayes assumption is that \(\forall i,j:\ z_{ij}|\mu_{ij}\sim\mathcal{N}(\mu_{ij},\sigma_{j}^{2})\), which is a normal distribution with mean \(\mu_{ij}\) and variance \(\sigma_{j}^{2}\), with an additional assumption that \(\sigma_{j}^{2}\geqslant 1\). This assumption also means that the variance is shared by all entries within the same column. We aim to obtain an estimator for \(\mathbf{\mu}\) given the observation \(\mathbf{Z}\). The _Maximum Likelihood Estimator_ (MLE) that is commonly used in the field suggests that we should just take \(\mathbf{Z}\) itself. That is, for each \(i\in[1,m]\) and each \(j\in[1,n^{\prime}]\),
\[\hat{\mu}_{ij}^{(MLE)}=z_{ij} \tag{10}\]
In contrast, our alleviator uses the _James-Stein Estimator for Session-based Recommendation_ (JSE-SR), which exploits indirect evidence from the other values of the same latent dimension across the batch. The JSE-SR is defined as follows:
\[\hat{\mu}_{ij}^{(JS)}=(1-\frac{m-2}{\|\mathbf{\xi}_{j}\|^{2}})z_{ij} \tag{11}\]
For each of the two estimators \(\hat{\mu}_{ij}\) (i.e., \(\hat{\mu}_{ij}^{(MLE)}\) and \(\hat{\mu}_{ij}^{(JS)}\)), the effect of random user's behavior on the latent variable can be quantified, for each \(j\in[1,n^{\prime}]\), by the risk \(\mathbb{E}[\sum_{i=1}^{m}(\mu_{ij}-\hat{\mu}_{ij})^{2}]\).
We can show the following lemma, which states that the estimator \(\hat{\mu}_{ij}^{(JS)}\) gives a smaller error than the estimator \(\hat{\mu}_{ij}^{(MLE)}\).
**Lemma IV.1**: \[\mathbb{E}[\sum_{i=1}^{m}(\mu_{ij}-\hat{\mu}_{ij}^{(JS)})^{2}]\leqslant \mathbb{E}[\sum_{i=1}^{m}(\mu_{ij}-\hat{\mu}_{ij}^{(MLE)})^{2}]\] (12)
_Proof Sketch:_ Firstly, for any estimator \(\hat{\mu}_{ij}:=\hat{\mu}_{ij}(z_{ij})\) of \(\mu_{ij}\), we can decompose \(\mathbb{E}[\sum_{i=1}^{m}(\mu_{ij}-\hat{\mu}_{ij})^{2}]=\sum_{i=1}^{m}\mathbb{E}[(z_{ij}-\hat{\mu}_{ij})^{2}]-m\sigma_{j}^{2}+2\sum_{i=1}^{m}\mathbb{E}[(\hat{\mu}_{ij}-\mu_{ij})(z_{ij}-\mu_{ij})]\). Secondly, performing integration by parts (Stein's lemma), we have \(\mathbb{E}[(z_{ij}-\mu_{ij})(\hat{\mu}_{ij}-\mu_{ij})]=\sigma_{j}^{2}\mathbb{E}[\frac{\partial\hat{\mu}_{ij}}{\partial z_{ij}}]\). Thirdly, plugging \(\hat{\mu}_{ij}^{(JS)}\) and \(\hat{\mu}_{ij}^{(MLE)}\) into the resulting expression, we obtain Equation 12. A complete proof could be found in Appendix-A.
Therefore, applying JSE-SR to all entries in \(\mathbf{Z}\), we have:
\[\hat{\mathbf{Z}}^{(JS)}=[\hat{\mu}_{ij}^{(JS)}]_{i\in[1,m],j\in[1,n^{\prime}]} \tag{13}\]
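A minimal NumPy sketch of the alleviator (Equations 11 and 13) follows, applying the James-Stein shrinkage factor column by column to a batch of latent variables; this is a direct transcription of the formula, not the exact training code.

```python
import numpy as np

def jse_sr(Z):
    """James-Stein alleviator (Eqs. 11 and 13): shrink each column of the
    (m, n') batch of latent variables Z towards 0."""
    m = Z.shape[0]
    col_norm_sq = (Z ** 2).sum(axis=0)        # ||xi_j||^2 for each column j
    shrink = 1.0 - (m - 2) / col_norm_sq      # shrinkage factor per column
    return Z * shrink                         # broadcast over the m rows

Z = np.random.randn(64, 16)                   # batch of 64 latent variables
Z_js = jse_sr(Z)
print(Z_js.shape)                             # (64, 16)
```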
#### IV-A2 Neural Decision Forest (NDF)
As described in Section I, the Neural Decision Forest (NDF) model could be regarded as a _forest_ involving a number of _decision trees_ each constructed with the use of _neural_ network models. Each decision tree in this model is formally named as a _neural decision tree (NDT)_.
In the following, we first define NDT and then NDF.
**NDT:** Our proposed NDT is the part that provides (more than) enough capability to address the lack-of-capability problem of the linear predictor. Considering the representation learning in session-based recommendation, our proposed NDT differs from traditional trees [19], which greedily find the split that reduces the loss function over the given variable space and entries and therefore require a fixed encoder; instead, our proposed NDT uses _Neural Networks_ (NNs) to perform the splits and is optimized by backward propagation together with the encoder. In our case, this encoder is normally a GNN-based encoder. An NDT of depth \(d\) takes the alleviator-processed latent variable \(\mathbf{z}^{(JS)}\in\mathbb{R}^{n^{\prime}}\) as input. It consists of the following.
* A decision function (normally a deep neural network) \(f:\mathbb{R}^{n^{\prime}}\rightarrow\mathbb{R}^{2^{d}-1}\) (because a tree of depth \(d\) requires \(2^{d}-1\) splits, resulting in \(2^{d}\) leaf nodes)
* A probability score matrix \(\mathbf{\pi}\in\mathbb{R}^{2^{d}\times N}\) (which is trainable) for all leaf nodes: \[\mathbf{\pi}=[\pi_{ij}]=[\mathbf{\pi}_{1},\cdots,\mathbf{\pi}_{2^{d}}]^{T}\] (14) We mark the leaf nodes of a tree from left to right with indices \(1,2,\cdots,2^{d}\), where the \(i\)-th leaf node means the leaf node with index \(i\). Note that under our definition, the NDT is always a balanced tree. \(\pi_{ij}\) means the probability score of the \(j\)-th item in the \(i\)-th leaf node. \(\mathbf{\pi}_{i}\) means a vector containing the probability scores of all items in \(I\) at the \(i\)-th leaf node.
The NDT works as follows. The decision function generates a decision score for each split; a sigmoid function is then applied to the decision scores to obtain the left and right routing probabilities: let \(s=\sigma(f(\mathbf{z}^{(JS)}))\). Each binary split is associated with the probability \(p_{root}\) of arriving at the root of this split, which is generated by the previous splits. The split is the process of deciding, for a session arriving at the root of the subtree, the probability that the session goes to the left or the right of that root. The probabilities are calculated as follows.
\[\begin{cases}p_{left}=p_{root}\cdot s\\ p_{right}=p_{root}\cdot(1-s)\end{cases} \tag{15}\]
For example, in Fig. 2, \(p_{root}\) for node \(d_{1}\) is 1, and \(p_{root}\) for node \(d_{2}\) is set to \(p_{left}\) computed within node \(d_{1}\).
We recursively apply this split method from the tree's root to the leaf nodes to obtain the leaf-reaching probability vector \(\mathbf{p}_{leaf}=[p_{1}^{(leaf)},p_{2}^{(leaf)},\cdots,p_{2^{d}}^{(leaf)}]^{T}\in\mathbb{R}^{2^{d}}\), representing the probability that the session falls into each leaf node. Then, we multiply the \(softmax(\mathbf{\pi})\) matrix by \(\mathbf{p}_{leaf}\) to obtain the probability distribution \(\hat{\mathbf{p}}\in\mathbb{R}^{N}\) over all items for this session.
\[\hat{\mathbf{p}}=\mathbf{p}_{leaf}^{T}softmax(\mathbf{\pi})=\sum_{k=1}^{2^{d}}p_{k}^{( leaf)}softmax(\mathbf{\pi}_{k}) \tag{16}\]
where \(\hat{\mathbf{p}}\) is the predicted probability distribution over items for this tree. To normalize \(\mathbf{\pi}\), we apply the softmax function row-wise before using it.
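A minimal sketch of a single NDT forward pass (Equations 15-16) follows: the \(2^{d}-1\) sigmoid split scores are routed from the root down to the \(2^{d}\) leaves in level order, and the resulting leaf-reaching probabilities weight the softmax-normalised leaf score vectors. A random linear map stands in for the deep decision function, as an illustrative assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ndt_forward(z, W, pi, d):
    """One NDT (Eqs. 15-16). W: (2^d - 1, n') linear decision function
    (a stand-in for a deep net); pi: (2^d, N) trainable leaf scores."""
    s = sigmoid(W @ z)                     # split score per internal node
    p = np.ones(1)                         # prob. of reaching current nodes
    for level in range(d):                 # route probabilities level by level
        s_lvl = s[2 ** level - 1: 2 ** (level + 1) - 1]
        p = np.stack([p * s_lvl, p * (1 - s_lvl)], axis=1).reshape(-1)
    return p @ softmax(pi, axis=1)         # Eq. 16: (2^d,) @ (2^d, N) -> (N,)

d, n_prime, N = 3, 16, 100
p_hat = ndt_forward(np.random.randn(n_prime),
                    np.random.randn(2 ** d - 1, n_prime),
                    np.random.randn(2 ** d, N), d)
assert abs(p_hat.sum() - 1.0) < 1e-9       # a valid distribution over items
```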
**Pruning:** All tree-based methods, including the NDT, suffer from serious overfitting because they normally have excessive capability. The problem is more severe in our case since our NDT is trained simultaneously with the encoder. To solve this problem, we propose _NDT-pruning_, which removes the excessive capability in order to control overfitting.
Traditional pruning uses the loss function to judge which leaves should be dropped, but for an NDT it is hard to do a similar thing. Thus, to prune the NDT, we apply a random mask to the outcomes of the NDT as follows:
\[\mathbf{p}_{leaf}^{\prime}=softmax(RandomMask(\mathbf{p}_{leaf},r)) \tag{17}\]
where \(\mathbf{p}_{leaf}\in\mathbb{R}^{2^{d}}\) is the leaf-reaching probability vector, and each leaf node has a probability \(r\in[0,1]\) (which we call the pruning rate) of being masked to 0. After the random mask, we use \(\mathbf{p}_{leaf}^{\prime}\) in place of \(\mathbf{p}_{leaf}\) to obtain \(\hat{\mathbf{p}}\), the predicted next-item distribution of this tree.
Since the NDT typically has more capability than needed, it may fit unrelated information in the data, which makes the model easy to overfit. Our proposed NDT-pruning controls overfitting by removing the excessive capability of the NDT. By choosing a good pruning rate, we can keep the capability of our model in a reasonable range that compensates for the lack of capability in linear predictors without being so high as to overfit. More details on the relation between the model's capability and NDT-pruning can be found in the technical report in the git repository.
**NDF:** In this section, we construct the NDF from the basic building blocks of the NDT and NDT-pruning. Breiman proved that combining trees into a forest model generally makes the model's outcome more stable [20]. The non-neural trees that form a Random Forest use a different mask of input entries for every split, but that is not possible when a uniform decision function is used for each tree. So, we independently drop some input entries for each NDT.
For example, if the input alleviator-processed latent variable for the NDTs is \(\mathbf{z}^{(JS)}=[z_{1}^{(JS)},\cdots,z_{n^{\prime}}^{(JS)}]^{T}\in\mathbb{R}^{n^{\prime}}\), then the \(i\)-th NDT, after the variable mask-off, takes as input a fixed subset \(\mathbf{z}_{i}^{\prime}\) of the entries of \(\mathbf{z}^{(JS)}\), where \(|\mathbf{z}_{i}^{\prime}|=\gamma_{i}\leqslant n^{\prime}\). For each NDT, the list of entries to drop is randomly selected when building the model, but this list is fixed during training. Suppose there are \(T\) NDTs in the NDF-SR, and their predicted next-item probability distributions are \(\mathbf{P}=[\hat{\mathbf{p}}_{1},\hat{\mathbf{p}}_{2},\cdots,\hat{\mathbf{p}}_{T}]\), where \(\hat{\mathbf{p}}_{i}\in\mathbb{R}^{N}\) for all \(i=1,2,\cdots,T\). The NDF's predicted result is:
\[\hat{\mathbf{y}}_{NDF-SR}=\frac{1}{T}\cdot(\sum_{i=1}^{T}\hat{\mathbf{p}}_{i}) \tag{18}\]
which is also the predicted result of the NDF-SR, our proposed high-capability predictor.
We can demonstrate on simulated data that the NDF-SR typically has a much higher capability than the linear predictor. More details are in the technical report in the git repository.
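Continuing the sketch above, an NDF averages \(T\) pruned NDTs, each of which sees a fixed, randomly chosen subset of the entries of the input latent variable. For simplicity we assume a common subset size `keep_dim` for all trees; the text only requires the dropped-entry list to be fixed per tree once the model is built.

```python
class NDF(nn.Module):
    """Minimal Neural Decision Forest (Eq. 18) built from the NDT above."""

    def __init__(self, in_dim, n_items, depth, prune_rate, n_trees, keep_dim):
        super().__init__()
        # each tree's surviving input entries, fixed at construction time
        self.subsets = [torch.randperm(in_dim)[:keep_dim].tolist()
                        for _ in range(n_trees)]
        self.trees = nn.ModuleList(
            NDT(keep_dim, n_items, depth, prune_rate) for _ in range(n_trees))

    def forward(self, z):                      # z: (B, in_dim)
        # Eq. (18): average the T trees' predicted item distributions
        preds = [tree(z[:, idx]) for tree, idx in zip(self.trees, self.subsets)]
        return torch.stack(preds).mean(dim=0)
```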
### _Merger Module_
In this paper, for the second module, _Merger_, we adopt a simple linear combination that merges the predicted distributions of the original predictor and the new predictor into the final predicted probability distribution, using a user parameter \(q\in[0,1]\) as follows.
\[\hat{\mathbf{y}}=q\cdot\hat{\mathbf{y}}_{base}+(1-q)\cdot\hat{\mathbf{y}}_{NDF-SR} \tag{19}\]
Here, \(\hat{\mathbf{y}}\in\mathbb{R}^{N}\) is the probability distribution over all items predicted to be the next item, i.e., the combined result from the original predictor module and the new predictor module. \(\hat{\mathbf{y}}\) is the output of framework _SR-PredAO_.

Fig. 2: The overview of the NDT. The decision function gives the split score for the root and the internal nodes, and each leaf node's result is the probability of the session reaching that node.
### _Training_
Recall that \(\hat{\mathbf{y}}\), obtained in the Merger module, is the output of framework _SR-PredAO_. Let \(\mathbf{y}\) be the real probability distribution over all items being the next item, which is a one-hot vector. The loss function \(\mathcal{L}(\cdot,\cdot)\) of framework _SR-PredAO_ is the same as the one used in the base model, namely the cross-entropy loss.
\[\mathcal{L}(\mathbf{y},\hat{\mathbf{y}})=-\mathbf{y}^{T}\log(\hat{\mathbf{y}}) \tag{20}\]
For initialization, all trainable parameters in both the base model and the additional modules of framework _SR-PredAO_ are initialized randomly; they are then jointly updated end-to-end by back-propagation.
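A joint training step then looks as follows. This is again a sketch, continuing the code above, under the assumption that the base model exposes both its encoder output \(\mathbf{z}\) and its own predicted distribution; the actual interface depends on the chosen base model (LESSR, SGNN-HN, or DIDN).

```python
def sr_predao_step(base_model, ndf, optimizer, session, target, q=0.5):
    """One end-to-end training step of SR-PredAO (Eqs. 19-20), as a sketch."""
    z, y_base = base_model(session)            # encoder latent + base prediction
    y_ndf = ndf(z)                             # high-capability predictor
    y_hat = q * y_base + (1 - q) * y_ndf       # Eq. (19): linear merger
    # Eq. (20): cross-entropy against the one-hot next-item distribution
    loss = F.nll_loss(torch.log(y_hat + 1e-12), target)
    optimizer.zero_grad()
    loss.backward()                            # jointly updates both modules
    optimizer.step()
    return loss.item()
```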
## V Experiment
We give the experimental setup in Section V-A and the experimental results in Section V-B. Our implementation is available at [https://github.com/RickySkywalker/SR-PredictAO-official](https://github.com/RickySkywalker/SR-PredictAO-official).
### _Experimental Setup_
#### V-A1 Datasets
We evaluated the performance of state-of-the-art models and our proposed framework on the following two benchmark real-world datasets:
* _Yoochoose_1 is a dataset obtained from the RecSys Challenge 2015, which contains user sessions of click events from an online retailer.
* _Diginetica_2 is a dataset released by the CIKM Cup 2016, which includes user sessions extracted from e-commerce search engine logs.

Footnote 1: [http://2015.recsyschallenge.com/challenge.html](http://2015.recsyschallenge.com/challenge.html)

Footnote 2: [http://cikm2016.cs.iupui.edu/cikm-cup](http://cikm2016.cs.iupui.edu/cikm-cup)
Following [3, 5], for both datasets, we first filter out short sessions and infrequent items so that only sessions of length at least 2 and items that appear at least 5 times are kept. We split the training set and the test set in the following way. For _Yoochoose_, we use the sessions on the last day as the test set, and for _Diginetica_, we use the sessions of the last week as the test set. In particular, since _Yoochoose_ is very large, we only use the most recent 1/64 fraction of the training set to train our model, denoted _Yoochoose 1/64_. We also considered _Yoochoose 1/4_ (the most recent 1/4 fraction) for testing the models at a different data scale, but that dataset is too large for our computational resources.
Moreover, following [3, 16], we use sequence splitting to preprocess the datasets. Specifically, for an input session \([v_{1},v_{2},...,v_{n}]\), we generate the sequences and the corresponding labels as \(([v_{1}],v_{2})\), \(([v_{1},v_{2}],v_{3})\), ..., \(([v_{1},v_{2},...,v_{n-1}],v_{n})\) for training and testing, as sketched below. The statistics of the datasets after preprocessing are provided in Table I.
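For illustration, the sequence-splitting step can be written in a few lines of Python (our own sketch of the standard preprocessing, not code from [3, 16]):

```python
def split_sequences(sessions):
    """A session [v1, ..., vn] yields every prefix paired with its next item."""
    pairs = []
    for s in sessions:
        for t in range(1, len(s)):
            pairs.append((s[:t], s[t]))
    return pairs

assert split_sequences([[1, 2, 3]]) == [([1], 2), ([1, 2], 3)]
```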
#### V-A2 Evaluation Metrics
Following previous studies [2, 3, 4, 5, 6, 27, 28, 29], we adopt the commonly used HR@20 (Hit Rate)3 and MRR@20 (Mean Reciprocal Rank) as our evaluation metrics.
Footnote 3: Note that [2, 3, 4, 5, 6, 27] used different names for HR@20 (e.g., P@20 and Recall@20), but they used the same formula to obtain this measurement (i.e., the proportion of test cases in which the target item is among the top-20 items).
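For reference, a minimal NumPy sketch of the two metrics is given below; the exact tie-breaking convention for ranks may differ slightly across the cited implementations, so this is an illustration rather than the evaluation code of any particular paper.

```python
import numpy as np

def hr_mrr_at_k(scores, targets, k=20):
    """HR@k and MRR@k for a score matrix of shape (n_cases, n_items)."""
    target_scores = scores[np.arange(len(targets)), targets][:, None]
    ranks = (scores >= target_scores).sum(axis=1)   # rank of the target item
    hit = ranks <= k                                # target in the top-k?
    return hit.mean(), np.where(hit, 1.0 / ranks, 0.0).mean()
```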
#### V-A3 Base Model
Framework _SR-PredAO_ involves a base model (together with our proposed high-capability predictor module and the Merger module). In our experiments, we choose the following three base models, namely LESSR [5], SGNN-HN [3] and DIDN [6], since they are representative in the literature. Roughly speaking, LESSR has a clear encoder-predictor paradigm, which eases illustration. SGNN-HN and DIDN have the best performance on the Yoochoose 1/64 and Diginetica datasets, respectively.
* LESSR [5]: LESSR, proposed by Chen et al. in 2020, addresses the lossy session encoding problem of the classical SR-GNN model [2]. We chose this model as one of the base models because it has a clear encoder-predictor paradigm both in the paper and in the publicly available code.
* SGNN-HN [3]: SGNN-HN, proposed by Pan et al. in 2020, introduces a star graph neural network to capture complex transition relationships among items in an ongoing session and applies a highway network to deal with the overfitting problem in existing GNNs. To the best of our knowledge, SGNN-HN has been the state-of-the-art model on Yoochoose 1/64 from 2020 to 2023: it has the best HR@20 (72.06%), outperforming other models by around 1% according to [https://paperswithcode.com/sota/session-based-recommendations-on-yoochoose1-1](https://paperswithcode.com/sota/session-based-recommendations-on-yoochoose1-1). However, this model does not have a clean encoder-predictor split, which limits the performance enhancement that _SR-PredAO_ can obtain when using it as a base model.
* DIDN [6]: DIDN4, proposed by Zhang et al. in 2022, offers a dynamic intent-aware model, addressing the dynamic change of user behaviors within a session, and an iterative de-noising model, explicitly filtering out noisy clicks within a session. It also further mines collaborative information to enrich the session semantics. To the best of our knowledge, the DIDN model is the state-of-the-art model on Diginetica. It outperforms other models on the Diginetica dataset by at least 0.98% in HR@20 and 4.76% in MRR@20 according to [https://paperswithcode.com/sota/session-based-recommendations-on-diginetica](https://paperswithcode.com/sota/session-based-recommendations-on-diginetica) (note: the DIDN model's result could not be found at this link, but its HR@20 and MRR@20 are 56.22% and 20.02%, respectively).

TABLE I: Statistics of datasets

| **Statistic** | **Yoochoose 1/64** | **Diginetica** |
| --- | --- | --- |
| # of Clicks | 565,332 | 982,961 |
| # of Training Sessions | 375,625 | 647,523 |
| # of Test Sessions | 55,896 | 71,947 |
| # of Items | 17,792 | 43,097 |
| Average length | 6.14 | 5.12 |
We also considered some more recently proposed models [7, 8, 9], but they do not achieve state-of-the-art performance on these two datasets.
In the following, when we describe framework _SR-PredAO_ using the base model \(M\), we write _SR-PredAO(M)_.
### _Experimental Results_
#### V-B1 Performance Comparison
In framework _SR-PredAO_, all hyper-parameters of the base models (e.g., the batch size and the learning rate) are kept at the best experimental configurations reported in the original papers, because we want to isolate the improvement that framework _SR-PredAO_ (i.e., the new predictor module and the merger module) brings to the base model.
Table II shows the experimental results of all models. Note that each reported result in the table is the best result of the corresponding model, possibly under different hyper-parameter configurations.
From Table II, we see that framework _SR-PredAO_ yields a relatively significant improvement in HR@20 for all models and in MRR@20 for almost all models. Specifically, framework _SR-PredAO_, when applied to existing state-of-the-art models, achieves up to 2.9% improvement in HR@20 and 2.3% in MRR@20, which is a considerable margin. Moreover, when an existing model has a clearer encoder-predictor paradigm, the performance enhancement is more substantial; if the model does not have a clear split between the encoder module and the predictor module in its implementation, there is no improvement in MRR@20.
It is worth mentioning that applying framework _SR-PredAO_ to an existing model improves the prediction accuracy automatically, which is a great advantage. Given that in recent papers [3, 4, 5, 6, 7, 8, 9] an improvement of 1.4% is considered a major contribution, framework _SR-PredAO_ represents a significant improvement in the field.
#### V-B2 Ablation Studies
In the following experimental results, all experiments are conducted on the base model SGNN-HN, since SGNN-HN is the most challenging model to tune for framework _SR-PredAO_; experiments on this model therefore best reflect the effectiveness of each feature of _SR-PredAO_. We study two features, namely (1) the random user's behavior alleviator and (2) NDT-pruning, by comparing the performance of SR-PredAO(SGNN-HN) against variants with one of the features removed.
Table III shows that dropping either the random user's behavior alleviator or NDT-pruning from framework _SR-PredAO_ reduces the improvement over the base model to a great extent on the Diginetica dataset, but much less on the _YooChoose_ (YC) 1/64 dataset. This is because random user behavior and the overfitting problem are less pronounced in the YC dataset than in the Diginetica dataset.
#### V-B3 Hyper-parameter Study
In this section, we study how the number of trees, the depth of the trees, and the pruning rate affect the performance of _SR-PredAO_. All the results are shown in Fig. 3.
TABLE II: Experimental results (%) for three base models on two datasets

| **Method** | **Diginetica HR@20** | **Diginetica MRR@20** | **YooChoose 1/64 HR@20** | **YooChoose 1/64 MRR@20** |
| --- | --- | --- | --- | --- |
| LESSR | 51.71 | 18.15 | 70.94 | 31.16 |
| SR-PredAO(LESSR) | **53.10** | **18.38** | **71.73** | **31.70** |
| Improvement (%) | 2.7 | 1.3 | 1.1 | 1.7 |
| SGNN-HN | 55.67 | **19.12** | 72.06 | **32.61** |
| SR-PredAO(SGNN-HN) | **55.91** | 19.06 | **72.62** | 32.47 |
| Improvement (%) | 0.4 | -0.3 | 0.8 | -0.4 |
| DIDN | 56.22 | 20.03 | 68.95 | 31.27 |
| SR-PredAO(DIDN) | **57.86** | **20.49** | **69.50** | **31.44** |
| Improvement (%) | 2.9 | 2.3 | 0.8 | 0.5 |
Fig. 3: The hyper-parameter study results of SR-PredAO(SGNN-HN)
When the number of trees reaches 128, the HR@20 of _SR-PredAO_ is highest. When the number of trees grows beyond 128, HR@20 decreases because too many trees hamper the model's learning. For the pruning rate, as long as the pruning feature is not removed entirely, varying the rate does not affect performance much. For the depth of the trees, if a tree is too deep (i.e., the depth is greater than 5), it may overfit seriously due to excessive capability, and if it is too shallow (i.e., the depth is smaller than 5), it cannot provide enough capability enhancement for prediction.
#### V-B4 Model Size Comparison
In order to perform a fair comparison between the base model (without our framework) and our framework, we conduct experiments in which both have the same model complexity. Specifically, after obtaining SR-PredAO(SGNN-HN), we enlarge the base model (i.e., SGNN-HN) by increasing its embedding dimensionality; this enlarged base model (without our framework), after parameter tuning, serves as the baseline. The experimental results on Diginetica are shown in Table IV. The enlarged base model cannot outperform SR-PredAO(SGNN-HN), since simply increasing the base model's capacity in this way is not an effective way to add predictive capability.
#### V-B5 Experimental Summary
In summary, framework _SR-PredAO_, when applied to existing state-of-the-art models, achieves up to 2.9% improvement in HR@20 and 2.3% in MRR@20, and we observe improvements for almost all base models on all datasets. Considering the consistency of the improvement and the ease of applying our framework, we regard our contribution as a major improvement to the field of session-based recommendation systems.
## VI Conclusion
In this paper, we are the first to identify the important low-capability issue in the predictor module of most (if not all) existing models, which lowers their prediction accuracy. To address this important issue, we propose a framework called _SR-PredictAO_ that can be applied to any existing model following the common encoder-predictor paradigm. Extensive experimental results on two public benchmark datasets show that when framework _SR-PredictAO_ is applied to 3 existing state-of-the-art models, their performance is consistently improved by up to 2.9% in HR@20 and up to 2.3% in MRR@20. Due to the consistent improvement on all datasets, we regard our contribution as a major improvement to the field of session-based recommendation systems.
|
2309.05770 | $K$-Orbit closures and Hessenberg varieties | This article explores the relationship between Hessenberg varieties
associated with semisimple operators with two eigenvalues and orbit closures of
a spherical subgroup of the general linear group. We establish the specific
conditions under which these semisimple Hessenberg varieties are irreducible.
We determine the dimension of each irreducible Hessenberg variety under
consideration and show that the number of such varieties is a Catalan number.
We then apply a theorem of Brion to compute a polynomial representative for the
cohomology class of each such variety. Additionally, we calculate the
intersections of a standard (Schubert) hyperplane section of the flag variety
with each of our Hessenberg varieties and prove this intersection possesses a
cohomological multiplicity-free property. | Mahir Bilen Can, Martha Precup, John Shareshian, Özlem Uğurlu | 2023-09-11T18:55:12Z | http://arxiv.org/abs/2309.05770v1 | # \(K\)-orbit closures and Hessenberg varieties
###### Abstract.
This article explores the relationship between Hessenberg varieties associated with semisimple operators with two eigenvalues and orbit closures of a spherical subgroup of the general linear group. We establish the specific conditions under which these semisimple Hessenberg varieties are irreducible. We determine the dimension of each irreducible Hessenberg variety under consideration and show that the number of such varieties is a Catalan number. We then apply a theorem of Brion to compute a polynomial representative for the cohomology class of each such variety. Additionally, we calculate the intersections of a standard (Schubert) hyperplane section of the flag variety with each of our Hessenberg varieties and prove this intersection possesses a cohomological multiplicity-free property.
**Keywords.** Hessenberg varieties, symmetric varieties, involutions, Catalan numbers, Monk's formula
**MSC.** 14M15, 14M27, 05A05
###### Contents
* 1 Introduction
* 2 Notation and preliminaries
* 2.1 Hessenberg varieties
* 2.2 \(K\)-orbits on the flag variety
* 2.3 The weak order
* 3 Irreducible Hessenberg varieties \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\)
* 4 W-sets and cohomology classes
## 1. Introduction
Let \(n\) be a positive integer and let \(G=GL_{n}(\mathbb{C})\). Given positive integers \(p,q\) such that \(p+q=n\), let \(K\) be a Levi subgroup of the stabilizer in \(G\) of a \(p\)-dimensional subspace of \(\mathbb{C}^{n}\). So, \(K\cong GL_{p}(\mathbb{C})\times GL_{q}(\mathbb{C})\). Then \(K\) is spherical. We examine coincidences between two well-studied classes of subvarieties in the type A flag variety: Hessenberg varieties and \(K\)-orbit closures. We identify a collection of Hessenberg varieties, each equal to the closure of a single \(K\)-orbit. Leveraging the theory of \(K\)-orbits we answer, for this particular collection, questions that are difficult to settle for arbitrary Hessenberg varieties.
Let \(B\) be the Borel subgroup of \(G\) consisting of upper triangular matrices. The flag variety \(\mathcal{B}=G/B\) has been studied extensively. More recently, Hessenberg varieties, which were first studied due to their connection with numerical linear algebra, have been of interest to geometers, representation theorists, and combinatorialists.
We identify \(\mathcal{B}\) with the collection of full flags
\[V_{\bullet}=0<V_{1}<\ldots<V_{n-1}<V_{n}=\mathbb{C}^{n}\]
with \(\dim V_{i}=i\) for all \(i\in[n]:=\{1,\ldots,n\}\). A _Hessenberg vector_ is a weakly increasing sequence \(\mathbf{m}=(m_{1},\ldots,m_{n})\) of integers satisfying \(i\leq m_{i}\leq n\) for each \(i\in[n].\) Given such \(\mathbf{m}\) and any \(n\times n\) matrix \(\mathsf{x}\), the associated _Hessenberg variety_ is
\[\operatorname{Hess}(\mathsf{x},\mathbf{m}):=\{V_{\bullet}\in\mathcal{B}\mid \mathsf{x}V_{i}\leq V_{m_{i}}\text{ for all }i\in[n]\}.\]
While there have been more recent developments, the survey [1] by Abe and Horiguchi gives a nice summary of the work on Hessenberg varieties and connections to various fields.
Despite their elementary definition, some basic questions about the structure of Hessenberg varieties remain wide open. The ones of interest herein follow.
1. What is the dimension of \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\)?
2. For which matrices \(\mathsf{x}\) and Hessenberg vectors \(\mathbf{m}\) is \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) irreducible?
3. If \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) is irreducible, can we describe the cohomology class in \(H^{*}(\mathcal{B};\mathbb{Z})\) that it represents?
Let us give an example illustrating that Questions (A) and (B) are subtle, in that their answers can depend on the choice of matrix \(\mathsf{x}\) when \(\mathbf{m}\) is fixed.
**Example 1.1**.: _Consider the Hessenberg vector \(\mathbf{m}=(2,3,4,\ldots,n,n)\). If \(\mathsf{s}\) is a regular semisimple matrix, then by work of De Mari, Procesi, and Shayman in [10], \(\operatorname{Hess}(\mathsf{s},\mathbf{m})\) is isomorphic to the toric variety associated to the fan of type \(A_{n-1}\) Weyl chambers. In particular, \(\operatorname{Hess}(\mathsf{s},\mathbf{m})\) is irreducible of dimension \(n-1\)._
_For \(i\in[n-1]\), let \(w^{i}\in\mathbf{S}_{n}\) be the unique permutation satisfying_
* \(w^{i}(1)=i+1\)_,_
* \(w^{i}(n)=i\)_, and_
* \(w^{i}(j)>w^{i}(j+1)\) _for_ \(2\leq j\leq n-2\)_._
_We write \(E_{1n}\) for the \(n\times n\) elementary matrix whose only nonzero entry is in its first row and last column. As shown by Tymoczko in [10], \(\operatorname{Hess}(E_{1n},\mathbf{m})\) is the union of the Schubert varieties \(X_{w^{i}}\), from which it follows that \(\operatorname{Hess}(E_{1n},\mathbf{m})\) has \(n-1\) irreducible components, each of dimension \(1+\binom{n-1}{2}\)._
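For readers who want to experiment with Example 1.1, the permutations \(w^{i}\) and the length count \(\ell(w^{i})=1+\binom{n-1}{2}\) are easily verified by brute force. The following Python sketch (our own illustration; the helper names are ours) does so:

```python
from math import comb

def w_i(n, i):
    """The permutation w^i of Example 1.1, in one-line notation."""
    middle = sorted(set(range(1, n + 1)) - {i, i + 1}, reverse=True)
    return [i + 1] + middle + [i]

def length(w):
    """Number of inversions, i.e. the Coxeter length."""
    return sum(1 for a in range(len(w)) for b in range(a + 1, len(w))
               if w[a] > w[b])

n = 5
assert all(length(w_i(n, i)) == 1 + comb(n - 1, 2) for i in range(1, n))
```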
We remark that for a fixed Hessenberg vector \(\mathbf{m}\) there can be irreducible varieties \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) and \(\operatorname{Hess}(\mathsf{y},\mathbf{m})\) of differing dimensions. For example, if \(m_{1}<n\) and \(m_{j}=n\) for \(j>1\), then \(\operatorname{Hess}(\mathsf{x},\mathbf{m})=\mathcal{B}\) if and only if \(\mathsf{x}\) is scalar, while \(\operatorname{Hess}(\mathsf{y},\mathbf{m})\) is irreducible of dimension \(\dim(\mathcal{B})-(n-m_{1})\) whenever \(\mathsf{y}\) is regular.
The results on \(\operatorname{Hess}(E_{1n},(2,3,\ldots,n,n))\) discussed in Example 1.1 are worth further consideration. The key point is that for each \(g\in B\), \(E_{1n}g=\lambda gE_{1n}\) for some \(\lambda\in\mathbb{C}\). That is, the Borel subgroup \(B\) stabilizes the subspace spanned by \(E_{1n}\) under the adjoint action.
It follows directly that for every Hessenberg vector \(\mathbf{m}\), \(\operatorname{Hess}(E_{1n},\mathbf{m})\) is \(B\)-invariant and therefore a union of \(B\)-orbits. Thus every irreducible component of \(\operatorname{Hess}(E_{1n},\mathbf{m})\) is a Schubert variety \(X_{w}\) for some \(w\in\mathbf{S}_{n}\). One can determine which \(X_{w}\) appears as such components for any given \(\mathbf{m}\); see [13, 1].
We use the approach described in the previous paragraph to study \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) when \(\mathsf{x}\) is semisimple with exactly two distinct eigenvalues. Given such \(\mathsf{x}\) with eigenvalues \(\lambda,\mu\) of respective multiplicities \(p,q\) (hence \(p+q=n\)), let \(Y,Z\) be the associated eigenspaces. Thus \(\mathbb{C}^{n}=Y\oplus Z\). The simultaneous stabilizer \(K\) of \(Y\) and \(Z\) in \(G\) is isomorphic to \(GL_{p}(\mathbb{C})\times GL_{q}(\mathbb{C})\), and it is straightforward to see that \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) is a union of \(K\)-orbits. It is well-known (see for example [12]) that \(K\) is _spherical_, that is, \(K\) has finitely many orbits on \(\mathcal{B}\). We will use the classification and theory of \(K\)-orbits on \(\mathcal{B}\) due to Yamamoto [14] and many others [15, 16] to address Questions (A), (B), (C) above for Hessenberg varieties defined using such \(\mathsf{x}\).
Assume as above that the semisimple matrix \(\mathsf{x}\) has exactly two distinct eigenvalues \(\lambda,\mu\) with respective multiplicities \(p,q\) and fix a Hessenberg vector \(\mathbf{m}\). We observe that the isomorphism type of \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) depends only on \(p\) and \(q\). Indeed, since \(\operatorname{Hess}(g^{-1}\mathsf{x}g,\mathbf{m})=g\operatorname{Hess}(\mathsf{ x},\mathbf{m})\) for every \(g\in G\), we may assume that \(\mathsf{x}=(x_{ij})\) is diagonal with \(x_{ii}=\lambda\) for \(i\in[p]\) and \(x_{ii}=\mu\) for \(p<i\leq n\). Moreover, it is straightforward to show that for scalars \(\alpha\neq 0\) and \(\beta\),
\[\operatorname{Hess}(\alpha\mathsf{x}+\beta I,\mathbf{m})=\operatorname{Hess}( \mathsf{x},\mathbf{m})\]
hence \(\lambda\) and \(\mu\) are irrelevant and our observation follows. So, there is no harm in writing \(\mathsf{x}_{p,q}\) to denote any such semisimple matrix \(\mathsf{x}\).
We summarize now our results on \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\). Our first result addresses Question (B).
**Theorem 1** (See Corollaries 3.7 and 3.9 and Theorem 3.10 below).: _The following conditions on the Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) are equivalent._
1. \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) _is irreducible._
2. _There is a Hessenberg vector_ \((\ell_{1},\dots,\ell_{q})\) _of length_ \(q\) _such that_ \(m_{i}=\ell_{i}+p\) _for_ \(i\leq q\) _and_ \(m_{i}=n\) _for_ \(q<i\leq n\)_._
3. \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) _is the closure of one of_ \(\frac{1}{q+1}\binom{2q}{q}\) _orbits of_ \(K\) _on_ \(\mathcal{B}\)_. This collection of orbits is naturally parameterized by_ \(231\)_-free permutations in_ \(\mathbf{S}_{q}\)_._
There is a formula for the dimensions of \(K\)-orbits in a flag variety (see [14, Section 2.3]). This formula allows us to compute and write a nice formula for the dimension of any irreducible \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\), thereby addressing Question (A) for this collection.
**Corollary 2** (See Corollary 3.13 below).: _If \(\mathbf{m}=(m_{1},\dots,m_{n})\) is a Hessenberg vector such that \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is irreducible, then_
\[\dim\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})=\sum_{i=1}^{n}(m_{i}-i).\]
Previous work on the Question (C) addresses the case where \(\mathsf{x}\) is regular. It is known that the class of any regular Hessenberg variety depends only on the underlying Hessenberg vector [1]. Polynomial representatives for the classes of regular Hessenberg varieties were first identified as specializations of certain double Schubert polynomials [1, 10]. Even more recently, Nadeau and Tewari [10] gave a combinatorial formula expressing each as a sum of Schubert polynomials. Here we consider certain cases in which \(\mathsf{x}\) is not regular.
Let us state a more specific version of Question (C). The cohomology classes associated with the Schubert varieties \(X_{w}\) (\(w\in\mathbf{S}_{n}\)) form a basis for \(H^{*}(\mathcal{B};\mathbb{Z})\). Let \(I\) be the ideal in \(R:=\mathbb{Z}[x_{1},\ldots,x_{n}]\) generated by constant-free symmetric polynomials. There is an isomorphism \(\phi\) from \(H^{*}(\mathcal{B};\mathbb{Z})\) to \(R/I\) mapping the class associated to \(X_{w}\) to the Schubert polynomial \(\mathfrak{S}_{w}\). (This presentation of \(H^{*}(\mathcal{B};\mathbb{Z})\) is due to Borel; see [1] or [11].) Given any irreducible subvariety \(\mathcal{V}\) of \(\mathcal{B}\), one can ask how to expand the image \(\mathfrak{S}(\mathcal{V})\) under \(\phi\) of the class associated to \(\mathcal{V}\) as a linear combination of Schubert polynomials. We obtain the following result for the collection of irreducible Hessenberg varieties introduced in the statement of Theorem 1.
**Theorem 3** (See Corollary 4.15).: _Let \(X:=\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) be an irreducible Hessenberg variety indexed by a \(231\)-free permutation \(w\in\mathbf{S}_{q}\). A polynomial representative of the class \(\mathfrak{S}(X)\) of \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) in the integral cohomology ring of the flag variety is given by the following sum of Schubert polynomials_
\[\mathfrak{S}(X)=\sum_{(u,v)}\mathfrak{S}_{uwv^{-1}w_{0}},\]
_where the sum is taken over all pairs \((u,v)\in\mathbf{S}_{q}\times\mathbf{S}_{q}\) such that \(wy_{0}=uv\) and \(\ell(wy_{0})=\ell(u)+\ell(v)\)._
A key ingredient in our computations for Theorem 3 is the useful notion of the \(W\)-set associated with a \(K\)-orbit \(\mathcal{O}=KV_{\bullet}\) in the flag variety. Loosely speaking, the \(W\)-set of \(\mathcal{O}\) consists of permutations that are obtained by multiplying the simple reflections that label the edges of certain saturated paths in the weak order on the spherical variety \(G/K\); see Section 2.3 for more. The origins of \(W\)-sets go back to the influential work of Richardson and Springer in [12], where the authors initiated a systematic study of the (weak) Bruhat orders on the Borel orbit closures in symmetric varieties. This development is generalized by Knop to all spherical homogeneous varieties in [14]. Brion's work [1] has brought to light a multitude of fascinating applications of \(W\)-sets to the geometry of \(K\)-orbits. In particular, Brion used \(W\)-sets to describe certain deformations of \(K\)-orbits in flag varieties to the unions of Schubert varieties; the results of Theorem 3 rest heavily on this work. More recently, combinatorialists have used \(W\)-sets to develop Schubert calculus for (classical) symmetric spaces. There is currently a fast-growing literature on this subject [15, 16, 17, 18].
It follows directly from Theorem 3 that if \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is irreducible, then the polynomial \(\mathfrak{S}(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}))\) is a \(0-1\) sum of Schubert polynomials. In other words, when we express \(\mathfrak{S}(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}))\) as a linear combination of Schubert polynomials, all coefficients lie in \(\{0,1\}\). Whenever a polynomial is a \(0-1\) sum of Schubert polynomials, we say that
the sum is _multiplicity-free_. Something stronger is true. For \(i\in[n-1]\), we write \(s_{i}\) for the transposition \((i,i+1)\in\mathbf{S}_{n}\).
**Theorem 4** (See Theorem 4.19 below).: _If \(i\in[n-1]\) and \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is irreducible, then the product \(\mathfrak{S}_{s_{i}}\mathfrak{S}(\operatorname{Hess}(\mathsf{x}_{p,q}, \mathbf{m}))\) is a multiplicity-free sum of Schubert polynomials._
Theorem 4, which is a consequence of Theorem 3 and Monk's formula, gives insight into how \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) intersects certain Schubert varieties of codimension one in \(\mathcal{B}\).
Geometrically speaking, at the cycle level, the classical Monk's formula ([13, Theorem 3]) says that the intersection of a Schubert variety \(X\subseteq G/B\) with a Schubert divisor \(Z\subset G/B\) is a multiplicity-free sum of Schubert divisors of \(X\). Although \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) has a flat degeneration to a union \(Y\) of (many) Schubert varieties, it is not a \(B\)-stable subvariety of \(G/B\). In light of this fact, we find it rather surprising that the cohomology class of the intersection of \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) with \(Z\) is a \(0-1\) sum of the classes of Schubert divisors in \(Y\). It is unknown to us whether this multiplicity-free phenomenon persists for every intersection of \(Z\) with a \(K\)-orbit closure or with an irreducible semisimple Hessenberg variety in the flag variety.
It is natural to ask whether the methods used here and illustrated in Example 1.1 are more widely applicable. The key idea is that if \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) is invariant under the action of a spherical group \(H\), then known combinatorial descriptions of \(H\)-orbits allow for a detailed analysis of \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) that is difficult to carry out for arbitrary Hessenberg varieties. If a spherical subgroup \(H\) of \(G\) centralizes \(\mathsf{x}\) (up to multiplication by a scalar) then \(H\) indeed acts on \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) for all \(\mathbf{m}\). However, this situation is rare. If \(\mathsf{x}\) is semisimple, then \(C_{G}(\mathsf{x})\) is reductive. The reductive spherical subgroups of \(G\) are known (see [11], [12], [13]). These are the centralizers of the matrices \(\mathsf{x}_{p,q}\) studied herein along with the classical groups that act irreducibly on \(\mathbb{C}^{n}\). In the second case, the centralizer of every such classical group consists of the scalar matrices, and if \(\mathsf{x}\) is scalar then \(\operatorname{Hess}(\mathsf{x},\mathbf{m})=\mathcal{B}\) for all \(\mathbf{m}\). There are nilpotent matrices other than conjugates of \(E_{1n}\) with spherical centralizers in \(G\), but these are also rare. The automorphism group of \(\operatorname{Hess}(\mathsf{x},\mathbf{m})\) can be much larger than \(C_{G}(\mathsf{x})\), but it seems challenging to give a comprehensive and useful analysis of this phenomenon. On the other hand, in [10], De Mari, Procesi, and Shayman define Hessenberg varieties for arbitrary reductive groups. In Lie types other than \(A\) there are additional examples of reductive spherical subgroups centralizing non-scalar elements. We will examine these examples in future work.
The content of the rest of the paper is as follows. After reviewing the requisite results in Section 2, we prove Theorem 1 and Corollary 2 in Section 3. The proofs of Theorems 3 and 4 are the subject of Section 4.
**Acknowledgements.** The first author is partially supported by a grant from the Louisiana Board of Regents (contract no. LEQSF(2021-22)-ENH-DE-26). The second author is partially supported by NSF Grant DMS 1954001.
## 2. Notation and preliminaries
We review here various results and definitions that we will use below. We denote by \(\mathbb{Z}_{+}\) the set of positive integers. Let \(n\in\mathbb{Z}_{+}\). Let \(G=GL_{n}(\mathbb{C})\) and let \(B\leq G\) be the Borel subgroup consisting of upper triangular matrices. The flag variety \(G/B\) will be denoted by \(\mathcal{B}\). We identify each coset \(gB\in\mathcal{B}\) with the flag
\[V_{\bullet}=0<V_{1}<\ldots<V_{n-1}<V_{n}=\mathbb{C}^{n}\]
in which each \(V_{i}\) is spanned by the first \(i\) columns of \(g\).
Denote the symmetric group on \([n]\) by \(\mathbf{S}_{n}\). Let \(p\) and \(q\) be positive integers such that \(n=p+q\). We frequently consider the smaller symmetric group \(\mathbf{S}_{q}\) below, which we identify with the subgroup of \(\mathbf{S}_{n}\) stabilizing \([n]\setminus[q]\) pointwise. For \(i\in[n-1]\), we write \(s_{i}\) for the simple reflection \((i,i+1)\in\mathbf{S}_{n}\). A _reduced word_ for \(w\in\mathbf{S}_{n}\) is any shortest possible representation
\[w=s_{i_{1}}s_{i_{2}}\ldots s_{i_{\ell}}\]
of \(w\) as a product of simple reflections. We call the set of simple transpositions that appear in any reduced expression of \(w\) the _support of \(w\)_ and denote it by \(\mathrm{Supp}(w)\). For example, \(\mathrm{Supp}(2143)=\mathrm{Supp}(s_{1}s_{3})=\{s_{1},s_{3}\}\).
The _length_\(\ell(w)\) of \(w\in\mathbf{S}_{n}\) is the number of simple reflections appearing in any reduced word for \(w\). It is well known that
\[\ell(w)=|\{(i,j)\mid 1\leq i<j\leq n,\,w(i)>w(j)\}|\]
for all \(w\in\mathbf{S}_{n}\). The longest elements of both \(\mathbf{S}_{n}\) and \(\mathbf{S}_{q}\) play a role below; to avoid confusion, we write \(w_{0}\) for the longest element of \(\mathbf{S}_{n}\) and \(y_{0}\) for the longest element of \(\mathbf{S}_{q}\).
We say that \(w\in\mathbf{S}_{n}\)_avoids_\(312\) (or is \(312\)_-free_) if there do not exist \(1\leq i<j<k\leq n\) such that \(w(j)<w(k)<w(i)\) and define avoidance of \(231\) similarly. It is straightforward to show that \(w\) avoids \(231\) if and only if \(w^{-1}\) avoids \(312\).
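Pattern avoidance can be checked by brute force; the following short Python sketch (our own illustration) tests length-3 patterns and verifies the claim that \(w\) avoids \(231\) if and only if \(w^{-1}\) avoids \(312\) on a small example.

```python
from itertools import combinations

def avoids(w, pattern):
    """True iff the permutation w (one-line notation, 1-indexed values)
    avoids the given length-3 pattern, e.g. (2, 3, 1) or (3, 1, 2)."""
    for i, j, k in combinations(range(len(w)), 3):
        triple = (w[i], w[j], w[k])
        order = tuple(sorted(triple).index(v) + 1 for v in triple)
        if order == pattern:
            return False
    return True

w = (2, 1, 4, 3)
w_inv = tuple(w.index(v) + 1 for v in range(1, len(w) + 1))
assert avoids(w, (2, 3, 1)) == avoids(w_inv, (3, 1, 2))
```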
### Hessenberg varieties
A _Hessenberg vector_ is a weakly increasing sequence
\[\mathbf{m}=(m_{1},\ldots,m_{n})\]
of integers satisfying \(i\leq m_{i}\leq n\) for each \(i\in[n]\). Given a matrix \(\mathsf{x}\in\mathfrak{g}:=\mathfrak{gl}_{n}(\mathbb{C})\) and Hessenberg vector \(\mathbf{m}\) we define the corresponding _Hessenberg variety_ by
\[\mathrm{Hess}(\mathsf{x},\mathbf{m}):=\{V_{\bullet}\in\mathcal{B}\mid \mathsf{x}V_{i}\leq V_{m_{i}}\text{ for all }i\in[n]\}.\]
Given a Hessenberg vector \(\mathbf{m}\) we define \(\pi_{\mathbf{m}}\) to be the lattice path from the upper left corner to the lower right corner of an \(n\times n\) grid in which the vertical step in row \(i\) occurs in column \(m_{i}\). Since \(m_{i}\geq i\), \(\pi_{\mathbf{m}}\) is a _Dyck path_, that is, the lattice path \(\pi_{\mathbf{m}}\) never crosses the diagonal connecting the two corners. We write \(\mathrm{area}(\pi_{\mathbf{m}})\) for the number of squares in the grid that lie below \(\pi_{\mathbf{m}}\) and strictly above the diagonal and observe that
\[\mathrm{area}(\pi_{\mathbf{m}})=\sum_{i=1}^{n}(m_{i}-i).\]
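As a quick illustration (our own sketch), the area statistic is immediate to compute from a Hessenberg vector; for the vector \(\mathbf{m}=(2,3,\ldots,n,n)\) of Example 1.1 it returns \(n-1\), matching the dimension of the toric variety discussed there.

```python
def area(m):
    """Area of the Dyck path of a Hessenberg vector m = (m_1, ..., m_n)."""
    n = len(m)
    assert all(i + 1 <= m[i] <= n for i in range(n))       # i <= m_i <= n
    assert all(m[i] <= m[i + 1] for i in range(n - 1))     # weakly increasing
    return sum(m[i] - (i + 1) for i in range(n))

n = 6
assert area(tuple(range(2, n + 1)) + (n,)) == n - 1
```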
Herein we examine Hessenberg varieties \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) where \(\mathsf{x}_{p,q}\in\mathfrak{g}\) is semisimple with exactly two distinct eigenvalues, one of multiplicity \(p\) and one of multiplicity \(q\) (so \(p+q=n\)). Since \(\operatorname{Hess}(g^{-1}\mathsf{x}g,\mathbf{m})=g\operatorname{Hess}(\mathsf{x},\mathbf{m})\) for all \(g\in G\) and all \(\mathsf{x}\in\mathfrak{g}\), we assume without loss of generality that
\[\mathsf{x}_{p,q}=\text{diag}(\underbrace{\lambda_{1},\ldots,\lambda_{1}}_{p \text{ times}},\underbrace{\lambda_{2},\ldots,\lambda_{2}}_{q\text{ times}}), \tag{2.1}\]
for distinct \(\lambda_{1},\lambda_{2}\in\mathbb{C}\).
The centralizer of \(\mathsf{x}_{p,q}\) in \(G\) is the subgroup \(K\cong GL_{p}(\mathbb{C})\times GL_{q}(\mathbb{C})\) consisting of all \(g=(g_{ij})\in G\) such that \(g_{ij}=0\) if either \(i\leq p<j\) or \(j\leq p<i\). It is straightforward to confirm that if \(V_{\bullet}\in\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) and \(g\in K\), then
\[gV_{\bullet}:=0<gV_{1}<\ldots<gV_{n-1}<\mathbb{C}^{n}\in\operatorname{Hess}( \mathsf{x}_{p,q},\mathbf{m}).\]
Thus \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is a union of \(K\)-orbits on \(\mathcal{B}\).
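Membership in \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) can also be tested numerically from the definition. The following NumPy sketch (ours, for illustration only) checks the containments \(\mathsf{x}V_{i}\leq V_{m_{i}}\) by least-squares projection:

```python
import numpy as np

def in_hess(g, x, m, tol=1e-9):
    """True iff the flag whose i-th space is spanned by the first i columns
    of g lies in Hess(x, m), i.e. x V_i <= V_{m_i} for every i."""
    n = g.shape[0]
    for i in range(1, n + 1):
        Vm = g[:, :m[i - 1]]                   # basis of V_{m_i}
        xVi = x @ g[:, :i]                     # x applied to a basis of V_i
        resid = xVi - Vm @ np.linalg.lstsq(Vm, xVi, rcond=None)[0]
        if np.linalg.norm(resid) > tol:        # nonzero residual: not contained
            return False
    return True

x = np.diag([1.0, 1.0, 2.0])                   # x_{2,1} in the coordinates (2.1)
assert in_hess(np.eye(3), x, (1, 2, 3))        # the coordinate flag
```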
### \(K\)-orbits on the flag variety
The group \(K\) is known to have finitely many orbits on the flag variety \(\mathcal{B}\). These orbits are parameterized by combinatorial objects called clans. Clans originated in work of Matsuki and Oshima [10] to parameterize symmetric subgroup orbits on complex flag manifolds of classical type. Their notation has morphed with developments through subsequent works, notably by Yamamoto [23] and then Wyser [21].
We define the set of clans as follows. Consider the set of all sequences
\[\gamma=c_{1}c_{2}\cdots c_{n}\]
such that
1. each \(c_{i}\) lies in \(\{+,-\}\cup\mathbb{Z}_{+}\),
2. each element of \(\mathbb{Z}_{+}\) appearing in \(\gamma\) appears exactly twice, and
3. if \(+\) and \(-\) appear, respectively, exactly \(s\) times and \(t\) times in \(\gamma\), then \(s-t=p-q\).
We define an equivalence relation on this set by identifying sequences \(\gamma=c_{1}\ldots c_{n}\) and \(\delta=d_{1}\ldots d_{n}\) if
* \(d_{i}=d_{j}\in\mathbb{Z}_{+}\) whenever \(c_{i}=c_{j}\in\mathbb{Z}_{+}\), and
* \(d_{i}=c_{i}\) whenever \(c_{i}\in\{+,-\}\).
A _\((p,q)\)-clan_ (or _clan_ if \(p,q\) are fixed) is an equivalence class of this relation. We identify a clan with its unique representative \(\gamma\) satisfying
* if \(j>1\in\mathbb{Z}_{+}\) appears in \(\gamma\) then \(j-1\) appears in \(\gamma\) and the first occurrence of \(j-1\) is to the left of the first occurrence of \(j\),
and write \(\operatorname{\mathsf{Clan}}_{p,q}\) for the set of all such representatives. So, for example, \(5++3-+35+\) and \(1++2-+21+\) lie in the same \((6,3)\)-clan and the second of these is our fixed representative for the equivalence class. In general, if \(\gamma\in\operatorname{\mathsf{Clan}}_{p,q}\) then there is some \(\ell\in\mathbb{Z}_{\geq 0}\) such that the integers appearing in \(\gamma\) are exactly those in \([\ell]\), and if \(s\) entries of \(\gamma\) are plus signs and \(t\) entries are minus signs, then \(p=\ell+s\) and \(q=\ell+t\).
A flag \(V(\gamma)_{\bullet}\) in \(\mathcal{B}\) is associated with each clan \(\gamma\) in the next definition.
**Definition 2.1**.: Let \(e_{1},\ldots,e_{n}\) be the standard basis for \(\mathbb{C}^{n}\). Given \((p,q)\)-clan \(\gamma=c_{1}\ldots c_{n}\), define \(v_{1},\ldots,v_{n}\in\mathbb{C}^{n}\) as follows.
* If \(c_{i}\) is the \(k^{th}\) occurrence of \(+\) in \(\gamma\) and exactly \(\ell\) elements of \([q]\) have appeared among \(c_{1},\ldots,c_{i-1}\), set \(v_{i}=e_{k+\ell}\).
* If \(c_{i}\) is the \(k^{th}\) occurrence of \(-\) in \(\gamma\) and exactly \(\ell\) elements of \([q]\) have appeared twice among \(c_{1},\ldots,c_{i-1}\), set \(v_{i}=e_{p+k+\ell}\).
* Say \(c_{i}=c_{j}=k\in[q]\) for some \(i<j\), with exactly \(r\) occurrences of \(+\) appearing in \(c_{1}\cdots c_{i-1}\), exactly \(s\) occurrences of \(-\) appearing in \(c_{1}\cdots c_{j-1}\), and exactly \(u\) elements of \([q]\) appearing twice in \(c_{1}\cdots c_{j}\). Then set \(v_{i}=e_{k+r}+e_{p+s+u}\) and \(v_{j}=e_{k+r}-e_{p+s+u}\).
For \(i\in[n]\), set
\[V(\gamma)_{i}:=\mathbb{C}\{v_{j}\mid j\leq i\}\]
and define
\[V(\gamma)_{\bullet}:=0<V(\gamma)_{1}<\ldots<V(\gamma)_{n-1}<\mathbb{C}^{n} \in\mathcal{B}.\]
We observe that, for arbitrary \(\gamma\), each vector \(v_{i}\) used to construct \(V(\gamma)_{\bullet}\) is either a standard basis vector or of the form \(e_{r}\pm e_{s}\) with \(r\in[p]\) and \(p<s\leq n\).
**Example 2.2**.: _Say \(p=5\), \(q=3\), and \(\gamma=+1+-2+21\). Then \(v_{1}=e_{1}\), \(v_{2}=e_{2}+e_{8}\), \(v_{3}=e_{3}\), \(v_{4}=e_{6}\), \(v_{5}=e_{4}+e_{7}\), \(v_{6}=e_{5}\), \(v_{7}=e_{4}-e_{7}\), and \(v_{8}=e_{2}-e_{8}\)._
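Definition 2.1 is easy to implement, which gives a convenient sanity check. The following Python sketch (our own; the function name is ours) constructs the vectors \(v_{1},\ldots,v_{n}\) and reproduces Example 2.2:

```python
import numpy as np

def clan_flag_vectors(clan, p, q):
    """Rows of the returned matrix are v_1, ..., v_n from Definition 2.1.
    `clan` is a list such as ['+', 1, '+', '-', 2, '+', 2, 1]."""
    n = p + q
    def e(k):
        v = np.zeros(n, dtype=int); v[k - 1] = 1; return v
    first = {}                                 # number -> (position, r)
    V = np.zeros((n, n), dtype=int)
    plus = minus = closed = 0
    for i, c in enumerate(clan):               # i is 0-indexed here
        if c == '+':
            plus += 1
            V[i] = e(plus + len(first))        # l = numbers opened so far
        elif c == '-':
            minus += 1
            V[i] = e(p + minus + closed)       # l = pairs closed so far
        elif c in first:                       # second occurrence of the pair c
            closed += 1
            j, r = first[c]
            V[j] = e(c + r) + e(p + minus + closed)   # e_{k+r} + e_{p+s+u}
            V[i] = e(c + r) - e(p + minus + closed)   # e_{k+r} - e_{p+s+u}
        else:                                  # first occurrence: remember r
            first[c] = (i, plus)
    return V

V = clan_flag_vectors(['+', 1, '+', '-', 2, '+', 2, 1], 5, 3)   # Example 2.2
assert (V[1] == np.array([0, 1, 0, 0, 0, 0, 0, 1])).all()       # v_2 = e_2+e_8
assert (V[6] == np.array([0, 0, 0, 1, 0, 0, -1, 0])).all()      # v_7 = e_4-e_7
```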
**Definition 2.3**.: Given a \((p,q)\)-clan \(\gamma\), we set
\[\mathcal{O}_{\gamma}:=KV(\gamma)_{\bullet},\]
so \(\mathcal{O}_{\gamma}\) is the \(K\)-orbit on \(\mathcal{B}\) containing \(V(\gamma)_{\bullet}\).
**Lemma 2.4** (Matsuki-Oshima).: _Each \(K\)-orbit on \(\mathcal{B}\) contains a unique flag \(V(\gamma)_{\bullet}\), therefore each \(K\)-orbit on \(\mathcal{B}\) is of the form \(\mathcal{O}_{\gamma}\) for some \(\gamma\in\mathbf{Clan}_{p,q}\). Furthermore, \(\mathcal{O}_{\gamma}=\mathcal{O}_{\delta}\) for \(\gamma,\delta\in\mathbf{Clan}_{p,q}\) if and only if \(\gamma=\delta\)._
**Definition 2.5**.: Given \(\gamma,\tau\in\mathbf{Clan}_{p,q}\) we write \(\gamma\leq\tau\) whenever \(\mathcal{O}_{\gamma}\subseteq\overline{\mathcal{O}_{\tau}}\). We call the partial order \(\leq\) the _inclusion order_ on \(\mathbf{Clan}_{p,q}\).
We now present a result of Wyser [20] characterizing the inclusion order. Given a clan \(\gamma=c_{1}c_{2}\cdots c_{n}\), we define
1. \(\gamma(i;+)\) to be the total number of plus signs and pairs of equal natural numbers occurring among \(c_{1}\cdots c_{i}\),
2. \(\gamma(i;-)\) to be the total number of minus signs and pairs of equal natural numbers occurring among \(c_{1}\cdots c_{i}\), and
3. \(\gamma(i,j)\) to be the number of pairs of equal numbers \(c_{s}=c_{t}\in\mathbb{Z}_{+}\) with \(s\leq i<j<t\).
**Example 2.6**.: _If \(\gamma=+1+-2+21\) as in Example 2.2 above, then_
\[(\gamma(i;+))_{i=1}^{n}=(1,1,2,2,2,3,4,5),\]
\[(\gamma(i;-))_{i=1}^{n}=(0,0,0,1,1,1,2,3),\]
_and_
\[(\gamma(i,j))_{i,j=1}^{n}=\left(\begin{array}{cccccccc}0&0&0&0&0&0&0&0\\ 0&0&1&1&1&1&1&0\\ 0&0&0&1&1&1&1&0\\ 0&0&0&0&1&1&1&0\\ 0&0&0&0&0&2&1&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\end{array}\right).\]
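The three statistics are likewise straightforward to compute. The Python sketch below (ours) reproduces the values of Example 2.6:

```python
def clan_stats(clan):
    """gamma(i;+), gamma(i;-) as lists and gamma(i,j) as a nested list,
    all 1-indexed as in the text (cross[i-1][j-1] = gamma(i, j))."""
    n = len(clan)
    open_pos, pairs = {}, []                   # pairs: (s, t) with c_s = c_t
    for i, c in enumerate(clan, start=1):
        if isinstance(c, int):
            if c in open_pos: pairs.append((open_pos[c], i))
            else: open_pos[c] = i
    plus = [sum(c == '+' for c in clan[:i]) + sum(t <= i for _, t in pairs)
            for i in range(1, n + 1)]
    minus = [sum(c == '-' for c in clan[:i]) + sum(t <= i for _, t in pairs)
             for i in range(1, n + 1)]
    cross = [[sum(s <= i < j < t for s, t in pairs) for j in range(1, n + 1)]
             for i in range(1, n + 1)]
    return plus, minus, cross

plus, minus, cross = clan_stats(['+', 1, '+', '-', 2, '+', 2, 1])
assert plus == [1, 1, 2, 2, 2, 3, 4, 5] and minus == [0, 0, 0, 1, 1, 1, 2, 3]
assert cross[4][5] == 2                        # gamma(5, 6) = 2, as in the matrix
```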
**Theorem 2.7** (Wyser).: _Let \(\gamma\) and \(\tau\) be \((p,q)\)-clans. Then \(\gamma\leq\tau\) if and only if all three inequalities_
1. \(\gamma(i;+)\geq\tau(i;+)\)_,_
2. \(\gamma(i;-)\geq\tau(i;-)\)_, and_
3. \(\gamma(i,j)\leq\tau(i,j)\)__
_hold for all \(1\leq i<j\leq n\)._
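Theorem 2.7 translates directly into a comparison routine. Using `clan_stats` from the sketch above, one may test the inclusion order as follows (again our own illustration):

```python
def leq(gamma, tau):
    """gamma <= tau in the inclusion order, via Theorem 2.7 (Wyser)."""
    (gp, gm, gc), (tp, tm, tc) = clan_stats(gamma), clan_stats(tau)
    n = len(gamma)
    return (all(gp[i] >= tp[i] and gm[i] >= tm[i] for i in range(n)) and
            all(gc[i][j] <= tc[i][j] for i in range(n) for j in range(n)))

# In Clan_{1,1}, the closed orbits lie below the dense orbit O_{11}:
assert leq(['+', '-'], [1, 1]) and not leq([1, 1], ['+', '-'])
```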
The unique maximum element of \(\mathbf{Clan}_{p,q}\) in the inclusion order is
\[\gamma_{0}:=12\cdots q+\cdots+q\cdots 21.\]
(There are \(p-q\) plus signs appearing in \(\gamma_{0}\).) The \(K\)-orbit \(\mathcal{O}_{\gamma_{0}}\) is open and dense in \(\mathcal{B}\).
**Example 2.8**.: _We have_
\[(\gamma_{0}(i;+))_{i=1}^{n}=(\underbrace{0,\ldots,0}_{q\text{ times}},1,2, \ldots,p),\]
\[(\gamma_{0}(i;-))_{i=1}^{n}=(\underbrace{0,\ldots,0}_{p\text{ times}},1,2, \ldots,q),\]
_and_
\[\gamma_{0}(i,j)=\left\{\begin{array}{ll}i&\text{if }i\in[q],j\in[p],\\ q&\text{if }i,j\in[p],\\ p+q-j&\text{if }i\in\{q+1,\ldots n\},j\in\{p+1,\ldots,n\}.\end{array}\right.\]
_Finally, if \(i\in[q]\) and \(j\in\{p+1,\ldots,n\}\) then_
\[\gamma_{0}(i,j)=\min\{n-j,i\}.\]
The following statement, which we record here for use in the next section, follows directly from the definition of the statistic \(\gamma(i,j)\).
**Lemma 2.9**.: _Let \(\gamma\in\mathbf{Clan}_{p,q}\). For all \(i>1\), \(\gamma(i,j)-\gamma(i-1,j)\in\{0,1\}\) with \(\gamma(i,j)-\gamma(i-1,j)=1\) if and only if there exists \(t>j\) such that \(c_{i}=c_{t}\)._
### The weak order
We now recall a formula of Brion for the cohomology class of a \(K\)-orbit closure \(\overline{\mathcal{O}_{\gamma}}\) from [1]. While there is a version of Brion's result for orbits of arbitrary spherical subgroups, we state here the result for the special case of the spherical subgroup \(K=GL_{p}(\mathbb{C})\times GL_{q}(\mathbb{C})\) in \(GL_{n}(\mathbb{C})\).
First, we require some terminology. Let \(\Delta\) denote the subset of simple roots in the root system of \(\mathfrak{gl}_{n}(\mathbb{C})\) specified by our choice of Borel subgroup \(B\). In particular, we have
\[\Delta=\{\epsilon_{i}-\epsilon_{i+1}\mid i\in[n-1]\},\]
where \(\epsilon_{i}:\mathfrak{gl}_{n}(\mathbb{C})\to\mathbb{C}\) is defined by \(\epsilon_{i}(\mathsf{x})=\mathsf{x}_{i,i}\). For each \(\alpha_{i}:=\epsilon_{i}-\epsilon_{i+1}\in\Delta\), let \(P_{i}\) be the minimal parabolic subgroup defined by \(P_{i}:=B\sqcup Bs_{i}B\). Consider the canonical projection map \(\pi_{i}:G/B\to G/P_{i}\). For each \(\gamma\in\mathsf{Clan}_{p,q}\), the pull-back \(\overline{\pi_{i}^{-1}(\pi_{i}(\mathcal{O}_{\gamma}))}\) contains a unique dense \(K\)-orbit, which we denote by \(s_{i}\cdot\mathcal{O}_{\gamma}\). Notice that there might be more than one simple transposition giving the same \(K\)-orbit. Although this is not essential for the definition of our weak order, it will be important for us to keep track of these different simple transpositions. The _weak order_ on the set of \(K\)-orbits is the transitive closure of the relation defined by
\[\mathcal{O}_{\gamma}\prec\mathcal{O}_{\tau}\Leftrightarrow\tau\neq\gamma \text{ and }\mathcal{O}_{\tau}=s_{i}\cdot\mathcal{O}_{\gamma}\text{ for some }i\in[n-1]. \tag{2.2}\]
We also write \(\gamma\preceq\tau\) to denote the weak order on the set \(\mathsf{Clan}_{p,q}\). It is clear that \(\gamma\leq\tau\) whenever \(\gamma\preceq\tau\). The clan \(\gamma_{0}\) is the unique maximal element of \(\mathsf{Clan}_{p,q}\) with respect to both the weak order and inclusion order.
We form an (oriented) graph on the vertex set \(\mathsf{Clan}_{p,q}\) with edges \(\gamma\to\tau\) whenever (2.2) holds for some \(s_{i}\), \(i\in[n-1]\). In this case, we label the edge as follows:
\[\gamma\xrightarrow{s_{i}}\tau.\]
As we mentioned before, there can be more than one simple transposition \(s_{i}\) with \(i\in[n-1]\) giving the same cover relation in (2.2). Hence, an edge of our directed graph may possess multiple labels. We will use these labels in Section 4.
Given a directed path
\[P:\gamma=\gamma_{1}\xrightarrow{s_{i_{1}}}\gamma_{2}\xrightarrow{s_{i_{2}}} \gamma_{3}\cdots\xrightarrow{s_{i_{\ell}}}\gamma_{\ell+1}=\gamma_{0}\]
from \(\gamma\) to \(\gamma_{0}\) we define \(w(P):=s_{i_{1}}s_{i_{2}}\cdots s_{i_{\ell}}\in\mathsf{S}_{n}\).
**Definition 2.10**.: For each \(\gamma\in\mathsf{Clan}_{p,q}\), the \(W\)-_set_ of the \(K\)-orbit \(\mathcal{O}_{\gamma}\) is
\[W(\gamma):=\{w(P)\mid\text{$P$ a labeled directed path from $\gamma$ to $\gamma_{0}$}\}\subseteq\mathsf{S}_{n}.\]
We can now state Brion's formula [1, Theorem 6].
**Theorem 2.11** (Brion).: _Let \(\gamma\in\mathsf{Clan}_{p,q}\). The \(K\)-orbit closure \(\overline{\mathcal{O}_{\gamma}}\) has rational singularities and admits a flat degeneration to the reduced subscheme_
\[\bigcup_{w\in W(\gamma)}\overline{Bw_{0}wB/B}\subset\mathcal{B}.\]
_In particular, we have_
\[[\overline{\mathcal{O}_{\gamma}}]=\sum_{w\in W(\gamma)}[\overline{Bw_{0}wB/B}] \tag{2.3}\]
_in the integral cohomology ring of \(\mathcal{B}\)._
**Remark 2.12**.: _Let us denote by \(\mathcal{B}(G/K)\) the set of all \(B\)-orbit closures in a spherical homogeneous space \(G/K\), where \(G\) is a complex connected reductive algebraic group, and \(K\) is a spherical subgroup of \(G\). (As usual, \(T\), \(B\), and \(W\) stand for a maximal torus in \(G\), a Borel subgroup containing \(T\) in \(G\), and the Weyl group of \(G\), respectively.) For \(Y\in\mathcal{B}(G/K)\), the \(W\)-set of \(Y\) consists of \(w\in W\) such that the natural quotient morphism \(\pi_{Y,w}:\overline{BwB}\times_{B}Y\to G/K\) is surjective and generically finite. It turns out that, by [1, Lemma 5], this definition is equivalent to a generalization of our Definition 2.10 to the setup of spherical homogeneous spaces._
_Let \(d(Y,w)\) denote the degree of \(\pi_{Y,w}\). It turns out that this number is always a power of 2, [1, Lemma 5 (iii)]. The real geometric usefulness of this integer is explained by Brion in [1, Theorem 6]. In particular, the cohomology class corresponding to \(Y\) in \(H^{*}(G/B,\mathbb{Z})\) is given by_
\[[Y]=\sum_{w\in W(Y)}d(Y,w)[\overline{Bw_{0}wB/B}].\]
_In our special case, where \(K=GL_{p}(\mathbb{C})\times GL_{q}(\mathbb{C})\), the work of Vust [10] implies that each of these degrees is equal to \(1\), implying our identity (2.3). It also implies the vanishing of all higher cohomology spaces for the restrictions of effective line bundles from \(G/B\) to \(Y\)._
We now recall a combinatorial description for the weak order on \(\operatorname{\mathbf{Clan}}_{p,q}\) used in the work of the first author, Joyce, and Wyser [1]. This description is most easily stated in terms of charged matchings. A _matching_ on \([n]\) is a finite graph on the vertex set \([n]\) such that each vertex is either isolated or adjacent to precisely one other vertex. A _charged matching_ is a matching with an assignment of a \(+\) or \(-\) charge to each isolated vertex.
The set of \((p,q)\)-clans is in bijection with the set of all _charged matchings_ on \([n]\) having \(p-q\) more \(+\)'s than \(-\)'s. Explicitly, we obtain a matching from a clan \(\gamma=c_{1}c_{2}\cdots c_{n}\) by connecting \(i\) and \(j\) by an arc whenever \(c_{i}=c_{j}\in\mathbb{Z}_{+}\) and recording all signed entries as charges on isolated vertices. We identify the set of \((p,q)\)-clans with charged matchings throughout, but particularly in Section 4 below.
**Example 2.13**.: _The matching associated to the (5,3)-clan \(\gamma=+1+-2+21\) is as follows._
From [1, Section 2.5], we get that the weak order on clans is the transitive closure of the covering relations
\[\gamma\xrightarrow{s_{i}}\gamma^{\prime}\]
where we obtain \(\gamma^{\prime}\) from \(\gamma\) according to one of the following moves on the corresponding charged matchings, each of which is illustrated in Figure 2.1 below.
* Types IA1 and IA2: Switch the endpoint of a strand with an adjacent sign so as to lengthen the strand.
* Type IB: Create a crossing from two disjoint strands at consecutive vertices.
* Types IC1 and IC2: Create a nested pair of strands by uncrossing the ends of two crossing strands at consecutive vertices.
* Type II: Replace a pair of consecutive, opposite charges by a strand of length 1.
An astute reader will note that [1] actually studies the _opposite weak order_ on \(\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{Clan}}}}}_ {p,q}\) so our Figure 2.1 reverses the covering relations as presented in Figure 2.5 of that reference.
## 3. Irreducible Hessenberg varieties \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\)
In this section, we classify all irreducible Hessenberg varieties of the form \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) and prove Theorem 1 from the Introduction. To begin, we identify the \(K\)-orbits that are contained in \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\).
**Proposition 3.1**.: _The \(K\)-orbit \(\mathcal{O}_{\gamma}\) associated to the \((p,q)\)-clan \(\gamma=c_{1}c_{2}\cdots c_{n}\) lies in \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) if and only if \(m_{i}\geq j\) whenever \(c_{i}=c_{j}\in\mathbb{Z}_{+}\) with \(i<j\)._
Proof.: It suffices to determine which clans \(\gamma\) satisfy \(V(\gamma)_{\bullet}\in\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\), where \(V(\gamma)_{\bullet}\) is the flag representative of \(\mathcal{O}_{\gamma}\) specified in Definition 2.1 above. We observe first that each \(e_{i}\) is an eigenvector for \(\mathsf{x}_{p,q}\) and that if \(v_{i}\in\{e_{r}+e_{s},e_{r}-e_{s}\}\) with \(r\in[p]\) and \(p<s\leq n\), then \(\mathbb{C}\{v_{i},\mathsf{x}_{p,q}v_{i}\}=\mathbb{C}\{e_{r},e_{s}\}\). Thus \(V(\gamma)_{i}+\mathsf{x}_{p,q}V(\gamma)_{i}\) is spanned by those standard basis vectors \(e_{k}\) such that one of
* there is some \(j\in[i]\) with \(v_{j}=e_{k}\), or
* there is some \(a\in[i]\) with \(v_{a}=e_{r}+e_{s}\) and \(k\in\{r,s\}\)
holds. On the other hand, the standard basis vector \(e_{k}\) is an element of \(V(\gamma)_{m_{i}}\) if and only if one of
* there is some \(j\in[m_{i}]\) with \(v_{j}=e_{k}\), or
* there is some \(b\in[m_{i}]\) with \(v_{b}=e_{r}-e_{s}\) and \(k\in\{r,s\}\)
holds. Indeed, if there is no \(j\in[m_{i}]\) with \(v_{j}=e_{k}\), then \(e_{k}\in V(\gamma)_{m_{i}}\) if and only if there are \(a,b\in[m_{i}]\) with \(v_{a}=e_{r}+e_{s}\), \(v_{b}=e_{r}-e_{s}\) and \(k\in\{r,s\}\). In this case, \(a<b\) and \(c_{a}=c_{b}\) by definition of the flag \(V(\gamma)_{\bullet}\). The proposition follows.
As any \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is a union of \(K\)-orbits, the next proposition follows immediately from the definitions.
**Proposition 3.2**.: _The Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is irreducible if and only if, among the clans corresponding to \(K\)-orbits contained in \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\), there is a unique maximal one with respect to the inclusion order._
Let us specify two particular clans \(\sigma\) and \(\tau\) in \(\mathbf{Clan}_{p,q}\), namely

\[\sigma:=\underbrace{++\cdots+}_{\text{$p$ times}}\underbrace{-\cdots-}_{\text{$q$ times}}\]
and
\[\tau:=\underbrace{-\cdots-}_{\text{$q$ times}}\underbrace{++\cdots+}_{\text{$p$ times}}.\]
Observe that \(\sigma(i;-)=0\) for \(i\leq p\) and \(\tau(i;+)=0\) for \(i\leq q\). Under our assumption that \(p\geq q\), the next claim follows.
Figure 2.1. Cover relations of the weak order on \(\mathbf{Clan}_{p,q}\).

**Lemma 3.3**.: _If \(\gamma=c_{1}c_{2}\cdots c_{n}\in\mathbf{Clan}_{p,q}\) is such that \(\sigma\leq\gamma\) and \(\tau\leq\gamma\), then all of_
1. \(c_{i}=i\) _for each_ \(i\in[q]\)_,_
2. \(c_{i}=+\) _for_ \(q<i\leq p\)_, and_
3. \(\{c_{i}\mid p<i\leq p+q\}=[q]\)__
_hold. In particular, if \(\overline{\mathcal{O}_{\gamma}}=\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{ m})\) for some Hessenberg vector \(\mathbf{m}\), then \(\gamma\) satisfies all three conditions._
Proof.: As each of \(\mathcal{O}_{\sigma}\) and \(\mathcal{O}_{\tau}\) lies in \(\overline{\mathcal{O}_{\gamma}}\), we have
* \(\gamma(i;-)\leq\sigma(i;-)=0\) for \(i\leq p\) and
* \(\gamma(i;+)\leq\tau(i;+)=0\) for \(i\leq q\)
by Theorem 2.7. These conditions imply that \(\gamma\) cannot contain any signs or pairs of positive integers within the first \(q\) entries and cannot contain any minus signs or pairs of positive integers within the first \(p\) entries. Since \(\gamma\) contains at most \(q\) natural number pairs, conditions (a) and (b) now follow. Condition (c) follows from (a) and (b) and the fact that \(\gamma\) is a clan. The last statement of the lemma follows immediately, as both \(\mathcal{O}_{\sigma}\) and \(\mathcal{O}_{\tau}\) lie in \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) for every Hessenberg vector \(\mathbf{m}\) by Proposition 3.1.
We can rewrite condition (c) from Lemma 3.3 as
1. _there is some_ \(w\in\mathbf{S}_{q}\) _such that_ \(c_{p+i}=w(i)\) _for each_ \(i\in[q]\)_._
Given a clan \(\gamma\) satisfying (a),(b), and (c'), we write \(\gamma_{w}\) for \(\gamma\). Thus, for each \(w\in\mathbf{S}_{q}\) we obtain a unique clan \(\gamma_{w}=c_{1}^{w}c_{2}^{w}\cdots c_{n}^{w}\) defined by
* \(c_{i}^{w}=+\) for all \(q<i\leq p\), and
* \(c_{i}^{w}=c_{p+w^{-1}(i)}^{w}=i\) for all \(i\in[q]\).
Note that \(\gamma_{0}=\gamma_{y_{0}}\) where \(y_{0}\) is the longest permutation in \(\mathbf{S}_{q}\). In fact, the collection of all such \((p,q)\)-clans is precisely the inclusion-interval \([\gamma_{e},\gamma_{0}]\) in \(\mathbf{Clan}_{p,q}\).
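The correspondence \(w\mapsto\gamma_{w}\) is simple to implement. The following Python sketch is purely illustrative: the encoding of a clan as a list of natural-number labels and '+' signs (with 1-based positions) and all helper names are ours.

```python
def clan_gamma(w, p):
    """The (p,q)-clan gamma_w attached to w in S_q (one-line notation):
    the labels 1..q, then p-q plus signs, then w(1), ..., w(q), so that
    c_{p + w^{-1}(i)} = c_i = i for each i in [q]."""
    q = len(w)
    return list(range(1, q + 1)) + ['+'] * (p - q) + list(w)

# The two clans of Example 3.6 below (p = 5, q = 3):
print(clan_gamma((2, 1, 3), p=5))  # [1, 2, 3, '+', '+', 2, 1, 3], i.e. 123++213
print(clan_gamma((1, 3, 2), p=5))  # [1, 2, 3, '+', '+', 1, 3, 2], i.e. 123++132
```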
**Lemma 3.4**.: _Let \(\gamma_{e}=12\cdots q+\cdots+12\cdots q\) be the clan corresponding to the identity in \(\mathbf{S}_{q}\). If \(\gamma_{e}\leq\gamma\), then \(\gamma=\gamma_{w}\) for some \(w\in\mathbf{S}_{q}\). In particular, \(\gamma_{w}(i;+)=\gamma_{0}(i;+)\) and \(\gamma_{w}(i;-)=\gamma_{0}(i;-)\) for all \(i\), and \(\gamma_{w}(i,j)=\gamma_{0}(i,j)\) whenever \(i,j\in[p]\) or \(i,j\in\{p+1,\ldots,n\}\)._
Proof.: We observe that both \(\sigma\leq\gamma_{e}\) and \(\tau\leq\gamma_{e}\). Thus \(\gamma\) satisfies each of the conditions (a), (b), and (c') by Lemma 3.3, and the first assertion of the lemma is proved. To prove the second, we observe that the equality of the various statistics holds in the case of \(w=e\); cf. Example 2.8. The general case now follows since \(\gamma_{0}(i;-)\leq\gamma_{w}(i;-)\leq\gamma_{e}(i;-)\), \(\gamma_{0}(i;+)\leq\gamma_{w}(i;+)\leq\gamma_{e}(i;+)\), and \(\gamma_{e}(i,j)\leq\gamma_{w}(i,j)\leq\gamma_{0}(i,j)\) by Theorem 2.7 in all cases.
The inclusion order on the clans in the interval \([\gamma_{e},\gamma_{0}]\) is greatly simplified. Indeed, the only case in which the statistics appearing in Theorem 2.7 can differ is when considering \(\gamma_{w}(i,j)\) with \(i\in[q]\) and \(j\in\{p+1,p+2,\ldots,n\}\). In that situation, we obtain the following.
**Lemma 3.5**.: _For all \(w\in\mathbf{S}_{q}\) and all \(i\in[q],j\in\{p+1,\ldots,n\}\),_
\[\gamma_{w}(i,j) =\left|\left\{w^{-1}(1),\ldots,w^{-1}(i)\right\}\cap\{j-p+1, \ldots,q\}\right|\] \[=\left|\left\{k\leq i\mid w^{-1}(k)>j-p\right\}\right|.\]
Proof.: We have a pair \(s<t\) such that \(s\leq i<j<t\) and \(c_{s}^{w}=c_{t}^{w}\) if and only if \(t=w^{-1}(s)+p\) by definition of \(\gamma_{w}\).
**Example 3.6**.: _Consider the \((5,3)\)-clans \(\gamma_{w}=123++213\) and \(\gamma_{w^{\prime}}=123++132\). For the clan \(\gamma_{w}\), \(w(1)=2,w(2)=1,\) and \(w(3)=3\), while for \(\gamma_{w^{\prime}}\) we have \(w^{\prime}(1)=1,w^{\prime}(2)=3,\) and \(w^{\prime}(3)=2\). The definition of \(\gamma(i,j)\) and straightforward calculation give us that_
\[(\gamma_{w}(i,j))_{i,j=1}^{n}=\left(\begin{array}{cccccccc}0&1&1&1&1&1&0&0\\ 0&0&2&2&2&1&0&0\\ 0&0&0&3&3&2&1&0\\ 0&0&0&0&3&2&1&0\\ 0&0&0&0&0&2&1&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{array}\right)\quad(\gamma_{w^{\prime}}(i,j))_{i,j=1}^{n}=\left(\begin{array}{cccccccc}0&1&1&1&1&0&0&0\\ 0&0&2&2&2&1&1&0\\ 0&0&0&3&3&2&1&0\\ 0&0&0&0&3&2&1&0\\ 0&0&0&0&0&2&1&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{array}\right).\]
_Moreover, it also follows from Lemma 3.5 that_
\[\begin{array}{ll}\gamma_{w}(1,6)=1=|\{1\}|&\gamma_{w}(1,7)=0&\gamma_{w}(1,8) =0\\ \gamma_{w}(2,6)=1=|\{1\}|&\gamma_{w}(2,7)=0&\gamma_{w}(2,8)=0\\ \gamma_{w}(3,6)=2=|\{1,3\}|&\gamma_{w}(3,7)=1=|\{3\}|&\gamma_{w}(3,8)=0.\end{array}\]
_and,_
\[\begin{array}{ll}\gamma_{w^{\prime}}(1,6)=0&\gamma_{w^{\prime}}(1,7)=0&\gamma _{w^{\prime}}(1,8)=0\\ \gamma_{w^{\prime}}(2,6)=1=|\{2\}|&\gamma_{w^{\prime}}(2,7)=1=|\{2\}|&\gamma _{w^{\prime}}(2,8)=0\\ \gamma_{w^{\prime}}(3,6)=2=|\{2,3\}|&\gamma_{w^{\prime}}(3,7)=1=|\{2\}|&\gamma _{w^{\prime}}(3,8)=0.\end{array}\]
_Therefore, the statistics from Theorem 2.7 only differ for these clans when \((i,j)\) is either \((1,6)\) or \((2,7)\)._
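The computations of Example 3.6 can be checked mechanically. The sketch below (ours, reusing `clan_gamma` from above) evaluates \(\gamma(i,j)\) directly as the number of matched pairs \((s,t)\) with \(s\leq i<j<t\), and compares the result with the count in Lemma 3.5.

```python
def clan_stat(clan, i, j):
    """gamma(i, j): matched pairs (s, t) with s <= i < j < t and c_s = c_t."""
    n = len(clan)
    return sum(1 for s in range(1, i + 1) for t in range(j + 1, n + 1)
               if isinstance(clan[s - 1], int) and clan[s - 1] == clan[t - 1])

def stat_via_lemma(w, i, j, p):
    """Right-hand side of Lemma 3.5: |{k <= i : w^{-1}(k) > j - p}|."""
    return sum(1 for k in range(1, i + 1) if w.index(k) + 1 > j - p)

w, p, q = (2, 1, 3), 5, 3
gw = clan_gamma(w, p)
assert clan_stat(gw, 3, 6) == 2            # = |{1, 3}|, as in Example 3.6
for i in range(1, q + 1):
    for j in range(p + 1, p + q + 1):
        assert clan_stat(gw, i, j) == stat_via_lemma(w, i, j, p)
```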
Our work above tells us that if \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is the closure of a single \(K\)-orbit, then it is equal to \(\mathcal{O}_{\gamma_{w}}\) for some \(w\in\mathbf{S}_{q}\).
**Corollary 3.7**.: _Let \(\mathbf{m}=(m_{1},\ldots,m_{n})\) be a Hessenberg vector. If there is some clan \(\gamma\) such that \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})=\overline{\mathcal{O}_{ \gamma}}\) then \(\gamma=\gamma_{w}\) for some \(w\in\mathbf{S}_{q}\). Furthermore, \((m_{1}-p,m_{2}-p,\ldots,m_{q}-p)\) is a Hessenberg vector of length \(q\) and \(m_{i}=n\) for all \(i\geq q\)._
Proof.: It follows immediately from Lemma 3.3 that \(\gamma=\gamma_{w}\) for some \(w\in\mathbf{S}_{q}\). By Proposition 3.1, for each \(i\in[q]\) we have \(m_{i}\geq p+w^{-1}(i)\). Thus, for each \(i\in[q]\),
\[p+w^{-1}(i)\leq m_{i}\leq p+q\Leftrightarrow w^{-1}(i)\leq m_{i}-p\leq q.\]
It follows that \((m_{1}-p,m_{2}-p,\ldots,m_{q}-p)\) is a sequence of positive integers satisfying \(m_{i}-p\leq q\); it is also weakly increasing since \(\mathbf{m}\) is. Moreover, for each \(i\in[q]\),
\[m_{i}-p\geq\max\{w^{-1}(j)\mid j\leq i\}\geq i.\]
This concludes the proof.
It follows from Corollary 3.7 that there are at most \(\mathsf{Cat}_{q}=\frac{1}{q+1}\binom{2q}{q}\) Hessenberg vectors \(\mathbf{m}\) of length \(p+q\) such that \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is a \(K\)-orbit closure, as there are \(\mathsf{Cat}_{q}\) Hessenberg
vectors of length \(q\). We aim to show that there are exactly \(\mathsf{Cat}_{q}\) such \(\mathbf{m}\), and classify the set of \(\mathsf{Cat}_{q}\) clans \(\gamma\) such that \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})=\overline{\mathcal{O}_{\gamma}}\).
**Lemma 3.8**.: _Assume \(u\in\mathbf{S}_{q}\) and \(\mathbf{m}\) is a Hessenberg vector such that \(\mathcal{O}_{\gamma_{u}}\subseteq\operatorname{Hess}(\mathsf{x}_{p,q}, \mathbf{m})\). If there exist \(i<j<k\) such that \(u^{-1}(i)>u^{-1}(k)>u^{-1}(j)\) then there is some \(w\in\mathbf{S}_{q}\) such that \(\gamma_{u}\leq\gamma_{w}\) and \(\mathcal{O}_{\gamma_{w}}\subseteq\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{ m})\)._
Proof.: Let \(w\) be obtained from \(u\) by switching \(j\) and \(k\). Direct examination shows that for all \(a,b\in[n]\), all of \(\gamma_{w}(a;+)\leq\gamma_{u}(a;+)\), \(\gamma_{w}(a;-)\leq\gamma_{u}(a;-)\) and \(\gamma_{w}(a,b)\geq\gamma_{u}(a,b)\) hold. Thus \(\gamma_{u}\leq\gamma_{w}\) by Theorem 2.7. Assume for contradiction that there exist \(a,b\in[n]\) with \(a<b\) and \(s\in[q]\) such that \(c_{a}^{w}=c_{b}^{w}=s\) and \(b>m_{a}\). By definition of the clan \(\gamma_{w}\), \(a\in[q]\) and \(b=p+w^{-1}(a)\). Since \(\mathcal{O}_{\gamma_{u}}\subseteq\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\), it must be that \(a\in\{j,k\}\) and so \(b\in\{p+w^{-1}(j),p+w^{-1}(k)\}.\) However,
\[m_{a}\geq m_{i}\geq p+u^{-1}(i)=p+w^{-1}(i)>p+\max\{w^{-1}(j),w^{-1}(k)\}\geq b,\]
giving the desired contradiction.
**Corollary 3.9**.: _Let \(w\in\mathbf{S}_{q}\). If there is some Hessenberg vector \(\mathbf{m}\) such that \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})=\overline{\mathcal{O}_{ \gamma_{w}}}\), then \(w^{-1}\) avoids the pattern \(312\), hence \(w\) avoids \(231\)._
There are \(\mathsf{Cat}_{q}\) elements \(w\in\mathbf{S}_{q}\) avoiding \(231\). Let
\[\mathbf{Clan}_{p,q}^{231}:=\{\gamma_{w}\mid w\in\mathbf{S}_{q},\,w\text{ avoids }231\}.\]
We prove below that this set of clans parameterizes the irreducible semisimple Hessenberg varieties of the form \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\). For each \(w\in\mathbf{S}_{q}\) avoiding the pattern \(231\), define a length \(n=p+q\) Hessenberg vector \(\mathbf{m}(w)\) by
\[m(w)_{i}:=\begin{cases}\max\left\{w^{-1}(k)+p\mid k\leq i\right\}&i\leq q,\\ n&i>q.\end{cases}\]
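Both the pattern condition and the vector \(\mathbf{m}(w)\) are easy to compute; the following sketch (ours) makes the definitions executable.

```python
def avoids_231(w):
    """True if w has no positions i < j < k with w(k) < w(i) < w(j)."""
    n = len(w)
    return not any(w[k] < w[i] < w[j]
                   for i in range(n) for j in range(i + 1, n)
                   for k in range(j + 1, n))

def hessenberg_vector(w, p):
    """m(w): m_i = max{w^{-1}(k) + p : k <= i} for i <= q, and n for i > q."""
    q = len(w)
    winv = {w[j]: j + 1 for j in range(q)}                  # w^{-1}
    m = [max(winv[k] for k in range(1, i + 1)) + p for i in range(1, q + 1)]
    return m + [p + q] * p

assert avoids_231((2, 1, 3))
print(hessenberg_vector((2, 1, 3), p=5))    # [7, 7, 8, 8, 8, 8, 8, 8]
```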
We can now state the main theorem of this section.
**Theorem 3.10**.: _For each \(w\in\mathbf{S}_{q}\) avoiding the pattern \(231\), the Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\) is irreducible and equal to the closure of the \(K\)-orbit \(\mathcal{O}_{\gamma_{w}}\). Furthermore, every irreducible Hessenberg variety defined using the semisimple matrix \(\mathsf{x}_{p,q}\) is of this form._
Our proof of Theorem 3.10 requires the following technical lemma.
**Lemma 3.11**.: _Let \(w\in\mathbf{S}_{q}\) be \(231\)-free and \(\mathbf{m}=\mathbf{m}(w)\) the associated Hessenberg vector defined above. Let \(i\in[q]\) such that \(w^{-1}(i)+p<m_{i}\). Then for all \(j\) with \(w^{-1}(i)+p\leq j<m_{i}\), \(\gamma_{w}(i-1,j)=m_{i}-j\)._
Proof.: By Lemma 3.5, \(\gamma_{w}(i-1,j)=|\{k<i\mid w^{-1}(k)>j-p\}|\). By definition of the Hessenberg vector \(\mathbf{m}\), there exists \(a<i\) such that \(m_{i}=w^{-1}(a)+p\). Our assumptions imply that
\[j<m_{i}=w^{-1}(a)+p\Rightarrow j-p<w^{-1}(a)\quad\text{and}\quad w^{-1}(i)<w^{ -1}(a).\]
Let \(k\in\{1,2,\ldots,q\}\) such that \(j-p<w^{-1}(k)\leq w^{-1}(a).\) Note that \(j-p<w^{-1}(k)\) implies \(w^{-1}(i)<w^{-1}(k)\) and \(k\neq i\).
If \(k>i\) then \(w^{-1}(k)<w^{-1}(a)\) and \(w^{-1}\) contains \(w^{-1}(a)w^{-1}(i)w^{-1}(k)\) as a subsequence, contradicting the fact that \(w^{-1}\) is \(312\)-free. Thus, we must have \(k<i\). This shows that
\[\left\{k\in[q]\mid j-p<w^{-1}(k)\leq w^{-1}(a)\right\}\subseteq\left\{k<i\mid w ^{-1}(k)>j-p\right\}.\]
The sets are actually equal, since
\[w^{-1}(a)=\max\left\{w^{-1}(1),\ldots,w^{-1}(i)\right\}.\]
We conclude \(\gamma_{w}(i-1,j)=w^{-1}(a)-(j-p)=m_{i}-j\), as desired.
Proof of Theorem 3.10.: By Proposition 3.2, Corollary 3.7, and Corollary 3.9, every irreducible Hessenberg variety defined using \(\mathsf{x}_{p,q}\) is equal to \(\overline{\mathcal{O}_{\gamma_{w}}}\) for some \(\gamma_{w}\in\mathbf{Clan}_{p,q}^{231}\). To complete the proof we show that \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))=\overline{\mathcal{O}_{ \gamma_{w}}}\). It follows immediately from the definition of the Hessenberg vector \(\mathbf{m}(w)\) and Proposition 3.1 that \(\mathcal{O}_{\gamma_{w}}\subset\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{ m}(w))\). It therefore suffices to show that if \(\mathcal{O}_{\gamma}\subset\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\) for some \(\gamma=c_{1}c_{2}\cdots c_{n}\in\mathbf{Clan}_{p,q}\), then \(\gamma\leq\gamma_{w}\).
By Theorem 2.7 and Lemma 3.4, we must prove \(\gamma(i,j)\leq\gamma_{w}(i,j)\) for all \(i\in[q]\) and \(j\in\{p+1,\ldots,n\}\). Seeking a contradiction, suppose \(\gamma(i,j)>\gamma_{w}(i,j)\). We may assume \(i\) is minimal with respect to this property. We write \(\mathbf{m}=\mathbf{m}(w)\) throughout, to simplify notation.
Consider first the case \(i=1\). Note that \(\gamma(1,j)\), \(\gamma_{w}(1,j)\in\{0,1\}\) so we must have \(\gamma(1,j)=1\) and \(\gamma_{w}(1,j)=0\). The latter implies \(w^{-1}(1)+p\leq j\) by Lemma 3.5. On the other hand, \(\gamma(1,j)=1\) implies by Lemma 2.9 that there exists \(t>j\) such that \(c_{1}=c_{t}\). Now \(m_{1}=w^{-1}(1)+p<t\), contradicting the assumption that \(\mathcal{O}_{\gamma}\subset\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\).
Now assume \(i>1\). We have both \(\gamma(i-1,j)\leq\gamma_{w}(i-1,j)\) and \(\gamma(i,j)>\gamma_{w}(i,j)\). By Lemma 2.9 this can only be the case if
\[\gamma(i,j)-\gamma(i-1,j)=1, \tag{3.1}\]
and
\[\gamma_{w}(i,j)-\gamma_{w}(i-1,j)=0. \tag{3.2}\]
We may furthermore conclude that
\[\gamma(i-1,j)=\gamma_{w}(i-1,j) \tag{3.3}\]
since otherwise, \(\gamma(i-1,j)<\gamma_{w}(i-1,j)\) and
\[\gamma(i,j)=\gamma(i-1,j)+1\leq\gamma_{w}(i-1,j)=\gamma_{w}(i,j),\]
contradicting our assumption that \(\gamma(i,j)>\gamma_{w}(i,j)\).
By Lemma 2.9, Equation (3.1) implies that there exists \(t>j\) such that \(c_{i}=c_{t}\). From equation (3.2), we get that
\[\left|\left\{w^{-1}(1),\ldots,w^{-1}(i)\right\}\cap\{j-p+1,\ldots,q\}\right|=\] \[\left|\left\{w^{-1}(1),\ldots,w^{-1}(i-1)\right\}\cap\{j-p+1, \ldots,q\}\right|\]
so \(w^{-1}(i)\leq j-p\), implying \(w^{-1}(i)+p\leq j\).
If \(m_{i}=w^{-1}(i)+p\) then we have \(m_{i}<t\), a contradiction to \(\mathcal{O}_{\gamma}\subset\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\). We obtain the same contradiction if \(j\geq m_{i}\) so we may now assume both \(m_{i}>w^{-1}(i)+p\) and \(j<m_{i}\).
By Lemma 3.11 and Equation (3.3), \(\gamma(i-1,j)=m_{i}-j\). This implies there are precisely \(m_{i}-j\) pairs \((a<b)\) such that \(a\leq i-1<j<b\) and \(c_{a}=c_{b}\). As \(\mathcal{O}_{\gamma}\subseteq\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) we have \(b\leq m_{a}\leq m_{i}\) in each case. There are only \(m_{i}-j\) values \(b\) such that \(j<b\leq m_{i}\), and each such position in the clan \(\gamma\) is occupied by \(c_{b}\) for one of the pairs counted by \(\gamma(i-1,j)\). This forces \(t>m_{i}\), another contradiction. We conclude \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})=\overline{\mathcal{O}_{ \gamma_{w}}}\), as desired.
The dimension of the \(K\)-orbit \(\mathcal{O}_{\gamma}\) associated to \(\gamma=c_{1}c_{2}\cdots c_{n}\in\operatorname{\mathbf{Clan}}_{p,q}\) is
\[\dim\mathcal{O}_{\gamma}=\ell(\gamma)+\frac{p(p-1)}{2}+\frac{q(q-1)}{2},\]
where
\[\ell(\gamma):=\sum_{\begin{subarray}{c}c_{i}=c_{j}\in\mathbb{N}\\ i<j\end{subarray}}\left(\,j-i-|\{k\in\mathbb{N}\mid c_{s}=c_{t}=k\text{ for }s<i<t<j\}|\,\right)\]
by [10]. We apply this formula to compute the dimension of the irreducible Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\). We first require the following technical lemma.
**Lemma 3.12**.: _Given \(w\in\mathbf{S}_{n}\), define a sequence \(\mathbf{h}=\mathbf{h}(w):=(h_{1},\ldots,h_{n})\) by_
\[h_{i}:=\max\{w(k)\mid k\leq i\}.\]
_If \(w\) is \(312\)-free, then_
\[\ell(w)=\sum_{i=1}^{n}(h_{i}-i).\]
Proof.: Write \(w=w(1)\ldots w(n)\) in one-line notation and find \(k\) such that \(w(k)=n\). If \(n>1\) let \(w^{\prime}\) be obtained from \(w\) by erasing \(n\) from the given one-line representation and let \(\mathbf{h}^{\prime}\) be obtained from \(w^{\prime}\) as \(\mathbf{h}\) was obtained from \(w\). For \(i\in[n]\), set
\[\mathsf{inv}_{i}(w):=\mid\{j>i\mid w(j)<w(i)\}\mid,\]
and define \(\mathsf{inv}_{i}(w^{\prime})\) similarly.
We will show by induction on \(n\) that \(\mathsf{inv}_{i}(w)=h_{i}-i\) for every \(i\), from which the lemma follows. The case \(n=1\) is trivial. Assume \(n>1\). We observe that if \(i\geq k\) then \(w(j)<w(i)\) for all \(j>i\) (since \(w\) is \(312\)-free), hence \(\mathsf{inv}_{i}(w)=n-i=h_{i}-i\). If \(i<k\) then
\[\mathsf{inv}_{i}(w)=\mathsf{inv}_{i}(w^{\prime})=h_{i}^{\prime}-i=h_{i}-i,\]
the second equality following from the inductive hypothesis.
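Lemma 3.12 can be stress-tested by brute force; the sketch below (ours) verifies it for all \(312\)-free permutations in \(\mathbf{S}_{5}\).

```python
from itertools import permutations

def length(w):
    """Coxeter length = number of inversions of w in one-line notation."""
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def avoids_312(w):
    """True if w has no positions i < j < k with w(j) < w(k) < w(i)."""
    n = len(w)
    return not any(w[j] < w[k] < w[i]
                   for i in range(n) for j in range(i + 1, n)
                   for k in range(j + 1, n))

for w in permutations(range(1, 6)):
    if avoids_312(w):
        h = [max(w[:i + 1]) for i in range(len(w))]   # h_i = max w([i])
        assert length(w) == sum(h[i] - (i + 1) for i in range(len(w)))
```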
Recall that \(\pi_{\mathbf{m}(w)}\) denotes the Dyck path associated with the Hessenberg vector \(\mathbf{m}(w)\) as in Section 2. Our work above shows that \(\dim\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\) is given by the area of \(\pi_{\mathbf{m}(w)}\).
**Corollary 3.13**.: _For each \(w\in\mathbf{S}_{q}\) avoiding the pattern \(231\), the Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\) is irreducible of dimension_
\[\dim\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))=\ell(w)+pq+\frac{p(p-1 )}{2}=\operatorname{area}(\pi_{\mathbf{m}(w)}). \tag{3.4}\]
Proof.: Recall \(c_{i}^{w}=c_{j}^{w}\in\mathbb{N}\) if and only if \(j=w^{-1}(i)+p\). Keeping also in mind that \(\ell(w)=\ell(w^{-1})\), we therefore have
\[|\{k\in\mathbb{N}\mid c_{s}=c_{t}=k\text{ for }s<i<t<w^{-1}(i)+p\}|=|\{s<i\mid w^{- 1}(s)<w^{-1}(i)\}|,\]
and thus
\[\sum_{i\in[q]}|\{k\in\mathbb{N}\mid c_{s}=c_{t}=k\text{ for }s<i<t<j\}|=\ell(y_{0})-\ell(w^{-1})=\frac{q(q-1)}{2}-\ell(w).\]
We now obtain
\[\ell(\gamma_{w}) = \sum_{i\in[q]}\left(w^{-1}(i)-i+p\right)-\sum_{i\in[q]}|\{k\in \mathbb{N}\mid c_{s}=c_{t}=k\text{ for }s<i<t<j\}|\] \[= pq-\left(\frac{q(q-1)}{2}-\ell(w)\right)=\ell(w)+pq-\frac{q(q-1) }{2}.\]
As \(\dim\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))=\dim\mathcal{O}_{ \gamma_{w}}\) by Theorem 3.10, this proves the first equality in (3.4).
To prove the second we observe first that if \(\mathbf{m}(w)=(m_{1},\ldots,m_{n})\) then
\[\operatorname{area}(\pi_{m(w)}) = \sum_{i=1}^{n}(m_{i}-i)\] \[= pq+\sum_{i=1}^{q}\max\{w^{-1}(k)\mid k\leq i\}-\sum_{i=1}^{q}i+ pn-\sum_{i=q+1}^{n}i\] \[= p^{2}+2pq-\binom{p+q+1}{2}+\binom{q+1}{2}+\sum_{i=1}^{q}\max\{w^ {-1}(k)\mid k\leq i\}-\sum_{i=1}^{q}i\] \[= pq+\frac{p(p-1)}{2}+\sum_{i=1}^{q}\max\{w^{-1}(k)\mid k\leq i\} -\sum_{i=1}^{q}i.\]
We complete the proof by applying Lemma 3.12 to \(w^{-1}\).
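Every quantity in (3.4) is now computable; the sketch below (ours, reusing `clan_gamma`, `hessenberg_vector`, and `length` from the earlier sketches) confirms the dimension count for one \(231\)-avoiding example.

```python
def clan_length(clan):
    """l(gamma): sum over matched pairs (i, j), i < j, of
    j - i - #{matched pairs (s, t) with s < i < t < j}."""
    positions = {}
    for pos, c in enumerate(clan, start=1):
        if isinstance(c, int):
            positions.setdefault(c, []).append(pos)
    pairs = [tuple(v) for v in positions.values()]
    return sum(j - i - sum(1 for (s, t) in pairs if s < i < t < j)
               for (i, j) in pairs)

def area(m):
    """Area of the Dyck path pi_m: the sum of m_i - i over i."""
    return sum(m[i] - (i + 1) for i in range(len(m)))

w, p, q = (2, 1, 3), 5, 3
m = hessenberg_vector(w, p)
# l(gamma_w) = l(w) + pq - q(q-1)/2 = 13 and dim = l(w) + pq + p(p-1)/2 = 26
assert clan_length(clan_gamma(w, p)) == length(w) + p * q - q * (q - 1) // 2
assert area(m) == length(w) + p * q + p * (p - 1) // 2 == 26
```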
**Remark 3.14**.: _It follows from Corollary 3.13 and the seminal work [10] of De Mari, Procesi, and Shayman on Hessenberg varieties that if \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})\) is irreducible and \(\mathsf{s}\) is an \(n\times n\) regular semisimple matrix, then_
\[\dim\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m})=\dim\operatorname{Hess}( \mathsf{s},\mathbf{m}).\]
_Indeed, in the case of a regular semisimple element \(\mathsf{s}\), it is easy to see from [10, Theorem 6] that the dimension of \(\operatorname{Hess}(\mathsf{s},\mathbf{m})\) is precisely the area of \(\pi_{\mathbf{m}}\)._
## 4. W-sets and cohomology classes
We now turn our attention to computing the \(W\)-sets introduced in Section 2.3 above for the clans \(\gamma_{w}\) with \(w\in\mathbf{S}_{q}\). Our work below shows that the restriction of the weak order to the interval \([\gamma_{e},\gamma_{0}]\) in \(\mathbf{Clan}_{p,q}\) can be identified with the two-sided weak order on \(\mathbf{S}_{q}\) (see Theorem 4.2 below). As a result, we give a concrete formula for the class \([\overline{\mathcal{O}}_{\gamma_{w}}]\) and, in particular, the class of any Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\). Finally, as an application of our results, we prove that the product of \([\overline{\mathcal{O}}_{\gamma_{w}}]\) with any Schubert divisor is a multiplicity-free sum of Schubert polynomials.
Recall that the _left weak order_ \(\leq_{L}\) on the symmetric group \(\mathbf{S}_{q}\) is the partial order defined by the covering relations
\[w<_{L}s_{i}w\text{ where }i\in[q-1]\text{ is such that }w^{-1}(i)<w^{-1}(i+1).\]
The left multiplication by \(s_{i}\) interchanges the order of \(i\) and \(i+1\) in the one-line notation for \(w\). For example, \(51324<_{L}52314\). Similarly, the _right weak order_\(\leq_{R}\) on \(\mathbf{S}_{q}\) is the partial order defined by the covering relations
\[w<_{R}ws_{i}\text{ where }i\in[q-1]\text{ is such that }w(i)<w(i+1).\]
The right multiplication by \(s_{i}\) interchanges the entries in positions \(i\) and \(i+1\) of the one-line notation for \(w\). For example, \(51324<_{R}53124\).
We call the partial order \(\preceq\) on \(\mathbf{S}_{q}\) that is generated by the covering relations of both of the left and the right weak orders the _two-sided weak order_ on \(\mathbf{S}_{q}\).
**Example 4.1**.: _In Figure 4.1, we depict the two-sided weak order on \(\mathbf{S}_{4}\). The blue (double) edges correspond to the cover relations that are admitted by both of the orders \(\leq_{L}\) and \(\leq_{R}\). The ordinary edges correspond to a covering relation of either \(\leq_{R}\) or \(\leq_{L}\), but not both. Our figure shows that the two-sided weak order on \(\mathbf{S}_{4}\) is not isomorphic to the Bruhat (i.e., inclusion) order; for example, \(s_{1}s_{2}s_{1}=3214\leq s_{2}s_{1}s_{3}s_{2}=3412\) in Bruhat order but Figure 4.1 shows that \(3214\) is not below \(3412\) in the two-sided weak order._
The first main result of this section is the following theorem.
Figure 4.1. The two-sided weak order on \(\mathbf{S}_{4}\).
**Theorem 4.2**.: _The restriction of the weak order on the interval of clans_
\[[\gamma_{e},\gamma_{0}]=\{\gamma_{w}\mid w\in\mathbf{S}_{q}\}\]
_is isomorphic to the two-sided weak order on \(\mathbf{S}_{q}\)._
To begin, we prove that the restriction of the weak order to the interval \([\gamma_{e},\gamma_{0}]\) is generated by only two of the cover relations described in Section 2.3 (cf. Figure 2.1).
**Lemma 4.3**.: _Every cover relation of the weak order in the interval \([\gamma_{e},\gamma_{0}]\) is of type IC1 or IC2._
Proof.: Let \(\gamma_{w}=c_{1}c_{2}\cdots c_{n}\in\mathbf{Clan}_{p,q}\) for some \(w\in\mathbf{S}_{q}\). We have by definition that
\[c_{1}\cdots c_{p}=12\cdots q+\cdots+,\]
and furthermore that no \(-\) signs occur in \(\gamma_{w}\). This implies that the cover relations of types IA1, IA2, and II do not occur in the restriction of the weak order on \(\mathbf{Clan}_{p,q}\) to \([\gamma_{e},\gamma_{0}]\). Note also that no cover relation of type IB can occur among the clans in \([\gamma_{e},\gamma_{0}]\) since all arcs are either nested or crossing as there is an arc connecting \(i<j\) if and only if \(j=w^{-1}(i)+p\). This finishes the proof of our assertion.
By the lemma, to analyze the weak order on \([\gamma_{e},\gamma_{0}]\) it is enough to consider cover relations of type IC1 and IC2. The following example illustrates a cover relation of each type.
**Example 4.4**.: _Let \(p=6\) and \(q=5\). Let \(w=51324\in\mathbf{S}_{5}\). The following depicts the cover relation of type IC2 obtained by uncrossing the (dashed) arcs in the charged matching for \(\gamma_{w}\) with right endpoints \(8\) and \(9\), creating a nested pair. Note that the resulting matching corresponds to the clan \(\gamma_{ws_{2}}\), and we have \(w=51324<_{R}ws_{2}=53124\)._
_Similarly, we may apply a cover relation of type IC1 to \(\gamma_{w}\) by swapping the (dashed) arcs with left endpoints \(1\) and \(2\), creating a nested pair. The resulting matching corresponds to clan \(\gamma_{s_{1}w}\) and we have \(w=51324<_{L}s_{1}w=52314\)._
In the example above, we saw that each covering relation was of the form \(\gamma_{w}\prec\gamma_{w^{\prime}}\) for \(w,w^{\prime}\in\mathbf{S}_{5}\) such that \(w\prec w^{\prime}\) in the two-sided weak order on \(\mathbf{S}_{5}\). This holds in greater generality and brings us to the proof of Theorem 4.2.
Proof of Theorem 4.2.: By Lemma 4.3, the covering relations of the weak order on \([\gamma_{e},\gamma_{0}]\) are given by either a Type IC1 covering relation or by a Type IC2 covering relation.
Now, a covering relation of Type IC1 on clans in \([\gamma_{e},\gamma_{0}]\) is of the form
\[\gamma_{w}=c_{1}\cdots c_{w^{-1}(i)+p}\cdots c_{w^{-1}(i+1)+p}\cdots c_{n} \stackrel{{ s_{i}}}{{\longrightarrow}}\gamma^{\prime}=c_{1} \cdots c_{w^{-1}(i+1)+p}\cdots c_{w^{-1}(i)+p}\cdots c_{n}, \tag{4.1}\]
for some \(i\in[q-1]\) such that \(w^{-1}(i)<w^{-1}(i+1)\). Since the resulting clan is obtained from \(\gamma_{w}\) by interchanging \(i\) and \(i+1\) in the one-line notation for \(w\), we see that \(\gamma^{\prime}=\gamma_{s_{i}w}\). Similarly, a covering relation of type IC2 on clans in \([\gamma_{e},\gamma_{0}]\) is of the form
\[\gamma_{w}=c_{1}\cdots c_{i}c_{i+1}\cdots c_{n}\stackrel{{ s_{i}}}{{ \longrightarrow}}\gamma^{\prime}=c_{1}\cdots c_{i+1}c_{i}\cdots c_{n}, \tag{4.2}\]
for some \(i\in\{p+1,\ldots,n-1\}\) such that \(w(i-p)<w(i-p+1)\). In this case, we have \(\gamma^{\prime}=\gamma_{ws_{i-p}}\) since the resulting clan is obtained by interchanging the entries in the positions \(i-p\) and \(i-p+1\) in the one-line notation for \(w\). In conclusion, we see that, for \(w,v\in\mathbf{S}_{q}\), if the clan \(\gamma_{w}\) is covered by the clan \(\gamma_{v}\) in the weak order, then \(w\) is covered by \(v\) in either the right weak order or the left weak order on \(\mathbf{S}_{q}\).
We proceed to prove the converse statement. Let \(w,v\in\mathbf{S}_{q}\) be two permutations. Let \(\gamma_{w}=c_{1}^{w}c_{2}^{w}\cdots c_{n}^{w}\) denote the (unique) clan corresponding to \(w\), which is defined by
* \(c_{i}^{w}=+\) for all \(q<i\leq p\), and
* \(c_{i}^{w}=c_{p+w^{-1}(i)}^{w}=i\) for all \(i\in[q]\).
Let \(\gamma_{v}\) denote the unique clan corresponding to \(v\), defined in a similar manner. Now, we assume that \(w\) is covered by \(v\) in the left weak order on \(\mathbf{S}_{q}\). Hence, \(s_{i}w=v\) holds for some \(i\in[q-1]\). After writing \(w\) and \(s_{i}w\) in their one-line notations, we see that the covering relation \(w\leq_{L}v\) corresponds to the covering relation in (4.1). Likewise, if \(w\) is covered by \(v\) in the right weak order in such a way that \(ws_{i}=v\) for some \(i\in[q-1]\), then the covering relation in (4.2) holds. Hence, we have proved that \(\gamma_{w}\) is covered by \(\gamma_{v}\) in the weak order if and only if \(w\) is covered by \(v\) in the left or the right weak order on \(\mathbf{S}_{q}\), as desired.
**Corollary 4.5**.: _The restriction of the weak order to \(\mathbf{Clan}_{p,q}^{231}\) is isomorphic to the restriction of the two-sided weak order on \(\mathbf{S}_{q}\) to all \(231\)-free permutations._
**Remark 4.6**.: _It is well known that the right weak order on the set of \(231\)-free permutations is isomorphic to the Tamari lattice [14, Theorem 1.2]. It is also well-known that the Bruhat (i.e., inclusion) order on the set of \(231\)-free permutations is isomorphic to the opposite of the Dyck path lattice [1]._
**Example 4.7**.: _Let \(p=q=3\). Figure 4.2 shows the weak order on \([\gamma_{123},\gamma_{321}]\subset\mathbf{Clan}_{3,3}\) with all covering relations and corresponding clan written underneath each charged matching. The circled matching corresponds to the clan \(\gamma_{231}\). By removing this matching, we obtain the Hasse diagram of the two-sided weak order on \(\mathbf{Clan}_{3,3}^{231}\)._
With a precise description of the weak order on \([\gamma_{e},\gamma_{0}]\subset\mathbf{Clan}_{p,q}\) in hand, we turn our attention to computing the \(W\)-sets \(W(\gamma_{w})\) for each \(w\in\mathbf{S}_{q}\). Using Brion's Theorem 2.11, we can then use these sets to obtain polynomial representatives of the cohomology classes \([\overline{\mathcal{O}_{\gamma_{w}}}]\) for each \(w\in\mathbf{S}_{q}\). If \(w\) avoids \(231\) then we obtain, by our work in the previous section, a polynomial representative for the cohomology class of the semisimple Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\).
We begin with a lemma whose proof is evident.
**Lemma 4.8**.: _The map \(\varphi:\mathbf{S}_{n}\to\mathbf{S}_{n}\) defined by \(\varphi(v)=w_{0}v^{-1}w_{0}\) for all \(v\in\mathbf{S}_{n}\) is an anti-involution. In other words, we have \(\varphi^{2}=id\) and for all \(v,w\in\mathbf{S}_{n}\) we have \(\varphi(vw)=\varphi(w)\varphi(v)\)._
Since the map \(\varphi\) defined in Lemma 4.8 is an anti-involution, it is a bijection. We are interested in the restriction of \(\varphi\) to the subgroup \(\mathbf{S}_{q}:=\langle s_{1},\ldots,s_{q-1}\rangle\hookrightarrow\mathbf{S}_ {n}\). Recall that the support \(\mathrm{Supp}(w)\) of \(w\) is the set of all simple reflections that arise in any reduced word for \(w\).
**Lemma 4.9**.: _The restriction of \(\varphi\) to \(\mathbf{S}_{q}\) induces a bijection \(\varphi:\mathbf{S}_{q}\to\langle s_{p+1},\ldots,s_{n-1}\rangle\). Furthermore, \(\ell(v)=\ell(\varphi(v))\) for all \(v\in\mathbf{S}_{q}\) and \(\mathrm{Supp}(u)\cap\mathrm{Supp}(\varphi(v))=\emptyset\) for all \(u,v\in\mathbf{S}_{q}\)._
Proof.: Since \(\varphi(s_{i})=w_{0}s_{i}w_{0}=s_{n-i}\) for all \(i\in[n-1]\), the first assertion of the lemma is obvious. Next, if \(s_{i_{1}}\cdots s_{i_{r}}\) is a reduced expression for \(v\in\mathbf{S}_{q}\), then \(s_{n-i_{1}}\cdots s_{n-i_{r}}\) is a reduced expression of \(w_{0}vw_{0}\). In particular,
\[v=s_{i_{1}}\cdots s_{i_{r}}\Rightarrow\varphi(v)=s_{n-i_{r}}s_{n-i_{r-1}} \cdots s_{n-i_{1}}\]
so \(\ell(\varphi(v))=\ell(v)\). Finally, \(\mathrm{Supp}(u)\subseteq\{s_{1},\ldots,s_{q-1}\}\) and \(\mathrm{Supp}(\varphi(v))\subseteq\{s_{p+1},\ldots,s_{n-1}\}\). Since \(q\leq p\), we obtain the final assertion.
With these observations in place, we define a map that will allow us to compute \(W(\gamma_{w})\) in Theorem 4.12 below.
**Lemma 4.10**.: _The map_
\[\mathbf{S}_{q}\times\mathbf{S}_{q}\to\mathbf{S}_{n},\ (u,v)\mapsto u\varphi(v) \tag{4.3}\]
_is injective. Furthermore, \(\ell(u\varphi(v))=\ell(u)+\ell(v)\) for all \(u,v\in\mathbf{S}_{q}\)._
Proof.: Recall from Lemma 4.9 that \(\varphi\) maps \(\mathbf{S}_{q}=\langle s_{1},\ldots,s_{q-1}\rangle\) to \(\langle s_{p+1},\ldots,s_{n-1}\rangle\) and note that the intersection of these subgroups is the trivial group since \(q\leq p\). Thus, if \(u_{1},u_{2},v_{1},v_{2}\in\mathbf{S}_{q}\) such that \(u_{1}\varphi(v_{1})=u_{2}\varphi(v_{2})\) then
\[u_{2}^{-1}u_{1}=\varphi(v_{2})\varphi(v_{1})^{-1}\in\mathbf{S}_ {q}\cap\langle s_{p+1},\ldots,s_{n-1}\rangle=\langle e\rangle\] \[\Rightarrow u_{1}=u_{2}\ \text{ and }\ \varphi(v_{1})=\varphi(v_{2}),\]
and injectivity of the map follows. Since \(\mathrm{Supp}(u)\cap\mathrm{Supp}(\varphi(v))=\emptyset\), any reduced word for \(u\varphi(v)\) is the product of a reduced word of \(u\) in \(\mathbf{S}_{q}\) and a reduced word for \(\varphi(v)\) in \(\langle s_{p+1},\ldots,s_{n-1}\rangle\). Thus \(\ell(u\varphi(v))=\ell(u)+\ell(\varphi(v))=\ell(u)+\ell(v)\) as desired.
For each \(w\in\mathbf{S}_{q}\) we define the set
\[\mathcal{S}(w):=\{(u,v)\in\mathbf{S}_{q}\times\mathbf{S}_{q}\mid\ w=uv\text{ and }\ell(w)=\ell(u)+\ell(v)\}.\]
Note that \(\mathcal{S}(e)=\{(e,e)\}\) and \(\mathcal{S}(s_{i})=\{(e,s_{i}),(s_{i},e)\}\).
**Example 4.11**.: _If \(q=3\) and \(w=312=s_{2}s_{1}\) then \(\mathcal{S}(w)=\{(e,s_{2}s_{1}),(s_{2},s_{1}),(s_{2}s_{1},e)\}\)._
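The sets \(\mathcal{S}(w)\) are small enough to enumerate by brute force; the sketch below (ours, reusing `length` from above) recovers Example 4.11.

```python
from itertools import permutations

def compose(u, v):
    """(uv)(i) = u(v(i)) for permutations in one-line notation (1-based)."""
    return tuple(u[v[i] - 1] for i in range(len(u)))

def S(w):
    """All pairs (u, v) with w = uv and l(w) = l(u) + l(v)."""
    q = len(w)
    return [(u, v) for u in permutations(range(1, q + 1))
            for v in permutations(range(1, q + 1))
            if compose(u, v) == tuple(w)
            and length(u) + length(v) == length(w)]

# w = 312 = s2 s1: exactly the three factorizations (e, s2s1), (s2, s1), (s2s1, e)
print(S((3, 1, 2)))
# [((1,2,3), (3,1,2)), ((1,3,2), (2,1,3)), ((3,1,2), (1,2,3))]
```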
Recall that \(y_{0}\in\mathbf{S}_{q}\) denotes the longest element. The second main theorem of this section describes the \(W\)-sets of clans \(\gamma_{w}\) concretely using the set \(\mathcal{S}(wy_{0})\).
**Theorem 4.12**.: _For all \(w\in\mathbf{S}_{q}\) the restriction of the map (4.3) from Lemma 4.10 to \(\mathcal{S}(wy_{0})\subseteq\mathbf{S}_{q}\times\mathbf{S}_{q}\) induces a bijection_
\[\psi_{w}:\mathcal{S}(wy_{0})\to W(\gamma_{w}),\;\psi_{w}(u,v):=u\varphi(v).\]
_In particular, the \(W\)-set of the clan \(\gamma_{w}\) is \(W(\gamma_{w})=\{u\varphi(v)\mid(u,v)\in\mathcal{S}(wy_{0})\}\)._
Proof.: We argue first that \(\psi_{w}(u,v)\in W(\gamma_{w})\) for all \((u,v)\in\mathcal{S}(wy_{0})\). Given \((u,v)\in\mathcal{S}(wy_{0})\), let \(u=s_{a_{1}}s_{a_{2}}\cdots s_{a_{r}}\) and \(v=s_{b_{1}}s_{b_{2}}\cdots s_{b_{t}}\) be reduced words for \(u\) and \(v\), respectively. By assumption,
\[wy_{0}=s_{a_{1}}s_{a_{2}}\cdots s_{a_{r}}s_{b_{1}}s_{b_{2}}\cdots s_{b_{t}}\]
is a reduced word for \(wy_{0}\). Manipulating this expression and using the fact that \(y_{0}s_{i}y_{0}=s_{q-i}\) for all \(i\) implies
\[s_{a_{r}}\cdots s_{a_{2}}s_{a_{1}}ws_{q-b_{t}}\cdots s_{q-b_{2}}s_{q-b_{1}}=y_ {0}\]
with \(\ell(y_{0})=\ell(w)+r+t\). In particular, this expression yields a chain of length \(r+t\) in the two-sided weak order on \(\mathbf{S}_{q}\):
\[w\xrightarrow{s_{a_{1}}}s_{a_{1}}w\xrightarrow{s_{a_{2}}}s_{a_{2}}s_{a_{1}}w\to\cdots\xrightarrow{s_{a_{r}}}s_{a_{r}}\cdots s_{a_{2}}s_{a_{1}}w=u^{-1}w\] \[\xrightarrow{s_{q-b_{t}}}u^{-1}ws_{q-b_{t}}\to\cdots\xrightarrow{s_{q-b_{2}}}u^{-1}ws_{q-b_{t}}\cdots s_{q-b_{2}}\xrightarrow{s_{q-b_{1}}}y_{0}.\]
In this chain, left multiplication by \(s_{a_{k}}\) is a cover in the left weak order on \(\mathbf{S}_{q}\) and corresponds to a cover of type IC1 on clans. This cover of type IC1 on clans is labeled by the simple reflection \(s_{a_{k}}\in\mathbf{S}_{n}\). Right multiplication by \(s_{q-b_{k}}\) is a cover in the right weak order on \(\mathbf{S}_{q}\) and corresponds to a cover of type IC2 on clans. This cover of type IC2 on clans is labeled by the simple reflection \(s_{n-b_{k}}=\varphi(s_{b_{k}})\in\mathbf{S}_{n}\). By Theorem 4.2 and definition of the \(W\)-set, it follows that
\[u\varphi(v)=s_{a_{1}}s_{a_{2}}\cdots s_{a_{r}}s_{n-b_{t}}\cdots s_{n-b_{2}}s_ {n-b_{1}}\in W(\gamma_{w})\]
as desired.
To complete the proof, it suffices by Lemma 4.10 to show that \(\psi_{w}\) is surjective. We proceed by induction on the nonnegative integer \(\ell(wy_{0})=\ell(y_{0})-\ell(w)\). If \(\ell(wy_{0})=0\) then \(w=y_{0}\), \(W(\gamma_{w})=W(\gamma_{0})=\{e\}\), and \(\mathcal{S}(wy_{0})=\mathcal{S}(e)=\{(e,e)\}\). Thus our claim holds trivially in this case.
Suppose now that \(w\in\mathbf{S}_{q}\) such that \(\ell=\ell(wy_{0})>0\) and \(\psi_{w^{\prime}}\) is surjective for all \(w^{\prime}\in\mathbf{S}_{q}\) such that \(\ell(w^{\prime}y_{0})=\ell-1\). Since the \(W\)-set of \(\gamma_{w}\) is obtained by multiplying the labels of the weak order cover relations along a saturated path from \(\gamma_{w}\) to \(\gamma_{0}\), if \(x\in W(\gamma_{w})\) there exists \(w^{\prime}\in\mathbf{S}_{q}\) and \(x^{\prime}\in W(\gamma_{w^{\prime}})\) such that \(\gamma_{w}\xrightarrow{s_{i}}\gamma_{w^{\prime}}\) and \(x=s_{i}x^{\prime}\). By Theorem 4.2, \(w^{\prime}\) is a cover of \(w\) in the two-sided weak order on \(\mathbf{S}_{q}\) so \(\ell(w^{\prime})=\ell(w)+1\). This in turn implies \(\ell(w^{\prime}y_{0})=\ell(wy_{0})-1=\ell-1\) and the induction hypothesis implies that there exists \((u,v)\in\mathcal{S}(w^{\prime}y_{0})\) such that \(x^{\prime}=u\varphi(v)\).
There are two possible cases to consider: the cover \(\gamma_{w}\prec\gamma_{w^{\prime}}\) is either of type IC1 or IC2. If \(\gamma_{w}\xrightarrow{s_{i}}\gamma_{w^{\prime}}\) is a cover in the weak order on clans of type IC1, then the proof of
Theorem 4.2 implies \(i\in[q-1]\) and \(w^{\prime}=s_{i}w\). Our assumptions also yield
\[\ell(x)=\ell(x^{\prime})+1 \Rightarrow\ell(s_{i}u\varphi(v))=\ell(u\varphi(v))+1\] \[\Rightarrow\ell(s_{i}u)+\ell(v)=\ell(u)+\ell(v)+1 \tag{4.4}\] \[\Rightarrow\ell(s_{i}u)=\ell(u)+1,\]
where the second implication follows from Lemma 4.10. This shows \((s_{i}u,v)\in\mathcal{S}(wy_{0})\) since \(s_{i}uv=s_{i}w^{\prime}y_{0}=wy_{0}\) and
\[\ell(w^{\prime}y_{0})=\ell(u)+\ell(v)\Rightarrow\ell(y_{0})-\ell(w)-1=\ell(u) +\ell(v)\Rightarrow\ell(wy_{0})=\ell(s_{i}u)+\ell(v)\]
by (4.4) above. Now \(x=s_{i}u\varphi(v)=\psi_{w}(s_{i}u,v)\), so \(\psi_{w}\) is surjective in this case.
If \(\gamma_{w}\xrightarrow{s_{i}}\gamma_{w^{\prime}}\) is a cover in the weak order on clans of type IC2, then the proof of Theorem 4.2 implies \(i\in\{p+1,\ldots,n-1\}\) and \(w^{\prime}=ws_{i-p}\) with \(\ell(w^{\prime})=\ell(w)+1\). Note that \(s_{i}\) commutes with \(u\in\mathbf{S}_{q}\) and recall that \(s_{i}=\varphi(s_{n-i})\). Thus
\[x=s_{i}x^{\prime}=s_{i}u\varphi(v)=us_{i}\varphi(v)=u\varphi(s_{n-i})\varphi(v )=u\varphi(vs_{n-i}) \tag{4.5}\]
by Lemma 4.8. Our assumptions also imply
\[\ell(x)=\ell(x^{\prime})+1 \Rightarrow\ell(u\varphi(vs_{n-i}))=\ell(u\varphi(v))+1\] \[\Rightarrow\ell(u)+\ell(vs_{n-i})=\ell(u)+\ell(v)+1 \tag{4.6}\] \[\Rightarrow\ell(vs_{n-i})=\ell(v)+1,\]
where the first implication follows from (4.5) and the second from Lemma 4.10. This shows \((u,vs_{n-i})\in\mathcal{S}(wy_{0})\) since \(uvs_{n-i}=w^{\prime}y_{0}s_{n-i}=w^{\prime}s_{i-p}y_{0}=wy_{0}\) and
\[\ell(w^{\prime}y_{0})=\ell(u)+\ell(v)\Rightarrow\ell(y_{0})-\ell(w)-1=\ell(u )+\ell(v)\Rightarrow\ell(wy_{0})=\ell(u)+\ell(vs_{n-i})\]
by (4.6) above. Using (4.5) we conclude \(x=\psi_{w}(u,vs_{n-i})\) so \(\psi_{w}\) is indeed surjective.
**Example 4.13**.: _Let \(q=4\) and \(w=3214=s_{1}s_{2}s_{1}\in\mathbf{S}_{4}\). Then \(wy_{0}=4123=s_{3}s_{2}s_{1}\) and_
\[\mathcal{S}(wy_{0})=\mathcal{S}(s_{3}s_{2}s_{1})=\{(s_{3}s_{2}s_{1},e),(s_{3}s _{2},s_{1}),(s_{3},s_{2}s_{1}),(e,s_{3}s_{2}s_{1})\},\]
_so, according to Theorem 4.12, the \(W\)-set of \(\gamma_{3214}\) is_
\[\{s_{3}s_{2}s_{1},s_{3}s_{2}s_{3+p},s_{3}s_{3+p}s_{2+p},s_{3+p}s_{2+p}s_{1+p}\}.\]
_The interested reader can also confirm this using Theorem 4.2 and the poset pictured in Figure 4.1. Each element of the \(W\)-set is obtained from a saturated chain in the poset connecting \(3214\) to \(y_{0}=4321\). Covers arising from the right weak order (respectively, left weak order) on \(\mathbf{S}_{q}\) labeled by \(s_{i}\) correspond to covers in the weak order on clans labeled by \(s_{i+p}\) (respectively \(s_{i}\)). Note that there are more chains than elements of the \(W\)-set, as two distinct chains can yield reduced words for the same element._
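Theorem 4.12 reduces the computation of \(W\)-sets to bookkeeping with permutations; the sketch below (ours, reusing `compose`, `S`, and `length` from the earlier sketches) implements \(\varphi\) and \(\psi_{w}\) and reproduces the count in Example 4.13.

```python
def embed(v, n):
    """Embed v in S_q into S_n, fixing q+1, ..., n."""
    return tuple(v) + tuple(range(len(v) + 1, n + 1))

def phi(v, n):
    """phi(v) = w0 v^{-1} w0 in S_n (Lemma 4.8); in particular phi(s_i) = s_{n-i}."""
    ve = embed(v, n)
    vinv = tuple(ve.index(i) + 1 for i in range(1, n + 1))   # v^{-1}
    return tuple(n + 1 - vinv[n - i] for i in range(1, n + 1))

def W_set(w, p):
    """W(gamma_w) = {u phi(v) : (u, v) in S(w y0)} (Theorem 4.12)."""
    q, n = len(w), p + len(w)
    y0 = tuple(range(q, 0, -1))                              # longest element of S_q
    return {compose(embed(u, n), phi(v, n)) for (u, v) in S(compose(w, y0))}

print(len(W_set((3, 2, 1, 4), p=4)))   # 4, matching Example 4.13 with q = 4
```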
We apply the results of Theorem 4.12 to compute the cohomology class of each \(K\)-orbit closure \(\overline{\mathcal{O}}_{\gamma_{w}}\). We make use of Borel's description of the integral cohomology ring \(H^{*}(GL_{n}/B,\mathbb{Z})\) as the ring of coinvariants, that is,
\[H^{*}(GL_{n}/B,\mathbb{Z})\simeq\mathbb{Z}[x_{1},\ldots,x_{n}]/I,\]
where \(I\) is the ideal generated by the symmetric polynomials without a constant term. It is a well-known fact that the Schubert polynomial \(\mathfrak{S}_{w}\) is a polynomial representative for the
cohomology class \([\overline{Bw_{0}wB/B}]\). For a more detailed definition of Schubert polynomials see [12, 13]. Combining Theorem 4.12 with Brion's Theorem 2.11 now yields the following.
**Proposition 4.14**.: _For all \(w\in\mathbf{S}_{q}\), the cohomology class of the closure of the \(K\)-orbit \(\mathcal{O}_{\gamma_{w}}\) is represented by the polynomial_
\[\mathfrak{S}(\gamma_{w}):=\sum_{(u,v)\in\mathcal{S}(wy_{0})} \mathfrak{S}_{u\varphi(v)}=\sum_{(u,v)\in\mathcal{S}(wy_{0})}\mathfrak{S}_{u} \mathfrak{S}_{\varphi(v)}.\]
Proof.: By Theorem 2.11, the polynomial representative of the cohomology class of \(\overline{\mathcal{O}_{\gamma_{w}}}\) is given by the formula
\[\mathfrak{S}(\gamma_{w}):=\sum_{x\in W(\gamma_{w})}\mathfrak{S}_{x},\]
where \(\mathfrak{S}_{x}\) is the Schubert polynomial indexed by the permutation \(x\in\mathbf{S}_{n}\). By Theorem 4.12 each \(x\in W(\gamma_{w})\) can be written \(x=u\varphi(v)\) for a unique \((u,v)\in\mathcal{S}(wy_{0})\). The result now follows immediately, as \(u\) and \(\varphi(v)\) have disjoint supports by Lemma 4.9, so \(\mathfrak{S}_{u\varphi(v)}=\mathfrak{S}_{u}\mathfrak{S}_{\varphi(v)}\) (see, for example, [13, Corollary 2.4.6].)
The following is now immediate from Theorem 3.10.
**Corollary 4.15**.: _For all \(w\in\mathbf{S}_{q}\) avoiding the pattern \(231\), the polynomial representative of the cohomology class of Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w))\) is given by_
\[\mathfrak{S}(\operatorname{Hess}(\mathsf{x}_{p,q},\mathbf{m}(w)))= \sum_{(u,v)\in\mathcal{S}(wy_{0})}\mathfrak{S}_{u}\mathfrak{S}_{\varphi(v)}.\]
**Example 4.16**.: _Let \(q=3\). For the permutation \(w=123\in\mathbf{S}_{3}\), the \(W\)-set of \(\gamma_{123}\) is_
\[\{s_{1}s_{2}s_{1},s_{p+1}s_{p+2}s_{p+1},s_{1}s_{2}s_{p+2},s_{2}s_{1}s_{p+1},s_{ 1}s_{p+2}s_{p+1},s_{2}s_{p+1}s_{p+2}\}.\]
_It follows from Corollary 4.15 that the polynomial representative of the cohomology class for the Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,3},\mathbf{m}(123))\) is_
\[\mathfrak{S}_{s_{1}s_{2}s_{1}}+\mathfrak{S}_{s_{p+1}s_{p+2}s_{p+1}}+\mathfrak{ S}_{s_{1}s_{2}s_{p+2}}+\mathfrak{S}_{s_{2}s_{1}s_{p+1}}+\mathfrak{S}_{s_{1}s_{p+2}s _{p+1}}+\mathfrak{S}_{s_{2}s_{p+1}s_{p+2}}.\]
_Applying a similar calculation to the permutation \(w=213\in\mathbf{S}_{3}\) gives us the polynomial_
\[\mathfrak{S}_{s_{2}s_{1}}+\mathfrak{S}_{s_{2}s_{p+2}}+\mathfrak{S}_{s_{p+2}s_ {p+1}},\]
_representing the cohomology class for the Hessenberg variety \(\operatorname{Hess}(\mathsf{x}_{p,3},\mathbf{m}(213))\), as the \(W\)-set of \(\gamma_{213}\) is \(\{s_{2}s_{1},s_{2}s_{p+2},s_{p+2}s_{p+1}\}\)._
Our final goal is to understand the intersection of the closure of the orbit corresponding to the clan \(\gamma_{w}\) with a "basic hyperplane of \(GL_{n}/B\)." Here, by a basic hyperplane of \(GL_{n}/B\), we mean a Schubert divisor \(X_{s_{i}w_{0}}\), where \(i\in[n-1]\). Such an intersection is succinctly expressed in the cohomology ring by Monk's formula [13, Theorem 2.7.1].
**Lemma 4.17** (Monk's formula).: _For all \(u\in\mathbf{S}_{n}\) and all \(m\in[n-1]\),_
\[\mathfrak{S}_{s_{m}}\mathfrak{S}_{u}=\sum_{\begin{subarray}{c}j\leq m<k\\ \ell\left(ut_{jk}\right)=\ell(u)+1\end{subarray}}\mathfrak{S}_{ut_{jk}},\]
_where \(t_{jk}\) is the transposition in \(\mathbf{S}_{n}\) that interchanges \(j\) and \(k\) and leaves every other number fixed._
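Monk's rule is likewise easy to implement; the sketch below (ours, reusing `length`) lists the permutations indexing the summands of \(\mathfrak{S}_{s_{m}}\mathfrak{S}_{u}\).

```python
def monk(m, u):
    """Permutations u t_{jk} with j <= m < k and l(u t_{jk}) = l(u) + 1."""
    n, out = len(u), []
    for j in range(1, m + 1):
        for k in range(m + 1, n + 1):
            ut = list(u)
            ut[j - 1], ut[k - 1] = ut[k - 1], ut[j - 1]   # right-multiply by t_{jk}
            if length(tuple(ut)) == length(u) + 1:
                out.append(tuple(ut))
    return out

print(monk(2, (2, 1, 3)))   # [(3,1,2), (2,3,1)]: S_{s_2} S_{s_1} = S_{312} + S_{231}
```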
**Example 4.18**.: _Let \(q=3\). We know from Example 4.16 that the cohomology class for the Hessenberg variety \(\mathrm{Hess}(\mathsf{x}_{p,3},\mathbf{m}(123))\) is represented by_
\[\mathfrak{S}_{s_{1}s_{2}s_{1}}+\mathfrak{S}_{s_{p+1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{1}s_{2}s_{p+2}}+\mathfrak{S}_{s_{2}s_{1}s_{p+1}}+\mathfrak{S}_{s_{1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{2}s_{p+1}s_{p+2}}.\]
_Now, let us use Monk's formula to understand the product \(\mathfrak{S}_{s_{m}}\mathfrak{S}(\gamma_{123})\) for all \(m<n=6\) (here \(p=q=3\)). We note that in each case, the product is a \(0\)-\(1\) sum of Schubert polynomials._
_Multiplying by \(\mathfrak{S}_{s_{1}}\) gives us_
\[\mathfrak{S}_{s_{3}s_{1}s_{2}s_{1}}+\mathfrak{S}_{s_{1}s_{p+1}s_{ p+2}s_{p+1}}+\mathfrak{S}_{s_{1}s_{2}s_{1}s_{p+2}}+\mathfrak{S}_{s_{p+1}s_{3}s_{2}s_ {1}}+\] \[\mathfrak{S}_{s_{3}s_{2}s_{1}s_{p+1}}+\mathfrak{S}_{s_{2}s_{1}s_{ p+2}s_{p+1}}+\mathfrak{S}_{s_{2}s_{1}s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{1}s_{2}s_{p+1}s_{ p+2}}.\]
_Multiplying by \(\mathfrak{S}_{s_{2}}\) gives us_
\[\mathfrak{S}_{s_{3}s_{1}s_{2}s_{1}}+\mathfrak{S}_{s_{2}s_{3}s_{1} s_{2}}+\mathfrak{S}_{s_{2}s_{p+1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{3}s_{1}s_{2}s_{ p+2}}+\] \[\mathfrak{S}_{s_{p+1}s_{3}s_{2}s_{1}}+\mathfrak{S}_{s_{3}s_{2}s_ {1}s_{p+1}}+\mathfrak{S}_{s_{1}s_{2}s_{1}s_{p+1}}+\mathfrak{S}_{s_{2}s_{1}s_{ p+2}s_{p+1}}+\] \[\mathfrak{S}_{s_{1}s_{2}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{1}s_{2} s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{p+1}s_{p+2}s_{3}s_{2}}+\mathfrak{S}_{s_{3}s_{2}s_ {p+1}s_{p+2}}.\]
_Multiplying by \(\mathfrak{S}_{s_{3}}\) gives us_
\[\mathfrak{S}_{s_{2}s_{1}s_{3}s_{2}}+\mathfrak{S}_{s_{1}s_{2}s_{1 }s_{3}}+\mathfrak{S}_{s_{p+1}s_{p+2}s_{p+1}s_{3}}+\mathfrak{S}_{s_{p+1}s_{3}s_ {p+2}s_{p+1}}+\] \[\mathfrak{S}_{s_{3}s_{p+1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{3}s_{ 1}s_{2}s_{p+2}}+\mathfrak{S}_{s_{1}s_{2}s_{3}s_{p+2}}+\mathfrak{S}_{s_{p+1}s_{ 3}s_{2}s_{1}}+\] \[\mathfrak{S}_{s_{3}s_{p+1}s_{2}s_{1}}+\mathfrak{S}_{s_{p+1}s_{2} s_{1}s_{3}}+\mathfrak{S}_{s_{2}s_{1}s_{3}s_{p+1}}+\mathfrak{S}_{s_{p+2}s_{p+1}s_{1}s_{ 3}}+\] \[\mathfrak{S}_{s_{3}s_{1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{3}s_{2}s_ {p+1}s_{p+2}}+\mathfrak{S}_{s_{2}s_{p+1}s_{p+2}s_{3}}+\mathfrak{S}_{s_{2}s_{3} s_{p+1}s_{p+2}}.\]
_Multiplying by \(\mathfrak{S}_{s_{4}}\) gives us_
\[\mathfrak{S}_{s_{p+1}s_{1}s_{2}s_{1}}+\mathfrak{S}_{s_{p+1}s_{p+2} s_{3}s_{p+1}}+\mathfrak{S}_{s_{3}s_{p+1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{1}s_{2}s_{ p+2}s_{p+1}}+\] \[\mathfrak{S}_{s_{1}s_{2}s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{3}s_{2}s_ {1}s_{p+1}}+\mathfrak{S}_{s_{2}s_{1}s_{3}s_{p+1}}+\mathfrak{S}_{s_{2}s_{1}s_{ p+2}s_{p+1}}+\] \[\mathfrak{S}_{s_{3}s_{1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{3}s_{2}s_ {p+1}s_{p+2}}+\mathfrak{S}_{s_{2}s_{3}s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{1}s_{p+ 1}s_{p+2}s_{p+1}}.\]
_Multiplying by \(\mathfrak{S}_{s_{5}}\) gives us_
\[\mathfrak{S}_{s_{3}s_{p+1}s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{p+2}s_ {1}s_{2}s_{1}}+\mathfrak{S}_{s_{1}s_{2}s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{2}s_{1}s _{p+2}s_{p+1}}+\] \[\mathfrak{S}_{s_{2}s_{1}s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{1}s_{p+1} s_{p+2}s_{p+1}}+\mathfrak{S}_{s_{3}s_{2}s_{p+1}s_{p+2}}+\mathfrak{S}_{s_{2}s_{3}s_{p+1}s_{ p+2}}.\]
We can use Monk's formula to understand the product \(\mathfrak{S}_{s_{m}}\mathfrak{S}(\gamma_{w})\) in general. In particular, we show that the product \(\mathfrak{S}_{s_{m}}\mathfrak{S}(\gamma_{w})\) is a \(0-1\) sum of Schubert polynomials for all \(m<n\).
**Theorem 4.19**.: _If \(m\in[n-1]\) and \(w\in\mathbf{S}_{q}\), then the product \(\mathfrak{S}_{s_{m}}\mathfrak{S}(\gamma_{w})\) is a multiplicity-free sum of Schubert polynomials._
Proof.: It follows from Proposition 4.14 that
\[\mathfrak{S}_{s_{m}}\mathfrak{S}(\gamma_{w})=\sum_{(u,v)\in\mathcal{S}(wy_{0})} \mathfrak{S}_{s_{m}}\mathfrak{S}_{u\varphi(v)}. \tag{4.7}\]
We apply Monk's formula to each product \(\mathfrak{S}_{s_{m}}\mathfrak{S}_{u\varphi(v)}\) and obtain
\[\mathfrak{S}_{s_{m}}\mathfrak{S}_{u\varphi(v)}=\sum_{\begin{subarray}{c}j\leq m <k\\ \ell\left(u\varphi(v)t_{jk}\right)=\ell(u)+\ell(v)+1\end{subarray}}\mathfrak{S }_{u\varphi(v)t_{jk}}. \tag{4.8}\]
Suppose there exist pairs \((u_{1},v_{1})\) and \((u_{2},v_{2})\) in \(\mathcal{S}(wy_{0})\) such that
\[u_{1}\varphi(v_{1})t_{jk}=u_{2}\varphi(v_{2})t_{j^{\prime}k^{\prime}} \tag{4.9}\]
for some \(j,k,j^{\prime},k^{\prime}\) such that \(j\leq m<k\) and \(j^{\prime}\leq m<k^{\prime}\). To complete the proof, we show that (4.9) implies \(u_{1}=u_{2}\) and \(v_{1}=v_{2}\).
Write \(x_{1}=u_{1}\varphi(v_{1})\) and \(x_{2}=u_{2}\varphi(v_{2})\) for the remainder of the proof. We begin with a few observations. By construction, each permutation \(x_{i}=u_{i}\varphi(v_{i})\) with \(i\in\{1,2\}\) satisfies
\[[q]=\{x_{i}(1),\ldots,x_{i}(q)\} \tag{4.10}\]
and
\[x_{i}(a)=a\text{ for all }q+1\leq a\leq p, \tag{4.11}\]
and
\[[n]\setminus[p]=\{x_{i}(p+1),\ldots,x_{i}(n)\}. \tag{4.12}\]
We obtain the one-line notation of \(x_{1}t_{jk}\) from that of \(x_{1}\) by exchanging the entries in positions \(j\) and \(k\) and similarly for \(x_{2}t_{j^{\prime}k^{\prime}}\). These observations imply that pairs \(j<k\) and \(j^{\prime}<k^{\prime}\) satisfying (4.9) must fall into one of the following cases:
1. \(j,j^{\prime}\in[q]\), \(k,k^{\prime}\in[n]\setminus[q]\),
2. \(j,j^{\prime},k,k^{\prime}\in[q]\),
3. \(j,j^{\prime},k,k^{\prime}\in\{q+1,\ldots,p\}\),
4. \(j,j^{\prime}\in\{q+1,\ldots,p\}\) and \(k,k^{\prime}\in[n]\setminus[p]\), and
5. \(j,j^{\prime},k,k^{\prime}\in[n]\setminus[p]\).
Note that cases (3) and (4) do not arise when \(p=q\).
We begin with Case (1). In this case, the equality (4.9) and equation (4.10) imply
\[\{k\} = ([n]\setminus[q])\cap\{(x_{1}t_{jk})^{-1}(1),\ldots,(x_{1}t_{jk}) ^{-1}(q)\}\] \[= ([n]\setminus[q])\cap\{(x_{2}t_{j^{\prime}k^{\prime}})^{-1}(1), \ldots,(x_{2}t_{j^{\prime}k^{\prime}})^{-1}(q)\}=\{k^{\prime}\}\]
so \(k=k^{\prime}\). Similarly, using (4.9), (4.11), and (4.12) we have
\[\{j\} = [q]\cap\{(x_{1}t_{jk})^{-1}(q+1),\ldots,(x_{1}t_{jk})^{-1}(n)\}\] \[= [q]\cap\{(x_{2}t_{j^{\prime}k^{\prime}})^{-1}(q+1),\ldots,(x_{2}t_ {j^{\prime}k^{\prime}})^{-1}(n)\}=\{j^{\prime}\}\]
so \(j=j^{\prime}\). Now (4.9) implies \(x_{1}=x_{2}\) and thus \(u_{1}=u_{2}\) and \(v_{1}=v_{2}\) by Lemma 4.10. Case (4) follows by similar reasoning, so we omit it to avoid repetition.
Now suppose we are in the setting of Case (2). Then (4.9) becomes \(u_{1}t_{jk}\varphi(v_{1})=u_{2}t_{j^{\prime}k^{\prime}}\varphi(v_{2})\), since \(t_{jk}\) and \(t_{j^{\prime}k^{\prime}}\) commute with \(\varphi(v_{1})\) and \(\varphi(v_{2})\). By Lemma 4.9, we also know that both of the following sets
\[\operatorname{Supp}(u_{1}t_{jk})\cap\operatorname{Supp}(\varphi(v_{1}))\quad \text{and}\quad\operatorname{Supp}(u_{2}t_{j^{\prime}k^{\prime}})\cap \operatorname{Supp}(\varphi(v_{2}))\]
are empty. Consequently, \(\operatorname{Supp}(\varphi(v_{1}))=\operatorname{Supp}(\varphi(v_{2}))\) implying that \(\varphi(v_{1})=\varphi(v_{2})\), and hence \(v_{1}=v_{2}\) by Lemma 4.8. It now follows from the definition of the set \(\mathcal{S}(wy_{0})\) that \(u_{1}=u_{2}\). Indeed, we have
\[u_{1}v_{1}=wy_{0}=u_{2}v_{2}\text{ and }v_{1}=v_{2}\Rightarrow u_{1}v_{1}=u_{ 2}v_{1}\Rightarrow u_{1}=u_{2}.\]
The proof of Case (5) is almost identical to that of (2), so we omit it to avoid repetition.
Finally, we consider Case (3). The equality (4.9) and equation (4.11) immediately imply that \(j=j^{\prime}\) and \(k=k^{\prime}\). Thus \(x_{1}=x_{2}\) and we conclude \(u_{1}=u_{2}\) and \(v_{1}=v_{2}\) as before. This finishes the proof of our theorem.
|
2309.17374 | 2D silver-nanoplatelets metasurface for bright directional
photoluminescence, designed with the local Kirchhoff's law | Semiconductor colloidal nanocrystals are excellent light emitters in terms of
efficiency and spectral control. Integrating them with a metasurface would pave
the way to ultrathin photoluminescent devices with reduced amount of active
material and performing complex functionalities such as beam shaping or
polarization control. To design such a metasurface, a quantitative model of the
emitted power is needed. Here, we report the design, fabrication and
characterization of a $\approx$ 300 nm thick light-emitting device combining a
plasmonic metasurface with an ensemble of nanoplatelets. The source has been
designed with a new methodology based on a local form of Kirchhoff's law. The
source displays record high directionality and brightness. | Elise Bailly, Jean-Paul Hugonin, Jean-René Coudevylle, Corentin Dabard, Sandrine Ithurria, Benjamin Vest, Jean-Jacques Greffet | 2023-09-29T16:23:42Z | http://arxiv.org/abs/2309.17374v1 | D silver-nanoplatelets metasurface for bright directional photoluminescence designed with the local Kirchhoff's law: Supplemental material
## 1 Nanoplatelets' properties
### Emission and Absorption spectra of the nanoplatelets
The absorption and emission spectra of the nanoplatelets (NPLs) (in solution in hexane) are given in Figure 1. The peak emission wavelength is 605 nm. The Stokes shift, evaluated between the absorption peak (blue dotted line) and the emission peak (red dotted line) is 20 nm.
Figure 1: Normalized emission and absorption spectra of the solution of NPLs in hexane in arbitrary units.
### TEM images
The TEM (transmission electron microscopy) image of the NPLs is presented in Figure 2. For TEM imaging, a drop of diluted NPLs solution in hexane is drop-cast on a copper grid covered with an amorphous carbon film. The grid is degassed overnight under secondary vacuum. A JEOL 2010F microscope operated at 200 kV is used for image acquisition.
### Refractive index of the nanoplatelets
In order to measure the refractive index of the NPLs, we fabricated a sample consisting of NPLs deposited by spin coating on top of a stack of a 50 nm-thick layer of silver / a 1 nm-thick layer of germanium / an SF10 glass substrate. Silver and germanium were deposited using electron-beam evaporation. To deposit the NPLs, 200 \(\mu\)L of a NPLs solution was spin coated on the metallic substrate at 500 rpm for 30 seconds with an acceleration ramp of 5 seconds. The ellipsometry measurements were performed in three steps.
1. First, the refractive index of an SF10 glass substrate was measured to serve as a reference.
2. The refractive index of silver (including 1 nm of germanium) was measured from a reference sample which was fabricated under the same conditions as the sample covered with NPLs. It thus consists of a 50 nm-thick layer of silver on a 1 nm-thick layer of germanium, on an SF10 glass substrate. The experimental refractive index is similar to the silver index of reference [1], as can be seen in Figure 3, so that the germanium layer has little impact on the refractive index. The thickness of the layer was obtained by scratching it with a needle and measuring the depth of the slit by AFM. We obtained \(50\pm 3\) nm, in agreement with the nominal value.
3. Knowing the refractive index models of glass and silver, the refractive index of the NPLs was extracted from the ellipsometry data and processed using a B-spline method (which is Kramers-Kronig consistent). The refractive index is given in Figure 4. The total thickness of the sample was obtained by scratching it with a needle and measuring the total depth of the slit by AFM. By subtracting the experimental thickness values of the silver and germanium layers obtained from the reference sample, we obtained a NPLs layer thickness of \(42\pm 5\) nm.
Figure 2: TEM image of the NPLs.
In order to compute the dispersion relation at complex frequency presented in Figure 5 of the main article [2], we fitted the refractive index of the NPLs as well as that of silver by a polynomial of degree 2: \(p(\lambda)=p_{1}\lambda^{n}+p_{2}\lambda^{n-1}+...+p_{n}\lambda+p_{n+1}\), using the Matlab(r) function "polyfit". The fitting coefficients are given in Table 1. The comparisons between the ellipsometry measurements and the polynomial fits are presented in Figure 3 for silver and in Figure 4 for the NPLs. A higher-degree polynomial fits the experimental data more accurately, but the resulting dispersion relation remains the same.
Nevertheless, the absorptivity computations presented in the main article [2] are done with an interpolation of the experimental index of the NPLs obtained by ellipsometry and the refractive index of silver of reference [1].
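For readers who prefer Python to Matlab, the fit can be reproduced as follows; this is a minimal sketch in which the wavelength grid and index values are invented placeholders (the measured data are those of Figure 4), and `numpy.polyfit` uses the same highest-degree-first coefficient convention \((p_{1},p_{2},p_{3})\) as Matlab's polyfit.

```python
import numpy as np

# Placeholder data standing in for the ellipsometry measurement of Figure 4
lam = np.linspace(450.0, 750.0, 301)                         # wavelength (nm)
n_meas = 1.8 + 0.05j * np.exp(-((lam - 605.0) / 30.0) ** 2)  # invented values

deg = 2
p_re = np.polyfit(lam, n_meas.real, deg)    # [p1, p2, p3], highest degree first
p_im = np.polyfit(lam, n_meas.imag, deg)
n_fit = np.polyval(p_re, lam) + 1j * np.polyval(p_im, lam)
```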
## 2 Spatial structure of the surface plasmon
This section shows the spatial structure of the mode which exists at 607.4 nm at the interface between a silver substrate and a thin layer (2 nm) of NPLs, computed with the refractive index of the NPLs measured by ellipsometry and the refractive index of silver of reference [1]. Figure 5 shows that the mode is evanescent both in the metal and in air. It corresponds to a surface plasmon polariton at the metal/NPLs/air interface.
Figure 4: Refractive index of the NPLs measured by ellipsometry and comparison with the polynomial fit whose coefficients are given in Table 1, from a sample composed of \(42\pm 5\) nm thick layer of NPLs deposited by spin coating on top of a 50 nm of silver on a 1 nm of Germanium, on a SF10 glass substrate. For the sake of clarity, the imaginary part of the refractive index is multiplied by 5.
Figure 5: Spatial structure of the mode as a function of z at 607.4 nm (in arbitrary units), for a sample composed of 2 nm thick layer of NPLs (in pink hue area) on silver. At this wavelength the refractive indexes are \(n_{\mathrm{NPLs}}=1.7844+0.0598i\), \(n_{\mathrm{Ag}}=0.1260+3.7852i\) and \(n_{\mathrm{eff}}=1.0419+0.0031i\).
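As a rough sanity check on the quoted effective index, one may neglect the 2 nm NPL film and evaluate the standard two-medium surface-plasmon dispersion at a bare silver/air interface; this is only an approximation, and the small shift from the quoted \(n_{\mathrm{eff}}=1.0419+0.0031i\) reflects the loading by the thin NPL layer.

```python
import numpy as np

n_Ag = 0.1260 + 3.7852j              # silver index at 607.4 nm (quoted above)
eps_m, eps_d = n_Ag ** 2, 1.0        # metal and air permittivities
n_eff = np.sqrt(eps_m * eps_d / (eps_m + eps_d))
print(n_eff)                         # ~1.0367 + 0.0026j for the bare Ag/air SPP
```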
## 3 Estimation of the beam efficiency
In this section, we present the computation of the beam efficiency at the experimental peak emission wavelength (607.4 nm), defined as \(P_{\mathrm{lobe}}/P_{\mathrm{tot}}\), where \(P_{\mathrm{lobe}}=\int_{0}^{k_{\mathrm{lobe}}}\mathrm{d}P_{\mathrm{e}}\) is the power emitted in the emission peak represented in red in Figure 6, with \(k_{\mathrm{lobe}}=2.40~\mu\mathrm{m}^{-1}\).
Since the signal is symmetrical in \(\pm k_{x}\), we integrate over the positive axis only and multiply by two.
The total power \(P_{\mathrm{tot}}\) has been determined in two different ways:
* Experimentally, light is collected within a light cone limited by the numerical aperture of the objective (NA = 0.75), so that it is not possible to obtain the exact total power emitted between \(0^{\circ}\) and \(90^{\circ}\). However, it is possible to estimate the beam efficiency with the total power collected, called \(P_{\mathrm{tot}}^{\mathrm{min}}=\int_{0}^{k_{\mathrm{min}}}\mathrm{d}P_{\mathrm{e}}\), represented with green dotted lines in Figure 6. We chose a value of \(k_{\mathrm{min}}=7.36~\mu\mathrm{m}^{-1}\) (corresponding to \(45.3^{\circ}\) at 607.4 nm), slightly lower than \(k_{\mathrm{NA}}=k_{0}\mathrm{NA}\), before the signal decreases (see Fig. 6). This estimation gives an overestimation of the beam efficiency value. We obtain a beam efficiency of 44.8 %.
* It is also possible to bound the total power emitted between \(0^{\circ}\) and \(90^{\circ}\) by extrapolating the value of the emitted power at \(k_{\mathrm{min}}\) to all \(k>k_{\mathrm{min}}\). The integrated power is then \(P_{\mathrm{tot}}^{\mathrm{max}}=\int_{0}^{k_{\mathrm{max}}}\mathrm{d}P_{\mathrm{e}}\), represented with blue dotted lines in Figure 6. Thus, we obtain an underestimated beam efficiency of 35 %.
We therefore estimate that the beam efficiency lies between 35% and 44.8%.
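A minimal numerical sketch of the two estimates is given below; the emission lobe is an invented placeholder (a Gaussian on a flat background) standing in for the measured curve of Figure 6, and \(k_{\mathrm{max}}\) is taken to be the free-space wavevector \(k_{0}\) (emission up to \(90^{\circ}\)).

```python
import numpy as np

kx = np.linspace(0.0, 7.36, 500)                 # collected range (um^-1)
dP = np.exp(-((kx - 1.2) / 0.6) ** 2) + 0.3      # placeholder lobe + background

k_lobe = 2.40
P_lobe = np.trapz(dP[kx <= k_lobe], kx[kx <= k_lobe])
P_tot_min = np.trapz(dP, kx)                     # power collected up to k_min
k0 = 2 * np.pi / 0.6074                          # k0 at 607.4 nm, in um^-1
P_tot_max = P_tot_min + dP[-1] * (k0 - kx[-1])   # extrapolate plateau to k0
print(P_lobe / P_tot_min, P_lobe / P_tot_max)    # over- and under-estimate
```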
## 4 Emission and Absorption for TE and TM polarization states
We present in Figure 7 the comparisons between normalized experimental radiation patterns and the normalized absorptivities, for Transverse Electric (TE), Transverse Magnetic (TM) polarization states, and for the total emitted power, plotted at their experimental peak emission wavelengths. Only the last case is presented in the article [2].
We also present in Figure 8 the comparison between the experimental radiation patterns and the normalized absorptivities for \(h_{\rm top-NPLs}\) = 0 nm, that is, assuming no overfilling of the grating grooves; the agreement is noticeably poorer.
Figure 8: Normalized experimental radiation patterns at their peak emission wavelengths (red curve) and normalized theoretical absorption pattern calculated at the same peak emission wavelengths (blue dotted line), as a function of the polarization state (a): Transverse Electric (TE) at 608.4 nm, (b): Transverse Magnetic (TM) at 605.8 nm, (c): Total emission at 607.4 nm, for \(h_{\rm res}\) = 100 nm, \(l_{\rm res}\) = 450 nm, \(p_{\rm res}\) = 600 nm, \(h_{\rm top-NPLs}\) = 0 nm.
Figure 7: Normalized experimental radiation patterns at their peak emission wavelengths (red curve) and normalized theoretical absorption pattern calculated at the same peak emission wavelengths (blue dotted line), as a function of the polarization state (a): Transverse Electric (TE) at 608.4 nm, (b): Transverse Magnetic (TM) at 605.8 nm, (c): Total emission at 607.4 nm, for \(h_{\rm res}\) = 100 nm, \(l_{\rm res}\) = 450 nm, \(p_{\rm res}\) = 600 nm, \(h_{\rm top-NPLs}\) = 2 nm. |
2302.14269 | Measuring arousal and stress physiology on Esports, a League of Legends case study | Esports gaming is an area in which videogame players need to cooperate and compete with each other, influencing their cognitive load, processing, stress, and social skills. Here it is unknown to which extent competitive videogame play using a desktop setting can affect the physiological responses of players' autonomic nervous system. For such, we propose a study where we have measured distinct electrodermal and cardiac activity metrics over competitive players during several League of Legends gameplay sessions in an Esports stadium. We mainly found that game performance (whether winning or losing the game) significantly affects both electrodermal and cardiac activity, where players who lost the game showed higher stress-related physiological responses, as compared to winning players. We also found that important specific in-game events such as "Killing", "Dying" or "Destroying Turret" significantly increased both electrodermal and cardiac activity over players more than other less-relevant events such as "Placing Wards" or "Destroying Turret Plates". Finally, by analyzing activity over player roles we found different trends of activity in these measurements, which could foster the exploration of human physiology with a larger set of participants in future Esports studies. | David Berga, Alexandre Pereda, Eleonora De Filippi, Arijit Nandi, Eulalia Febrer, Marta Reverte, Lautaro Russo | 2023-02-28T03:02:47Z | http://arxiv.org/abs/2302.14269v2 | # Physiology on Esports, a League of Legends study
###### Abstract
Esports gaming is an area in which videogame players need to cooperate and compete with each other, influencing their cognitive load, processing, stress, and social skills. Here it is unknown to which extent traditional videogame play (with a desktop setting) can affect the physiological responses of players' autonomic nervous system. For such, we propose a study where we have measured distinct electrodermal and cardiac activity metrics over competitive players during several League of Legends gameplay sessions. We mainly found that game performance (whether winning or losing the game) significantly affects both electrodermal (EDA) and cardiac (ECG) activity, where players who lost the game showed higher stress-related physiological responses, as compared to winning players. We also found that important specific in-game events such as "Killing", "Dying" or "Destroying Turret" significantly increased both electrodermal and cardiac activity over players more than other less-relevant events such as "Placing Wards" or "Destroying Turret Plates". Finally, by analyzing activity by player role we found different trends in these measurements, which may foster the exploration of esports gaming effects on human physiology in future studies.
## 1 Introduction
Research in the field of psychology has traditionally focused on three main forms of both emotional and attentional responses: subjective perceptions of the individual about their own state, effects on behavior, and changes in their physiological patterns, such as acceleration or deceleration of heart rate, increase in skin conductivity, etc. (Bradley and Lang 2000; Mauss and Robinson 2009). Each one of these approaches shows both advantages and disadvantages. First, self-report methods, such as questionnaires or interviews, in which participants are directly asked to report their status, are the only way to access the individual's subjective perception, but they are limited by the individual's own ability to introspect, since many psychological processes can occur unconsciously or with low levels of consciousness (Nisbett and Wilson 1977). Moreover, in certain cases it is possible that cognitive biases (such as social desirability bias) interfere with the reports, making the information not entirely reliable. For this reason, in the field of experimental psychology research, the analysis of physiological responses (for example, variations in heart rate, skin conductivity, activation of facial muscles, etc.) has been introduced as a way of obtaining information about the cognitive and emotional processes of individuals in an indirect way. On the other hand, the main disadvantage of the physiological methods with respect to the self-reported ones is that data is obtained in a much
slower and more resource-intensive way, so it is recommended to combine both methodologies: qualitative studies, for which data can be obtained from a large number of participants, and laboratory studies with smaller samples (Cacioppo, Chen, and Cacioppo 2017). Experimental designs should therefore be adapted to the type of information that each of these measures can offer us. Here are the most relevant physiological measures that we used in our study:
* Electrodermal activity (or EDA, also known as galvanic skin response or GSR) is a correlate of the activation of the sympathetic branch of the autonomic nervous system, which provides reliable information with a high temporal resolution about participants' physiological arousal, related to the intensity of experienced cognitive-emotional responses.
* Cardiac activity (ECG or electrocardiogram), or optical pulse (PPG), as a measure of heart rate. This signal is mediated by both the sympathetic and parasympathetic systems, thus responding to both physiological arousal and emotional regulation processes. It depends on the intensity of emotions and their hedonic load, as well as on the cognitive resources engaged in stimulus processing. The variability of heart rate can be analyzed by looking at the ratio of sympathetic vs parasympathetic activity within the PPG signal.
### Related Work
Physiological studies were previously carried out in the game literature (Kivikangas et al. 2011; Argasinski and Grabska-Gradzinska 2017; Alhargan, Cooke, and Binjammaz 2017) analyzing signals such as the ECG/PPG and the GSR/EDA, showing relevant differences between participants given their affective state, mostly validated with questionnaires. Various other studies have also compared muscle signals (Electromyography/EMG) (Ahsan, Ibrahimy, and Khalifa 2009), brain signals (Electroencephalography/EEG) (Hafeez et al. 2021), and facial gestures (Samara et al. 2017). The main focus of some of these latter studies is to assess the affective state of participants from these measures, classifying each one of the 7 affective categories of basic emotions (initially defined by Ekman 2005): anger, sadness, fear, disgust, joy, surprise, and contempt, plus neutral. Each of the affective categories is standardized and validated with different psychophysiological tests such as the Russell test (Russell 1980) or the Self-Assessment Manikin (Bradley and Lang 1994), serving as a guideline for the validation of physiological measures that could affect the affective responses of each individual. In this case, the EDA and the ECG are signals widely validated by the literature that can give insight into the changes in the affective states of each individual in a real-time manner.
In a recent review by Leis and Lautenbach 2020, 17 studies in Esports contexts were meta-analyzed for psychological and physiological stress, and it was concluded that simply playing in a non-competitive Esports environment produced no stress reactions, whereas in competitive environments several studies reported increases in anxiety levels, cortisol levels, and physiological sympathetic activation, all three indicators of stress (Jones, Tan, and Bloom 2012; Yaribeygi et al. 2017). However, stress is not the only interesting indicator to consider in Esports environments, whether competitive or not; peripheral physiology can also provide insight into various aspects of cognitive/emotional information processing, such as polarity, emotion, engagement, boredom, frustration, etc. The benefits of extracting this information are evident, as exemplified by Smerdov et al. 2020. They performed a very comprehensive sensory analysis resulting in the publication of a dataset collected from professional and amateur teams in 22 League of Legends matches, including the simultaneous recording of the physiological (i.e. movements, pulse, saccades) and psychological (self-reported post-match survey) activity of five players, with interesting results such as the lack of correlation between stress and concentration levels for professional players. We take a similar approach, focusing on a simultaneous exploration of electrodermal activity and cardiac activity in all five players of a team in different events.
**Contributions.** In our study we perform the first physiological analysis on a widely played desktop videogame, "League of Legends". These are our main objectives:
* Evaluate the affective responses by analyzing distinct EDA and ECG metrics depending on game performance (winning or losing).
* Analyse differences in physiological responses over game events (e.g. "Killing", "Dying", "Kill Assist", "Destroy Turret", "Destroy Turret Plate", "Placing Ward", etc.)
* Investigate differences in individual physiological responses based on player roles (Jungle, Middle, Utility, Bottom, and Top)
## 2 Methods and Experimental Design
For our experiments we used a Shimmer3¹ pack with 5 simultaneous GSR+ units providing galvanic skin response for the acquisition of electrodermal activity (EDA), as well as optical pulse (PPG) for estimating heart rate variations.
Footnote 1: [https://shimmersensing.com/](https://shimmersensing.com/)
Footnote 2: [https://github.com/dberga/riotwatcher-shimmer-pymput](https://github.com/dberga/riotwatcher-shimmer-pymput)
We developed our own Python tools (see²) for capturing data and sending it through Bluetooth (using pyserial and pylsl) and for synchronizing that data with events of the game (using riotwatcher), for later statistical analysis of EDA (Ledalab³ V3.4.9) and PPG (HeartPy⁴).
Footnote 3: [http://www.ledalab.de/](http://www.ledalab.de/)
Footnote 4: [https://github.com/suritiarid41/Hearrate_Analysis](https://github.com/suritiarid41/Hearrate_Analysis)
A total of 4 sessions were performed in an Asobu eSports Experience with 16 participants (contacted and selected by United Gamers Academy) in the gameplay experimentation. From these, we captured data from a total of 12 participants playing in a specific team during Summoner's Rift gameplay (average duration 30-45 min), later filtered to 7 with enough valid events for statistical comparison. We cut the recording of these participants from the start to the end of the game and set specific window times for each event (e.g., 5 s). This data is synchronized with events downloaded from the riotwatcher API⁵. The events selected for capture are "killing", "dying", "kill assist", "special killing", "item purchased", "level up", "ward placed", "building kill", "champion transform", "turret plate destroyed" and "elite monster kill". After gameplay we annotated Riot's metadata for each participant, such as game session data (total kills/deaths, damage done, etc.), win or loss condition, and player roles (top, mid, bot, utility and jungle). The average player level was 216 (lowest 82, highest 402), corresponding to silver-gold S12 competitive rank gamers.
Footnote 5: Riot API tokens available in [https://developer.riotgames.com/](https://developer.riotgames.com/)
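As a rough illustration of how the physiological traces and game events are aligned, the sketch below slices a resampled signal around an event timestamp; the function name is ours, and the 3 s/5 s window bounds follow the analysis window described in Section 2.1.1.

```python
import numpy as np

FS = 50  # Hz, sampling rate after resampling

def event_window(signal, event_time_s, pre_s=3.0, post_s=5.0):
    """Return the slice of `signal` from pre_s before to post_s after an event."""
    start = max(int((event_time_s - pre_s) * FS), 0)
    stop = int((event_time_s + post_s) * FS)
    return signal[start:stop]
```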
### Physiological data processing
#### 2.1.1 GSR preprocessing
The MATLAB-based toolbox Ledalab (Benedek and Kaernbach 2010) was used for the GSR signal preprocessing and analysis. First, we carried out a preliminary visual examination to look for periodic drift in the signal, which reflects artifacts, and we resampled the raw signal to 50 Hz using NeuroKit2⁶. The following preprocessing operations were then carried out using the Ledalab toolbox: low-pass Butterworth filtering with a cutoff frequency of 5 Hz, and smoothing to eliminate any remaining artifacts. Finally, we performed an event-related analysis utilizing Continuous Decomposition Analysis (CDA) to extract the features indicating Skin Conductance Responses (SCRs). By extracting the phasic (driver) information underlying EDA, this approach attempts to obtain the signal features of the underlying sudomotor nerve activity. The skin conductance data is deconvolved by the overall response shape, considerably enhancing temporal accuracy. This method enables
the extraction of continuous phasic and tonic activity based on traditional deconvolution within a predetermined time window, which for us spanned from three seconds before an event marker to five seconds after it. The number of SCRs within the response window, the response latency of the first SCR, the mean SCR amplitude, the maximum phasic activity, and the average tonic activity within the specified window were therefore collected for each event described in the previous section.
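For readers without MATLAB, a rough Python analogue of this event-related feature extraction can be sketched with NeuroKit2; this runs under default settings and does not reproduce Ledalab's CDA deconvolution exactly, and `eda_50hz`, `start`, and `stop` are placeholders for the resampled trace and a sample window.

```python
import numpy as np
import neurokit2 as nk

# Decompose the resampled EDA trace into tonic and phasic components
# and detect skin conductance responses (SCRs).
signals, info = nk.eda_process(eda_50hz, sampling_rate=50)

# Count SCR peaks inside the sample window [start, stop).
peaks = np.asarray(info["SCR_Peaks"])
nr_scr = int(((peaks >= start) & (peaks < stop)).sum())

phasic_max = signals["EDA_Phasic"].iloc[start:stop].max()
tonic_mean = signals["EDA_Tonic"].iloc[start:stop].mean()
```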
#### 2.1.2 PPG data processing
Processing and analysis of raw PPG data were conducted using the Python-based toolkit HeartPy (Gent et al. 2019; Van Gent et al. 2019), specialized for the analysis of the PPG signal as compared to ECG. At every heartbeat, blood perfuses through the capillaries and arteries, causing a slight discoloration of the skin; the PPG detects this discoloration. The systolic peak, diastolic notch, and diastolic peak make up the signal. First, as we did with the GSR signal, we resampled the raw PPG signal to 50 Hz using NeuroKit2⁷. Then, we ran the processing algorithm that comes with the HeartPy toolkit, which performs peak detection to extract reliable time-domain measures, such as beats per minute (BPM) and interbeat intervals (IBI). Furthermore, for each event, we extracted measures that reflect heart rate variability (HRV), such as the RMSSD (root mean square of successive differences) and the SDSD (standard deviation of successive differences).
Footnote 7: [https://neuropsychology.github.io/NeuroKit/](https://neuropsychology.github.io/NeuroKit/)
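A minimal sketch of this pipeline is shown below; the raw trace `raw_ppg` and its assumed original sampling rate of 128 Hz are ours for illustration, while `nk.signal_resample` and `hp.process` are the standard NeuroKit2 and HeartPy entry points.

```python
import neurokit2 as nk
import heartpy as hp

# Resample the raw PPG trace to 50 Hz, as done for the GSR signal.
ppg_50hz = nk.signal_resample(raw_ppg, sampling_rate=128, desired_sampling_rate=50)

# Peak detection and time-domain measures with HeartPy.
working_data, measures = hp.process(ppg_50hz, sample_rate=50.0)
for key in ("bpm", "ibi", "sdnn", "sdsd", "rmssd"):
    print(key, measures[key])
```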
## 3 Results
We performed data curation for our statistical analysis using data from 7 participants (a total of 2 game sessions with recorded measures in which 3 participants played twice) with enough event samples for later analysis and processing.
### Physiological results: Skin Conductance
We processed the raw GSR data with Ledalab to extract the following measures: nrSCR (number of skin conductance responses above threshold, "#"), Latency (delay of the first EDA response with respect to the event, "s"), Amplitude (mean activity inside the event window, "mV"), PhasicMax (maximum phasic value within the event window, "mV") and Tonic (maximum tonic activity within the window, "mV"). Previous literature in electrodermal physiology has shown that EDA can be a reliable quantifier of sympathetic dynamics (Posada-Quintero et al. 2016), with higher EDA correlating with higher sympathetic (stress/alert) levels.
In Table 1 we show mean statistics of nrSCR, Latency, Amplitude, PhasicMax, and Tonic values of players that win the gameplay and lose the gameplay. Similarly, in Table 2 we show statistics for events "Killing", "Dying", "Place Ward", "Destroy Turret" and "Destroy Turret Plate". We expand these statistics in Table 3 filtering player roles in the game.
Given the non-normality of the measured distributions, we performed non-parametric Friedman tests over win-loss and event data for each GSR metric. For the case of winning and losing conditions,
| wincon | nrSCR (#) | Latency (s) | Amplitude (mV) | PhasicMax (mV) | Tonic (mV) |
|---|---|---|---|---|---|
| WIN | 1.853±1.730 | -.151±1.926 | .274±.631 | .545±1.165 | 12.577±7.047 |
| LOSS | 2.599±2.210 | -.758±1.698 | .190±.414 | .370±.545 | 6.881±4.743 |
| TOTAL | 2.372±2.101 | -.574±1.788 | .215±.490 | .423±.788 | 8.611±6.121 |

Table 1: Win and Loss mean GSR statistics (2 sessions) by stacking all events in one statistic.
we found significant differences in player's activity when they won or lost the game during "Killing" in nrSCR, Amplitude, and PhasicMax activity (\(p\)=.046, \(\chi^{2}\)=4.000). We also observed significant differences when winning/losing the game during "Destroying Turret" in nrSCR and Amplitude (\(p\)=.020, \(\chi^{2}\)=5.444) as well as "Destroying Plate" for nrSCR (\(p\)=.008, \(\chi^{2}\)=7.143) with a trend in Amplitude (\(p\)=.071, \(\chi^{2}\)=3.266). The tonic activity was only significantly distinct depending on winning/losing for the events of "Dying" (\(p\)=.035, \(\chi^{2}\)=4.455) and "Placing Ward" (\(p\)=.002, \(\chi^{2}\)=10.000).
We also analyzed the distributions of GSR activity between events for the same winners and
| EVENT | nrSCR (#) | Latency (s) | Amplitude (mV) | PhasicMax (mV) | Tonic (mV) | N |
|---|---|---|---|---|---|---|
| KILL | 1.225±1.170 | 0.0485±2.114 | 0.031±0.0416 | 0.101±0.066 | 7.764±3.233 | 9 |
| DIE | 2.423±2.120 | -0.191±0.739 | 0.323±0.627 | 0.504±0.862 | 12.198±5.818 | 35 |
| PLACE WARD | 2.32±2.160 | -0.295±2.145 | 0.225±0.448 | 0.152±0.138 | 12.064±5.410 | 75 |
| DES.TURRET | 2.182±2.214 | -0.611±1.537 | 0.156±0.233 | 0.250±0.439 | 9.745±6.270 | 79 |
| DES.PLATE | 2.175±1.885 | -0.457±1.837 | 0.294±0.587 | 0.457±0.680 | 3.087±1.768 | 49 |

Table 2: Event mean GSR statistics (2 sessions) from events "Killing", "Dying", "Placing Ward", "Destroying Turret" and "Destroying Turret Plate".
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & Role & nrSCR & Latency & Amplitude & PhasicMax & Tonic & N \\ \cline{2-8} & Jungle & 1.200\(\pm\)2.168 &.644\(\pm\)1.578 &.039\(\pm\).056 &.114\(\pm\)0.085 & 7.974\(\pm\)3.427 & 5 \\ & Middle & 1.333\(\pm\).577 & -7.13\(\pm\)3.271 &.024\(\pm\).017 &.100\(\pm\)0.014 & 5.993\(\pm\).479 & 3 \\ & Utility &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.007\(\pm\).000 & 11.78\(\pm\)0.000 & 1 \\ & Bottom &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 & 0 \\ & Top &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 & 0 \\ \hline \hline \multicolumn{8}{|l|}{Role} & nrSCR & Latency & Amplitude & PhasicMax & Tonic & N \\ \cline{2-8} & Jungle & 3.000\(\pm\)2.793 & \(\pm\)4.362\(\pm\)2.147 & 5.22\(\pm\)1.037 & 694\(\pm\)4.149 & 11.97\(\pm\)4.11 & 11 \\ & Middle &.800\(\pm\)1.789 & -2.76\(\pm\)6.617 &.034\(\pm\)0.76 &.163\(\pm\)1.48 & 6.157\(\pm\)7.80 & 5 \\ & Utility & 2.333\(\pm\).577 & -2.060\(\pm\)1.45 &.083\(\pm\)0.39 &.041\(\pm\)0.014 & 11.79\(\pm\)1.45 & 3 \\ & Bottom & 2.909\(\pm\)2.071 & -9.969\(\pm\)1.125 &.158\(\pm\)1.70 &.279\(\pm\)2.23 & 6.329\(\pm\)4.415 & 11 \\ & Top & 1.737\(\pm\)1.968 & -6.45\(\pm\)1.377 &.127\(\pm\)2.267 &.292\(\pm\)3.33 & 10.84\(\pm\)9.42 & 19 \\ \hline \hline \multirow{8}{*}{
\begin{tabular}{l} \end{tabular} } & Role & nrSCR & Latency & Amplitude & PhasicMax & Tonic & N \\ \cline{2-8} & Jungle & 3.000\(\pm\)2.512 &.201\(\pm\)2.827 &.151\(\pm\)1.189 &.241\(\pm\)2.50 & 9.458\(\pm\)4.04 & 14 \\ & Middle &.750\(\pm\)1.035 & -5.98\(\pm\)1.109 &.029\(\pm\)0.054 &.064\(\pm\)1.05 & 9.40\(\pm\)2.91 & 8 \\ & Utility &.250\(\pm\).500 &.560\(\pm\)1.120 &.005\(\pm\)0.010 &.238\(\pm\)2.03 & 9.537\(\pm\)1.67 & 4 \\ & Bottom &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 & 0 \\ & Top & 2.778\(\pm\)8.833 & -5.44\(\pm\)1.462 &.675\(\pm\)1.048 & 1.335\(\pm\)2.00 & 21.59\(\pm\)2.16 & 9 \\ \hline \end{tabular}
\end{table}
Table 3: Role-dependent mean **nrSCR (#)**, **Latency (sec)**, **Amplitude (mV)**, **PhasicMax** **(mV)** and **Tonic (mV)** statistics (2 sessions) from events ”Killing”, ”Dying”, ”Placing Ward”, ”Destroying Turret” and ”Destroying Turret Plate”. N are event occurrences.
found significant differences in Latency (\(p\)=.024, \(\chi^{2}\)=11.265), Amplitude (\(p\)=.041, \(\chi^{2}\)=9.959), Tonic activity (\(p\)=.010,\(\chi^{2}\)=13.28) and a trend for PhasicMax (\(p\)=.092, \(\chi^{2}\)=8.000). In the same analysis, we did not find differences in GSR activity between the events in the case of losing the game.
### Physiological results: Heart Rate
We processed the raw PPG data with HeartPy to obtain the BPM (beats per minute, "#"), IBI (interbeat or R-R interval, "ms"), SDNN (standard deviation of the intervals between adjacent normal sinus beats, "ms"), SDSD (standard deviation of successive differences between adjacent R-R intervals, "ms") and RMSSD (root mean square of successive differences between adjacent R-R intervals, "ms"). The latter metrics (SDSD and RMSSD) are related to the measurement of HRV (heart rate variability). Indeed, higher HRV (higher values of SDSD or RMSSD) can represent parasympathetic/vagal activity (associated with a state of relaxation), while lower HRV (lower values of SDSD or RMSSD) represents sympathetic/fight-or-flight activity (being stressed or alert; Valenza et al. 2018). We have to point out that HRV is commonly analyzed over long stretches of heart rate data (about 5 min or more; Shaffer and Ginsberg 2017) for simpler and longer tasks, whereas our HRV measurements consider 5 to 10-second windows of activity, in line with the fast-paced events of League of Legends. In Tables 4-6 we show mean statistics of pulse metrics according to win condition, event, and role.
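For concreteness, the time-domain definitions behind these metrics fit in a few lines; this is a generic sketch from the standard definitions, not the authors' code.

```python
import numpy as np

def hrv_time_domain(ibi_ms):
    """Time-domain HRV measures from a sequence of interbeat intervals (ms)."""
    d = np.diff(ibi_ms)
    return {
        "bpm": 60000.0 / np.mean(ibi_ms),   # beats per minute
        "sdnn": np.std(ibi_ms, ddof=1),     # SD of the intervals
        "sdsd": np.std(d, ddof=1),          # SD of successive differences
        "rmssd": np.sqrt(np.mean(d ** 2)),  # RMS of successive differences
    }
```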
After performing Friedman tests over the PPG metrics between all events, we found significant differences in SDSD (\(p=\).016, \(\chi^{2}\)=10.371) when winning the game, while when losing the game we found differences in BPM and IBI (\(p=\).041, \(\chi^{2}\)=8.28). We also tested whether there were differences between winning and losing the game for each specific event, and we saw significant differences for "Destroying Turret Plate" in SDSD (\(p=\)5.32\(\times\)10\({}^{-4}\), \(\chi^{2}\)=12.0) and RMSSD (\(p=\).004, \(\chi^{2}\)=8.333), for "Dying" in SDSD (\(p=\).011, \(\chi^{2}\)=6.4) and RMSSD (\(p=\).002, \(\chi^{2}\)=10), as well as in SDSD when "Placing a Ward" (\(p=\).011, \(\chi^{2}\)=6.4).
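These comparisons map directly onto SciPy's Friedman test; the sketch below uses hypothetical placeholder data, where each row of `sdsd_by_event` holds one player's SDSD for each of the five events.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Repeated-measures matrix of shape (n_players, n_events); placeholder values.
sdsd_by_event = np.random.default_rng(0).random((7, 5))

stat, p = friedmanchisquare(*sdsd_by_event.T)  # one argument per event
print(f"chi2 = {stat:.3f}, p = {p:.3f}")
```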
## 4 Conclusions
This study shows the potential of using physiological measurements (EDA and ECG) in an Esports/desktop gameplay environment. We characterized physiological responses depending on performance and events, as well as on participants' roles in the game. In most cases, we found significant differences in EDA (for nrSCR, Amplitude, and PhasicMax activity) during "Killing",
| wincon | BPM (#) | IBI (ms) | SDNN (ms) | SDSD (ms) | RMSSD (ms) |
|---|---|---|---|---|---|
| WIN | 172.145±112.795 | 531.156±332.714 | 128.673±48.392 | 100.174±46.513 | 201.980±87.488 |
| LOSS | 93.540±50.271 | 743.045±212.297 | 76.840±52.382 | 57.060±44.479 | 117.470±92.097 |
| TOTAL | 114.062±79.606 | 687.724±265.416 | 90.373±56.104 | 68.316±48.750 | 139.534±98.038 |

Table 4: Win and Loss mean PPG statistics (2 sessions) by stacking all events in one statistic.
| EVENT | BPM (#) | IBI (ms) | SDNN (ms) | SDSD (ms) | RMSSD (ms) | N |
|---|---|---|---|---|---|---|
| KILL | 68.620±10.396 | 892.429±139.157 | 73.404±46.177 | 56.974±31.484 | 99.246±53.674 | 7 |
| DIE | 103.381±65.949 | 725.839±249.362 | 86.549±53.718 | 67.178±46.320 | 132.395±85.887 | 37 |
| PLACE WARD | 121.905±75.396 | 647.536±278.835 | 92.073±53.922 | 75.722±54.092 | 143.491±93.544 | 31 |
| DES.TURRET | 118.132±94.834 | 678.146±260.378 | 94.580±55.782 | 72.386±51.682 | 145.934±96.149 | 57 |
| DES.PLATE | 117.418±78.046 | 672.916±275.895 | 89.918±60.236 | 63.527±46.951 | 140.361±110.991 | 71 |

Table 5: Event mean PPG statistics (2 sessions) from events "Killing", "Dying", "Placing Ward", "Destroying Turret" and "Destroying Turret Plate".
"Destroying Turret" or "Destroying Turret Plate" between players that are winning the game and players that are losing the game. When players were winning the game, they showed distinct patterns of physiological activity depending on the events in the game (e.g., "Killing", "Destroying Turret", "Destroying Plate"). In contrast, we did not find any significant difference between these events for players that were losing the game. This can hinder the possibility that players that perform badly show similar physiological states across the game, while players that perform well have distinct physiological behavior during the game. For the case of ECG, SDSD was significantly distinct for players that were winning the game between different events. On the other side, we found that only IBI and BPM measures showed significant differences for players that were losing the game. Overall, players that performed better (winning) showed significantly higher parasympathetic activity (i.e., relaxation) than the ones that were losing. These results suggest that higher relaxation states could significantly improve in-game performance, while players that are losing the game tend to be more alert. Furthermore, the analysis for specific events, like "Dying", "Destroying Turret Plate" or "Placing Ward", has shown that players have distinct values of SDSD and RMSSD, with Killing" or "Dying" events inducing higher sympathetic activity (lower HRV).
Despite the limited number of physiological samples per participant and game session, we obtained enough
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & Role & BPM & IBI & SDNN & SDSD & RMSSD & N \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Jungle & 63.195\(\pm\)10.832 & 967.750\(\pm\)142.479 & 84.5987\(\pm\)57.767 & 66.8081\(\pm\)40.275 & 113.310\(\pm\)67.023 & 4 \\ & Middle & 75.852\(\pm\)3.301 & 792.000\(\pm\)34.176 & 58.4780\(\pm\)28.399 & 43.8612\(\pm\)9.449 & 80.492\(\pm\)31.328 & 3 \\ & Utility &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 & 0 \\ & Bottom &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 & 0 \\ & Top &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 & 0 \\ \cline{2-8} & Role & BPM & IBI & SDNN & SDSD & RMSSD & N \\ \cline{2-8} & Jungle & 75.215\(\pm\)8.200 & 806.800\(\pm\)93.180 & 69.730\(\pm\)48.760 & 49.185\(\pm\)41.834 & 99.139\(\pm\)63.691 & 10 \\ & Middle & 80.846\(\pm\)8.515 & 748.667\(\pm\)77.577 & 67.150\(\pm\)31.274 & 55.873\(\pm\)25.619 & 95.511\(\pm\)31.233 & 5 \\ & Utility & 176.58\(\pm\)60.409 & 375.714\(\pm\)157.912 & 158.366\(\pm\)16.102 & 104.299\(\pm\)26.433 & 254.481\(\pm\)58.683 & 3 \\ & Bottom & 71.112\(\pm\)5.453 & 848.111\(\pm\)64.272 & 58.362\(\pm\)25.412 & 48.458\(\pm\)29.071 & 97.162\(\pm\)44.958 & 9 \\ & Top & 149.90\(\pm\)100.33 & 628.457\(\pm\)39.2726 & 116.888\(\pm\)63.553 & 96.551\(\pm\)58.618 & 179.177\(\pm\)108.76 & 37 \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Role & BPM & IBI & SDNN & SDSD & RMSSD & N \\ \cline{2-8} & Jungle & 72.025\(\pm\)8.956 & 845.821\(\pm\)111.874 & 68.544\(\pm\)40.886 & 54.092\(\pm\)32.402 & 104.573\(\pm\)69.816 & 13 \\ & Middle & 75.660\(\pm\)5.498 & 796.250\(\pm\)59.320 & 60.883\(\pm\)25.517 & 41.489\(\pm\)21.775 & 80.912\(\pm\)39.805 & 4 \\ & Utility & 151.11\(\pm\)70.545 & 513.417\(\pm\)302.925 & 111.647\(\pm\)53.012 & 96.855\(\pm\)71.343 & 175.363\(\pm\)99.071 & 8 \\ & Bottom &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\).000 &.000\(\pm\)0.000 & 0 \\ & Top & 221.87\(\pm\)73.442 & 297.601\(\pm\)101.160 & 137.749\(\pm\)61.433 & 117.229\(\pm\)51.861 & 227.036\(\pm\)95.086 & 6 \\ \cline{2-8} & Role & BPM & IBI & SDNN & SDSD & RMSSD & N \\ \cline{2-8} & Jungle & 81.985\(\pm\)10.532 & 742.917\(\pm\)92.725 & 66.538\(\pm\)35.728 & 46.652\(\pm\)31.067 & 85.828\(\pm\)44.360 & 15 \\ & Middle & 96.998\(\pm\)49.985 & 696.306\(\pm\)174.643 & 83.246\(\pm\)45.988 & 53.699\(\pm\)26.111 & 128.027\(\pm\)86.284 & 12 \\ & Utility & 176.16\(\pm\)91.996 & 455.294\(\pm\)261.896 & 144.335\(\pm\)49.317 & 121.310\(\pm\)43.578 & 241.965\(\pm\)84.882 & 14 \\ & Bottom & 86.635\(\pm\)19.452 & 721.889\(\pm\)157.714 & 127.253\(\pm\)59.682 & 110.750\(\pm\)68.850 & 184.948\(\pm\)96.020 & 6 \\ & Top & 135.37\(\pm\)176.90 & 844.944\(\pm\)376.067 & 60.987\(\pm\)44.726 & 41.902\(\pm\)40.029 & 99.731\(\pm\)75.180 & 10 \\ \cline{2-8} & Role & BPM & IBI & SDNN & SDSD & RMSSD & N \\ \cline{2-8} & Jungle & 76.362\(\pm\)8.247 & 795.708\(\pm\)100.562 & 60.503\(\pm\)66.094 & 46.599\(\pm\)35.956 & 87.578\(\pm\)64.595 & 16 \\ & Middle & 76.874\(\pm\)6.404 & 785.385\(\pm\)63.742 & 61.381\(\pm\)29.526 & 47.089\(\pm\)34.529 & 87.474\(\pm\)37.099 & 13 \\ & Utility & 202.95\(\pm\)88.446 & 355.705\(\pm\)182.813 & 129.698\(\pm\)42.153 & 111.779\(\pm\)44.273 & 205.033\(\pm\)81.351 & 16 \\ & Bottom & 96.659\(\pm\)35.939 & 691.758\(\pm\)214.864 & 113.291\(\pm\)80.127 & 51.086\(\pm\)25.804 & 193.149\(\pm\)167.07 & 11 \\ & Top & 120.34\(\pm\)94.250 & 769.005\(\pm\)378.607 & 86.455\(\pm\)68.067 & 
53.487\(\pm\)50.697 & 134.803\(\pm\)127.02 & 15 \\ \hline \multirow{4}{*}{
\begin{tabular}{l} \end{tabular} } & Role & BPM & IBI & SDNN & RMSD & N \\ \cline{2-8} & Jungle & 75.39\(\pm\)10.170 & 807.065\(\pm\)114.5
measurements to pinpoint differences in game performance and events. With a higher number of participants and game sessions, we would suggest undertaking similar studies, not only analyzing physiology over game performance and events, but also conducting an in-depth analysis of game roles, champions, player level, and type of match (beyond League of Legends' Summoner's Rift) in relation to EDA and ECG measurements.
### Author Contributions
David Berga contributed to the development, data analysis, experimentation, writing and reviewing of the paper. Alexandre Pereda contributed to the experimental design with subjects, writing and reviewing the paper. Eleonora de Filippi contributed to the data analysis and statistics as well as reviewing of the paper. Arijit Nandi contributed to the Shimmer development tools. Eulalia Febrer, Marta Reverte and Lautaro Russo contributed to the session management and contact with partners for the experimental setting.
### Funding
This study has been possible through the Grant IRC 2020 (Reference ACE033/21/000046) funded by ACCIÓ (Catalan Agency for Business Competitiveness), from the project "ESports-LAB" led by INDECAT (Associació Catalana Clúster de la Indústria de l'Esport), partnered with Generalitat de Catalunya, EsportCat and Fundació Catalana per l'Esport.
### Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Experiment participants signed an authorization form prior to the study to authorize the usage of the captured physiological data as well as performance data from their Riot gamertag, and remained anonymous according to the national LOPD (Ley Orgánica de Protección de Datos de Carácter Personal).
|
2309.15205 | Beauty beacon: correlated strategies for the Fisher runaway process | Suppose that females choose males based on attributes that do not signal any genetic quality that is not related to the choice itself. Can being choosy confer selective advantage in this situation? We introduce correlated strategies, which means that females, when making their choice, may take into consideration external and independent random factors that are known to be observable by all. Individual-based simulation is used to show that, in this case, choosiness can emerge against a cost of over 25% when pitted against randomly mating females. Moreover, after being established in the population, it can sustain costs of over 35%. While such costs are not biologically plausible, they demonstrate unequivocally that sexual choice is a strong evolutionary force. Thus, correlated strategies are shown to be an evolutionary tool that channels randomness from the environment into genetic diversity. In addition, it turns out that a higher number of attributes in the ornament makes the choice more advantageous, which may result in a runaway complexity of sexual traits. Implications for the evolution of (female) cognitive abilities and speciation are discussed. | Daniil Ryabko, Angustias Vaca, Prudencio Pazoca | 2023-09-26T19:09:32Z | http://arxiv.org/abs/2309.15205v1 | # Beauty beacon: correlated strategies for the Fisher runaway process
###### Abstract
Suppose that females choose males based on attributes that do not signal any genetic quality that is not related to the choice itself. Can being choosy confer selective advantage in this situation? We introduce correlated strategies, which means that females, when making their choice, may take into consideration external and independent random factors that are known to be observable by all. Individual-based simulation is used to show that, in this case, choosiness can emerge against the cost of over 25% when pitted against randomly mating females. Moreover, after being established in the population, it can sustain costs of over 35%. While such costs are not biologically plausible, they demonstrate unequivocally that sexual choice is a strong evolutionary force. Thus, correlated strategies are shown to be an evolutionary tool that channels randomness from the environment into genetic diversity. In addition, it turns out that a higher number of attributes in the ornament makes the choice more advantageous, which may result in a runaway complexity of sexual traits. Implications for the evolution of (female) cognitive abilities and speciation are discussed.
## 1 Introduction
Elaborate male ornaments and displays have fascinated and puzzled researchers since Darwin (1871) proposed sexual selection as one of the main drivers of evolution. Fisher (1915) famously suggested an explanation that came to be known as the (Fisher) runaway process or the "sexy sons" hypothesis: if females choose males with some particular trait, then a genetic link is established between this choice and the trait. The choice creates a positive selection pressure on the trait, which, in turn, promotes choosiness.
In other words, sons of choosy females are more likely to be chosen, as they exhibit the trait. Subsequently developed theoretical models (O'Donald, 1967; Lande, 1981; Kirkpatrick, 1982; Bulmer, 1989) confirm this hypothesis, but also show that the runaway process quickly reaches a point when the choice is no longer advantageous: either all males already have the trait, or a balance is reached where the (possible) costs of the trait compensate the selective advantage it provides (see also Kuijper et al., 2012 for a review). Thus, if the choice is costly, then at equilibrium it is eliminated - a problem that came to be known as the lek paradox (Kirkpatrick and Ryan, 1991). A solution has been proposed by Pomiankowski et al. (1991); Pomiankowski and Moller (1995) in the form of biased mutation (against the chosen trait), though the costs it allows for are very small if realistic values of mutation bias are considered (e.g., Kokko et al., 2015). Subsequent approaches developed in the literature include dynamic costs of choice (Kokko et al., 2015) and frequency-dependent choice strategies, in particular, selecting for the rarer male type (Kokko et al., 2007). There is, however, still no consensus among evolutionists on the question: can choosiness be advantageous if the traits on which the choice is based do not confer any other information except that related to the choice itself (the pure "sexy sons" explanation of the Fisher runaway)? Considering the "no" camp of this question, the "good genes" model posits that sexual selection is for traits that somehow signal higher fitness. The "genetic capture" hypothesis proposes a resolution to the lek paradox within this model, suggesting that male sexual traits depend on the overall conditions and thus on many loci, thereby maintaining genetic variation by mutation (Rowe and Houle, 1996; Tomkins et al., 2004). Another related approach is "genotype-by-environment" interaction (Danielson-Francois et al., 2006), which suggests that some genotypes do better in some environments and other genotypes in others, so that genetic variation is maintained in heterogeneous or changing environments; Kokko and Heubel (2008) attempt to unify the two latter models. A recent attempt at combining the "sexy sons" and "good genes" approaches was made by Henshaw et al. (2022), who use individual-based simulation to conclude that arbitrary ornaments are insufficient to sustain highly costly mate search, but can "piggyback" on quality-dependent ornaments.
Key to all these approaches is the question of how genetic variation is maintained. The Fisher runaway process needs somewhere to run, the choosy females need something to select from -- the male population should not be uniform. Moreover, if we step back from the lek paradox and consider the original phenomenon, the puzzling part is not that the female choice persists, but the sheer complexity of male ornaments and displays. It is particularly puzzling since often closely related species differ greatly in male sexual traits (Hulse et al., 2020). The question of how genetic variation is maintained in natural populations is sometimes called one of the most fundamental evolutionary questions (Danielson-Francois et al., 2006).
Choice depletes genetic variation, but, at the same time, the choice depends on the variation to exist. Despite a variety of mechanisms that have been proposed to address it (e.g., Radwan, 2008), this problem still appears unresolved (Barton and Turelli, 1989; Barton and Keightley, 2002; Kaufmann et al., 2023).
Thus, the _main problems addressed_ in this work are the maintenance of genetic variation in populations subjected to sexual selection, and, more specifically, the lek paradox.
To understand the suggested solution, first consider the following purely imaginary situation. Males and females congregate on a lek. Males have observable ornaments, and females are to select with whom to mate. Every female breeds once per season. Before the mating season starts, a Genie visits the lek and pins a highly visible poster to a tree. The image on the poster is that of a male with a particular ornament, which we shall call the _beauty beacon_. Females may ignore this image altogether and mate completely at random; for the sake of this example, we can assume that they may also make their choice based on any private criterion they wish to employ. However, they may also decide to base their choice solely on the beauty beacon: choose the male that most resembles the image displayed; call this _the beacon strategy_. Moreover, the Genie keeps changing the beauty beacon every few seasons, either entirely, or by altering some detail of the male's ornament on the poster image at random. The Genie is, thus, a fashion dictator of sorts -- but, of course, relevant only to those who choose to follow fashion. There is yet more to the story, as the Genie is evil: each season, he shall kill each female that uses the beacon strategy with a probability of, say, \(c\) (drawn independently for each female). It may seem that there is no reason for the females to follow the beacon strategy if \(c\) is anything but 0 (and why even then?). In particular, every time the Genie changes the beacon, the "sexy sons" link between the ornament and choosiness is weakened (if the change is small) or broken (if it is complete). Yet, the beacon strategy shall emerge, spread over the majority of the population and persist, with the values of \(c\) over 25%; and if the Genie is lenient in his killings and spares the first several generations, allowing the beacon strategy to emerge and spread, then this strategy persists with \(c\) even higher: well over 35% (as shown in the experiments below, even over 40% for some parameter values).
The intuitive explanation why this works is that the changing fashion, here dictated by the Genie and his poster beacon, by moving the target of the Fisher runaway process makes sure that there is always somewhere to run; in other words, the Genie helps the population maintain its genetic diversity by providing a common source of randomness. Comparing it to the biased-mutation solution to the lek paradox (Pomiankowski et al., 1991), the latter makes sure that there is still some genetic diversity closer to the end of the runaway process, that is to say, when almost all but not all males have the selected trait. However, the end of the process is not where the selection
pressure is the strongest, and so we shall see that the cost of choosiness that can be sustained is much lower. By varying the parameters of the way the Genie dictates the fashion and changes the beacon, he may sustain the genetic diversity and thus the runaway process closer to the maximum of the selection pressure.
Correlated strategies in game theory were introduced by Aumann (1974). A correlated strategy allows the participants to a game to use their observations of a public random signal. In general, the players may have different private observations of this signal, but for our purposes a fully observable public random event is sufficient. Simply put, the players may agree to flip a coin. A strategy based on this coin flip may be an equilibrium strategy, that is, each player stands to lose by deviating from it; the resulting notion of correlated equilibrium generalizes that of Nash equilibrium. The following example from (Gintis, 2000) illustrates the concept. There are two players, A and B; A can use the strategies up or down, and B can use the strategies left or right. The matrix of payouts is as follows.
|       | \(l\)   | \(r\)   |
|-------|-------|-------|
| \(u\) | (5,1) | (0,0) |
| \(d\) | (4,4) | (1,5) |

First, consider the strategies without the use of common randomness. Both (5,1) and (1,5) are pure-strategy equilibria, and (2.5,2.5) is the payout of the mixed-strategy equilibrium where each player plays each strategy with probability 1/2 independently of the other. Now, if the two players can both observe a single unbiased coin-flip, and play \((u,l)\) on _heads_ and \((d,r)\) on _tails_, then the expected payout becomes (3,3). This is the correlated equilibrium of the game.
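These payouts are easy to verify numerically; the following sketch (with variable names of our choosing) computes the expected payoffs of the mixed-strategy and coin-flip strategies.

```python
import numpy as np

# Payoff matrices; rows are A's strategies (u, d), columns are B's (l, r).
payoff_A = np.array([[5.0, 0.0], [4.0, 1.0]])
payoff_B = np.array([[1.0, 0.0], [4.0, 5.0]])

# Mixed-strategy equilibrium: each player randomizes 1/2-1/2 independently.
p = np.array([0.5, 0.5])
mixed = (p @ payoff_A @ p, p @ payoff_B @ p)            # (2.5, 2.5)

# Correlated strategy: play (u, l) on heads and (d, r) on tails.
correlated = (0.5 * (payoff_A[0, 0] + payoff_A[1, 1]),
              0.5 * (payoff_B[0, 0] + payoff_B[1, 1]))  # (3.0, 3.0)

print(mixed, correlated)
```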
In the context of evolutionary strategies, the players that use the same strategy do so simply because they have the same genes. (Different genes mean possibly different strategies.) Thus, arbitrarily complex strategies may develop, and no agreements between the participants are necessary for them to use the same strategy. Next up is the question of where to find the Genie and his poster tree. The answer is that no explicit image of the ideal male is necessary. What is sufficient is simply a commonly observable source of randomness. The female strategy is then any function that maps this source of randomness into male traits. Random, or (equivalently, for our purposes) unpredictable observations abound in nature: they may be related to the weather (temperature, pressure, humidity, winds, etc.), the condition of the populations of other species (including vegetation) or any other changing aspect of the environment. Any such aspect may be used, and the strategy is simply a mapping from this source of randomness into male features. What is important is that the random source changes with sufficient but not excessive frequency. We shall see from the model what this means. There is a growing body of evidence in the literature that both male sexual traits and female preferences depend on the environment, most notably in fish (Seehausen et al., 2008; Hulse et al., 2020; Heuschele et al., 2009;
Justin Marshall, 2000), but also in other animals including frogs, lizards and birds (Ryan and Rand, 1990; Fleishman, 1992; Hernandez et al., 2021). We do not attempt to find any actual mappings of the environment into sexual traits; they would be specific to both the species and the environment, but, as far as the considered sexual choice strategies are concerned, may otherwise be arbitrary as long as they preserve the necessary randomness.
Thus, in the experiments, we consider a random source which already has the needed form of a male ornament, both represented simply by a sufficiently large array of binary loci.
However, to illustrate how a natural source of randomness can be harnessed by a beacon strategy, we construct an example based on the randomness present in the population itself. The construction is as follows. A source of randomness that we know to be present in every biological population is mutation. While each mutation is private, the average of each expressed feature over the population is somewhat random and publicly observable. Thus, consider a visible feature on which there is no sexual or natural selection, and consider its _average_ value over the population. This value is driven by drift alone. This is a slow process, so we need a large ensemble of such features. This ensemble is our source of randomness. To take a concrete, even if grotesquely artificial, example, consider a peacock population in which females judge males based on the left half of their tail only. The males are never judged based on the right half of their tail; rather, the right half is used, by each female, to take an average over the whole population, or else at least over such part of the population as she can observe. This is the female-estimated beacon. She then compares the left half of the tail of each male that presents himself as a potential mate to this average over the right half that she constructed.
In the last example, a new challenge for the beacon strategy presents itself: the public information may actually be slightly different for different females, as each one may practically observe only a part of the male population. The same problem is undoubtedly present if other sources of public randomness are used. We shall see from the results that this, indeed, affects the performance of the strategy. This difficulty also sheds new light on the nature of the cost of choosiness itself, and gives rise to other strategies such as mate-choice copying. These implications are discussed further in the last section.
The correlated female choice strategies proposed are pitted against the random-choice strategy with zero cost in an individual-based simulation, which is one of the standard tools in theoretical research on evolutionary models (see, e.g., Kuijper et al., 2012 for a review). The results are unequivocal in that the choice emerges and persists despite very high costs: between 25% and 45% depending on the parameters of the strategies, and whether an external beacon or female-estimated beacon is used.
It also becomes clear from the experiments that the more attributes
there are to choose from, the better. Furthermore, for the female-estimated beacon, the mutation rate of the beacon ornament emerges as a crucial parameter. Specifically, the strategy performs well if the mutation rate is 0.01 at each locus, and rapidly deteriorates if it is decreased. While such (and higher) mutation rates are often used in theoretical models (e.g., Fawcett et al., 2011), they are perhaps too high to be realistic. However, a higher mutation rate can be achieved by increasing the number of loci on which mutation operates and combining them. Mathematically, it is easy to do: if we have \(k\) binary loci with mutation rate \(m\), then taking an XOR operation over them we obtain a single binary locus with a mutation rate of almost \(km\) for small \(m\) and \(k\). Perhaps, say, a female cichlid fish does not perform this exact arithmetic trick when choosing her mating partner, and the one she uses might be different from those used by an anoline lizard; yet, an operation of this complexity appears to be well within their neurological possibilities. It is worth noting that increasing the number of loci in sexual traits (increasing the ornament size) has been proposed as a part of the solution to the lek paradox already by Pomiankowski et al. (1991), in particular, to explain the biased mutation rate; though it was criticized on the basis that trait expression at any given level of condition is likely to be subject to stabilizing, rather than escalating, non-linear selection (Rowe and Houle, 1996; Tomkins et al., 2004).
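A quick simulation illustrates the claim (a generic sketch of ours, not the authors' code): XOR-ing \(k\) loci that each flip with probability \(m\) yields a composite locus whose flip probability is \((1-(1-2m)^{k})/2\approx km\) for small \(mk\).

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, trials = 10, 0.01, 1_000_000

# Each of the k loci flips independently with probability m per generation.
flips = rng.random((trials, k)) < m

# The XOR of the k loci changes iff an odd number of loci flipped.
xor_changed = flips.sum(axis=1) % 2 == 1

print(xor_changed.mean())                  # simulated flip rate, ~0.0915
print((1 - (1 - 2 * m) ** k) / 2, k * m)   # exact value and the k*m approximation
```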
Here we obtain an explanation of the large number of observable parameters in the male ornaments and displays, by providing two ways the choosy females benefit from it. First, the more attributes to base the choice on, the more advantageous is the beacon strategy. Second, mutation on the traits on which the choice is not based directly provides the needed external source of randomness on which to base the choice, again making the beacon strategy more advantageous. This addresses another long-standing puzzle: the multitude of signals used for sexual choice. Courtship displays are extremely diverse, and most occur across at least two sensory modalities, such as visual and auditory or vibratory and olfactory (Mitoyen et al., 2019). For example, in birds these may include not only the colorful plumage itself but also coordinated wing and tail movements together with vocalizations (Bradbury et al., 1998). Given the (potential) costs of producing and receiving signals, why use more than a single cue (Bro-Jorgensen, 2010)? The answer we obtain is that these signals may be used at once as a source of randomness (the more randomness the better) and as a receiving field of genetic variation whereto this randomness is channeled.
Finally, this highlights the complexity of the female choice itself, giving rise to a number of intriguing questions, concerning alternative mating strategies such as mate-choice copying, and the effects of the evolution of the selection rules themselves, which may include speciation. These are further discussed in the last section.
## 2 The model
An individual-based simulation model is used to demonstrate the viability of costly beacon strategies. The parameters of the model are summarized in Table 1.
The model consists of a population of a constant size \(S\) with a fixed 1:1 sex ratio, and discrete generations. Individuals are characterized by their alleles at a number of haploid loci. These are: an array of size \(A\) (mostly 100 in the experiments) of binary attribute loci, which represent the _ornament_. These loci are only expressed in the male. In addition, there is a second array of binary loci, of the same size \(A\), which is also only expressed in the male (to be used by the females with the female-estimated beacon strategy as described below). We call it the _beacon ornament_. The strategy locus, which is only expressed in the female, also has two alleles, one representing the random choice strategy and the other the beacon strategy.
Each generation consists of reproduction, mutation and decimation (the latter often called viability selection in the literature).
The _reproduction_ takes place by randomly selecting a female among those currently alive, who then breeds and produces two offspring, one male and one female; the next female is selected and the process is repeated until \(S\) offspring are produced (i.e., the next generation is filled). The _breeding_ follows the so-called _best-of-\(N\)_ model (Janetos, 1980), that is, each female is presented with a pool of \(N\) randomly selected males to make a choice from. The female chooses according to her strategy (the allele at the corresponding locus), as described below. Each offspring inherits the strategy from one of the parents with equal probability. The same goes for the ornament and the beacon ornament, but each of these is inherited in its entirety (all \(A\) loci) from one of the parents.
_Mutation_ in the offspring is performed independently on each locus, with probability \(m\) for the ornament and the beacon ornament, and with probability \(m_{s}\) for the strategy. The _decimation_ applies to both females and males: each female is taken out of the reproductive pool (killed) with a certain probability \(c\), which is the _cost of her strategy_. The cost of the random strategy is always set to 0, while the cost of the beacon strategy is some value \(c\in(0,1)\). For the sake of compatibility with previous models in the literature, there is a small decimation on the males, with the probability \(c_{m}\) of being killed proportional to the number of 1s in the ornament (thus slightly pushing towards the all-zeros ornament).
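A minimal sketch of these two steps is given below (our own illustration; the array names and shapes are assumptions, with loci stored as boolean arrays).

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(loci, m):
    """Flip each binary locus independently with probability m."""
    return loci ^ (rng.random(loci.shape) < m)

def decimate(is_choosy, ornaments, c, c_m):
    """Viability selection: choosy females die with probability c;
    males die with probability c_m scaled by their mean ornament value."""
    females_alive = ~(is_choosy & (rng.random(is_choosy.shape) < c))
    males_alive = rng.random(len(ornaments)) >= c_m * ornaments.mean(axis=1)
    return females_alive, males_alive
```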
The female strategies are as follows. The random strategy is exactly as the name suggests. For the beacon strategy, we consider two variants. The first is called _global beacon_ strategy. For this strategy, all females observe the same array of binary variables of size \(A\) (the same size as the male ornament), which is the global beacon. The female using this strategy compares the ornament of each male in her pool (of \(N\) randomly selected males) to the
global beacon, and selects the one closest to the global beacon. The distance is simply the average absolute distance between the elements of the two arrays (\(l_{1}\) distance). The global beacon changes from time to time. We consider two variants of these changes: either it changes completely every \(t_{change}\) generations (for example, \(t_{change}=50\)) to a new random array; or only a few (1-5) elements of the array are changed at random. These changes are independent of everything else.
The other variant of the beacon strategy we consider is the _female-estimated beacon_: each female is given a random pool of \(B\) males to construct an estimate (the estimation pool). The estimate is simply the average beacon ornament of these males. She then uses this estimated beacon in the same way the global beacon is used in the previous strategy, that is, the ornament of each of the \(N\) males in her breeding pool is compared to the estimate and the closest one is selected. Note that the two pools of males available to each female -- the breeding pool of size \(N\) and the estimation pool of size \(B\) -- are selected independently. There is no direct selection on the beacon ornament of the males: these are only used for estimation.
### Estimating viability
To evaluate the viability of the beacon strategies considered, each of the tested strategies is paired against the random-choice strategy. This means that the strategy locus has only two alleles in each experiment: one corresponding to the random-choice strategy, and the other to the (beacon) strategy being tested. The random-choice strategy has cost \(0\), while the tested strategy has cost \(c\). The _expected viability at cost \(c\)_ of the tested strategy after \(T\) generations with initial population \(I\) is the expected ratio of the females using the tested strategy after \(T\) generations starting with population \(I\). Furthermore, we can define the \(\alpha\)_-critical viability_ of a strategy \(S\) as the largest cost \(c\) with which the expected viability of the strategy is greater than than \(\alpha\) (this variable also depends on the initial population \(I\) and the number of generations \(T\)).
In the experiments we consider three ways of setting the initial population: either all \(0\)s, which means that all the ornament attributes are set to \(0\) and the only strategy of females is the random one (the beacon strategy tested should then emerge due to mutation); the random initialization, where every attribute is set uniformly at random (including the strategy); and the _deferred decimation_ initialization, where the initial population is random but the cost \(c\) is set to zero for the first several generations (50 in the experiments), allowing the tested strategy to become established in the population. Note that the latter initialization is similar to fixating the tested strategy at the beginning, but addresses the issue of how to initialize the male ornaments. As would be expected, the viability depends on the initialization, with the zero initialization giving the lowest resulting value
and deferred decimation the highest.
Note that the expected ratio of females with the tested strategy in the definition of viability can hide extinction with non-zero probability. Indeed, this is what happens in the experiments with some values of the parameters: the beacon strategy either dominates the population or goes extinct. An alternative definition would be to consider viability at threshold \(\alpha\) with confidence \(\epsilon\): say that a strategy has viability \(c\) at threshold \(\alpha\) with confidence \(\epsilon\) after \(T\) generations and initial population \(I\) if the ratio of females with this strategy at cost \(c\) is at least \(\alpha\) with probability at least \(1-\epsilon\) after \(T\) generations when starting with initial population \(I\). This value is perhaps more interesting, but since it involves yet another parameter, we use the expected viability instead.
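Under these definitions, estimating the expected and the \(\alpha\)-critical viability amounts to a simple Monte-Carlo sweep over costs, as in the following sketch (assuming `run_trial(c, T)` runs one replica of the simulation at cost `c` and returns the final ratio of females using the tested strategy):

```python
def expected_viability(run_trial, c, T, replicas=50):
    # Monte-Carlo estimate: average final ratio of choosy females
    # over independent replica runs at cost c.
    return sum(run_trial(c, T) for _ in range(replicas)) / replicas

def critical_viability(run_trial, alpha, T, costs):
    # Largest candidate cost at which the expected viability
    # still exceeds alpha (None if no candidate qualifies).
    viable = [c for c in costs if expected_viability(run_trial, c, T) > alpha]
    return max(viable) if viable else None
```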
Table 1 shows all the parameters of the model and their values used in the experiments. The values in bold are those used in the main series of experiments, while the rest are the values used in the experiments designed to elucidate the role of each parameter.
### Comparison model: biased mutation
Estimating viability as described already means comparing the tested strategy to the random choice. However, it is interesting to have another comparison strategy, tested against random choice in the same manner, using the same population parameters.
For this, we have chosen the _biased mutation_ model: the classical model of Pomiankowski et al. (1991), in which the simple Fisher runaway process is endowed with an asymmetric mutation biased against the selected ornament; the model is described below, after Table 1.
| Notation | Parameter | Value(s) used in the experiments; **bold** for the main one |
| --- | --- | --- |
| \(S\) | Population size | 2000, constant 1:1 sex ratio |
| \(A\) | Number of attributes in the ornament | **100**, 1000; **1** for the biased-mutation comparison |
| \(c\) | Cost (mortality rate) of the choosy strategy | variable |
| \(c_{m}\) | Cost (mortality rate) of males | 0.01 \(\times\) average ornament attribute |
| \(N\) | Batch size: number of males a female can choose from | **20**; 2, 5, 100 |
| \(m\) | Attribute mutation rate, ornament and beacon ornament | **0.01**, 0.001, 0.0001 |
| \(m_{s}\) | Strategy mutation rate | **0.01** |
| \(T\) | Total number of generations | 5000 |

Parameters of the beacon strategy: global beacon

| Notation | Parameter | Value(s) |
| --- | --- | --- |
| \(t_{change}\) | Number of generations between changes | **1** (gradual change); **50**, 10, 20, 100 (complete change) |
| | Number of attributes to change | **1**, 2, 3, 5, 10 (gradual change); all \(A\) (complete change) |

Parameters of the beacon strategy: female-estimated beacon

| Notation | Parameter | Value(s) |
| --- | --- | --- |
| \(A\) | Number of attributes in the male's beacon ornament | same as the number of attributes in the ornament |
| \(m\) | Beacon attribute mutation rate | same as the attribute mutation rate |
| \(B\) | Estimation batch size: number of males used by each female to estimate the beacon | **100** |

Parameters of the comparison strategy: biased mutation

| Notation | Parameter | Value(s) |
| --- | --- | --- |
| \(m_{extra}\) | Mutation bias: extra \(1\to 0\) mutation on the ornament allele | 0.45 |

Table 1: Parameters of the model
Thus, for this model we consider a single-locus ornament (\(A=1\)). The rest of the parameters are the same as in the experiments with the beacon strategies, except for an additional mutation parameter: the mutation bias \(m_{extra}\), whose meaning is as follows. After the mutation on the ornament allele takes place, an additional mutation is applied: if the value of the allele is 1 (ornamented male), the allele is flipped to 0 with probability \(m_{extra}\). The female strategy here is to always prefer the ornamented male, that is, the one with allele 1.
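In code, the biased mutation amounts to one extra step on top of the symmetric mutation; a minimal Python sketch (the function name is ours):

```python
import random

def mutate_biased(allele, m, m_extra):
    # Symmetric mutation with rate m, followed by the extra biased step:
    # an ornamented allele (1) reverts to 0 with probability m_extra.
    if random.random() < m:
        allele = 1 - allele
    if allele == 1 and random.random() < m_extra:
        allele = 0
    return allele
```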
Separate experiments were run to determine the approximate value of the mutation bias \(m_{extra}\) that would result in the highest critical viability. The value found was 0.45. While not biologically plausible, this value was chosen for the experiments as the hardest for the beacon strategies to compete against.
## 3 Results
The results of the simulations show that the critical viability of the beacon strategies is between 25% and 40%, depending on the strategy used as well as on the initialization, as described above. The strategies that use an external target (global beacon) have the highest viability; the viability decreases if the females have to rely on their own estimates of the beacon. When compared to the runaway with biased mutation, all the beacon strategies evaluated show a significantly higher viability.
Figure 1 shows the viability of the beacon strategy with the global beacon. Each point corresponds to the percent of females with the beacon strategy at the end of the trial (\(T\)=5000 generations) for the corresponding cost \(c\) of the strategy. On Figure 1(a), the global beacon changes every 50 generations to a new, completely random one; whereas, on Figure 1(b), every generation one randomly selected attribute of the beacon is changed to a random value. Different colors correspond to different initializations: red to zero initialization (all females use the random choice strategy and all male ornaments are uniformly 0), green to random initialization (all values are uniformly random) and blue to random initialization with decimation deferred till generation 50 (the cost of the beacon strategy is 0 before that point). We can see that, on either of the plots, the estimated expected viability of the beacon strategy is significantly above 70% with costs as high as 37% provided it is sufficiently established in the population; if, initially, there are no females with the beacon strategy, then it emerges and persists despite costs of up to 30%. The difference between random initialization and deferred decimation is small, as is the difference between the two modes of changing the beacon.
Figure 2 shows the viability of the beacon strategy with the female-estimated beacon. Here, females using this strategy have to build their own estimate of the beacon using the beacon ornament of the males. Each one uses a separate pool of \(B=100\) males to take the average of their beacon ornaments and uses it as the target. The viability is plotted against the comparison model of the runaway with biased mutation. In the comparison model, each male has \(A=1\) ornament locus, and the females that do not choose randomly prefer males with allele 1 at this locus. The additional \(1\to 0\) mutation on this locus is \(m_{extra}=0.45\). We can see that the viability of the beacon strategy is significantly higher than that of the comparison model, although if the comparison model is run with deferred decimation and the beacon strategy is run with random or zero initialization then their viability is similar. Compared to the beacon strategy with a global beacon, we can see that the viability deteriorates if females have to use their own estimates of the beacon in the way considered here. Still, the beacon strategy with the female-estimated beacon emerges and persists with costs significantly above 25%, and, if it is sufficiently established in the population, it can persist with costs over 30%.
Figure 1: Viability of the beacon strategy with a global beacon. The values of the parameters are \(T=5000\), \(A=100\), \(N=20\), \(m=m_{s}=0.01\), \(S=2000\). Colors correspond to different initializations: red to zero initialization, green to random initialization and blue to random initialization with decimation deferred till generation 50. Confidence intervals are at 1% with either 10 or 50 replica runs.
### The role of the parameters
The female choice strategies considered depend on a number of parameters. It is, of course, impossible to exhaustively explore this parameter space using the experimental technique adopted. In our view, this does not diminish the value of the results, because showing that a strategy is viable with some values of the parameters demonstrates that it is evolutionarily plausible - provided the values of the parameters are not outside the range of what is naturally possible. This said, it is still interesting to explore the role of each of the parameters, to see what could be the evolutionary path towards the strategy studied, as well as to understand where the process may be headed. We attempt to do so by varying the parameters one by one and looking at how this affects the viability of the strategies.
Figure 2: Viability of the beacon strategy with female-estimated beacon, estimation batch size \(B=100\) (solid lines). The number of loci in the ornament and in the beacon ornament is \(A=100\). Comparison model (dotted lines) is the runaway with biased mutation; here \(A=1\), extra mutation bias \(0.45\). The rest of the parameters are the same in both models: \(T=5000\), \(N=20\), \(m=m_{s}=0.01\), \(S=2000\). Colors correspond to different initializations: red to zero initialization, green to random initialization and blue to random initialization with decimation deferred till generation \(50\). Confidence intervals are at \(1\%\) with either \(10\) or \(50\) replica runs.
#### 3.1.1 The batch size \(N\) and the estimation batch size \(B\)
First we take a look at the role of the batch size \(N\) in the viability of the beacon strategies. Not surprisingly, the results depend strongly on this parameter, with higher batch sizes resulting in a higher viability of all beacon strategies. This is in line with previous findings that examined the role of various parameters in individual-based simulations (Roff and Fairbairn, 2014).
Figure 4 shows the viability of the beacon strategy with the global beacon for different values of \(N\). Each plot shows the percentage of choosy females at the end of the \(T=5000\)-generation trials. The table in Figure 3 gives the estimated values of the \(0.5\)-critical viability for different values of \(N\), both for the global beacon and for the female-estimated beacon strategy; the estimated values in the table are with a \(0.01\) confidence interval over \(50\) replica runs.
Figure 4: Viability of the beacon strategy for different values of \(N\), using a global beacon (Figures 4(a), 4(b), 4(c)) and female-estimated beacon (Figures 4(d), 4(e), 4(f)) with values \(N=2,5\) and \(100\). The values for \(N=20\) can be seen on Figures 1(b) and 2. The values of the rest of the parameters are \(T=5000\), \(A=100\), \(m=0.01\), \(S=2000\), \(B=100\) for the female-estimated beacon, and for the global beacon the change is every generation \(3\) loci at a time. Confidence intervals are at \(1\%\) with either \(10\) or \(50\) replica runs.
Figure 3: \(0.5\)-critical viability of beacon strategies for various values of batch size \(N\)
Let us next look at the role of the other batch size, \(B\), used in the female-estimated beacon strategy in order to construct estimates of the beacon. We can see that this parameter affects the performance in a predictable way -- the bigger the batch size, the better, but its influence is not as large. It is also interesting to consider the female-estimated beacon with the perfect estimation, that is, where each female can observe the whole population in order to estimate the beacon (but not to select a mating partner from), thus getting another version of a "global" beacon, but this time based on the average of the ornament beacon of all the males in the population; let us call this case \(B=\infty\). On Figure 5, the viability of the beacon strategy is plotted with various values of the parameter \(B\), from 2 to \(\infty\); recall that the value \(B=100\) was used in the experiments in the previous section.
#### 3.1.2 Mutation rates and the number of attributes
Lower mutation rates result in lower viability of the beacon strategies. Figure 6 presents the viability plots for mutation rates \(m=0.001\) and \(0.0001\), with \(m=0.01\) as before for comparison. These lower mutation rates are more realistic.
Figure 5: Viability of the beacon strategy with female-estimated beacon with various values of the parameter \(B\): 20 (red), 50 (green), 100 (yellow), 250 (blue) and \(\infty\) (black). The rest of the parameters are as before: \(T=5000\), \(N=20\), \(m=m_{s}=0.01\), \(S=2000\). Confidence intervals are at 1% with either 10 or 50 replica runs.
However, as mentioned in the introduction, higher mutation rates can be achieved by increasing the number of attributes and combining them.
Indeed, as seen on Figure 7, decreasing the number of attributes decreases viability. On this plot, we also show the viability of the beacon strategy with a higher number of attributes (\(A=1000\)) and smaller mutation rate (\(m=0.001\)) which is more realistic for this number of attributes; it can be seen that this leads to a slightly increased viability, as compared to the values \(A=100\) and \(m=0.001\).
For the female-estimated beacon, lowering the mutation rates results in a quick deterioration of viability, as the resulting beacon does not have enough randomness. The results are predictable and therefore omitted.
#### 3.1.3 Parameters of the global beacon and another look at the biased-mutation model
We tested various values of the parameters of the global beacon: gradual changes, that is, a change every generation with \(1,2,3,5\) or \(10\) loci at a time; and changing completely every \(10,20,50\) or \(100\) generations. The parameters affect the viability of the strategy but the changes are small, so we do not report them.
Figure 6: Viability of the beacon strategy with global beacon, with different mutation rates \(m\): \(0.01\) (black), \(0.001\)(blue) and \(0.0001\) (green). The beacon changes every generation \(1\) locus at a time. The rest of the parameters are as before: \(T=5000\), \(N=20\), \(m_{s}=0.01\), \(S=2000\). Confidence intervals are at \(1\%\) with either \(10\) or \(50\) replica runs.
For the beacon that changes completely every \(K\) generations, observe that when \(K\) goes to infinity, the resulting strategy comes back to a version of the biased mutation strategy. Indeed, since each locus of the ornament may mutate every time but the comparison (beacon) ornament never mutates, once the beacon-like ornament spreads over the population, the mutation becomes biased against the ornament, in the sense that each mutation makes the ornament more different from the beacon with a probability higher than that of making it closer to the beacon.
It is, therefore, interesting to look at the viability of the beacon strategy with a beacon that changes rarely, as it presents a middle ground between the biased mutation model of the Fisher runaway process and the fashion-led beacon strategy studied. Instead of presenting viability plots, we show, on Figure 8, the average number of choosy females during a single trial run, with the beacon changing completely every 300 generations. Over 1500 generations, we can clearly see the 5 peaks corresponding to the changes in the beacon (shortly after generations 0, 300, 600, 900 and 1200): a shot of new randomness gives a boost to the choosy strategy. Note that, between the peaks, while the choosy part of the population decreases, it does not die
Figure 7: Viability of the beacon strategy with global beacon, with different number of attributes \(A\) in the male beacon: 20 (red) and 100 (blue), with mutation rate \(m=0.01\). The black line corresponds to \(A=1000\) with \(m=0.001\). The beacon changes completely every 50 generations. The rest of the parameters are as before: \(T=5000\), \(N=20\), \(S=2000\). Confidence intervals are at 1% with either 10 or 50 replica runs.
off quickly despite the relatively high cost of \(0.25\), as we are in the biased-mutation mode here, which is a viable strategy in itself.
## 4 Discussion and future work
We have shown that correlated female choice strategies can effectively channel randomness from the environment into genetic diversity, providing the needed fuel for the runaway process to keep running, and thereby maintain the choice in the population. This gives new answers to some long-standing questions in evolutionary biology, including the lek paradox, but it also opens up various avenues for further modeling and research. Some of these are discussed in this section.
We start with some direct generalizations that can be made to the model in order to make it more realistic or amenable to theoretical analysis. We then continue with implications for related topics in evolutionary biology, including the driving forces behind the evolution of intelligence, more complex mate choice strategies such as mate choice copying, and other related questions.
Figure 8: Percentage of females following the beacon strategy with a slowly changing beacon: the beacon changes completely every \(300\) generations. One circle per generation, over \(T=1500\) generations. The cost of the choosy strategy is \(c=0.25\). The rest of the parameters are as before: \(m=0.01\), \(N=20\), \(S=2000\). The initial population is random.
### Direct extensions and generalizations
The model used in this work is perhaps the simplest one for the phenomenon in question (at least, in the framework of individual-based simulations); as such, it is suitable for answering the questions at hand qualitatively. However, a few generalizations would make it more realistic, and are therefore worth exploring in future research. It would also be interesting to see quantitatively how much each generalization affects the viability of the beacon strategies considered.
The first generalization that comes to mind is diploidy and genetic recombination of ornaments. Since these genetic mechanisms allow for more stochasticity, it can be conjectured that they would increase the viability of choosiness even further.
Another generalization concerns the cost of choosiness and the sampling methods used by choosy females. Following classical models of the Fisher runaway process (Lande, 1981; Kirkpatrick, 1982; Pomiankowski et al., 1991), here we used fixed costs of choice; moreover, each female is given a batch of a fixed size to choose from. It would perhaps be more realistic to let the cost depend on the batch size and allow the females to choose when to stop sampling. Andersson (1994) has suggested that preferences may have a cost that is inversely related to the frequency of the preferred type of male. Kokko et al. (2015) and subsequent works (e.g., Henshaw et al., 2022) have incorporated dynamic sampling costs into models of the runaway process, and their results show that this generalization is significant. In our model, every male is potentially unique, so that the decision of each female when to stop sampling is not as simple as "stop when the preferred type is found," and would necessarily involve extra parameters. Here we have also used exclusively the best-of-\(N\) sampling model, whereas there are various other possibilities both in theory (Janetos, 1980) and, of course, in nature (Rosenthal, 2017). Nevertheless, since the batch size \(N\) greatly affects the viability of the strategy, it appears important to explore the effects of various sampling methods and the corresponding cost structures in our model.
Kuijper et al. (2012) survey the four most widely used approaches to modeling sexual selection and the Fisher runaway process, of which we have chosen the most complex: individual-based simulation. The other three (population genetics, quantitative genetics and invasion analysis) are mathematical approaches that attempt to fully describe the dynamics of the process at the cost of making some simplifying assumptions. While we did not find these models directly applicable to our scenario, individual-based simulation still has some important disadvantages that only mathematical modeling can remedy. Specifically, the results of the simulations depend on the value of the parameters, which include the initial population, and this space is impossible to explore exhaustively. Rather than attempting to model the process precisely, we would like to call for a different, qualitative, approach,
that would help to answer questions of the following kind. Consider the exact same model used in the individual-based simulation. Find upper bounds on the cost sustainable by the beacon strategies: can, for example, a cost of 50% be sustained by these strategies with some values of the parameters? (Note that a trivial upper bound of 1 can be established mathematically.) Does a strategy A dominate a strategy B, in the sense that for every set of initial conditions the critical viability of strategy A is higher than that of strategy B? And so on. To study the model mathematically, one can note that if the beacon changes every generation then it is a finite-state Markov chain (the state space is finite but huge). The theory of Markov chains (e.g., Hernandez-Lerma and Lasserre, 2003) provides the necessary apparatus to address qualitative questions of the kind mentioned. The advantage of such an approach would be that the model studied mathematically is the same as the model studied empirically in simulations; this would come at the cost of giving up on full descriptions of the dynamics of the model. We leave this task, which appears highly non-trivial, as an avenue for further research.
### Complexity of choice, prediction and intelligence
Mate choice emerges from these results as a rather complex task. Indeed, it requires many observations of potential mates -- the more the better, and these observations are based on a large set of stimuli -- again, the more stimuli are used, the better. As can be observed in nature, these stimuli can be spread across a variety of modalities: visual, acoustic, etc. (e.g., Hegyi et al., 2022); that this should be so does not follow from the results of the model, but this can be simply a way to increase the number of attributes to select from. Furthermore, observations are required to construct the beacon for the beacon strategies; what these observations are we do not know, but at least if they are constructed from the population itself, then, once again, sampling is needed, and the larger the sample the better. The function that maps the beacon into male sexual features need not necessarily be complex (it may be some simple scrambling, like the one mentioned in the introduction), but the data processing task resulting from the sampling described is potentially rather demanding.
However, the mate-choice problem is perhaps yet a lot more complex than that, possibly pushing the cognitive abilities of the species to their limit; it may even be a force driving the evolution of intelligence -- albeit a limited one, whose application is restricted to females (or, more generally, to the choosy sex), as we shall presently argue. Recall that the beacon strategies rely on an external source of randomness. Randomness is rarely really random in nature; much of the stochasticity in the environment is at least partially predictable. Suppose that some females are able to predict the next change of the beauty beacon. Then they would be clearly advantaged with respect to the rest, as they would be able to select an optimal match
for both the current and the next beauty target. However, if sufficiently many females are able to make the prediction, then the next target becomes the current target, and the prediction problem shifts to the subsequent one. Then the choosy females either deplete the source of randomness, in which case the beacon strategy ceases to be advantageous and they need to look for another source of randomness; or else, a prediction race opens up, eliminating all that the females of the population are able to predict well from the beauty beacon, and pushing what remains to the limit of their abilities. As mate choice is a never-ending competition, it may well turn out to be one of the most cognitively demanding problems that an individual can possibly face.
From this argument, it may be hypothesized that females should be more intelligent than males in at least some lek species. However, male intelligence may arise as well as a by-product, and males might also benefit from it in other ways, some of which are discussed in the section on male adaptations below. To our knowledge, the cognitive difficulty of mate choice has not been studied explicitly, although there is a large literature on mate choice copying, which is considered a case of (social) learning; this is discussed in the next section.
### Mate choice copying and speciation
Mate choice copying is said to occur when a female is more likely to mate with a previously mated male (Galef and White, 1998), and is a part of a more general phenomenon of non-independent mate choice (Vakirtzis, 2011). It has been observed in a variety of species (Davies et al., 2020), from insects to humans (Mery et al., 2009; Eva and Wood, 2006), and studied both theoretically and empirically (Lill, 1974; Pruett-Jones, 1992; Dugatkin, 1992; Kirkpatrick and Dugatkin, 1994). Cost-avoidance and improving estimation accuracy are the typical explanations proposed for this phenomenon. It has been argued (Sapage et al., 2021) that mate choice copying may be advantageous in rapidly changing environments, where females might need updated information regarding better adapted or more popular mates.
While we do not attempt to model mate choice copying here, our results show the importance of estimating the attractiveness of potential males and the necessity of relatively large samples. Clearly, bootstrapping the choice of others may be helpful here. It can be conjectured that mate choice copying can significantly lower the batch size \(N\) required to sustain a given cost \(c\).
Perhaps more importantly, the cognitive difficulty of making the choice discussed above introduces the possibility that some females may be unable to make the choice purely by themselves, or else be less apt at it than others. Given the constantly changing nature of the target, mate choice copying may enable the co-existence of choosers apt at different versions of the task, or different parts of it. Mate choice copying thus may not be simple copying but rather learning from others; indeed, it is often referred to as a
case of social learning (Witte et al., 2015; Davies et al., 2020; Sapage et al., 2021), and sometimes includes generalization, whereby the observed preference is generalized to potential mates with the same features (Vakirtzis, 2011; White and Galef Jr, 2000; Fowler-Finn et al., 2015). A closely related concept that appears applicable to the problem of choice in our model is that of distributed social learning. The latter is said to occur (Reznikova et al., 2022) when only a few members of a population are able to solve the task without looking at others (presumably, they have inherited the complete behavior genetically), whereas the rest are only able to complete some parts of it and learn the rest (complete the pattern) by observation. Thus, the genetic information needed for this behavior is distributed over the population. This phenomenon has been demonstrated in ants and rodents in the context of non-obligatory hunting behavior (Reznikova and Panteleeva, 2008; Reznikova et al., 2022). In the context of mating behavior, it has been shown in fruit flies that males with a missing gene responsible for essential details of mating behavior can learn these missing elements when kept in a group (Danchin et al., 2018).
Since the mate choice problem in our model is both complex and dynamic, simple mate choice copying as well as more general forms such as distributed social learning appear highly relevant, and deserve further modeling research. It is worth noting that, despite the large literature on mate choice copying, the question of how difficult the problem is for the choosers, that is, its intellectual aspect, has remained largely unexplored so far.
Turning to speciation, it has been noted already by Darwin (1871) and the early researchers on evolution (cf. Lande, 1981 and references) that closely related species often differ the most in the characteristics of adult males. However, modeling attempts conclude that sympatric (that is, within the same geographical area) speciation by sexual selection is not likely (Arnegard and Kondrashov, 2004) even taking into account mate choice copying (Mery et al., 2009). More broadly, while sympatric speciation is a contentious topic, it is generally considered to be possible even if it is unclear how common it is (Bolnick and Fitzpatrick, 2007).
While we do not attempt to model speciation here (leaving this topic for further research), we can make the following two observations related to the argument. First, given the difficulty of the task of female choice described above and the uncertainty that arises from it, the choice function applied by females appears fragile and prone to both error and mutation; thus, sympatric speciation by sexual selection appears to be worth reconsidering in this light. Second, given that the choice function takes into account external information, it may depend on the environment, which may lead to allopatric speciation as a direct consequence.
### Male adaptations, male choice and other generalizations
As perhaps in all models of the runaway process, in the one considered here males have no way to influence their fate once their genotype has been determined. This is not always so in nature, as, even in lek species, males may change their courtship behaviour during their lifetime (for example, Dukas, 2005; Kahn et al., 2013). Females can also change their preferences, which is of course not addressed in our discrete-generation model. Adaptive behaviour by both males and females provides interesting directions for generalizations. One can hazard a guess that the result would be that the choice problem becomes even more complex and even more randomness is needed.
Another promising generalization is mutual choice. In many species, especially those where males contribute to rearing the young, both males and females are choosy. Whether, or to what extent, this leads to a runaway process on both sexes is not obvious. Furthermore, the interplay between male and female strategies may be non-trivial here, and may include the appearance of such strategies as playing hard-to-get (e.g., Jonason and Li, 2013), which may become a part of the runaway process (Ryabko and Reznikova, 2015). These are all interesting topics for further modeling.
|
2309.03467 | Autoregressive Omni-Aware Outpainting for Open-Vocabulary 360-Degree
Image Generation | A 360-degree (omni-directional) image provides an all-encompassing spherical
view of a scene. Recently, there has been an increasing interest in
synthesising 360-degree images from conventional narrow field of view (NFoV)
images captured by digital cameras and smartphones, for providing immersive
experiences in various scenarios such as virtual reality. Yet, existing methods
typically fall short in synthesizing intricate visual details or ensure the
generated images align consistently with user-provided prompts. In this study,
autoregressive omni-aware generative network (AOG-Net) is proposed for
360-degree image generation by out-painting an incomplete 360-degree image
progressively with NFoV and text guidances jointly or individually. This
autoregressive scheme not only allows for deriving finer-grained and
text-consistent patterns by dynamically generating and adjusting the process
but also offers users greater flexibility to edit their conditions throughout
the generation process. A global-local conditioning mechanism is devised to
comprehensively formulate the outpainting guidance in each autoregressive step.
Text guidances, omni-visual cues, NFoV inputs and omni-geometry are encoded and
further formulated with cross-attention based transformers into a global stream
and a local stream into a conditioned generative backbone model. As AOG-Net is
compatible to leverage large-scale models for the conditional encoder and the
generative prior, it enables the generation to use extensive open-vocabulary
text guidances. Comprehensive experiments on two commonly used 360-degree image
datasets for both indoor and outdoor settings demonstrate the state-of-the-art
performance of our proposed method. Our code will be made publicly available. | Zhuqiang Lu, Kun Hu, Chaoyue Wang, Lei Bai, Zhiyong Wang | 2023-09-07T03:22:59Z | http://arxiv.org/abs/2309.03467v2 | # Autoregressive Omni-Aware Outpainting for Open-Vocabulary 360-Degree Image Generation
###### Abstract
A 360-degree (omni-directional) image provides an all-encompassing spherical view of a scene. Recently, there has been an increasing interest in synthesising 360-degree images from conventional narrow field of view (NFoV) images captured by digital cameras and smartphones, for providing immersive experiences in various scenarios such as virtual reality. Yet, existing methods typically fall short in synthesizing intricate visual details or ensuring that the generated images align consistently with user-provided prompts. In this study, an autoregressive omni-aware generative network (AOG-Net) is proposed for 360-degree image generation by outpainting an incomplete 360-degree image progressively with NFoV and text guidances, jointly or individually. This autoregressive scheme not only allows for deriving finer-grained and text-consistent patterns by dynamically generating and adjusting the process but also offers users greater flexibility to edit their conditions throughout the generation process. A global-local conditioning mechanism is devised to comprehensively formulate the outpainting guidance in each autoregressive step. Text guidances, omni-visual cues, NFoV inputs and omni-geometry are encoded and further formulated with cross-attention based transformers into a global stream and a local stream for a conditioned generative backbone model. As AOG-Net is compatible with leveraging large-scale models for the conditional encoder and the generative prior, it enables the generation to use extensive open-vocabulary text guidances. Comprehensive experiments on two commonly used 360-degree image datasets for both indoor and outdoor settings demonstrate the state-of-the-art performance of our proposed method. Our code will be made publicly available.
\({}^{1}\)The University of Sydney, \({}^{2}\)JD.com, \({}^{3}\)Shanghai AI Laboratory,
[email protected], [email protected], [email protected], [email protected], [email protected]
## Introduction
A 360-degree (omni-directional) image offers a comprehensive spherical view of a scene and provides users the freedom to explore any direction from a singular view point. Such images have revolutionized the way that users consume, interact with, and produce visual content. Yet, the exclusive reliance on specialized cameras to capture these images poses significant challenges for their widespread adoption, limiting the scalability and accessibility of creating immersive content for broader audiences. In contrast, given the vast quantity of Narrow Field of View (NFoV) images captured daily via mobile phones and digital cameras, there has been a growing interest in transforming these conventional images into 360-degree panoramic visuals. By converting these NFoV images, extensive visual databases can be leveraged to enable more immersive experiences for applications in Virtual Reality and Augmented Reality across various domains such as tourism, entertainment and education.
In recent years, deep learning methods have been explored to generate photo-realistic 360-degree images. For instance, OmniDreamer Akimoto et al. (2022) formulates a 360-degree image generation pipeline with a VQGAN Esser et al. (2021) by treating NFoV images as incomplete 360-degree images. Conditioned on text guidances, Text2Light Chen et al. (2022) introduces two VQGANs for a global-to-local modelling strategy in pursuit of generating high-resolution 360-degree images. ImmerseGAN Dastjerdi et al. (2022) applies domain adaptation methods on pretrained GANs, which can be conditioned on both NFoV images and text guidances. While
Figure 1: Examples of 360-degree image generation, showing the limitation of existing methods compared to ours. The top part above the dashed line depicts an NFoV-guided example and the bottom part below the dashed line is for a text-guided example. (a) Input condition. (b) Ours (AOG-Net). (c) Top - OmniDreamer Akimoto et al. (2022) and Bottom - Text2Light Chen et al. (2022).
these methods show encouraging performance, the challenge remains regarding the usage of given NFoV images and user-provided open-vocabulary text guidances, individually or jointly, for enhanced control in 360-degree image generation. Specifically, the existing methods typically fall short in synthesizing intricate visual details, as shown in the top part of Fig. 1, where the details that are vague or missing with OmniDreamer compared to our approach are highlighted in the red bounding boxes. Moreover, the generated images and user-provided text guidances tend to be inconsistent, especially under an open-vocabulary setting, as depicted in the bottom part of Fig. 1 by comparing Text2Light and our solution.
In this study, we propose a novel autoregressive omni-aware generative network (AOG-Net) for generating 360-degree images conditioned on open-vocabulary text guidances and given NFoV images, jointly or individually. Overall, the generation is formulated as an autoregressive stochastic process to outpaint an incomplete 360-degree image progressively, in which each step outpaints a local region under its corresponding NFoV view. This autoregressive scheme not only allows for deriving finer-grained and prompt-consistent patterns by dynamically observing and adjusting the generation process but also offers users greater flexibility to modify or introduce new conditions throughout the generation process. Furthermore, a global-local conditioning mechanism is devised to comprehensively formulate the outpainting guidance for each autoregressive step. Text prompts, omni-visual cues, NFoV inputs and omni-geometry are encoded and further formulated with cross-attention based transformers into a global stream and a local stream for a conditioned generative backbone model. This study further explores the potential to leverage large-scale models for the conditional encoder and the generative prior, which helps complete the generation using open-vocabulary prompts. Comprehensive experiments on two commonly used 360-degree image datasets for both indoor and outdoor settings demonstrate the state-of-the-art performance of our proposed method.
In summary, the key contributions of this study are three-fold:
* A novel autoregressive outpainting approach is proposed to produce photo-realistic 360-degree images by dynamically adjusting the generation process for improved finer-grained details and prompt-consistency.
* A global-local conditioning mechanism is devised to formulate the guidance encompassing open-vocabulary text guidances, omni-visual cues, NFoV inputs and omni-geometry with cross-attention based transformers.
* Comprehensive experiments were conducted on two commonly used benchmarks, demonstrating the state-of-the-art performance of AOG-Net in both indoor and outdoor settings with as few as 40 training samples.
## Related Work
We first review the studies in both the field of 360-degree image generation and the field of image outpainting which are relevant to our study. As our work takes image and text guidances as conditions, we further review the related studies on conditional image generation.
### 360-Degree Image Generation
Unlike general NFoV images, 360-degree image generation requires taking the omni-directional continuity into account. Early studies, for example, [12] estimates a coarse 360-degree image from an NFoV image with an inverse rendering technique, which ignores such geometrical continuity and generates 360-degree images lacking fine details. To address this, 360IC [1] and SIGSS [17] were proposed to improve geometrical continuity by taking the intrinsic horizontal cyclicity into consideration and encoding it as positional conditions to connect the two ends of 360-degree images in equirectangular representations. EnvMapNet [18] improves the visual quality of the outpainted 360-degree images by introducing a projection loss and a clustering loss for accurate lighting and shadowing. OmniDreamer [1] was further developed by leveraging the Taming-Transformer [13], where a circular inference scheme was introduced to fit the intrinsic horizontal cyclicity for 360-degree image synthesis, conditioned on provided NFoV images, yielding diverse and photo-realistic results. However, OmniDreamer is limited to a single condition where only an initial NFoV image is accepted, while the controllability of the overall synthesis process is limited. ImmerseGAN [1] aims for finer controllability over the outpainting by introducing a text guidance to fine-tune a generative model with a large-scale private text-image pair dataset. Due to the lack of public text-image paired datasets, Text2Light [10] introduces a zero-shot text-guided 360-degree image synthesis pipeline without using initial NFoV images, in which a pre-trained CLIP model is adopted [1].
However, the existing methods typically fall short in synthesizing intricate visual details and inconsistencies can be observed between generated images and user-provided text guidances, especially under an open-vocabulary setting, which demands further mechanisms to address these issues.
### Image Outpainting
Image outpainting, a fundamental task in computer vision, focuses on expanding the unknown regions outside the primary known content. Unlike inpainting, outpainting may not be able to leverage information from pixels adjacent to the unknown area [1, 2, 13], as is done in inpainting methods. In [14], the semantic information of incomplete images was utilized to guide a GAN for outpainting. In [15], a query-based outpainting method was proposed, where an image is divided into small patches and the patches with unknown pixels are completed by taking the conditions from both distant and neighbouring patches into account. In an iterative manner, [1] extends one side of a regular image for outpainting step by step, using the context of the past generation as guidance. [12]
and Chiu (2021) delve into the idea of synthesizing unknown regions by exploiting the correlations between distant image patches to establish the global semantics of known pixels. Similarly, in Esser et al. (2021); Chang et al. (2022), image outpainting methods were studied with transformers Vaswani et al. (2017), which predict the most probable pixel value recursively. However, these conventional outpainting methods do not account for the unique omni-directional continuity inherent to 360-degree images, often leading to discontinuities and artifacts.
### Conditional Image Generation
Conditional image generation refers to the synthesis of images based on specific conditions, such as text prompts Rombach et al. (2022); Kang et al. (2023), semantic maps Esser et al. (2021); Chang et al. (2022) and audio cues Yariv et al. (2023). For instance, Isola et al. (2017) achieves conditional image generation using a conditional GAN Mirza and Osindero (2014) to formulate the joint probability of images and conditions. Chen et al. (2020); Esser et al. (2021) treat an image as a sequence of pixels and therefore generate pixels in an iterative manner. Building on the success of diffusion methods in image generation Ho et al. (2020); Song et al. (2021), various conditional diffusion models have been investigated. For example, Dhariwal and Nichol (2021) introduces an auxiliary classifier to guide the generation of images within a specific category. Ho and Salimans (2022) presents a unified framework for conditional generation using diffusion models, introducing a mechanism to control the correlation between the generated image and its input guidance. However, aligning these conditions with the omni-directional geometry is not trivial, and a further omni-aware alignment strategy is required.
## Methodology
As shown in Fig. 2, our proposed AOG-Net for 360-degree image generation follows an autoregressive manner by outpainting a local region progressively. In each step, a global-local conditioning mechanism is introduced to formulate text, omni-visual, NFoV and omni-geometry guidances with cross-attention based transformers into a global stream and a local stream. Such conditions are then adopted by a backbone generative prior for the outpainting. The details of these components are discussed in this section.
### 360-Degree Images & Problem Formulation
Given a 360-degree image, denoted as \(\mathbf{I}\), there are three typical representations as shown in Fig. 3 (a) - (c). Each of them can be transformed into the others. Specifically, we have:
* _Spherical representation_ \(\mathbf{I}(\omega,\phi)\), where \(\omega\), ranging from \(-180^{\circ}\) to \(180^{\circ}\), denotes the longitude, and \(\phi\), ranging from \(-90^{\circ}\) to \(90^{\circ}\), denotes the latitude of a pixel. In practice, cos and sin transforms are adopted for \(\omega\) and \(\phi\), respectively, to account for the periodic property when traversing an image.
* _Cubemap projection_ treats \(\mathbf{I}\) as a set of general 2D images, which are the faces of a cube. In detail, we have \(\mathbf{I}=\{\mathbf{i}_{F},\mathbf{i}_{L},\mathbf{i}_{B},\mathbf{i}_{R},\mathbf{i}_{U},\mathbf{i}_{D}\}\), where each image \(\mathbf{i}\in\mathbb{R}^{C\times H_{\mathbf{i}}\times W_{\mathbf{i}}}\) can be viewed as a general NFoV image, where \(H_{\mathbf{i}}\) and \(W_{\mathbf{i}}\) denote the height and the width of a face, respectively, and \(C\) is the number of channels.
* _Equirectangular projection_ maps \(\mathbf{I}\) to a general image in \(\mathbb{R}^{C\times H_{\mathbf{I}}\times W_{\mathbf{I}}}\), where \(H_{\mathbf{I}}\) and \(W_{\mathbf{I}}\) indicate height and width, respectively. Compared to the cubemap projection, the equirectangular projection maps the entire spherical 360-degree image into a single rectangular grid, characterized by noticeable pixel distortion around the top and bottom regions.
As the spherical representation inherently conforms to the 360-degree geometry coordinates, we project this geometry information to the cubemap form as shown in Fig. 3 (d) - (e). We denote such geometry information as \(\mathbf{\Gamma}=\{\gamma_{F},\gamma_{L},\gamma_{B},\gamma_{R},\gamma_{U},\gamma_{D}\}\), where \(\gamma\in\mathbb{R}^{2\times H_{\mathbf{i}}\times W_{\mathbf{i}}}\) contains the geometry information of a cubic face.
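As an illustration, the two geometry channels for one cubemap face can be computed as in the following NumPy sketch (our own construction under the cos/sin convention stated above; the remaining faces follow by rotating the per-pixel ray directions):

```python
import numpy as np

def face_geometry(H, W):
    # Per-pixel geometry channels for the front ('F') cubemap face,
    # assuming the face lies on the z = 1 plane of a unit cube centred
    # at the origin.
    u = (np.arange(W) + 0.5) / W * 2.0 - 1.0   # horizontal coordinate in [-1, 1]
    v = (np.arange(H) + 0.5) / H * 2.0 - 1.0   # vertical coordinate in [-1, 1]
    x, y = np.meshgrid(u, -v)
    z = np.ones_like(x)
    lon = np.arctan2(x, z)                               # longitude of each pixel's ray
    lat = np.arcsin(y / np.sqrt(x**2 + y**2 + z**2))     # latitude of each pixel's ray
    # two channels, following the cos/sin transforms described above
    return np.stack([np.cos(lon), np.sin(lat)])
```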
Given an NFoV image \(\mathbf{X}\in\mathbb{R}^{C\times H_{\mathbf{X}}\times W_{\mathbf{X}}}\), such as a 2D image taken by a smartphone, where \(H_{\mathbf{X}}\) and \(W_{\mathbf{X}}\) are its height and width, respectively, and a text guidance with its embedding \(\mathbf{T}\in\mathbb{R}^{C_{T}\times L}\), where \(C_{T}\) is the dimension of the textual feature and \(L\) is the length of the text guidance, our method aims to synthesize a 360-degree image \(\hat{\mathbf{I}}\) given \(\mathbf{X}\) and \(\mathbf{T}\).
Figure 2: Illustration of the proposed AOG-Net architecture.
### Autoregressive Omni-Traversal for Outpainting
The autoregressive process outpaints the given NFoV image \(\mathbf{X}\) progressively into a complete 360-degree image \(\hat{\mathbf{I}}\) under the guidance of the text \(\mathbf{T}\). Specifically, each step completes a local NFoV view, which is extracted from the incomplete 360-degree image with an unknown region neighbouring a known region.
**Local View Projection & Backprojection.** To leverage a wide range of NFoV domain knowledge, such as weights pretrained on large-scale NFoV image datasets, we retrieve a local view from a 360-degree image \(\mathbf{I}\) centred at the location \((\omega,\phi)\) as a forward projection. In detail, we project the local view in \(\mathbf{I}\) to an NFoV image \(\mathbf{X}\) via a gnomonic projection, denoted as \(\mathbf{X}=O(\mathbf{I},\omega_{\mathbf{X}},\phi_{\mathbf{X}})\), where \(\omega_{\mathbf{X}}\) and \(\phi_{\mathbf{X}}\) are the centroid longitude and latitude of the local view. Similarly, we have a backprojection - an inverse gnomonic projection that maps the pixels in \(\mathbf{X}\) back to \(\mathbf{I}\) partially within the scope of the corresponding NFoV view, denoted as \(\tilde{\mathbf{I}}=O^{-1}(\mathbf{X},\omega_{\mathbf{X}},\phi_{\mathbf{X}})\). Note that the pixel value out of the scope of \(\mathbf{X}\) in \(\tilde{\mathbf{I}}\) is defined as \(-inf\).
\[\mathbf{I}_{\alpha}\;\oplus\;\mathbf{I}_{\beta}(\omega,\phi)\;=\;\left\{ \begin{array}{l}\mathbf{I}_{\alpha}(\omega,\;\phi)\text{ if }\mathbf{I}_{\alpha}(\omega,\;\phi)\;\neq\;-inf,\\ \mathbf{I}_{\beta}(\omega,\;\phi)\text{ otherwise},\end{array}\right. \tag{1}\]
which is used for attaching a newly generated partial 360-degree image to the current incomplete image.
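For concreteness, the forward projection follows the standard gnomonic formulas, and the \(\oplus\) operator reduces to a masked overwrite; a minimal NumPy sketch (function names are ours, and angles are assumed to be in radians):

```python
import numpy as np

def gnomonic_xy(lon, lat, lon0, lat0):
    # Standard forward gnomonic projection: spherical coordinates (lon, lat)
    # to tangent-plane coordinates (x, y) centred at (lon0, lat0).
    cos_c = (np.sin(lat0) * np.sin(lat)
             + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0))
    x = np.cos(lat) * np.sin(lon - lon0) / cos_c
    y = (np.cos(lat0) * np.sin(lat)
         - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)) / cos_c
    return x, y

def merge(I_a, I_b):
    # The operator of Eq. (1): keep I_a's pixels where they are defined,
    # fall back to I_b elsewhere (-inf marks unknown pixels).
    return np.where(np.isneginf(I_a), I_b, I_a)
```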
**Single-Step Outpainting**. In a single-step outpainting, without loss of generality, for the \(k^{\text{th}}\) step, an incomplete NFoV image \(\bar{\mathbf{X}}_{k}=O(\bar{\mathbf{I}}_{k},\omega_{\bar{\mathbf{X}}_{k}},\phi_{\bar{\mathbf{X}}_{k}})\) is retrieved from an incomplete 360-degree image \(\bar{\mathbf{I}}_{k}\). Particularly, we denote a conditioned outpainting model \(F_{\mathbf{\Theta}}\), where \(\mathbf{\Theta}\) are learnable parameters. \(F_{\mathbf{\Theta}}\) estimates the unknown pixels in \(\bar{\mathbf{X}}_{k}\), where the outpainted result is denoted as \(\hat{\mathbf{X}}_{k}=F_{\mathbf{\Theta}}(\bar{\mathbf{X}}_{k},\bar{\mathbf{I}}_{k},\mathbf{T})\). The estimation \(\hat{\mathbf{X}}_{k}\) is then backprojected to the 360-degree view, and a 360-degree outpainted estimation can be obtained as \(\hat{\mathbf{I}}_{k}=O^{-1}(\hat{\mathbf{X}}_{k},\omega_{\bar{\mathbf{X}}_{k}},\phi_{\bar{\mathbf{X}}_{k}})\oplus\bar{\mathbf{I}}_{k}\). Note that \(\omega_{\hat{\mathbf{X}}_{k}}=\omega_{\bar{\mathbf{X}}_{k}}\) and \(\phi_{\hat{\mathbf{X}}_{k}}=\phi_{\bar{\mathbf{X}}_{k}}\), as \(\hat{\mathbf{X}}_{k}\) retains its omni-geometry location. More details about \(F_{\mathbf{\Theta}}\) can be found in the subsequent discussions.
Generally, \(\bar{\mathbf{I}}_{1}=O^{-1}(\mathbf{X},\omega_{\mathbf{X}},\phi_{\mathbf{X}})\) is initialized with the input NFoV image \(\mathbf{X}\). To optimize \(F_{\mathbf{\Theta}}\), the known pixels in \(\bar{\mathbf{X}}_{k}\) and \(\bar{\mathbf{I}}_{k}\) can be extracted from the ground truth \(\mathbf{I}\); we denote \(\mathbf{X}_{k}\) and \(\mathbf{I}_{k}\) as the ground truth for the \(k^{\text{th}}\) step. For inference, the known pixels in \(\bar{\mathbf{X}}_{k}\) and \(\bar{\mathbf{I}}_{k}\) can be based on the accumulated estimations \(\hat{\mathbf{X}}_{k-1}\) and \(\bar{\mathbf{I}}_{k-1}\), respectively.
**Autoregressive Outpainting**. Following an autoregressive stochastic process, a 360-degree image can be produced progressively:
\[p(\mathbf{I}|\mathbf{T})=\prod_{k}p(\mathbf{I}_{k}|\mathbf{I}_{<k},\mathbf{T}), \tag{2}\]
where \(\mathbf{I}_{<k}\) indicates \(\mathbf{I}_{1}\),..., \(\mathbf{I}_{k-1}\), which are incomplete 360-degree images. As our proposed method mainly outpaints a small portion of unknown pixel in an incomplete 360-degree image, Eq. (2) can be written with the Markov property:
\[p(\mathbf{I}|\mathbf{T})=\prod_{k}p(\mathbf{I}_{k}|\bar{\mathbf{I}}_{k}, \mathbf{T})=\prod_{k}p(\mathbf{X}_{k}|\;\bar{\mathbf{I}}_{k},\mathbf{T}). \tag{3}\]
In line with the single-step outpainting, \(F_{\mathbf{\Theta}}\) is used to compute the conditioned probability terms as:
\[p(\mathbf{I}|\mathbf{T})\approx\prod_{k}F_{\mathbf{\Theta}}(\bar{\mathbf{X}}_{ k},\bar{\mathbf{I}}_{k},\mathbf{T}). \tag{4}\]
To this end, an autoregressive stochastic process has been formulated for 360-degree image generation. Note that we use an incremental pathway to identify \(\bar{\mathbf{X}}_{k}\) that prioritizes the generation process along the longitude direction.
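The overall traversal can thus be summarized by the following sketch (our own pseudocode-style Python; `project`, `backproject` and `merge` correspond to \(O\), \(O^{-1}\) and \(\oplus\) above, and `view_centres` enumerates the view centroids along the longitude-first incremental pathway):

```python
def autoregressive_outpaint(X, text, lon0, lat0, view_centres, F,
                            project, backproject, merge):
    # Sketch of Eq. (4): initialise the 360-degree canvas from the input
    # NFoV image, then repeatedly extract a partially known local view,
    # outpaint it with the conditioned model F, and paste it back.
    # -inf marks unknown pixels in the canvas.
    canvas = backproject(X, lon0, lat0)
    for lon_k, lat_k in view_centres:
        x_bar = project(canvas, lon_k, lat_k)   # incomplete local NFoV view
        x_hat = F(x_bar, canvas, text)          # single-step outpainting
        canvas = merge(backproject(x_hat, lon_k, lat_k), canvas)
    return canvas
```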
### Global-Local Conditioning by Omni-Aware Open-Vocabulary Guidance
AOG-Net incorporates multiple conditions to ensure its alignment to user text guidances and known NFoV views regarding the omni-geometry. Specifically, in each autogressive step, a global-local conditioning mechanism is devised to thoroughly capture the following conditions:
* Text guidance \(\mathbf{c}_{\text{test}}\): a text encoder \(\mathcal{E}_{\text{text}}\) encodes user text description \(\mathbf{T}\), which is based on the CLIP pre-trained textual model and enables an open-vocabulary paradigm to align the text features within a latent space shared with visual patterns. Note that this text guidance remains constant for each autoregressive step \(k\) and acts as a global context. However, it can be modified according to user preferences to adjust during the generation process.
* Omni-visual guidance \(\mathbf{c}_{\text{360},k}\): a visual encoder \(\mathcal{E}_{\text{360}}\), which leverages the CLIP pre-trained visual model, transforms a 360-degree image into the latent space that is shared with \(\mathbf{c}_{\text{text}}\). Specifically, we encode each face in the cubemap representation of \(\bar{\mathbf{I}}_{k}\) and denote the results as \(\mathbf{c}_{\text{360},k}=\{\mathbf{c}_{F,k},\mathbf{c}_{L,k},\mathbf{c}_{B,k},\mathbf{c}_{R,k},\mathbf{c}_{U,k},\mathbf{c}_{D,k}\}\).
* NFoV guidance \(\mathbf{c}_{\text{NFoV},k}\): a visual encoder \(\mathcal{E}_{\text{NFoV}}\) encodes the incomplete NFoV image \(\bar{\mathbf{X}}_{k}\) jointly with the 360-degree image \(\bar{\mathbf{I}}_{k}\) in its cubemap form aiming for a omni-visual local latent representation.
* Omni-geometry guidance \(\mathbf{c}_{\text{geometry},k}\): an omni-geometry encoder \(\mathcal{E}_{\text{geometry}}\) formulates the geometry \(\bar{\gamma}_{k}\) of an incomplete local NFoV image \(\bar{\mathbf{X}}_{k}\), jointly with \(\mathbf{\Gamma}\), to introduce the omni-geometry information for outpainting.
Figure 3: Different representations of a 360-degree image. (a) Equirectangular projection. (b) Spherical representation. (c) Cubemap projection. (d) A spherical representation with geometry coordinates. (e) Geometry projection on cubemap.
**Global-Local Conditioning.** This module aligns the derived conditions for 360-degree outpainting through both a global and a local stream, leveraging cross-attention mechanisms. Globally, the incomplete 360-degree visual guidance \(\mathbf{c}_{\text{360},k}\) is cross-referenced with the text guidance \(\mathbf{c}_{\text{text}}\) to guarantee alignment between the content that is already present in \(\mathbf{c}_{\text{360},k}\) and the content awaiting generation. Intuitively, we adopt a cross-attention based transformer for this purpose by treating the query as the visual conditions \(\mathbf{c}_{\text{360},k}\), while the value and key are the text conditions \(\mathbf{c}_{\text{text}}\). We denote the result as a global condition \(\mathbf{c}_{\text{global},k}\).
Likewise, the local stream incorporates the NFoV guidance and the omni-geometry guidance using a transformer grounded in cross-attention. This integration facilitates local fine-grained detail during the generation process. Specifically, the query adopts the NFoV condition \(\mathbf{c}_{\text{NFoV},k}\) supplemented by \(\mathbf{c}_{\text{geometry},k}\), while the value and the key are the 360-degree visual guidance \(\mathbf{c}_{\text{360},k}\) supplemented by \(\mathbf{c}_{\text{geometry},k}\). The resultant local condition is denoted as \(\mathbf{c}_{\text{local},k}\).
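A single layer of such a cross-attention block might be sketched in PyTorch as follows (an illustrative sketch of the query/key/value assignment only; the actual module stacks several such layers, and all names are ours):

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    # One cross-attention layer: one token stream queries another.
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                nn.Linear(4 * dim, dim))

    def forward(self, query, context):
        # Q comes from `query`; K and V come from `context`.
        h, _ = self.attn(query, context, context)
        h = self.norm1(query + h)
        return self.norm2(h + self.ff(h))

# Global stream: 360-degree visual tokens attend to the text guidance.
#   c_global = block(c_360, c_text)
# Local stream: NFoV tokens (with geometry added) attend to the
# 360-degree tokens (with geometry added).
#   c_local = block(c_nfov + c_geo_local, c_360 + c_geo)
```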
### Omni-Aware Diffusion for Outpainting
Leveraging the recent success of the diffusion approach for NFoV content generation, in each autoregressive step, \(F_{\boldsymbol{\Theta}}\) employs a Stable Diffusion backbone [14], incorporating the conditions \(\mathbf{c}_{\text{global},k}\) and \(\mathbf{c}_{\text{local},k}\). For the \(k^{\text{th}}\) autoregressive step, we further denote \(t\) as the diffusion temporal index and \(\epsilon_{\boldsymbol{\Theta}}(\mathbf{z}_{t},t)\) as the predicted noise introduced at the \(t^{\text{th}}\) diffusion step, where \(\epsilon_{\boldsymbol{\Theta}}\) is a U-Net. To optimize \(\epsilon_{\boldsymbol{\Theta}}\), we minimize the following loss function:
\[\mathcal{L}:=\mathbb{E}_{\epsilon_{t}\sim\mathcal{N}(0,1),t}\left[\left\|\epsilon_{t}-\epsilon_{\boldsymbol{\Theta}}\left(\mathbf{z}_{t},t,\tau_{\boldsymbol{\Theta}}(\mathbf{c}_{\text{global},k},\mathbf{c}_{\text{local},k})\right)\right\|_{2}^{2}\right], \tag{5}\]
where \(\tau_{\boldsymbol{\Theta}}\) maps the conditions to guide the denoising process in the latent space via a cross-attention mechanism [13].
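A minimal sketch of this objective follows, assuming a standard DDPM-style noising schedule and treating the U-Net \(\epsilon_{\boldsymbol{\Theta}}\) and the condition mapper \(\tau_{\boldsymbol{\Theta}}\) as given callables; all names are illustrative, not the authors' exact training code:

```python
import torch
import torch.nn.functional as F

def diffusion_loss(eps_model, tau, z0, c_global, c_local, alphas_cumprod):
    """Sketch of Eq. (5): sample a diffusion step t, noise the latent z0,
    and regress the U-Net's prediction onto the injected noise."""
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z0.device)
    eps = torch.randn_like(z0)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps  # forward noising
    cond = tau(c_global, c_local)   # conditions injected via cross-attention
    return F.mse_loss(eps_model(z_t, t, cond), eps)
```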
## Experiments & Discussions
### Datasets
**360-Degree Images.** Following the existing studies [16, 17, 18], we evaluate our proposed method on the LAVAL indoor HDR dataset [1] for the 360-degree indoor image generation setting; it contains 2,233 360-degree images covering extensive indoor scenes at a resolution of \(7,768\times 3,884\). For a fair comparison, we used the official training and testing split in our experiments, with 1,921 training samples and 312 testing samples.
For the outdoor setting, we utilize the LAVAL outdoor HDR dataset [13], which contains 210 360-degree images with a resolution of \(7,768\times 3,884\). In this setting, we randomly sample 170 images as the training split and 40 images for testing purposes. For training in both settings, the 360-degree images are downsampled to \(4,096\times 2,048\) for computational efficiency.
**Text Captioning.** As both datasets lack text captions, we adopted a large-scale captioning model, BLIP2 [11], to generate captions for the 360-degree images. We first caption an image in its equirectangular form to obtain an overall text guidance of 5-6 words on average. Next, we caption the horizontal faces of its cubemap individually to obtain additional text guidances.
**Data Augmentation.** To increase the diversity of the generated 360-degree images, we augmented the training 360-degree image samples with random clockwise rotations, exploiting their intrinsic horizontal cyclicity. To improve the diversity of the text guidance, besides randomly swapping words with TextAttack [15], we randomly combine the overall text guidance with one randomly selected text guidance associated with a face of the cubemap during training.
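The rotation augmentation is straightforward to implement, since a yaw rotation of an equirectangular panorama is simply a circular shift along the width axis; a minimal sketch (the function name is ours):

```python
import numpy as np

def random_yaw_rotation(equirect: np.ndarray, rng=None) -> np.ndarray:
    """A yaw rotation of an equirectangular panorama (H, W, C) is a
    circular shift along the width axis, with no resampling loss."""
    rng = rng or np.random.default_rng()
    shift = int(rng.integers(0, equirect.shape[1]))
    return np.roll(equirect, shift, axis=1)
```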
### Implementation Details
**Pre-Trained Models & Network Architecture.** In our experiments, we adopted the pretrained Stable Diffusion generative prior for each autoregressive generation step. In addition, we utilized the visual encoder and the text encoder of OpenCLIP [10] for \(\mathcal{E}_{\text{360}}\) and \(\mathcal{E}_{\text{text}}\), respectively. We utilized T2I-Adapter [12] as the architecture for the NFoV guidance encoder \(\mathcal{E}_{\text{NFoV}}\) and the omni-geometry guidance encoder \(\mathcal{E}_{\text{geometry}}\). In both the local and global streams, we utilized a 16-layer cross-attention-based transformer to compute \(\mathbf{c}_{\text{local},k}\) and \(\mathbf{c}_{\text{global},k}\), respectively.
**Training and Inference.** AOG-Net was trained using an AdamW optimizer [10] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). It was trained for 240 epochs with a learning rate of \(1\times 10^{-1}\) and a batch size of 1. For inference, we leveraged DPM-Solver++ [11] as the sampler with 25 steps and a classifier-free guidance [12] scale of 2.5. All experiments were conducted on an Nvidia RTX 3090.
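For reference, classifier-free guidance at sampling time amounts to blending conditional and unconditional noise predictions; a minimal sketch with the scale of 2.5 reported above (the sampler itself, e.g. DPM-Solver++, is assumed to call this inside its update rule):

```python
import torch

@torch.no_grad()
def guided_noise(eps_model, z_t, t, cond, uncond, scale=2.5):
    """Classifier-free guidance: push the conditional prediction away
    from the unconditional one by the guidance scale."""
    e_cond = eps_model(z_t, t, cond)
    e_uncond = eps_model(z_t, t, uncond)
    return e_uncond + scale * (e_cond - e_uncond)
```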
### Comparison with State of the Art
**Baselines.** Our method is compared with recent state-of-the-art 360-degree image outpainting methods from three perspectives. 1) NFoV-image-guided generation methods without text guidance: ImmerseGAN [13], OmniDreamer [1] and EnvMapNet [14]; for a fair comparison, the text guidance in our method is set to a blank prompt. 2) A text-guided generation method without NFoV guidance: Text2Light [2]; in this case, we generated the initial input NFoV image for our method using the Stable Diffusion outpainting model. 3) An NFoV-image- and text-guided generation method: ImmerseGAN [13].
**Evaluation Metrics.** To quantitatively evaluate our AOG-Net, we adopted LPIPS [13] and Fréchet Inception Distance (FID) [14] as the evaluation metrics measuring the similarity of latent representations between the generated 360-degree images and the ground truth. To evaluate the semantic consistency (SC) between the generated 360-degree image and the input text guidance, we compared the similarity between the input text guidance and the captioning texts obtained from the generated image. Specifically, we leveraged a large-scale captioning model, BLIP2 [11], and computed the similarity with sentence embeddings [15] for this purpose. In addition, we leverage the Inception Score (IS) [12] to measure the quality of the generated images, as Text2Light does not involve ground-truth images.
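A minimal sketch of how such an SC score could be computed, assuming the generated images have already been captioned (e.g., with BLIP2); the sentence-embedding model name here is an illustrative choice, not necessarily the model of [15]:

```python
from sentence_transformers import SentenceTransformer, util

def semantic_consistency(input_prompts, generated_captions,
                         model_name="all-MiniLM-L6-v2"):
    """Mean cosine similarity between each input text guidance and the
    caption of the corresponding generated image."""
    model = SentenceTransformer(model_name)
    a = model.encode(input_prompts, convert_to_tensor=True)
    b = model.encode(generated_captions, convert_to_tensor=True)
    return util.cos_sim(a, b).diagonal().mean().item()
```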
**Overall Performance.** Among the methods requiring an initial NFoV image as guidance, our method achieves the best performance, as shown in Table 1. It achieves an FID score of 35.6 and an LPIPS value of 0.37 under the indoor setting, and an FID score of 18.4 and an LPIPS value of 0.36 under the outdoor setting. Note that only OmniDreamer reported an outdoor evaluation in the literature. As shown in the first example (first column) in Fig. 5, our method outpaints the house and the garden smoothly, while OmniDreamer makes the neighbouring region of the house smudged, with sudden color changes in the garden. For the third example (third column), our method delivers more detailed outpainting of objects compared to OmniDreamer.
Regarding the comparison with the text-conditioned method Text2Light under open-vocabulary text guidance, the performance metrics are listed in Table 2. Our method outperforms Text2Light, with an SC score of 0.53 and an IS value of 4.2 for the outdoor setting, and an SC score of 0.36 and an IS value of 5.1 for the indoor setting. Due to the complexity of the indoor setting and the lack of in-depth text descriptions, the semantic consistency of both methods drops. However, our method still provides overall more semantically consistent images with higher image quality. As depicted in Figure 4, our method produces visually appealing images, while the images of Text2Light [2] are much dimmer, leading to degenerated visual quality and a lack of details. Additionally, under the outdoor setting, our method generates 360-degree images with fine-grained details (such as trees and grass), while Text2Light produces smudged-out patterns (third column of Fig. 4).
For ImmerseGAN, which leverages both NFoV and text guidance, our method performs better under the indoor setting according to the metrics the authors reported in their work. ImmerseGAN was trained on a private large-scale dataset, and the authors did not evaluate their method under an outdoor setting. Note that our method leverages pretrained diffusion models and requires only 40 randomly selected training samples to achieve its current performance.
### Ablation Studies
**Local Conditioning.** In this setting, the local conditioning components are excluded from our method. While the semantic consistency between the outputs and the text prompts is only slightly affected, the deterioration in the quality of the outpainted images is significant. In Fig. 6 (d), various artifacts appear, such as black patches and human hands.
**Geometry Guidance.** In this setting, we remove all 360-degree geometry information \(\mathbf{c}_{\text{geometry},k}\) when computing \(\mathbf{c}_{\text{local},k}\), which then relies only on pixel-wise semantics to connect distant patches. The results reveal minimal effects on SC, but there is a notable decrease in image quality. As illustrated in Fig. 6 (e), black patterns appear on the floor, and the ceiling's color lacks consistency with distant regions.
**Backbone Only.** In this setting, only the pre-trained Stable Diffusion backbone is employed in a traditional manner, integrating the NFoV input image and the text guidance. This setting produces the poorest SC values and FID scores, suggesting that the outpainted 360-degree images are of low quality and misaligned with the text guidance. Referring to the generation example shown in Fig. 6 (f), the model struggles to produce a text-consistent and sharp 360-degree outpainting, with evident localized artifacts.
### Generalization
We further explore an open-image conditioned task with our method, which is required to outpaint unseen oil-painting artworks into 360-degree images with text guidance. As shown in Fig. 7, the generated images are consistent with the style of the input NFoV artworks, demonstrating the potential of our method to accept out-of-domain NFoV images as conditions.
### Limitation and Future Work
AOG-Net relies on a pre-trained backbone model, which introduces two primary limitations. Firstly, AOG-Net is somewhat constrained by the data on which the backbone model was pre-trained, potentially limiting the method's generalizability; this could negatively influence its capability to synthesize a wide range of content with diverse styles. Secondly, the diffusion model's prolonged inference time affects its utility in applications that require real-time performance. Future endeavors might focus on developing a backbone tailored for the autoregressive stochastic process to enhance efficiency. Finally, by capitalizing on its autoregressive characteristics, our method has the potential to be extended to text-guided 360-degree video generation.
## Conclusion
In this work, we presented AOG-Net, a novel deep learning method for 360-degree image generation with an autoregressive scheme guided by NFoV images and open-vocabulary text prompts. A global-local conditioning mechanism is devised to adaptively encode the guidances while accounting for omni-directional properties. With these designs, AOG-Net is able to generate realistic 360-degree images with fine details while aligning with the text guidance. Comprehensive experiments demonstrate the effectiveness of AOG-Net.
Figure 5: Qualitative examples with input and ground truth. (a) Input, \(90^{\circ}\) in both the longitude and latitude directions. (b) Ground truth. (c) OmniDreamer. (d) AOG-Net (Ours).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & LPIPS\(\downarrow\) & FID\(\downarrow\) & SC\(\uparrow\) \\ \hline \hline AOG-Net (Ours) & 0.37 & **35.6** & **0.72** \\ w/o global condition & 0.38 & 40.08 & 0.70 \\ w/o local condition & 0.40 & 47.46 & 0.71 \\ w/o geometry condition & **0.36** & 37.2 & **0.72** \\ Autoregressive w/ backbone only & 0.43 & 67.4 & 0.61 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on the Laval Indoor HDR dataset.
Figure 6: Ablation study. (a) Ground truth. (b) AOG-Net. (c) w/o global condition. (d) w/o local condition. (e) w/o geometry condition. (f) Autoregressive w/ backbone only.
Figure 7: Open-image conditioned generation results, with the prompt “a 360 image of a city, oil painting, ultracolorful, impressionist style, Van Gogh style”. (a) Input. (b) Output. |
2310.01313 | Dissecting Resilience Triangle: Unravelling Resilience Curve Archetypes
and Properties in Human Systems Facing Weather Hazards | Resilience curves have been the primary approach for conceptualizing and
representing the resilience behavior of communities during hazard events;
however, the use of resilience curves has remained as a mere conceptual and
visual tool with limited data-driven characterization and empirical grounding.
Empirical characterizations of resilience curves provide essential insights
regarding the manner in which differently impacted systems of communities
absorb perturbations and recover from disruptions. To address this gap, this
study examines human mobility resilience patterns following multiple
weather-related hazard events in the United States by analyzing more than 2000
empirical resilience curves constructed from high-resolution location-based
mobility data. These empirical resilience curves are then classified using
k-means clustering based on various features into archetypes. Three main
archetypes of human mobility resilience are identified: Type I, with rapid
recovery after mild impact; Type II, exhibiting bimodal recovery after moderate
impact; and Type III, showing slower recovery after severe impact. The results
also reveal critical thresholds, such as the bimodal recovery breakpoint at a
20% impact extent, at which the recovery rate decreases, and the critical
functional threshold at a 60% impact extent, above which recovery rate would be
rather slow. The results show that a critical functional recovery rate of 2.5%
per day is necessary to follow the bimodal resilience archetype when impact
extent exceeds more than 20%. These findings provide novel and important
insights into different resilience curve archetypes and their fundamental
properties. Departing from using resilience curves as a mere concept and visual
tool, the data-driven specification of resilience curve archetypes and their
properties improve our understanding of the resilience patterns... | Chia-Wei Hsu, Ali Mostafavi | 2023-09-13T03:18:29Z | http://arxiv.org/abs/2310.01313v1 | Dissecting Resilience Triangle: Unravelling Resilience Curve Archetypes and Properties in Human Systems Facing Weather Hazards
###### Abstract
Resilience curves have been the primary approach for conceptualizing and representing the resilience behavior of communities during hazard events; however, the use of resilience curves has remained as a mere conceptual and visual tool with limited data-driven characterization and empirical grounding. Empirical characterizations of resilience curves provide essential insights regarding the manner in which differently impacted systems of communities absorb perturbations and recover from disruptions. To address this gap, this study examines human mobility resilience patterns following multiple weather-related hazard events in the United States by analyzing more than 2000 empirical resilience curves constructed from high-resolution location-based mobility data. These empirical resilience curves are then classified using k-means clustering based on various features (e.g., residual performance, disruptive duration, and recovery duration) into archetypes. Three main archetypes of human mobility resilience are identified: Type I, with rapid recovery after mild impact; Type II, exhibiting bimodal recovery after moderate impact; and Type III, showing slower recovery after severe impact. The results also reveal critical thresholds, such as the bimodal recovery breakpoint at a 20% impact extent (i.e., function loss), at which the recovery rate decreases, and the critical functional threshold at a 60% impact extent, above which recovery rate would be rather slow. The results show that a critical functional recovery rate of 2.5% per day is necessary to follow the bimodal resilience archetype when impact extent exceeds more than 20%. These findings provide novel and important insights into different resilience curve archetypes and their fundamental properties. Departing from using resilience curves as a mere concept and visual tool, the data-driven specification of resilience curve archetypes and their properties improve our understanding of the resilience patterns of human systems of communities and enable researchers and practitioners to better anticipate and analyze ways communities bounce back in the aftermath of disruptive hazard events.
Resilience curve archetypes, Human mobility, Disasters
## 1 Introduction
The characterization of resilience in human systems is of primary importance when evaluating their performance during and after disasters (Alexander, 2013; Gunderson, 2010; Hsu, Liu, Nguyen, Chien, & Mostafavi, 2022; Roy, Cebrian, & Hasan, 2019; Wang & Taylor, 2016). Studies conducted in recent years have focused on characterizing resilience curves (Gama Dessavre, Ramirez-Marquez, & Barker, 2016; Kammouh, Zamani Noori, Cimellaro, & Mahin, 2019; Zobel & Khansa, 2014), which graphically represent the trajectory of a community's functionality or performance from the onset of a disaster to the eventual recovery (Bruneau et al., 2003; Hosseini, Barker, & Ramirez-Marquez, 2016; Manyena, 2006; Panteli, Mancarella, Trakas, Kyriakides, & Hatziargyriou, 2017; Tierney & Bruneau, 2007). Resilience curves are the primary approach to understanding and anticipating a community's response to perturbations induced by disasters. However, the use of resilience curves has remained as a mere conceptual and visual tool with
limited data-driven characterization and empirical grounding. Empirical characterizations of resilience curves provide essential insights regarding the ways different systems of communities absorb perturbations and recover from disruptions (Bostick, Connelly, Lambert, & Linkov, 2018; Ganguly, Bhatia, & Flynn, 2018; Li, Zhang, Jia, Li, & Zhu, 2019). Resilience curves have also been primarily used in characterizing the vulnerability and recovery of infrastructure systems of communities during hazard events; limited studies have examined the characteristics of resilience curves in human systems of communities. In recent years, a number of studies have examined fluctuations in human mobility patterns during hazard events as a way to capture both the loss of functionality and the subsequent recovery of human systems in communities (Chan & Schofer, 2016; Hong, Bonczak, Gupta, & Kontokosta, 2021; Kammouh et al., 2019; Platt, Brown, & Hughes, 2016; Rus, Kilar, & Koren, 2018; Zhang & Wang, 2016).

Examining resilience curves associated with human mobility enables the capture of fluctuations in human activities in response to disruptive events, such as floods, wildfires, storms, pandemics, or conflicts (Coleman, Gao, DeLeon, & Mostafavi, 2022; Farahmand, Wang, Mostafavi, & Maron, 2022; Gao et al., 2021; Hsu, Liu, et al., 2022; Rajput & Mostafavi, 2022, 2023; Tang et al., 2023). Human mobility captures the overall functionality of human systems in communities. Under normal circumstances, human mobility would be at an equilibrium state, signifying routine movement patterns. When a disruptive event occurs, human mobility usually decreases as people seek shelter or infrastructure is disrupted, reflected by a dip in the resilience curve. Post-disaster, human mobility gradually recovers as people start to adapt, recover, and resume their routines (Nicholson, Barker, & Ramirez-Marquez, 2016). The fluctuations in human mobility patterns can thus be captured to construct the resilience curve of human systems of communities.

Despite the growing number of studies examining the resilience of human systems using human mobility patterns, limited attention has been paid to resilience curve archetypes and their fundamental properties using empirical data. The specification of empirical resilience curve archetypes and their properties is essential to improve our understanding of the resilience patterns of human systems of communities and to enable researchers and practitioners to better anticipate and analyze the ways communities bounce back in the aftermath of disruptive hazard events (Chang & Shinozuka, 2004; Hao, Chen, Mei, Huang, & Xu, 2017). Recognizing this important knowledge gap, the primary objective of this study is to examine the presence of universal archetypes in human mobility resilience curves and delineate their fundamental properties. Specifically, we seek to address two research questions: (1) What are the primary archetypes of human system resilience curves? and (2) What fundamental characteristics explain the behavior of different resilience curve archetypes? To assess the extent of functionality loss, we measure the degree of human mobility change by computing the number of trips going in and out of a given area. For our analysis, we utilize high-resolution location-based mobility data related to multiple extreme weather events in the United States. In total, we constructed more than 2000 empirical resilience curves representing different regions and hazard events.
Accordingly, we examined and computed the main features of each resilience curve, and subsequently classified the curves based on these features into a set of universal archetypes. The following sections discuss the study data and methods.
## 2 Data Description
This study collected and analyzed data from the following major hazard events in the United States: Hurricane Ida, Hurricane Harvey, Hurricane Laura, and Winter Storm Uri. Hurricane Ida, a Category 4 storm, struck in August 2021, causing its most profound devastation in Louisiana; it later moved on to cause significant flooding and damage in the northeastern U.S., particularly impacting New York and New Jersey. Hurricane Harvey, also a Category 4 storm, made landfall near Rockport, Texas, in August 2017 and then unleashed catastrophic flooding on the Houston metropolitan region; the impact was so severe that it ranks among the most damaging natural disasters in U.S. history. Hurricane Laura, a Category 4 storm, majorly affected the Gulf Coast of the U.S. in August 2020, impacting parts of Louisiana and Texas and leaving a trail of destruction in its wake. In 2021, Winter Storm Uri swept across multiple states, with Texas being the most affected; bringing low temperatures, snow, and ice, Uri led to extensive power outages, immense property damage, and several tragic fatalities. The storm's severity was such that Texas's power grid was overwhelmed, resulting in widespread blackouts across the state. These four events were chosen due to the significant disruptions they caused across the impacted regions. The data collection timeframes for each event were established according to the event's start date. Specifically, data collection commenced 9 days before the initiation of each extreme weather event and spanned a total of 35 days. In the case of the winter storm event, the data collection timeframe was limited to 24 days due to data availability constraints. The baseline periods--normal (steady-state) periods without perturbations from natural hazards--are set in the first week of the data collection timeframes. The mobility flow during this period is viewed as baseline performance. Table 1 summarizes the data collection details for each event.
To construct the empirical resilience curves, we used a location-based dataset obtained from Spectus, a company that collects vast amounts of anonymous location information from approximately 70 million mobile devices in the United States through a privacy-compliant framework. This data is gathered when users voluntarily opt in to location services provided by partner apps. Spectus captures data from nearly 20% of the U.S. population, representing around one in
four smartphone users. The location-based data from Spectus has proven in previous research (Hsu, Ho, & Mostafavi, 2022; Hsu, Liu, et al., 2022) to be representative in capturing human mobility and travel mode detection due to its high spatiotemporal resolution. To safeguard user privacy, Spectus de-identifies the collected data and applies additional privacy measures, including obscuring home locations at the census-block-group level and removing sensitive points of interest. The device-level data encompasses anonymized individual location information, ID, location coordinates, and timestamps. To ensure privacy preservation, Spectus offers access to its data, tools, and location-based datasets from other providers through a data cleanroom environment. The locations visited by each device are determined by identifying the census tract polygons each device resides in. The trajectory of a device's movement is then established based on the precedence relationship of the visit times. Daily trip counts between census tracts are aggregated at a census-tract level. For further analysis, a human mobility network is created using the daily trip counts among each census-tract pair. In this study, the focus lies on examining the fluctuation of total mobility flow in each census tract. Inflows and outflows are aggregated, as their separation within the period of interest does not yield additional information for characterizing resilience curves; inflow and outflow are proportional to the total flow and share the same flow pattern.
## 3 Methodology
Our analysis framework consists of the following components: (1) constructing the resilience curves; (2) extracting the key features of resilience curves; (3) finding the universal resilience curve archetypes and (4) specifying the main properties of the archetypes. Figure 1 depicts the overview of the analysis framework.
Our initial step focused on constructing empirical resilience curves associated with the human mobility of census-tract populations across the four hazard events examined in the study. These empirical resilience curves are constructed by comparing the number of trips each day within the 35-day data collection timeframe with the baseline (steady-state) number of trips for each census tract. The baseline number of trips serves as the baseline functionality of each census tract. For example, if the number of trips observed on a certain day during Hurricane Ida is 70% of the baseline value, then the remaining system functionality is 70% and the impact extent is 30%. Figure 2 shows a conceptual resilience curve,
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Event} & \multirow{2}{*}{Type} & \multirow{2}{*}{Affected areas} & Number of & \multirow{2}{*}{Event start date} & Data collection \\ & & & census tracts & & time frame \\ \hline Ida & Hurricane & Louisiana & 402 & 2021/8/28 & 2021/8/19–2021/9/23 \\ Harvey & Hurricane & Texas & 786 & 2017/8/24 & 2017/8/15–2017/9/30 \\ Laura & Hurricane & Louisiana & 402 & 2020/8/27 & 2020/8/18–2020/9/22 \\ Uri & Winter Storm & Texas & 786 & 2021/2/13 & 2021/2/4–2021/2/28 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the weather events selected for this study
Figure 1: Overview of analysis framework: Location-based data are analyzed to estimate mobility flow as a proxy for the functionality of human systems. Key resilience features, including the critical transition time points, are extracted from the human mobility resilience curves for each census tract. Based on their features, resilience curves were clustered to specify universal archetypes. The archetypes for human mobility resilience curves are examined to delineate their fundamental properties.
critical temporal points that signal transitions, and resilience features characterizing the curve. Each resilience curve is composed of critical temporal points: \(t_{h}\) (exposure to hazard), the time when the system first experiences the disruptive event; \(t_{e}\) (initial system disruption), the time when the system experiences the maximum disruptive effects; \(t_{d}\) (end of cascading failures), the time point when disruption starts to diminish; \(t_{s}\) (beginning of system recovery), the onset of the system's recovery period; \(t_{f}\) (completion of system recovery), the time point when the system is considered to have fully recovered; and \(t_{c}\) (maximum recovery time), the complete end of the event, marking a return to normal conditions. These points encapsulate the timeline of the event and the corresponding system performance.
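To ground these definitions, a minimal sketch of the curve construction for a single census tract follows; the function and variable names are ours, and the one-week baseline window follows the data description above:

```python
import pandas as pd

def resilience_curve(daily_trips: pd.Series, baseline_days: int = 7):
    """Build p(t) from a date-indexed series of total (inflow + outflow)
    trip counts; the first week is the steady-state baseline."""
    baseline = daily_trips.iloc[:baseline_days].mean()
    p = daily_trips / baseline              # 1.0 = baseline functionality
    t_d = p.idxmin()                        # lowest performance (max impact)
    impact_extent = 1.0 - p.min()
    after = p.loc[t_d:]
    recovered = after[after >= 1.0]
    t_f = recovered.index[0] if not recovered.empty else None  # may be undefined
    return p, impact_extent, t_d, t_f
```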
The subsequent step involves extracting key features from the resilience curves. The selection of key features is based on a review of the literature related to the characteristics of resilience curves (Table 2) (Hillebrand et al., 2018; Poulin & Kane, 2021). The key features collected in the study can be grouped into multiple categories. To mitigate noise and enhance the curves' discernibility, we employed a Savitzky-Golay filter (Press & Teukolsky, 1990; Savitzky & Golay, 1964; Simonoff, 2012) after comparing the performance of multiple common smoothing techniques, such as rolling averages and interpolators. However, curve smoothing should be applied judiciously to avoid potential over-smoothing, which could obscure important details. To reliably record the system resilience performance, we use the smoothed curve only for computing the integral- and rate-related features and compute the other features from the original resilience curves. Upon completion of this step, the resilience curve for each census tract is represented by a vector of multiple key resilience features.
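The following sketch illustrates this feature-extraction step for one curve, computing a few representative Table 2 features; the window and polynomial order of the Savitzky-Golay filter are illustrative assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

def curve_features(p, dt=1.0, window=7, polyorder=2):
    """Feature vector for one resilience curve p(t): magnitude features from
    the raw curve, integral/rate features from the smoothed curve."""
    p = np.asarray(p, dtype=float)
    p_s = savgol_filter(p, window_length=window, polyorder=polyorder)
    i_d = int(np.argmin(p))
    residual_perf = p[i_d]                     # residual performance
    depth = 1.0 - residual_perf                # depth of impact
    cum_impact = np.sum(np.clip(1.0 - p_s, 0, None)) * dt  # cumulative impact
    rec = np.where(p[i_d:] >= 1.0)[0]
    rec_dur = rec[0] * dt if len(rec) else np.nan          # recovery duration
    rec_rate = depth / rec_dur if rec_dur else np.nan      # average recovery rate
    return np.array([residual_perf, depth, cum_impact, rec_dur, rec_rate])
```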
Next, we apply clustering algorithms to classify the resilience curves based on the key features and examine whether the resulting clusters represent different universal archetypes. For comparing clustering configurations, we used the elbow method and silhouette scores. Based on these metrics, we chose k-means clustering, a widely used technique due to its efficiency and simplicity. The k-means algorithm (Han et al., 2009; Ikotun, Ezugwu, Abualigah, Abuhaija, & Heming, 2023; Jin & Han, 2010; Kanungo et al., 2002; Likas, Vlassis, & J. Verbeek, 2003; Steinley, 2006) partitions the data into k distinct, non-overlapping subsets (or clusters), with each data point belonging to the cluster with the closest mean. After clustering, we proceed to compute an average resilience curve for each cluster. This step provides us with a representative curve that encapsulates the typical behavior of each cluster. These representative curves serve as resilience curve archetypes.
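A minimal sketch of this clustering step with scikit-learn, selecting the number of clusters by silhouette score (the standardization step and the candidate range of k are our assumptions, analogous to the elbow check):

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_curves(feature_matrix, k_range=range(2, 10), seed=0):
    """Standardize the feature vectors, then pick the k-means model
    whose labels maximize the silhouette score."""
    X = StandardScaler().fit_transform(feature_matrix)
    best = max(
        (KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
         for k in k_range),
        key=lambda km: silhouette_score(X, km.labels_),
    )
    return best.labels_, best.n_clusters
```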
In the last step, we aim to uncover the fundamental properties of the resilience curve archetypes. We apply multivariate adaptive regression splines (MARS) (Balshi et al., 2009; Friedman, 1991; Kisi & Parmar, 2016; Miao, Shi, Zhang, & Wang, 2013) for piecewise linear regression to simplify the representative resilience curve of each cluster and to reveal their common structures. MARS is a nonparametric regression technique that identifies the main turning points and slopes (system performance change rates). We use the simplified curves to specify the main properties of the resilience curve archetypes.
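In place of a full MARS implementation, the piecewise-linear idea can be sketched with a least-squares fit on hinge basis functions, assuming candidate knots (turning points) are supplied or searched over externally:

```python
import numpy as np

def hinge_fit(t, p, knots):
    """Least-squares fit on hinge bases max(0, t - k): a piecewise-linear
    curve whose slope changes at each knot, mimicking the MARS turning
    points and segment slopes shown in Figs. 3-6."""
    t, p = np.asarray(t, float), np.asarray(p, float)
    A = np.column_stack([np.ones_like(t), t] +
                        [np.maximum(0.0, t - k) for k in knots])
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    slopes = coef[1] + np.concatenate(([0.0], np.cumsum(coef[2:])))
    return A @ coef, slopes   # fitted curve, slope of each segment

# e.g.: fitted, slopes = hinge_fit(days, performance, knots=[3, 10])
```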
## 4 Results
We implemented the method described in the previous section on the data collected from Hurricane Ida, Hurricane Harvey, Hurricane Laura, and Winter Storm Uri. Evaluation metrics for k-means clustering, such as the elbow method and silhouette score, suggest that the optimum number of clusters for the events is six. Among the six clusters, one or
Figure 2: Illustrative features for a conceptual resilience curve. Metrics may be derived with respect to performance p(t). In this illustration, the system does not fully recover within the control interval, so disruptive duration may be undefined. The area above the resilience curve represents cumulative impact, while the area under the resilience curve represents cumulative performance.
two clusters can be viewed as outliers which should be removed for the interpretation of results. After the clustering and cleaning, we performed MARS to identify the critical turning points as the slope of line segments between these points. Figures 3 through 6 show the average resilience curve for each cluster, critical turning point and slopes of the line segments between the turning points. The x-axes of the plots represent the dates within our data collection timeframe while y-axes represent the system functional performance.
The shapes of the resilience curves are all triangular instead of trapezoidal, due to the more adaptive nature of human systems compared to infrastructure systems: human mobility starts to recover immediately after the shock, while infrastructure systems may experience a sustained period of impact before their functionality starts recovering. The areas with larger impact are either communities with larger populations or coastal areas (specifically for the hurricane events), which is intuitive.
Figure 3 shows the clustering results for Hurricane Ida in which the extent of impact varies, ranging from 20% to 70%. We can observe that except for cluster 5, all the other clusters fully recover at some point within our data collection timeframe. Based on this result, if the impact extent exceeds 70%, then a region may take much longer to fully recover, which is not captured due to our data collection timeline. Except for cluster 3, we observe a significant slow-down in the recovery rate after passing a certain level of system performance in all the other clusters, 20% in this case.
Figure 4 shows the clustering results for Hurricane Harvey; the extent of impact ranges from 55% to 70%. We can observe that except for Cluster 1, all the other regions fully recover at some point within our data collection timeframe. For all the clusters, we see a significant decrease in the recovery rate after achieving a certain level of system functional performance, 20% in this case.
Figure 5 shows the clustering result for Hurricane Laura; the extent of impact ranges from 50% to 85%. We note that all regions, with the exception of cluster 3, show complete recovery within our data collection period. Additionally, after reaching a specific threshold of system functionality, 20% in this instance, there is a noticeable decline in the recovery speed across the other clusters.
Figure 6 shows the clustering result for Winter Storm Uri; the extent of impact ranges from 40% to 60%. We notice that every region achieves full recovery within the duration of our data collection; however, once they reach a system functional performance threshold of 20%, there is a marked slowdown in their pace.
The results for the selected events revealed multiple important insights regarding the fundamental properties of resilience curve archetypes of human mobility. Figure 7 shows the conceptual resilience curve archetypes found in our
\begin{table}
\begin{tabular}{l l l l} \hline
**Types** & **Metrics** & **Formula** & **Definition** \\ \hline Magnitude & Residual performance & \(p(t_{d})\) & System performance following the disruption, generally after cascading failures. \\ & Depth of impact & \(1-p(t_{d})\) & Complement of residual performance. \\ & Restored performance & \(\frac{p(t_{f})}{p(t_{e})}\) or \(\frac{p(t_{f})-p(t_{d})}{p(t_{e})-p(t_{d})}\) & System’s performance after recovery efforts are complete. \\ Duration & Disruptive duration & \(t_{f}-t_{e}\) & Entire period of degraded performance. \\ & Recovery duration & \(t_{f}-t_{d}\) & Period of the recovery phase, starting from the lowest performance. \\ Integral & Cumulative impact & \(\int 1-p(t)\,\mathrm{d}t\) & Integrated difference between performance and its reference. \\ & Cumulative performance & \(\int p(t)\,\mathrm{d}t\) & Complement of cumulative impact. \\ Rate & Failure rate & \(\frac{p(t_{d})-p(t_{e})}{t_{d}-t_{e}}\) & Resilience and adaptive capability at failure phases. \\ & Recovery rate & \(\frac{p(t_{f})-p(t_{d})}{t_{f}-t_{d}}\) & Restorative capability at recovery phases. \\ Stability & Temporal stability & \(d=1/\mathrm{std}(\mathrm{residual}_{b})\) & Performance fluctuations around the trend. No benchmark; a larger \(d\) corresponds to lower fluctuations. \\ \hline \end{tabular}
\end{table}
Table 2: Key resilience features extracted from the resilience curve regarding human mobility recovery for clustering.
empirical study. The three archetypes are as follows. Type I: this archetype comprises areas that experienced the least impact and exhibited relatively rapid recovery; human mobility in these regions resumed quickly, showcasing efficient recovery processes. Type II: areas with resilience curves of this archetype encountered moderate levels of impact; notably, we observed a bimodal recovery rate, where the system's recovery speed significantly slowed down upon reaching a certain functional performance level. This bimodal phase transition was distinct from the initial recovery and had a noticeable impact on the overall recovery process. Type III: representing the most affected areas, this group demonstrated considerably slower recovery rates compared to the other archetypes; it took significantly longer for these regions to return to full system performance. Further investigation into these three archetypes led to the identification of significant critical thresholds and distinguishing properties of the archetypes. Bimodal recovery breakpoint (BRB): the BRB is the point at which the recovery rate changes, and it was specified at an impact extent of 20%. If the impact extent is less than the BRB, human mobility recovers swiftly after hazard perturbations. If the impact extent is
Figure 4: Human mobility recovery clustering for Hurricane Harvey. Each curve represents the MARS regression on the average resilience curve of each cluster; the numbers represent failure rates and recovery rates. The extent of impact ranges from 55% to 70%. The turning point of bimodal recovery rate can be seen in clusters 1, 2, 4, and 5 when the system recovers to between 80% to 90% of performance. Cluster 3 represents areas with relatively low impact and fast recovery.
Figure 3: Human mobility resilience curve clustering for Hurricane Ida. Each curve represents the MARS regression on the average resilience curve of each cluster; the numbers represent failure rates and recovery rates. The extent of impact ranges from 20% to 70%. The turning point of bimodal recovery rate can be seen in cluster 1, 2 and 4 when the system recovers to between 80% to 90% of performance. Cluster 3 represents areas with relatively low impact and fast recovery. Cluster 5 represents areas with large impact and slow recovery.
greater than the BRB, however, human mobility initially recovers up to the BRB level at a faster recovery rate; after the BRB is reached, the recovery rate slows down. Critical functional threshold (CFT): the CFT is the impact extent beyond which the recovery of the system proceeds at a slow rate, causing long recovery durations. The CFT for human mobility was identified at an impact extent of 60%; an impact extent greater than this threshold leads to significantly slower recovery. The initial recovery rate right after impact (RR1) needs to follow the critical functional recovery rate (CFRR), which is 2.5% per day. The extent of impact played a significant role in determining the recovery behavior, particularly when it is greater than the CFT. Table 3 summarizes the properties of the different resilience curve archetypes related to human mobility. In summary, our examination of these distinct archetypes sheds light on the resilience behavior of human mobility in hazard events. The identification of the BRB and CFT provides valuable insights into the recovery patterns of different areas, facilitating better anticipation and evaluation of the ways different areas recover from hazard-induced perturbations.
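The decision logic of Table 3 can be summarized in a few lines; the threshold values follow the text above (BRB = 20% impact extent, CFT = 60%, CFRR = 2.5% per day), and the function name is ours:

```python
def classify_archetype(impact_extent, rr1, brb=0.20, cft=0.60, cfrr=0.025):
    """Assign a resilience curve to an archetype from its impact extent
    and its initial post-impact recovery rate RR1 (fraction per day)."""
    if impact_extent < brb:
        return "Type I"      # mild impact, rapid full recovery
    if impact_extent < cft and rr1 > cfrr:
        return "Type II"     # moderate impact, bimodal recovery
    return "Type III"        # severe impact, slow recovery

# e.g.: classify_archetype(0.45, rr1=0.04) -> "Type II"
```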
Figure 5: Human mobility recovery clustering for Hurricane Laura. Each curve represents the MARS regression on the average resilience curve of each cluster; the numbers represent failure rates and recovery rates. The extent of impact ranges from 50% to 85%. The turning point of bimodal recovery rate can be seen in clusters 1, 2, and 4, when the system recovers to between 80% to 90% of performance. Cluster 3 represents areas with large impact and slow recovery.
Figure 6: Human mobility recovery clustering for Winter Storm Uri. Each curve represents the MARS regression on the average resilience curve of each cluster; the numbers represent failure rates and recovery rates. The extent of impact ranges from 40% to 60%. The turning point of bimodal recovery rate can be seen when the system recovers to between 80% to 90% of performance.
## 5 Discussion and Concluding Remarks
The primary objective of this study is to explore the existence of universal archetypes in human mobility resilience curves when communities face weather hazards. Over the past two decades, resilience curves have been widely used to characterize the fluctuations in the functional performance of human and physical infrastructure systems of communities during disruptions; however, the majority of existing characterizations of resilience curves do not consider variations in the types of resilience curve patterns a system might exhibit depending on the extent of impact. Also, the majority of existing resilience curve characterizations focus on physical infrastructure; limited studies have examined the characteristics of resilience curves in human systems of communities in an empirical manner. Recognizing these important gaps, this study examined more than 2000 empirical resilience curves related to human mobility behaviors across multiple geographic regions and various weather events. The analyses examined datasets collected from Hurricanes Ida, Harvey, and Laura, and Winter Storm Uri in the United States. The results show that the resilience curves can generally be divided into three universal archetypes with low, medium, and high impact extents. The group with low impact exhibited the fastest recovery speed. In the second archetype, with moderate impact extent, the recovery follows a bimodal rate: the initial recovery to the bimodal recovery breakpoint proceeds at a faster rate, followed by a slower recovery rate after the breakpoint. This finding suggests that, if the impact extent is not severe, human systems of communities strive to recover to the BRB as fast as possible; after the system functionality reaches the BRB, the recovery rate slows. The existence of the bimodal recovery pattern might reflect a combination of two different behaviors. In the first stage of recovery, system functional performance quickly bounces back to around 80% to 90% of its normal level because a large proportion of daily activities return to normal. When this level of functional performance is achieved, the human system is in a largely functional state; thus the remaining
\begin{table}
\begin{tabular}{l l l l} \hline
**Archetypes** & **Extent of Impact** & **Recovery Rate** & **Description** \\ \hline Type I & \(I<BRB\) & \(RR_{1}>CFRR\) & Least impact \\ & & & Fastest recovery \\ & & & Fully recovered \\ Type II & \(CFT>I>BRB\) & \(RR_{1}>CFRR\) & Moderate impact \\ & & & Bimodal recovery rates \\ & & & Fully recovered \\ Type III & \(I>CFT\) & \(RR_{1}<CFRR\) & Largest impact \\ & & & Slowest recovery \\ \hline \multicolumn{4}{l}{\(I\): Extent of impact} \\ \multicolumn{4}{l}{\(RR_{1}\): Initial recovery rate right after impact} \\ \multicolumn{4}{l}{BRB: Bimodal recovery breakpoint} \\ \multicolumn{4}{l}{CFT: Critical functional threshold} \\ \multicolumn{4}{l}{CFRR: Critical functional recovery rate} \\ \end{tabular}
\end{table}
Table 3: The three archetypes of human mobility recovery behavior following weather events, and the governing characteristics that distinguish archetypes.
Figure 7: Conceptual representation of the archetypes of human mobility recovery behavior post weather events. Each colored curve represents a cluster found in the analysis which is then further categorized into three archetypes. Critical functional threshold, bimodal recovery breakpoint, and critical functional recovery rate are the governing characteristics that separate the archetypes.
recovery would follow a slower rate. In the third resilience archetype, when the impact exceeds the critical functional threshold, the recovery follows a slow rate with a consistent slope. This finding suggests that the critical functional threshold is the point beyond which the human system would struggle to recover. There are differences between the clusters of resilience curves observed for different types of events and different geographical areas; however, the three archetypes revealed in this study show good representativeness, covering the range of observed post-event human mobility recovery behaviors. The study findings provide multiple important scientific and practical contributions. First, resilience curves have remained a mere conceptual and visual tool for understanding fluctuations in the behaviors of community systems during and after hazard events; the absence of empirical grounding and specific characterization of resilience curves has hindered the ability to properly analyze and understand the recovery trajectories of community systems. The findings of this study reveal the presence of universal resilience curve archetypes with specific properties (i.e., bimodal recovery breakpoint, critical functional threshold, and bimodal recovery rates) that enable evaluation of the way community systems behave in the aftermath of hazard-induced perturbations. Second, departing from the majority of existing studies that focus on characterizing resilience in physical infrastructure, this study examined the resilience of human systems based on fluctuations in human mobility. By leveraging fine-grained location-based data from multiple hazard events, this study evaluated the functional performance of human systems of communities based on fluctuations in mobility flows to reveal universal resilience curve archetypes. From a practical perspective, the outcomes of this study provide important insights for emergency managers and city officials. Based on an understanding of which areas are likely to belong to a certain archetype in an extreme weather event, we can anticipate their functional performance and recovery patterns. With insights about the extent of impact and the recovery trajectory for a short period after impact, decision-makers can take proactive actions to restore infrastructure and allocate resources to areas that are expected to follow a slow recovery rate. By assessing the extent of impact and recovery trajectory during the immediate post-event period, decision-makers can gain insights into an area's future performance. These contributions move us closer to a deeper understanding and a greater predictive insight regarding the resilience behaviors of community systems in hazard events. Future studies can build upon the findings of this study to examine empirical resilience curves and their characteristics in other community systems (such as infrastructure systems), departing from the use of resilience curves as a mere conceptual and visual tool and revealing data-driven and empirical characteristics of resilience curves in different systems. For example, future studies can examine empirical data from other systems to evaluate whether similar universal resilience curve archetypes exist and whether they exhibit properties such as the critical functional threshold and the bimodal recovery rate.
Such insights would move the field of community resilience forward with empirical evidence needed to have a deeper and more predictive understanding of the way different community systems behave during hazard-induced perturbations.
## Acknowledgments
This material is based in part upon work supported by the National Science Foundation under CRISP 2.0 Type 2 No. 1832662 and CAREER 1846069. The authors also would like to acknowledge the data support from Spectus. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or Spectus.
## Author Contributions
All authors critically revised the manuscript, gave final approval for publication, and agree to be held accountable for the work performed therein. C.H. was the lead Ph.D. student researcher and first author, who was responsible for supervising data collection, performing final analysis, and writing the majority of the manuscript. A.M. was the faculty advisor for the project and provided critical feedback on the project development and manuscript.
## Data availability
All data were collected through a CCPA- and GDPR-compliant framework and utilized for research purposes. The data that support the findings of this study are available from Spectus ([https://spectus.ai/product/](https://spectus.ai/product/)), but restrictions apply to the availability of these data, which were used under license for the current study. The data can be accessed upon request submitted to the providers (Spectus representative: Brennan Lake; email: [email protected]). The data was shared under a strict contract through Spectus' academic collaborative program, in which they provide access to de-identified and privacy-enhanced mobility data for academic research. All researchers processed and analyzed the data under a non-disclosure agreement and were obligated not to share data further or to attempt to re-identify data.
## Code availability
The code that supports the findings of this study is available from the corresponding author upon request.
|
2309.13674 | Asymmetrical braneworlds and the charged lepton mass spectrum | A braneworld mechanism for explaining the mass spectrum of the charged
leptons is proposed. Based on the existence of an asymmetric warp factor for a
$5+1$-dim braneworld scenario, the proper fractions between the masses of the
electron, muon and tauon are achieved. As a straightforward consequence, our
results coincide with the Koide's mass formula. | Henrique Matheus Gauy, Alex E. Bernardini | 2023-09-24T15:48:18Z | http://arxiv.org/abs/2309.13674v1 | # Asymmetrical braneworlds and the charged lepton mass spectrum
###### Abstract
A braneworld mechanism for explaining the mass spectrum of the charged leptons is proposed. Based on the existence of an asymmetric warp factor for a \(5+1\)-dim braneworld scenario, the proper fractions between the masses of the electron, muon and tauon are achieved. As a straightforward consequence, our results coincide with the Koide's mass formula.
_Introduction_ - Within the Standard Model (SM), the masses and mixings of the quarks and leptons originate from their interactions with the Higgs field. Even though such interactions have been experimentally confirmed, the interaction coupling constants are free parameters, and the mechanism generating the relations between them is yet to be unveiled. Notwithstanding the well-defined mass spectrum exhibited by the three families of charged leptons, an explanation for the mass values and their relative gaps is indeed an open problem. Phenomenological approaches [1; 2] have been proposed through empirical relations among the fermion masses, in an attempt to uncover some of the underlying physics1.
Footnote 1: For instance, the so-called Koide’s mass formula [1; 2],
\[\mathcal{K}=\frac{m_{e}+m_{\mu}+m_{\tau}}{\left(\sqrt{m_{e}}+\sqrt{m_{\mu}}+ \sqrt{m_{\tau}}\right)^{2}}=\frac{2}{3}, \tag{1}\]
provides such a speculative relation, which can be translated into a weaker condition on the fractions \(m_{\mu}/m_{e}\approx 207\) and \(m_{\tau}/m_{\mu}\approx 17\); here \(m_{e}\), \(m_{\mu}\) and \(m_{\tau}\) denote the electron, muon and tauon masses, respectively.
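A quick numerical check of Eq. (1), using the PDG charged-lepton mass values, confirms both the Koide ratio and the quoted mass fractions:

```python
# Numerical check of Koide's relation (Eq. 1) with PDG masses (MeV).
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

K = (m_e + m_mu + m_tau) / (m_e**0.5 + m_mu**0.5 + m_tau**0.5) ** 2
print(K)                         # ~0.66666, i.e. 2/3 to about five decimals
print(m_mu / m_e, m_tau / m_mu)  # ~206.8 and ~16.8
```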
In a parallel context, extra dimensions have played a prominent role in our understanding of the hierarchy between the Planck and weak scales [3; 4]. Thus, it is natural to assume that other properties of the SM could also be understood from such paradigm. The mass spectrum of fermions should be no different. The most promising higher dimensional scenarios are based on braneworld models with non-factorizable geometries [3; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], where a \(\mathbb{Z}_{2}\)-symmetric brane is mostly assumed2. Nevertheless, generalization of these models are obtained by relaxing the mirror symmetry across the brane [28; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. The term "asymmetric" brane refers to any braneworld model for which the mirror symmetry is not required. Here, an asymmetric brane model will be an essential feature for realizing the spectrum of the fermions.
Footnote 2: Since they are generally motivated by the Horava-Witten model [29; 30].
In this letter, a braneworld mechanism for explaining the charged lepton mass spectrum is evaluated. Modeling the fermion spectrum through extra dimensions is indeed not a new idea; it has been addressed in the literature in different contexts [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. However, instead of either considering that the distinct chiralities are differently placed over the extra dimensions [46; 47] or relying on a non-trivial higher-dimensional Higgs and several other fields [48; 49; 50; 51; 52; 53; 54; 55; 56], a simpler mechanism is adopted here. By considering a six-dimensional braneworld constructed from an asymmetric conformally flat metric, a non-trivial bulk profile for the gauge boson and a dark scalar field, fermionic fields, whose dynamics are driven by an ordinary \(SU(2)\times U(1)\) action, are shown to give rise to several massive four-dimensional spinors. In particular, for the right choice of the parameters, the number of massive spinors becomes exactly three, and their mass spectrum coincides with that of the charged leptons.
The setup for the proposed mechanism comes from a six-dimensional braneworld \(\mathbb{E}^{6}\) that is, as a set, equivalent to the product space \(\mathbb{M}^{4}\times\mathbb{R}\times\mathbb{S}^{1}\), where \(\mathbb{M}^{4}\) is some four dimensional pseudo-Riemannian manifold, \(\mathbb{R}\) is the real line and \(\mathbb{S}^{1}\) is the circle. The following ansatz is assumed for the metric of \(\mathbb{E}^{6}\),
\[\mathbf{g}=e^{-2A(y)}\left(\eta_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}+r^{2} \mathrm{d}\theta^{2}+\rho^{2}\mathrm{d}y^{2}\right), \tag{2}\]
where \(A\) is the warp factor, \(\eta_{\mu\nu}\) is the Minkowski metric3 of the space-time \(\mathbb{M}^{4}\), \(\theta\in\mathbb{S}^{1}\), \(r\) is the radius of \(\mathbb{S}^{1}\), \(y\in\mathbb{R}\) and \(\rho\) is the brane model scale. An asymmetry of the braneworld is achieved by imposing that \(e^{-A}=f_{+}+f_{-}\), where \(f_{+}\) and \(f_{-}\) are even and odd non-null functions, respectively, and \(f_{+}\geq\left|f_{-}\right|\) for all \(y\).
The fermions are thus represented by the action [57]
\[S_{D}=\int\mathrm{d}^{6}x\sqrt{-g}\left\{\overline{\mathsf{L}}^{(6)}\Gamma^{M}\!\left[\nabla_{M}-\left(i\mathbf{g}\tau_{a}\mathsf{W}_{M}^{a}-\frac{i}{2}\mathbf{g}^{\prime}\mathsf{B}_{M}\right)\right]\!\mathsf{L}^{(6)}+\overline{\mathsf{e}}_{-}^{(6)}\Gamma^{M}\!\left[\nabla_{M}+\left(i\mathbf{g}^{\prime}\mathsf{B}_{M}+\zeta_{,M}\right)\right]\!\mathsf{e}_{-}^{(6)}\right.\\ \left.-m_{0}\left[\overline{\mathsf{L}}^{(6)}\mathsf{H}\,\mathsf{e}_{-}^{(6)}+\overline{\mathsf{e}}_{-}^{(6)}\mathsf{H}^{\dagger}\,\mathsf{L}^{(6)}\right]\right\}, \tag{3}\]
where \(\nabla_{M}:=\partial_{M}+\mathfrak{C}_{M}\), \(\mathfrak{C}_{M}\) is the spin connection of \(\left(\mathbb{E}^{6},\mathbf{g}\right)\), \(\zeta\) is some dark scalar field, \(\mathsf{B}_{M}\) is the hypercharge gauge boson, \(\mathsf{W}_{M}^{a}\) are the \(SU(2)\) gauge bosons, \(\mathbf{g}\) and \(\mathbf{g}^{\prime}\) are the \(SU(2)\) and \(U(1)\) couplings, \(\tau_{a}=\sigma_{a}/2\) are the \(SU(2)\) generators, \(\mathsf{H}=\begin{pmatrix}\mathsf{H}_{+}\\ \mathsf{H}_{0}\end{pmatrix}\) is the Higgs doublet, and \(m_{0}\) is the coupling constant with the Higgs field. The left-handed leptons4 pair up to transform under \(SU(2)\),
Footnote 4: One chooses a representation such that \(\Gamma^{\mathcal{I}}=\begin{bmatrix}I^{4}&0^{4}\\ 0^{4}&-I^{4}\end{bmatrix}\).
\[\mathsf{L}^{(6)}=\begin{pmatrix}\nu_{+}^{(6)}\\ \mathbf{e}_{+}^{(6)}\end{pmatrix}, \tag{4}\]
where
\[\nu_{+}^{(6)}=\begin{bmatrix}\Psi_{\nu}^{L+}\\ \Psi_{\nu}^{R+}\\ 0\\ 0\end{bmatrix}\text{ and }\mathbf{e}_{+}^{(6)}=\begin{bmatrix}\Psi_{\mathbf{e}}^{L+}\\ \Psi_{\mathbf{e}}^{R+}\\ 0\\ 0\end{bmatrix} \tag{5}\]
represent the left-handed neutrinos and charged leptons, respectively, while the right-handed leptons5, which are uncharged under \(SU(2)\), are represented by
Footnote 5: Right-handed neutrinos have been disregarded from the formalism since the interest in this work is in the charged lepton masses.
\[\mathsf{e}_{-}^{(6)}=\begin{bmatrix}0\\ 0\\ \Psi_{\mathbf{e}}^{L-}\\ \Psi_{\mathbf{e}}^{R-}\end{bmatrix}. \tag{6}\]
The lepton masses are a consequence of the existence of a Higgs field, which is driven by the action
\[S_{H}=\int\mathrm{d}x^{6}\sqrt{-g}\left\{\left[\left(\nabla_{M}-i\mathbf{g} \tau_{a}\mathsf{W}_{M}^{a}-\frac{i}{2}\mathbf{g}^{\prime}\mathsf{B}_{M}\right) \mathsf{H}\right]^{\dagger}\left(\nabla^{M}-i\mathbf{g}\tau^{b}\mathsf{W}_{b}^ {M}-\frac{i}{2}\mathbf{g}^{\prime}\mathsf{B}^{M}\right)\mathsf{H}-\zeta^{,M} \zeta_{,M}\mathsf{H}^{\dagger}\mathsf{H}+V(\mathsf{H})\right\}, \tag{7}\]
where \(V(\mathsf{H})=\mu^{2}\mathsf{H}^{\dagger}\mathsf{H}-\lambda\left(\mathsf{H}^{ \dagger}\mathsf{H}\right)^{2}\).
For conformally flat metrics (cf. (2)), fermionic fields cannot be localized in the vicinity of the brane. Our proposal thus relies on the existence of a non-trivial bulk profile for the hypercharge gauge boson \(\mathsf{B}=\mathsf{B}_{M}\mathrm{d}x^{M}\) and the scalar field6 \(\zeta\), each defined by
Footnote 6: The scalar field is not essential for the mechanism; however, simpler expressions and more elegant properties are obtained for the resulting reduced effective four-dimensional action in that case.
\[\mathsf{B}_{\mu}=\mathsf{B}_{\mu}\left(x^{\nu}\right),\mathsf{W}_{\mu}^{a}= \mathsf{W}_{\mu}^{a}\left(x^{\nu}\right),\,\mathsf{B}_{y}=\mathsf{W}_{i}^{a}= 0,\,\mathsf{B}_{\theta}=\frac{r}{\rho\mathsf{g}^{\prime}}\frac{F_{,y}}{F}\text { and }\zeta=\frac{1}{2}\ln\left(F\right), \tag{8}\]
where \(F\) is some positive even function of \(y\) and the subscript index "\(,\)" stands for partial derivatives. The gauge and scalar fields defined in Eq. (8) drive the localization of fermionic modes7 [58; 59] and can be interpreted as background fields. The corresponding chiralities are distinguished at the level of the action, exactly as they are identified in four dimensional models; the six dimensional models constructed here do not provide, nor require, any additional structure for distinguishing chiralities through the localization mechanism.
Footnote 7: With the exception of \(\zeta^{,M}\zeta_{,M}\mathsf{H}^{\dagger}\mathsf{H}\), which serves the purpose of achieving a trivial bulk profile for the Higgs, as shall be presented later.
The parameters \(\mu^{2}\), \(\lambda\) and \(m_{0}\) set the scale of the charged lepton masses, which is much smaller than the brane model scale \(1/\rho\), and are treated as a perturbation of the system. Therefore, the terms in \(\mu^{2}\), \(\lambda\) and \(m_{0}\) do not affect the co-dimensional wave functions, which are determined as if the fermions and the Higgs were massless. The zero modes of leptons are thus described by the equations
\[\Gamma^{i}\left(\nabla_{i}+i\mathsf{g}^{\prime}\frac{1}{2}\mathsf{B}_{i} \right)\mathsf{L}_{0}^{(6)}=0, \tag{9}\]
and
\[\Gamma^{i}\left(\nabla_{i}+i\mathsf{g}^{\prime}\mathsf{B}_{i}+\zeta_{,i}\right)\mathsf{e}_{-0}^{(6)}=0. \tag{10}\]
Following a separation of variables technique, Eqs. (9) and (10) are reduced to Schrödinger-like equations and the fermionic zero modes are described by8
Footnote 8: The analogous components built from the opposite chiralities are non-normalizable, since \(L_{0k}^{-}=A_{k}e^{-ik\theta}e^{\frac{k\rho y}{r}}F^{-\frac{3}{2}}\) and \(R_{0k}^{+}=B_{k}e^{-ik\theta}e^{\frac{k\rho y}{r}}F^{-\frac{1}{2}}\).
\[\nu_{+0}^{(6)}=e^{\frac{5A}{2}}\sum_{k}L_{0k}^{+}\begin{bmatrix}\psi_{\nu 0k}^{L+}(x^{\mu})\\ 0\\ 0\\ 0\end{bmatrix},\,\mathbf{e}_{+0}^{(6)}=e^{\frac{5A}{2}}\sum_{k}L_{0k}^{+}\begin{bmatrix}\psi_{\mathbf{e}0k}^{L+}(x^{\mu})\\ 0\\ 0\\ 0\end{bmatrix}\text{ and }\mathbf{e}_{-0}^{(6)}=e^{\frac{5A}{2}}\sum_{k}R_{0k}^{-}\begin{bmatrix}0\\ 0\\ 0\\ \psi_{\mathbf{e}0k}^{R-}(x^{\mu})\end{bmatrix}, \tag{11}\]
where
\[L_{0k}^{+}=C_{k}e^{ik\theta}e^{\frac{k\rho y}{r}}F^{\frac{1}{2}}\text{ and }R_{0k}^{-}=D_{k}e^{ik\theta}e^{\frac{k\rho y}{r}}F^{\frac{1}{2}}, \tag{12}\]
represent the fermionic co-dimensional wave functions, \(k\in\mathbb{Z}\), and \(C_{k}\) and \(D_{k}\) are integration constants.
On the other hand, the zero mode of the Higgs field satisfies the equation
\[\nabla^{i}\nabla_{i}\mathsf{H}-i\frac{\mathsf{g}^{\prime}}{r^{2}}\mathsf{B}_{\theta}\mathsf{H}_{,\theta}-\frac{\mathsf{g}^{\prime 2}}{4r^{2}}\mathsf{B}_{\theta}^{2}\mathsf{H}-\zeta^{,i}\zeta_{,i}\mathsf{H}=0, \tag{13}\]
which after a rescaling and a separation of variables technique, with \(\mathsf{H}=\sum_{k}e^{ik\theta}e^{2A}\hat{\phi}_{k}H_{k}\left(x^{\mu}\right)\), reduces to
\[\hat{\phi}_{k,yy}+2\left(A_{,yy}-2A_{,y}{}^{2}\right)\hat{\phi}_{k}+k\frac{\mathsf{g}^{\prime}\rho^{2}}{r^{2}}\mathsf{B}_{\theta}\hat{\phi}_{k}-\frac{k^{2}\rho^{2}}{r^{2}}\hat{\phi}_{k}=-\mathsf{m}_{k}^{2}\rho^{2}\hat{\phi}_{k}. \tag{14}\]
For \(k=0\), the normalizable zero mode of Eq. (14) is \(\hat{\phi}_{0}=e^{-2A}\), so that the Higgs acquires a trivial bulk profile,
\[\mathsf{H}_{0}=e^{2A}\hat{\phi}_{0}H_{0}\left(x^{\mu}\right)=H_{0}\left(x^{\mu}\right). \tag{15}\]
Substituting the zero modes, Eqs. (11) and (15), into the action (3) and canonically normalizing the fields (cf. App. C), a four dimensional observer models the leptons by the effective action
\[S_{D}^{(4)}=\sum_{k}\int\mathrm{d}^{4}x\left\{i\,\overline{\mathsf{L}}_{k}\,\gamma^{\mu}\!\left[\nabla_{\mu}-\left(i\mathsf{g}\tau_{a}\mathsf{W}_{\mu}^{a}-\frac{i}{2}\mathsf{g}^{\prime}\mathsf{B}_{\mu}\right)\right]\!\mathsf{L}_{k}+i\,\overline{\mathsf{e}}_{Rk}\gamma^{\mu}\!\left(\nabla_{\mu}+i\mathsf{g}^{\prime}\mathsf{B}_{\mu}\right)\mathsf{e}_{Rk}-\mathsf{m}_{k}\left[\overline{\mathsf{L}}_{k}H_{0}\,\mathsf{e}_{Rk}+\overline{\mathsf{e}}_{Rk}H_{0}^{\dagger}\,\mathsf{L}_{k}\right]\right\}, \tag{16}\]
where
\[\mathsf{L}_{k}=\begin{Bmatrix}\nu_{Lk}\\ \mathsf{e}_{Lk}\end{Bmatrix},\;\nu_{Lk}=\begin{bmatrix}\psi_{\nu 0k}^{L+}\\ 0\end{bmatrix},\,\mathsf{e}_{Lk}=\begin{bmatrix}\psi_{\mathbf{e}0k}^{L+}\\ 0\end{bmatrix},\,\mathsf{e}_{Rk}=\begin{bmatrix}0\\ \psi_{\mathbf{e}0k}^{R-}\end{bmatrix} \tag{17}\]
and
\[\mathsf{m}_{k}=\frac{m_{0}\int\mathrm{d}y\,e^{-A}\overline{L_{0k}^{+}}R_{0k}^{-}}{\sqrt{\int\mathrm{d}y\,\left|L_{0k}^{+}\right|^{2}}\sqrt{\int\mathrm{d}y\,\left|R_{0k}^{-}\right|^{2}}\sqrt{\int\mathrm{d}y\,\hat{\phi}_{0}^{2}}}=\frac{m_{0}\int\limits_{-\infty}^{\infty}\mathrm{d}y\,\left(f_{+}+f_{-}\right)Fe^{\frac{2\rho k}{r}y}}{\int\limits_{-\infty}^{\infty}\mathrm{d}y\,Fe^{\frac{2\rho k}{r}y}\sqrt{\int\limits_{-\infty}^{\infty}\mathrm{d}y\,\left(f_{+}+f_{-}\right)^{4}}}. \tag{18}\]
Analogously, a four dimensional observer will model the Higgs field by an effective action, which follows from substituting Eq. (15) into Eq. (7),
\[S_{H}^{(4)}=\int\mathrm{d}x^{4}\left\{\left[\left(\nabla^{\mu}-i\mathsf{g}\tau^{a}W_{a}^{\mu}-\frac{i}{2}\mathsf{g}^{\prime}B^{\mu}\right)H_{0}\right]^{\dagger}\left(\nabla_{\mu}-i\mathsf{g}\tau_{b}W_{\mu}^{b}-\frac{i}{2}\mathsf{g}^{\prime}B_{\mu}\right)H_{0}+V_{eff}(H_{0})\right\}, \tag{19}\]
where \(V_{eff}(H_{0})=\mu_{eff}^{2}H_{0}^{\dagger}H_{0}-\lambda_{eff}\left(H_{0}^{ \dagger}H_{0}\right)^{2}\), with
\[\mu_{eff}^{2}=\mu^{2}\frac{\int\limits_{-\infty}^{\infty}\mathrm{d}ye^{-6A}}{ \int\limits_{-\infty}^{\infty}\mathrm{d}ye^{-4A}}\text{ and }\lambda_{eff}= \lambda\frac{\int\limits_{-\infty}^{\infty}\mathrm{d}ye^{-6A}}{ \left(\int\limits_{-\infty}^{\infty}\mathrm{d}ye^{-4A}\right)^{2}}. \tag{20}\]
After breaking the gauge symmetry, the Higgs field acquires a vacuum expectation value, driven by \(\mu_{eff}\) and \(\lambda_{eff}\), as
\[v=\frac{\mu_{eff}}{\sqrt{\lambda_{eff}}}=\frac{\mu}{\sqrt{\lambda}}\sqrt{ \int\limits_{-\infty}^{\infty}\mathrm{d}y\,(f_{+}+f_{-})^{4}}, \tag{21}\]
and \(\mathsf{m}_{k}v/\sqrt{2}\), from Eqs. (18) and (21), gives the effective masses as measured by a four-dimensional observer.
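As a quick consistency check of the trivial Higgs bulk profile, one can verify symbolically that \(\hat{\phi}_{0}=e^{-2A}\) annihilates the \(k=0\) part of the reduced Higgs equation for an arbitrary warp function \(A(y)\); the short Python sketch below assumes only that the reduced equation takes the Schrödinger-like form of Eq. (14) and that sympy is available.

```python
import sympy as sp

y = sp.symbols('y')
A = sp.Function('A')(y)
phi0 = sp.exp(-2 * A)

# k = 0 part of the reduced Higgs equation: phi'' + 2 (A'' - 2 A'^2) phi
residual = sp.diff(phi0, y, 2) + 2 * (sp.diff(A, y, 2) - 2 * sp.diff(A, y) ** 2) * phi0
print(sp.simplify(residual))  # prints 0 for arbitrary A(y)
```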
The mechanism for the fine-tuning of the charged lepton masses can finally be explained. By assuming \(F=\mathrm{sech}^{\mathsf{a}}\left(y\right)\), which is straightforwardly connected with
\[\mathsf{B}_{\theta}=-\frac{\mathsf{a}r}{\rho\mathsf{g}^{\prime}}\tanh\left(y \right)\text{ and }\zeta=\frac{\mathsf{a}}{2}\ln\left[\mathrm{sech}\left(y\right)\right], \tag{22}\]
and that \(\mathsf{a}/2\leq 2\rho/r<\mathsf{a}\), solely three normalizable fermionic zero modes can be identified, each of them associated with \(k=-1\), \(k=0\) and \(k=1\), which are now to be labeled, respectively, as the electron, tauon and muon, i.e. \(\mathsf{m}_{-1}=m_{e}\), \(\mathsf{m}_{0}=m_{\tau}\) and \(\mathsf{m}_{1}=m_{\mu}\). After some straightforward manipulations one finds
\[\frac{m_{\mu}}{m_{e}}=\frac{\int\limits_{-\infty}^{\infty} \mathrm{d}y\,\operatorname{sech}^{\mathsf{a}}\left(y\right)e^{2\frac{\rho}{r} y}\left(f_{+}+f_{-}\right)}{\int\limits_{-\infty}^{\infty}\mathrm{d}y\, \operatorname{sech}^{\mathsf{a}}\left(y\right)e^{-2\frac{\rho}{r}y}\left(f_{+}+ f_{-}\right)}, \tag{23}\]
and
\[m_{\tau}=\left(m_{\mu}+m_{e}\right)\frac{\Gamma\left(\frac{ \mathsf{a}-\frac{2\rho}{r}}{2}\right)\Gamma\left(\frac{\mathsf{a}+\frac{2\rho} {r}}{2}\right)\int\limits_{-\infty}^{\infty}\mathrm{d}y\,\operatorname{sech}^{ \mathsf{a}}\left(y\right)f_{+}}{2\Gamma\left(\frac{\mathsf{a}}{2}\right)^{2} \int\limits_{-\infty}^{\infty}\mathrm{d}y\,\operatorname{sech}^{\mathsf{a}} \left(y\right)\cosh\left(2\frac{\rho}{r}y\right)f_{+}}. \tag{24}\]
The largeness of the tauonic mass is an effect of canonical normalization and is independent of the space-time asymmetry. If \(\mathsf{a}\) is larger than, but of similar value to, \(2\rho/r\), then the wave functions of the electron and muon, which are not localized at the center, become spread out, while the wave function of the tauon gets localized at the center of the system of coordinates. After canonical normalization, the electronic and muonic masses pick up a very small term when compared with the tauonic term, thus explaining the largeness of the tauon mass. In this way, charged lepton mass constraints can be straightforwardly attained regardless of the asymmetry, since the tauon mass can be made as large as necessary, albeit not correctly valued. On the other hand, the relation between the electron and muon masses relies on the asymmetry of the warp factor. The wave functions of the electron and muon are mirror images of each other, and the overlap of each with an asymmetric warp factor leads to different masses. Yet, not all asymmetric warp factors can yield the correct masses, since a very light electron is only realized when \(f_{+}-f_{-}\) goes to zero much faster than \(f_{+}+f_{-}\) for positive \(y\).
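For concreteness, the mass fraction of Eq. (23) can be evaluated by numerical quadrature once \(f_{\pm}\) are specified. The sketch below assumes the model of Eq. (28) below, for which \(f_{+}+f_{-}=\operatorname{sech}^{l}(y)\,e^{oy}\), writes \(x=2\rho/r\), and uses purely illustrative parameter values; the tauonic ratio of Eq. (24) can be evaluated analogously.

```python
import numpy as np
from scipy.integrate import quad

def logcosh(y):
    # numerically stable log(cosh(y)) for large |y|
    return np.logaddexp(y, -y) - np.log(2.0)

def ratio_mu_over_e(a, x, l, o, ylim=60.0):
    """Eq. (23) with f_+ + f_- = sech^l(y) e^{oy} and x = 2*rho/r.

    Convergence of the integrals requires a + l > o + x.
    """
    num = quad(lambda y: np.exp((o + x) * y - (a + l) * logcosh(y)), -ylim, ylim)[0]
    den = quad(lambda y: np.exp((o - x) * y - (a + l) * logcosh(y)), -ylim, ylim)[0]
    return num / den

print(ratio_mu_over_e(a=10.0, x=9.0, l=4, o=2))  # illustrative values only
```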
Since the SM particles are represented by the zero modes, the massive modes are associated with beyond-SM physics. From a phenomenological point of view, it is important to realize models that allow for the existence of a mass gap [60], between the zero and massive modes, in the spectrum of leptons. The energy scale at which the massive modes can be excited is then fixed by this gap, and its existence is relevant for distinguishing the footprints of the massless modes, identified with stable four dimensional SM particles, from those coming from the massive modes, either discrete or continuous. When no mass gap is present, there exist several massive modes with masses so small as to be indistinguishable from the massless ones. Specifically, for configurations with a non-trivial profile for the gauge and scalar field, like Eq. (22), the fermionic massive modes are solutions of Morse-Rosen equations11, for which a discrete set of eigenstates can be identified, each associated with the mass eigenvalues
Footnote 11: For further details see App. D.
\[m_{jk}^{+}=\frac{2}{\rho}\frac{\sqrt{\left(j+1\right)\left(\mathsf{a}-j-1 \right)}}{\left|\mathsf{a}-2j-2\right|}\sqrt{\left(\frac{\mathsf{a}}{2}-j-1 \right)^{2}-\frac{k^{2}\rho^{2}}{r^{2}}}, \tag{25}\]
and
\[m_{jk}^{-}=\frac{1}{\rho}\frac{\sqrt{\left(j+1\right)\left(2\mathsf{a}-j-1 \right)}}{\left|\mathsf{a}-j-1\right|}\sqrt{\left(\mathsf{a}-j-1\right)^{2}- \frac{k^{2}\rho^{2}}{r^{2}}}, \tag{26}\]
where \(j\) is an integer, \(0\leq j<\mathsf{a}/2-1\), and \(m_{jk}^{\pm}\) denote the discrete masses of the positive and negative chirality components of the spinors, respectively. The continuous modes are associated with masses
\[m^{+}\geq\frac{1}{2\rho}\left|\mathsf{a}-\frac{2\rho\left|k\right|}{r}\right| \text{ and }m^{-}\geq\frac{1}{\rho}\left|\mathsf{a}-\frac{\rho\left|k\right|}{r}\right|, \tag{27}\]
therefore, a mass gap between the zero and massive modes is found whenever the zero modes are normalizable.
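Since Eqs. (25)–(27) are closed-form in \(\mathsf{a}\) and \(\rho/r\), the gap between the zero mode and the lightest massive state can be checked directly. The following is a minimal Python sketch, in units of \(1/\rho\); the parameter values are illustrative, and the clipping merely flags levels whose square-root argument is negative.

```python
import numpy as np

def discrete_masses(a, rho_over_r, k):
    """Discrete masses m_jk^+/- of Eqs. (25)-(26), in units of 1/rho."""
    s = rho_over_r
    js = np.arange(0, max(int(np.ceil(a / 2 - 1)), 0))  # 0 <= j < a/2 - 1
    m_plus = (2 * np.sqrt((js + 1) * (a - js - 1)) / np.abs(a - 2 * js - 2)
              * np.sqrt(np.clip((a / 2 - js - 1) ** 2 - (k * s) ** 2, 0, None)))
    m_minus = (np.sqrt((js + 1) * (2 * a - js - 1)) / np.abs(a - js - 1)
               * np.sqrt(np.clip((a - js - 1) ** 2 - (k * s) ** 2, 0, None)))
    return m_plus, m_minus

def continuum_thresholds(a, rho_over_r, k):
    """Continuum thresholds of Eq. (27), in units of 1/rho."""
    s = rho_over_r
    return 0.5 * abs(a - 2 * s * abs(k)), abs(a - s * abs(k))

print(discrete_masses(5.0, 2.0, 1), continuum_thresholds(5.0, 2.0, 1))
```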
To exemplify the mechanism in effect we propose a model for which \(f_{+}=\mathrm{sech}^{l}\left(y\right)\cosh\left(oy\right)\) and \(f_{-}=\mathrm{sech}^{l}\left(y\right)\sinh\left(oy\right)\), which, after substitution into Eq. (2), leads to
\[\mathbf{g}=\mathrm{sech}^{2l}\left(y\right)e^{2oy}\left(\eta_{\mu\nu}\mathrm{d}x ^{\mu}\mathrm{d}x^{\nu}+r^{2}\mathrm{d}\theta^{2}+\rho^{2}\mathrm{d}y^{2} \right). \tag{28}\]
The fractions of the charged lepton masses associated with the metric (28), calculated from Eqs. (23) and (24), become
\[\frac{m_{\mu}}{m_{e}}=\frac{\Gamma\left(\frac{l-o+\mathsf{a}-\frac{2\rho}{r}}{2}\right)\Gamma\left(\frac{l+o+\mathsf{a}+\frac{2\rho}{r}}{2}\right)}{\Gamma\left(\frac{l+o+\mathsf{a}-\frac{2\rho}{r}}{2}\right)\Gamma\left(\frac{l-o+\mathsf{a}+\frac{2\rho}{r}}{2}\right)}, \tag{29}\]
and
\[\frac{m_{\tau}}{m_{\mu}}=\frac{\Gamma\left(\frac{\mathsf{a}-\frac{2\rho}{r}}{2}\right)\Gamma\left(\frac{\mathsf{a}+\frac{2\rho}{r}}{2}\right)\Gamma\left(\frac{l+\mathsf{a}-o}{2}\right)\Gamma\left(\frac{l+\mathsf{a}+o}{2}\right)}{\Gamma\left(\frac{\mathsf{a}}{2}\right)^{2}\Gamma\left(\frac{l+\mathsf{a}-\frac{2\rho}{r}-o}{2}\right)\Gamma\left(\frac{l+\mathsf{a}+\frac{2\rho}{r}+o}{2}\right)}. \tag{30}\]
If \(o\) is an integer and \(l-o\) is an even integer, then Eqs. (29) and (30) become polynomial equations. Particularly, for \(o=2\) and \(l=4\) one can solve Eqs. (29) and (30) analytically to find \(\mathsf{a}=34.9562\) and \(\mathsf{a}-2\rho/r=0.28488\). In fact, there are many combinations of \(o\) and \(l\) for which the lepton spectrum is achievable. Fig. 1 depicts the values of \(l\), \(\mathsf{a}\) and \(\mathsf{a}-2\rho/r\) with fixed \(l-o\) for which the proper fractions of the masses and Koide's formula are realized for the metric Eq. (28). Correspondingly, Fig. 2 depicts the values of \(l\), \(l-o\) and \(\mathsf{a}-2\rho/r\) with fixed \(\mathsf{a}\) for which the proper fractions of the masses and Koide's formula are realized for the metric Eq. (28). The intersections of the curves in Figs. 1 and 2 identify the parameter values that lead to the charged lepton mass spectrum.
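The solvability claimed above can also be cross-checked numerically: Eqs. (29) and (30) are smooth in \(\mathsf{a}\) and \(x=2\rho/r\), so a standard root finder applied to their logarithms recovers parameter values for given mass ratios. A hedged Python sketch follows; the initial guess is an assumption, and the branch the solver lands on should be compared with the values quoted in the text.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import gammaln

def log_eq29(a, x, l, o):  # log of Eq. (29); x = 2*rho/r
    return (gammaln((l - o + a - x) / 2) + gammaln((l + o + a + x) / 2)
            - gammaln((l + o + a - x) / 2) - gammaln((l - o + a + x) / 2))

def log_eq30(a, x, l, o):  # log of Eq. (30)
    return (gammaln((a - x) / 2) + gammaln((a + x) / 2)
            + gammaln((l + a - o) / 2) + gammaln((l + a + o) / 2)
            - 2 * gammaln(a / 2)
            - gammaln((l + a - x - o) / 2) - gammaln((l + a + x + o) / 2))

def conditions(p, l=4, o=2):
    a, x = p
    return [log_eq29(a, x, l, o) - np.log(206.768),   # m_mu / m_e
            log_eq30(a, x, l, o) - np.log(16.817)]    # m_tau / m_mu

a, x = fsolve(conditions, [35.0, 34.7])
print(a, a - x)
```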
The lepton mass gap, between the continuous and massless modes, is \(m\sim 10^{-2}/\rho\). A realistic mass gap, for small values of \(l\), is thus achievable for \(\rho\ll 10^{-18}\,\mathsf{m}\), since \(m_{0}v/\sqrt{2}\approx m_{\tau}=1.7\,GeV\). On the other hand, for large values of \(l\), tiny values of \(1-o/l\) and \(\mathsf{a}\) of the order of unity, a realistic mass gap is achievable for \(\rho\ll 2^{l}\Gamma(\mathsf{a})/\left[l^{\mathsf{a}/2}\Gamma(\mathsf{a}/2) \right]10^{-16}\mathsf{m}\), since \(m_{0}v/\sqrt{2}\approx l^{\mathsf{a}/2}\Gamma(\mathsf{a}/2)/\left[2^{l} \Gamma(\mathsf{a})\right]GeV\). Therefore, for \(l\sim 50\) the mass gap is achievable for \(\rho\ll 1\,\mathsf{m}\). The mass gap for the Higgs field follows similar patterns.
To conclude, from a classical perspective, fermions are localized at the vicinity of an asymmetric conformally flat brane by the inclusion of a non-trivial bulk profile for the hypercharge gauge boson and a dark scalar field; and the proper mass fractions are a consequence of canonical normalization and the overlap between the fermionic wave functions with the asymmetric warp factor. The leptons of the SM are represented by the zero modes of six dimensional spinors, leading to a tower of linear independent modes driven by an integer value. After dimensional reduction, the setup reproduces the SM, with massless neutrinos, and there is no mismatch between the flavor and mass bases of the charged and neutral lepton sector. Otherwise, if a mechanism for neutrinos masses is included, the possibility of lepton flavor mixing and the non-conservation of lepton number apparently arises. Noticing that flavor mixing is indeed relevant for the neutrino sector, fitting neutrino masses in intersecting braneworlds have been recently discussed [61] in a framework where the near-tribimaximal mixing for neutrinos may arise naturally due to the structure of the Yukawa matrices. Consistency with the quark and charged lepton mass matrices [62; 63; 64; 65] in combination with obtaining near-tribimaximal mixing fixes the Dirac neutrino mass matrix which is then driven by the seesaw mechanism for different choices of right-handed neutrino masses. In this formulation, running the obtained neutrino parameters down to the electroweak scale via the renormalization group equations leads to neutrino mass predictions [61]. Of course, given the intricate pattern of mass hierarchies and mixings between the different generations of quarks and neutrinos, our model was constrained to the charged lepton sector, where the effects of flavor mixing due to the coherent superpositions of flavor defined leptons, as assumed for neutrinos, are suppressed [66].
Figure 1: (Color online) (a) The charged lepton mass spectrum associated with the metric (28) for \(l-o=1\). (b) The charged lepton mass spectrum associated with the metric (28) for \(l-o=2\). The solid black, solid red and dashed black lines represent the equations \(m_{\mu}/m_{e}=206.768\), \(m_{\tau}/m_{\mu}=16.817\) and \(\mathcal{K}=2/3\), respectively. Results are for triple intersecting points at \(l=6,\,7,\,8,\text{ and }9\).
A mechanism for explaining the spectrum of the charged lepton masses was therefore built upon two parameters: the gauge field strength, \(\mathsf{a}\), and the ratio between the co-dimensional sizes, \(\rho/r\), both related to a seminal \(5+1\)-dimensional braneworld scenario discussed in [6]. In particular, from a model represented by metric Eq. (28), which was proposed for the sole reason of achieving analytical expressions, the proper fractions between the electron, muon and tauon masses were obtained, solely requiring that \(\mathsf{a}-2\rho/r\) be tiny. Even when \(\mathsf{a}\) is of the order of unity, the correct fractions for the masses can be realized when the warp factor presents a large asymmetry, which for Eq. (28) is achievable when \(1-o/l\) is tiny. Furthermore, other asymmetric warp factors, in principle, also have the necessary structure to realize the needed mass spectrum, but an adjustment of the parameter values would be necessary. Even though surprisingly simple integrals emerged once the metric Eq. (28) was assumed, other setups may not be as tractable when it comes to finding analytical values for the masses. In this matter, relevant for future investigations, the same mechanism could also be employed for describing the quark and neutrino mass hierarchy problems, since their diagonal masses should satisfy similar spectra. Of course, specific features related to quark or neutrino interactions, and their implications for the mass generation mechanism, make the problem considerably more complex for such a preliminary analysis.
H.M.G. is grateful for the financial support provided by CNPq (Grant No. 141924/2019-5). The work of A.E.B. is supported by the Brazilian Agencies FAPESP (Grant No. 2023/00392-8) and CNPq (Grant No. 301485/2022-4).
## Appendix A Six-dimensional spinorial fields
To investigate bulk spinorial matter, one first considers that fermion localization on brane-worlds is usually achieved when the 6-dimensional Dirac algebra is realized by the objects \(\varGamma^{M}=\mathfrak{e}^{M}_{\phantom{M}\bar{N}}\varGamma^{\bar{N}}\), where \(\mathfrak{e}^{M}_{\phantom{M}\bar{N}}\) denotes a 6-dimensional vielbein, \(\varGamma^{M}\) satisfy the Clifford relation \(\left\{\varGamma^{M},\varGamma^{N}\right\}=2g^{MN}\), and \(\varGamma^{\bar{N}}\) are the gamma matrices in 6-dimensional flat
Figure 2: (Color online) (a) The charged lepton mass spectrum associated with the metric (28) for \(\mathsf{a}=1\). (b) The charged lepton mass spectrum associated with the metric (28) for \(\mathsf{a}=2\). The solid black, solid red and dashed black lines represent the equations \(m_{\mu}/m_{e}=206.768\), \(m_{\tau}/m_{\mu}=16.817\) and \(\mathcal{K}=2/3\), respectively. Results are for triple intersecting points at \(l=3\), \(4\), \(5\), and \(6\).
space-time, for which the following representation12 is chosen
Footnote 12: In the representation below, the chirality matrix is diagonal, i.e., \(\varGamma^{\bar{7}}=\begin{bmatrix}I^{4}&0^{4}\\ 0^{4}&-I^{4}\end{bmatrix}\).
\[\varGamma^{\bar{\mu}}=\begin{bmatrix}0^{4}&\gamma^{\mu}\\ \gamma^{\mu}&0^{4}\end{bmatrix},\;\varGamma^{\bar{4}}=-\begin{bmatrix}0^{4}&\gamma^{5}\\ \gamma^{5}&0^{4}\end{bmatrix},\;\text{and}\;\varGamma^{\bar{5}}=i\begin{bmatrix}0^{4}&-I^{4}\\ I^{4}&0^{4}\end{bmatrix}. \tag{12}\]
An \(SU(2)_{L}\times U(1)\) action of a massless spinor, in 6 dimensions, can be expressed as
\[S_{D}=\int\mathrm{d}^{6}x\sqrt{-g}\;\left[\overline{\mathsf{L}}^{(6)}\varGamma^{M}\left(\nabla_{M}-i\mathfrak{g}\tau_{a}\mathsf{W}_{M}^{a}+\mathfrak{g}^{\prime}\frac{i}{2}\mathsf{B}_{M}\right)\mathsf{L}^{(6)}+\overline{\mathsf{e}}_{-}^{(6)}\varGamma^{M}\left(\nabla_{M}+i\mathfrak{g}^{\prime}\mathsf{B}_{M}+\zeta_{,M}\right)\mathsf{e}_{-}^{(6)}\right], \tag{13}\]
where \(\nabla_{M}:=\partial_{M}+\mathfrak{C}_{M}\), \(\zeta\) is some dark scalar field, \(\mathsf{B}_{M}\) is the hypercharge gauge boson, \(\mathsf{W}_{M}^{a}\) are the \(SU(2)\) gauge bosons, \(\mathfrak{g}\) and \(\mathfrak{g}^{\prime}\) are the \(SU(2)\) and \(U(1)\) couplings, \(\tau_{a}=\sigma_{a}/2\) are the \(SU(2)\) generators, and \(\mathfrak{C}_{M}\) is the spin connection of \(\left(\mathbbm{E}^{6},\mathbf{g}\right)\), defined by
\[\mathfrak{C}_{M}=\frac{1}{4}\left\{-\frac{1}{2}\mathfrak{\epsilon}_{M}{}^{ \bar{T}}\mathfrak{\epsilon}^{T\bar{R}}\mathfrak{\epsilon}^{Q\bar{S}}\partial_ {[\bar{T}}\,\mathfrak{\epsilon}_{Q]\bar{T}}+\frac{1}{2}\mathfrak{\epsilon}^{S[ \bar{R}}\partial_{[\bar{M}}\,\mathfrak{\epsilon}_{\,S]}{}^{\bar{S}]}\right\} \varGamma_{\bar{R}}\varGamma_{\bar{S}}. \tag{14}\]
The spin connection \(\mathfrak{C}_{M}\), compatible with a conformally flat metric13, can be determined as follows
Footnote 13: That is a metric like \(\mathbf{g}=e^{-2A}\left(\eta_{\mu\nu}\mathrm{d}\mathbf{x}^{\mu}\mathrm{d}\mathbf{x}^{\nu}+ r^{2}\mathrm{d}\theta^{2}+\rho^{2}\mathrm{d}y^{2}\right)\).
\[\mathfrak{C}_{\mu}=\mathfrak{A}_{\mu}-\frac{1}{4}A_{,i}\mathfrak{e}^{i\bar{\jmath}}\hat{\mathfrak{e}}_{\mu}{}^{\bar{\rho}}\left(\varGamma_{\bar{\jmath}}\varGamma_{\bar{\rho}}-\varGamma_{\bar{\rho}}\varGamma_{\bar{\jmath}}\right), \tag{15}\]
and
\[\mathfrak{C}_{i}=\frac{1}{4}\varGamma_{\bar{\jmath}}\varGamma_{\bar{\jmath}} \left\{-\frac{1}{2}\mathfrak{\epsilon}_{i}{}^{\bar{\jmath}}\mathfrak{ \epsilon}^{k\bar{r}}\mathfrak{\epsilon}^{l\bar{s}}\partial_{[k}\mathfrak{ \epsilon}_{l]\bar{j}}+\frac{1}{2}\mathfrak{\epsilon}^{j[\bar{s}}\partial_{[j} \mathfrak{\epsilon}_{i]}{}^{\bar{r}]}\right\}=\mathfrak{B}_{i}. \tag{16}\]
Here, \(\mathfrak{A}_{\mu}\) and \(\mathfrak{B}_{i}\) are the spin connections compatible with the space-time \(\left(\mathbb{M}^{4},\mathbf{\omega}\right)\) and the internal space \(\left(\mathbb{B}^{2},\mathbf{\sigma}\right)\), respectively. The left-handed leptons pair up to transform under \(SU(2)\),
\[\mathsf{L}^{(6)}=\begin{Bmatrix}\nu_{+}^{(6)}\\ \mathsf{e}_{+}^{(6)}\end{Bmatrix}, \tag{17}\]
where
\[\nu_{+}^{(6)}=\begin{bmatrix}\Psi_{\nu}^{L+}\\ \Psi_{\nu}^{R+}\\ 0\\ 0\end{bmatrix}\text{ and }\mathsf{e}_{+}^{(6)}=\begin{bmatrix}\Psi_{\mathbf{e}}^{L+}\\ \Psi_{\mathbf{e}}^{R+}\\ 0\\ 0\end{bmatrix} \tag{18}\]
represent the left-handed neutrinos and charged leptons, respectively. Right-handed leptons14, which are uncharged under \(SU(2)\), are represented by
Footnote 14: Right-handed neutrinos have been disregarded from the formalism since the interest in this work is in the charged lepton masses.
\[\mathsf{e}_{-}^{(6)}=\begin{bmatrix}0\\ 0\\ \Psi_{\mathbf{e}}^{L-}\\ \Psi_{\mathbf{e}}^{R-}\end{bmatrix}. \tag{19}\]
Varying the action \(S_{D}\) with respect to \(\overline{\mathsf{L}}^{(6)}\) or \(\overline{\mathsf{e}}_{-}^{(6)}\) yields the Dirac equation for 6-dimensional curved space-time,
\[\left\{\hat{\varGamma}^{\mu}\mathcal{D}_{\mu}+e^{-A}\varGamma^{i}\left[ \partial_{i}+\mathfrak{B}_{i}-2A_{,i}+\frac{1-\varGamma^{\bar{\imath}}}{2} \left(i\mathfrak{g}^{\prime}\mathsf{B}_{i}+\zeta_{,i}\right)-\frac{1+\varGamma ^{\bar{\imath}}}{2}\left(i\mathfrak{g}\tau_{a}\mathsf{W}_{i}^{a}-\frac{i}{2} \mathfrak{g}^{\prime}\mathsf{B}_{i}\right)\right]\right\}\Psi^{(6)}=0, \tag{20}\]
where \(\mathcal{D}_{\mu}=\partial_{\mu}+\mathfrak{A}_{\mu}+\frac{1-\Gamma^{7}}{2}i\mathfrak{g}^{\prime}\mathsf{B}_{\mu}-\frac{1+\Gamma^{7}}{2}\left(i\mathfrak{g}\tau_{a}\mathsf{W}_{\mu}^{a}-\frac{i}{2}\mathfrak{g}^{\prime}\mathsf{B}_{\mu}\right)\) represents the usual covariant derivative of \(\left(\mathbb{M}^{4},\boldsymbol{\omega}\right)\), \(\Psi^{(6)}\) can represent either \(\mathsf{L}^{(6)}\) or \(\mathbf{e}_{-}^{(6)}\), and \(\hat{\Gamma}^{\mu}=\hat{\mathfrak{e}}^{\mu}{}_{\bar{\rho}}\Gamma^{\bar{\rho}}=e^{-A}\mathfrak{e}^{\mu}{}_{\bar{\rho}}\Gamma^{\bar{\rho}}\), which is independent of the co-dimensions.
Eq. (34) is the Dirac equation for a general braneworld of any dimension15. For six dimensional spaces, a choice16 of a vielbein compatible with a conformally flat metric is simply \(\mathfrak{e}_{4}{}^{\bar{4}}=\mathfrak{e}_{5}{}^{\bar{5}}=e^{-A}\) and \(\mathfrak{e}^{4}{}_{\bar{4}}=\mathfrak{e}^{5}{}_{\bar{5}}=e^{A}\). Therefore, the Dirac equation for conformally flat six-dimensional braneworlds becomes
Footnote 15: Which is true since \(\mathfrak{B}_{i}\) can represent the spin connection of any co-dimensional space.
Footnote 16: There exists infinitely many choices of vielbeins compatible with some metric.
\[\left(\hat{\Gamma}^{\mu}\mathcal{D}_{\mu}+\mathcal{D}^{\bar{i}}\Gamma_{\bar{ i}}\right)\Psi^{(6)}=0, \tag{35}\]
where one defined the operators
\[\mathcal{D}^{\bar{5}}=\frac{1}{\rho}\left[\partial_{y}-\frac{5}{2}A_{,y}+ \frac{1-\Gamma^{7}}{2}\left(i\mathfrak{g}^{\prime}\mathsf{B}_{y}+\zeta_{,y} \right)-\frac{1+\Gamma^{7}}{2}\left(i\mathfrak{g}\tau_{a}\mathsf{W}_{y}^{a}- \frac{i}{2}\mathfrak{g}^{\prime}\mathsf{B}_{y}\right)\right], \tag{36}\]
and
\[\mathcal{D}^{\bar{4}}=\frac{1}{r}\left[\partial_{\theta}-\frac{5}{2}A_{,\theta }+\frac{1-\Gamma^{7}}{2}\left(i\mathfrak{g}^{\prime}\mathsf{B}_{\theta}+ \zeta_{,\theta}\right)-\frac{1+\Gamma^{7}}{2}\left(i\mathfrak{g}\tau_{a} \mathsf{W}_{\theta}^{a}-\frac{i}{2}\mathfrak{g}^{\prime}\mathsf{B}_{\theta} \right)\right]. \tag{37}\]
Eq. (35) can be refined by separating the different chiralities17, yielding the two equations
Footnote 17: Any six dimensional spinor can be broken into \(\Psi_{(6)}=\Psi_{(6)}^{+}+\Psi_{(6)}^{-}=\begin{bmatrix}\Psi_{+}\\ 0\end{bmatrix}+\begin{bmatrix}0\\ \Psi_{-}\end{bmatrix}=\begin{bmatrix}\Psi_{+}\\ \Psi_{-}\end{bmatrix}\). Therefore, when \(\Psi_{(6)}\) represents \(\mathsf{L}^{(6)}\) or \(\mathbf{e}_{-}^{(6)}\) it only contains \(\Psi_{(6)}^{+}\) or \(\Psi_{(6)}^{-}\), respectively.
\[i\hat{\gamma}^{\nu}\mathcal{D}_{\nu}^{-}\Psi_{-}-\mathcal{D}_{-}^{\bar{4}} \gamma^{5}\Psi_{-}-i\mathcal{D}_{-}^{\bar{5}}\Psi_{-}=0, \tag{38}\]
and
\[i\hat{\gamma}^{\nu}\mathcal{D}_{\nu}^{+}\Psi_{+}-\mathcal{D}_{+}^{\bar{4}}\gamma^{5}\Psi_{+}+i\mathcal{D}_{+}^{\bar{5}}\Psi_{+}=0, \tag{39}\]
where \(\Psi_{\pm}\) are to be understood as four dimensional spinors, and
\[\mathcal{D}_{\mu}^{+}=\partial_{\mu}+\mathfrak{A}_{\mu}-\left(i\mathfrak{g} \tau_{a}\mathsf{W}_{\mu}^{a}-\frac{i}{2}\mathfrak{g}^{\prime}\mathsf{B}_{\mu} \right), \tag{40}\]
\[\mathcal{D}_{\mu}^{-}=\partial_{\mu}+\mathfrak{A}_{\mu}+i\mathfrak{g}^{\prime }\mathsf{B}_{\mu}, \tag{41}\]
\[\mathcal{D}_{+}^{\bar{5}}=\frac{1}{\rho}\left[\partial_{y}-\frac{5}{2}A_{,y}-i \mathfrak{g}\tau_{a}\mathsf{W}_{y}^{a}+\frac{i}{2}\mathfrak{g}^{\prime}\mathsf{ B}_{y}\right], \tag{42}\]
\[\mathcal{D}_{-}^{\bar{5}}=\frac{1}{\rho}\left[\partial_{y}+\zeta_{,y}-\frac{5}{ 2}A_{,y}+i\mathfrak{g}^{\prime}\mathsf{B}_{y}\right], \tag{43}\]
\[\mathcal{D}_{+}^{\bar{4}}=\frac{1}{r}\left[\partial_{\theta}-\frac{5}{2}A_{, \theta}-i\mathfrak{g}\tau_{a}\mathsf{W}_{\theta}^{a}+\frac{i}{2}\mathfrak{g}^{ \prime}\mathsf{B}_{\theta}\right], \tag{44}\]
and
\[\mathcal{D}_{-}^{\bar{4}}=\frac{1}{r}\left[\partial_{\theta}+\zeta_{,\theta}- \frac{5}{2}A_{,\theta}+i\mathfrak{g}^{\prime}\mathsf{B}_{\theta}\right]. \tag{45}\]
## Appendix B The quantum analogue problem for spinors
Suppose that the gauge and scalar fields satisfy
\[\mathsf{B}_{\mu}=\mathsf{B}_{\mu}\left(x^{\nu}\right),\,\mathsf{W}_{\mu}^{a}= \mathsf{W}_{\mu}^{a}\left(x^{\nu}\right),\,\mathsf{B}_{y}=\mathsf{W}_{i}^{a}=0,\,\mathsf{B}_{\theta}=\mathsf{B}_{\theta}\left(y\right)\text{ and }\zeta=\zeta\left(y\right). \tag{10}\]
Thus a separation of variables technique,
\[\Psi_{\pm}=\sum_{m}\begin{bmatrix}L_{m}^{\pm}(\theta,y)\,\psi_{m\pm}^{L}(x^{\mu})\\ R_{m}^{\pm}(\theta,y)\,\psi_{m\pm}^{R}(x^{\mu})\end{bmatrix}=\sum_{m}\left[L_{m}^{\pm}(\theta,y)\,\Psi_{m\pm}^{L}(x^{\mu})+R_{m}^{\pm}(\theta,y)\,\Psi_{m\pm}^{R}(x^{\mu})\right], \tag{11}\]
can be employed for Eqs. (38) and (39), yielding the equations
\[\hat{\gamma}^{\mu}\mathcal{D}_{\mu}^{\pm}\Psi_{m\pm}^{L}=m\Psi_{m\pm}^{R}, \quad\hat{\gamma}^{\mu}\mathcal{D}_{\mu}^{\pm}\Psi_{m\pm}^{R}=m\Psi_{m\pm}^{L}, \tag{12}\]
and for the co-dimensional components
\[mR_{m}^{\pm}-\mathcal{D}_{\pm}^{\bar{4}}L_{m}^{\pm}\pm i\mathcal{D}_{\pm}^{\bar{5}}L_{m}^{\pm}=0, \tag{13}\]
and
\[mL_{m}^{\pm}+\mathcal{D}_{\pm}^{\bar{4}}R_{m}^{\pm}\pm i\mathcal{D}_{\pm}^{ \bar{5}}R_{m}^{\pm}=0. \tag{14}\]
A Schrödinger-like equation can thus be obtained for the left and right-handed modes of spinors18, respectively, as
Footnote 18: For the zero modes no such refinement is needed; the first-order equations above can be solved as presented.
\[m^{2}L_{m}^{+}+\mathcal{D}_{+}^{\bar{4}}\mathcal{D}_{+}^{\bar{4}}L_{m}^{+}-i \left[\mathcal{D}_{+}^{\bar{4}},\mathcal{D}_{+}^{\bar{5}}\right]L_{m}^{+}+ \mathcal{D}_{+}^{\bar{5}}\mathcal{D}_{+}^{\bar{5}}L_{m}^{+}=0, \tag{15}\]
\[m^{2}R_{m}^{+}+\mathcal{D}_{+}^{\bar{4}}\mathcal{D}_{+}^{\bar{4}}R_{m}^{+}+i \left[\mathcal{D}_{+}^{\bar{4}},\mathcal{D}_{+}^{\bar{5}}\right]R_{m}^{+}+ \mathcal{D}_{+}^{\bar{5}}\mathcal{D}_{+}^{\bar{5}}R_{m}^{+}=0, \tag{16}\]
\[m^{2}L_{m}^{-}+\mathcal{D}_{-}^{\bar{4}}\mathcal{D}_{-}^{\bar{4}}L_{m}^{-}+i \left[\mathcal{D}_{-}^{\bar{4}},\mathcal{D}_{-}^{\bar{5}}\right]L_{m}^{-}+ \mathcal{D}_{-}^{\bar{5}}\mathcal{D}_{-}^{\bar{5}}L_{m}^{-}=0, \tag{17}\]
and
\[m^{2}R_{m}^{-}+\mathcal{D}_{-}^{\bar{4}}\mathcal{D}_{-}^{\bar{4}}R_{m}^{-}-i \left[\mathcal{D}_{-}^{\bar{4}},\mathcal{D}_{-}^{\bar{5}}\right]R_{m}^{-}+ \mathcal{D}_{-}^{\bar{5}}\mathcal{D}_{-}^{\bar{5}}R_{m}^{-}=0, \tag{18}\]
where \(\left[\mathcal{D}^{\bar{4}},\mathcal{D}^{\bar{5}}\right]=\mathcal{D}^{\bar{4}} \mathcal{D}^{\bar{5}}-\mathcal{D}^{\bar{5}}\mathcal{D}^{\bar{4}}\), which is generally non-null. On the other hand, the action for spinors is then given by
\[S_{d}= \sum_{\hat{m}}\sum_{m}\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{-5A }\overline{R}_{\hat{m}}R_{m}\int\mathrm{d}^{4}x\sqrt{-\omega}\,\overline{\Psi} _{\hat{m}+}^{R}\left(\hat{\gamma}^{\mu}\mathcal{D}_{\mu}\Psi_{m+}^{R}-m\, \Psi_{m+}^{L}\right)\] \[+\sum_{\hat{m}}\sum_{m}\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{ -5A}\overline{L}_{\hat{m}}L_{m}\int\mathrm{d}^{4}x\sqrt{-\omega}\,\overline{ \Psi}_{\hat{m}+}^{L}\left(\hat{\gamma}^{\mu}\mathcal{D}_{\mu}\Psi_{m+}^{L}-m\, \Psi_{m+}^{R}\right)\] \[+\sum_{\hat{m}}\sum_{m}\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{ -5A}\overline{L}_{\hat{m}}L_{m}\int\mathrm{d}^{4}x\sqrt{-\omega}\,\overline{ \Psi}_{\hat{m}-}^{R}\left(\hat{\gamma}^{\mu}\mathcal{D}_{\mu}\Psi_{m-}^{R}-m\, \Psi_{m-}^{L}\right)\] \[+\sum_{\hat{m}}\sum_{m}\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{ -5A}\overline{R}_{\hat{m}}R_{m}\int\mathrm{d}^{4}x\sqrt{-\omega}\,\overline{ \Psi}_{\hat{m}-}^{L}\left(\hat{\gamma}^{\mu}\mathcal{D}_{\mu}\Psi_{m-}^{L}-m\, \Psi_{m-}^{R}\right) \tag{19}\]
which corresponds to a collection of decoupled four dimensional Dirac problems if and only if the modes are
1. Normalizable, i.e. \(\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{-5A}\overline{R}_{m}R_{m}=1\) and \(\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{-5A}\overline{L}_{m}L_{m}=1,\,\,\,\forall m\);
2. Orthogonal, i.e. \(\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{-5A}\overline{R}_{\hat{m}}R_{m}=0\) and \(\int\mathrm{d}^{2}x\sqrt{\hat{\sigma}}e^{-5A}\overline{L}_{\hat{m}}L_{m}=0\) if \(\hat{m}\neq m\).
When the above conditions are satisfied then the action reads
\[S_{d}=\sum_{m}\int\mathrm{d}^{4}x\sqrt{-\omega}\,\overline{\Psi}_{m}^{+}\left( \hat{\gamma}^{\mu}\mathcal{D}_{\mu}\Psi_{m}^{+}-m\,\Psi_{m}^{+}\right)+\sum_{m }\int\mathrm{d}^{4}x\sqrt{-\omega}\,\overline{\Psi}_{m}^{-}\left(\hat{\gamma} ^{\mu}\mathcal{D}_{\mu}\Psi_{m}^{-}-m\,\Psi_{m}^{-}\right). \tag{111}\]
The first-order equations for the co-dimensional components can be readily integrated for the zero modes (\(m=0\)), leading to
\[L_{0}^{+}=e^{\frac{5A}{2}}\sum_{k}L_{0k}^{+}=\sum_{k}C_{k}^{L+}e^{ik\theta}e^{\frac{5A}{2}}e^{\frac{k\rho y}{r}}e^{\frac{\mathsf{g}^{\prime}\rho}{2r}\int\mathsf{B}_{\theta}\mathrm{d}y}, \tag{112}\]
\[R_{0}^{+}=e^{\frac{5A}{2}}\sum_{k}R_{0k}^{+}=\sum_{k}C_{k}^{R+}e^{-ik\theta}e^{\frac{5A}{2}}e^{\frac{k\rho y}{r}}e^{-\frac{\mathsf{g}^{\prime}\rho}{2r}\int\mathsf{B}_{\theta}\mathrm{d}y}, \tag{113}\]
\[L_{0}^{-}=e^{\frac{5A}{2}}\sum_{k}L_{0k}^{-}=\sum_{k}C_{k}^{L-}e^{-ik\theta}e^{\frac{5A}{2}}e^{\frac{k\rho y}{r}}e^{-\frac{\mathsf{g}^{\prime}\rho}{r}\int\mathsf{B}_{\theta}\mathrm{d}y}e^{-\zeta}, \tag{114}\]
and
\[R_{0}^{-}=e^{\frac{5A}{2}}\sum_{k}R_{0k}^{-}=\sum_{k}C_{k}^{R-}e^{ik\theta}e^{\frac{5A}{2}}e^{\frac{k\rho y}{r}}e^{\frac{\mathsf{g}^{\prime}\rho}{r}\int\mathsf{B}_{\theta}\mathrm{d}y}e^{-\zeta}, \tag{115}\]
where \(k\in\mathbb{Z}\) and \(C_{k}^{R(L)\pm}\) are constants. The zero mode normalization reads
\[2\pi\left(C_{k}^{L+}\right)^{2}\int\mathrm{d}y\,e^{\frac{2k\rho y}{r}}e^{\frac{\mathsf{g}^{\prime}\rho}{r}\int\mathsf{B}_{\theta}\mathrm{d}y}=1, \tag{116}\]
\[2\pi\left(C_{k}^{R+}\right)^{2}\int\mathrm{d}y\,e^{\frac{2k\rho y}{r}}e^{-\frac{\mathsf{g}^{\prime}\rho}{r}\int\mathsf{B}_{\theta}\mathrm{d}y}=1, \tag{117}\]
\[2\pi\left(C_{k}^{L-}\right)^{2}\int\mathrm{d}y\,e^{\frac{2k\rho y}{r}}e^{-2\zeta}e^{-\frac{2\mathsf{g}^{\prime}\rho}{r}\int\mathsf{B}_{\theta}\mathrm{d}y}=1, \tag{118}\]
and
\[2\pi\left(C_{k}^{R-}\right)^{2}\int\mathrm{d}y\,e^{\frac{2k\rho y}{r}}e^{-2\zeta}e^{\frac{2\mathsf{g}^{\prime}\rho}{r}\int\mathsf{B}_{\theta}\mathrm{d}y}=1. \tag{119}\]
Clearly \(L_{0}^{+}\) and \(R_{0}^{-}\) cannot be normalized at the same time as \(R_{0}^{+}\) and \(L_{0}^{-}\). Normalizable zero modes are thus written as
\[\Psi_{0}^{(6)}=e^{\frac{5A}{2}}\begin{bmatrix}\sum_{k}L_{0k}^{+}(\theta,y)\, \psi_{0k}^{L+}(x^{\mu})\\ 0\\ 0\\ \sum_{k}R_{0k}^{-}(\theta,y)\,\psi_{0k}^{R-}(x^{\mu})\end{bmatrix}, \tag{120}\]
where \(L_{0k}^{+}\) and \(R_{0k}^{-}\) are the zero mode wave functions described by Eqs. (112) and (115).
## Appendix C On the inclusion of a mass perturbation
Fermions can be described by the action
\[\overline{S}_{d}=\int\mathrm{d}^{6}x\sqrt{-g}\,\left(\overline{\mathsf{L}}^{( 6)}\varGamma^{M}\mathsf{D}_{M}\mathsf{L}^{(6)}+\overline{\mathsf{e}}_{-}^{(6) }\varGamma^{M}\mathsf{D}_{M}\mathsf{e}_{-}^{(6)}\right)-m_{0}\int\mathrm{d}x^ {6}\sqrt{-g}\,\left(\overline{\mathsf{L}}^{(6)}H\mathsf{e}_{-}^{(6)}+ \overline{\mathsf{e}}_{-}^{(6)}H^{\dagger}\mathsf{L}^{(6)}\right), \tag{121}\]
where the mass term coupling \(m_{0}\) should be small when compared to the scale of the brane, \(r\) and \(\rho\), and should be treated as a perturbation, with negligible effects on the wave functions. This can be justified directly from the action
\[\overline{S}_{d}= \int\mathrm{d}^{6}x\sqrt{-g}\,\left[\overline{\mathsf{L}}^{(6)}\left(\varGamma^{\mu}\mathsf{D}_{\mu}+\frac{1}{\rho}\varGamma^{\bar{5}}\mathsf{D}_{y}+\frac{1}{r}\varGamma^{\bar{4}}\mathsf{D}_{\theta}\right)\mathsf{L}^{(6)}+\overline{\mathsf{e}}_{-}^{(6)}\left(\varGamma^{\mu}\mathsf{D}_{\mu}+\frac{1}{\rho}\varGamma^{\bar{5}}\mathsf{D}_{y}+\frac{1}{r}\varGamma^{\bar{4}}\mathsf{D}_{\theta}\right)\mathsf{e}_{-}^{(6)}\right]\\ -m_{0}\int\mathrm{d}x^{6}\sqrt{-g}\,\left(\overline{\mathsf{L}}^{(6)}H\mathsf{e}_{-}^{(6)}+\overline{\mathsf{e}}_{-}^{(6)}H^{\dagger}\mathsf{L}^{(6)}\right), \tag{122}\]
which implies that the co-dimensional portion of the action is of the order of \(1/r\) or \(1/\rho\), while the rest is of the order of \(m_{0}\). Thus the fermionic co-dimensional wave functions, even with the perturbative term, can be determined exactly as in the previous sections, and should be described by Eqs. (112)–(115).
To realize the effective theory that is observed in four dimensions, one substitutes the zero modes, Eqs. (112)–(115), into the action (121). The mass term is thus described by
\[\int\mathrm{d}x^{6}\sqrt{-g}\,\left(\overline{\mathsf{L}}^{(6)}H\mathsf{e}_{-}^{(6)}+\overline{\mathsf{e}}_{-}^{(6)}H^{\dagger}\mathsf{L}^{(6)}\right) =\int\mathrm{d}x^{6}\sqrt{-g}\,\left(\mathsf{L}^{(6)\dagger}\varGamma^{0}H\mathsf{e}_{-}^{(6)}+\mathsf{e}_{-}^{(6)\dagger}\varGamma^{0}H^{\dagger}\mathsf{L}^{(6)}\right)\\ =\sum_{p,n}\int\mathrm{d}x^{6}\sqrt{-g}\begin{bmatrix}\overline{L_{0p}^{+}}\psi_{0p}^{L+\dagger}&0&0&0\end{bmatrix}\begin{bmatrix}0^{4}&\gamma^{0}\\ \gamma^{0}&0^{4}\end{bmatrix}\varPhi H_{0}\begin{bmatrix}0\\ 0\\ 0\\ R_{0n}^{-}\psi_{0n}^{R-}\end{bmatrix}\\ +\sum_{q,m}\int\mathrm{d}x^{6}\sqrt{-g}\begin{bmatrix}0&0&0&\overline{R_{0q}^{-}}\psi_{0q}^{R-\dagger}\end{bmatrix}\begin{bmatrix}0^{4}&\gamma^{0}\\ \gamma^{0}&0^{4}\end{bmatrix}\varPhi H_{0}\begin{bmatrix}L_{0m}^{+}\psi_{0m}^{L+}\\ 0\\ 0\\ 0\end{bmatrix}\\ =\sum_{k}\int\mathrm{d}y^{2}\sqrt{\hat{\sigma}}e^{-6A}\varPhi\,\overline{L_{0k}^{+}}R_{0k}^{-}\int\mathrm{d}x^{4}\,\overline{\Psi}_{0k}^{L+}H_{0}\Psi_{0k}^{R-}+\sum_{k}\int\mathrm{d}y^{2}\sqrt{\hat{\sigma}}e^{-6A}\varPhi\,\overline{R_{0k}^{-}}L_{0k}^{+}\int\mathrm{d}x^{4}\,\overline{\Psi}_{0k}^{R-}H_{0}^{\dagger}\Psi_{0k}^{L+}, \tag{123}\]
where \(\varPsi_{0k}^{L+}\) can represent both neutrinos and charged leptons. One has already employed the fact that \(L_{0k}^{\pm}\) is orthogonal to \(R_{0p}^{\pm}\) if \(k\neq p\), assumed that \(\varPhi\) is real valued, and excluded the non-normalizable states, i.e., \(L^{-}\) and \(R^{+}\). Finally, resuming from the total action and canonically normalizing it, one finds
\[S_{D}^{(4)}= \sum_{k}\int\mathrm{d}^{4}x\left\{i\,\overline{\mathsf{L}}_{k}\,\gamma^{\mu}\left[\nabla_{\mu}-\left(i\mathsf{g}\tau_{a}\mathsf{W}_{\mu}^{a}-\frac{i}{2}\mathsf{g}^{\prime}\mathsf{B}_{\mu}\right)\right]\mathsf{L}_{k}+i\,\overline{\mathsf{e}}_{Rk}\gamma^{\mu}\left(\nabla_{\mu}+i\mathsf{g}^{\prime}\mathsf{B}_{\mu}\right)\mathsf{e}_{Rk}-\mathsf{m}_{k}\left[\overline{\mathsf{L}}_{k}H_{0}\,\mathsf{e}_{Rk}+\overline{\mathsf{e}}_{Rk}H_{0}^{\dagger}\,\mathsf{L}_{k}\right]\right\}, \tag{124}\]
with \(\mathsf{m}_{k}\) given by Eq. (18).
## Appendix D Massive Modes: The Fermionic Fields

For the gauge and scalar field profiles of Eq. (22), the co-dimensional equations for the fermionic modes reduce to Schrödinger-like equations,

\[-\mathsf{L}^{+}_{mk,yy}+\left[-\frac{\lambda}{2}\left(\frac{\lambda}{2}+1\right)\operatorname{sech}^{2}\left(y\right)-ks\tanh\left(y\right)\right]\mathsf{L}^{+}_{mk}=\left(m^{2}\rho^{2}-\frac{k^{2}s^{2}}{\lambda^{2}}-\frac{\lambda^{2}}{4}\right)\mathsf{L}^{+}_{mk}, \tag{46}\]

\[-\mathsf{R}^{+}_{mk,yy}+\left[-\frac{\lambda}{2}\left(\frac{\lambda}{2}-1\right)\operatorname{sech}^{2}\left(y\right)-ks\tanh\left(y\right)\right]\mathsf{R}^{+}_{mk}=\left(m^{2}\rho^{2}-\frac{k^{2}s^{2}}{\lambda^{2}}-\frac{\lambda^{2}}{4}\right)\mathsf{R}^{+}_{mk}, \tag{47}\]
\[-\mathsf{L}^{-}_{mk,yy}+\left[-\lambda\left(\lambda+1\right) \operatorname{sech}^{2}\left(y\right)-2ks\tanh\left(y\right)\right]\mathsf{L}^ {-}_{mk}=\left(m^{2}\rho^{2}-\frac{k^{2}s^{2}}{\lambda^{2}}-\lambda^{2} \right)\mathsf{L}^{-}_{mk}, \tag{48}\]
and
\[-\mathsf{R}^{-}_{mk,yy}+\left[-\lambda\left(\lambda-1\right) \operatorname{sech}^{2}\left(y\right)-2ks\tanh\left(y\right)\right]\mathsf{R}^ {-}_{mk}=\left(m^{2}\rho^{2}-\frac{k^{2}s^{2}}{\lambda^{2}}-\lambda^{2} \right)\mathsf{R}^{-}_{mk}. \tag{49}\]
where \(\lambda=\frac{\mathsf{g}^{\prime}\rho\mathsf{q}}{r}\), \(s=\frac{\rho\lambda}{r}\) and \(\mathsf{q}\) is a real valued constant. Eqs. (48) and (49) are equivalent to Eqs. (46) and (47); one just needs to switch \(\lambda\) for \(\lambda/2\). Therefore, from now on, only Eqs. (48) and (49) shall be considered for further investigation. The quantum mechanical potentials associated with the right and left-handed spinors are, respectively,
\[U_{R}=\left[-\lambda\left(\lambda-1\right)\operatorname{sech}^{2}\left(y \right)-2ks\tanh\left(y\right)\right], \tag{50}\]
and
\[U_{L}=\left[-\lambda\left(\lambda+1\right)\operatorname{sech}^{2}\left(y \right)-2ks\tanh\left(y\right)\right]. \tag{51}\]
The potentials \(U_{R}\) and \(U_{L}\), Eqs. (50) and (51), have a global minimum19 if \(\lambda>1\) and \(\lambda\left(\lambda-1\right)>s\left|k\right|\), and if \(\lambda>0\) and \(\lambda\left(\lambda+1\right)>s\left|k\right|\), respectively, thus creating the conditions for producing bound states for all real valued \(s\) and integer \(k\). The quantum mechanical potentials, Eqs. (50) and (51), of the right and left-handed equations are depicted in Fig. 3.
Footnote 19: It is assumed that \(\lambda>0\).
The general solutions of Eqs. (48) and (49) are, respectively,
\[\mathsf{R}^{-}_{mk}= c_{1}\operatorname{sech}^{q}\left(y\right)e^{py}{}_{2}F_{1} \left(q+1-\lambda,\lambda+q;q-p+1;\frac{e^{-y}\operatorname{sech}(y)}{2}\right)\] \[+c_{2}\operatorname{sech}^{p}\left(y\right)e^{qy}{}_{2}F_{1}\left( -\lambda+p+1,\lambda+p;p-q+1;\frac{e^{-y}\operatorname{sech}(y)}{2}\right), \tag{52}\]
Figure 3: (Color online) (a) The quantum mechanical potential associated with right-handed spinors, \(U_{R}\), for \(\lambda=4\) (solid lines), \(\lambda=3\) (dashed lines) and \(\lambda=2\) (dotted lines). (b) The quantum mechanical potential associated with left-handed spinors, \(U_{L}\), for \(\lambda=4\) (solid lines), \(\lambda=3\) (dashed lines), \(\lambda=2\) (dot-dashed lines) and \(\lambda=1\) (dotted lines). The plots are for \(k=0\) (blue lines), \(k=1\) (black lines) and \(k=-1\) (red lines), with \(s=1\).
and
\[\mathsf{L}_{mk}^{-}= d_{1}\operatorname{sech}^{q}\left(y\right)e^{py}{}_{2}F_{1}\left(q- \lambda,\lambda+1+q;q-p+1;\frac{e^{-y}\operatorname{sech}(y)}{2}\right)\] \[+d_{2}\operatorname{sech}^{p}\left(y\right)e^{qy}\,{}_{2}F_{1} \left(-\lambda+p,\lambda+1+p;p-q+1;\frac{e^{-y}\operatorname{sech}(y)}{2} \right), \tag{101}\]
where \(q=\frac{\sqrt{\sqrt{\epsilon^{2}-4k^{2}s^{2}}-\epsilon}}{\sqrt{2}}\), \(p=\operatorname{sign}\left(k\right)\frac{\sqrt{-\epsilon-\sqrt{\epsilon^{2}-4k^{2}s^{2}}}}{\sqrt{2}}\) and \(\epsilon=m^{2}\rho^{2}-\frac{k^{2}s^{2}}{\lambda^{2}}-\lambda^{2}\). If \(\epsilon\geq-2\left|k\right|s\), or \(m\geq\frac{1}{\rho}\left|\lambda-\frac{\rho\left|k\right|}{r}\right|\), then the general solutions above correspond to the propagating modes. Otherwise, for \(\epsilon<-2\left|k\right|s\), they lead to the bound states
\[\mathsf{R}_{mk}^{-}=c_{1}\operatorname{sech}^{\lambda-j-1}\left(y\right)e^{ \frac{k_{s}}{\left(\lambda-j-1\right)}y}{}_{2}F_{1}\left(-j,2\lambda-j-1; \lambda+\frac{ks}{\left(\lambda-j-1\right)}-j;\frac{e^{y}\operatorname{sech}( y)}{2}\right), \tag{102}\]
and
\[\mathsf{L}_{mk}^{-}=d_{1}\operatorname{sech}^{\lambda-j-1}\left(y\right)e^{ \frac{k_{s}}{\left(\lambda-j-1\right)}y}{}_{2}F_{1}\left(-j-1,2\lambda-j; \lambda+\frac{ks}{\left(\lambda-j-1\right)}-j;\frac{e^{y}\operatorname{sech}( y)}{2}\right), \tag{103}\]
both associated with the mass eigenvalues
\[m_{jk}^{-}=\frac{1}{\rho}\frac{\sqrt{\left(j+1\right)\left(2\lambda-j-1\right) }}{\lambda\left(\lambda-j-1\right)}\sqrt{\lambda^{2}\left(\lambda-j-1\right) ^{2}-k^{2}s^{2}}, \tag{104}\]
where20 \(j\) is a natural number. In this way, there is always a mass gap between the zero and massive modes, be they discrete or continuous. Clearly, the solutions and mass spectrum of \(\mathsf{L}_{mk}^{+}\) and \(\mathsf{R}_{mk}^{+}\) are similar to Eqs. (102), (103) and (104), but with \(\lambda\to\lambda/2\).
Footnote 20: With the exception of the zero mode that is constructed from Eq. (103) with \(j=-1\).
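The closed-form spectrum of Eq. (104) can also be cross-checked by discretizing the Schrödinger problem of Eq. (48) on a grid. The Python sketch below uses a plain finite-difference Hamiltonian; the grid size, box length and parameter values are illustrative assumptions, and the lowest eigenvalue should reproduce the zero mode up to discretization error.

```python
import numpy as np

lam, s, k = 4.0, 1.0, 1
y = np.linspace(-15.0, 15.0, 1501)
h = y[1] - y[0]
U = -lam * (lam + 1) / np.cosh(y) ** 2 - 2 * k * s * np.tanh(y)  # potential of Eq. (51)
H = (np.diag(2.0 / h ** 2 + U)
     - np.diag(np.ones(len(y) - 1) / h ** 2, 1)
     - np.diag(np.ones(len(y) - 1) / h ** 2, -1))
eps = np.linalg.eigvalsh(H)[:3]                 # lowest eigenvalues of Eq. (48)
m2rho2 = eps + k ** 2 * s ** 2 / lam ** 2 + lam ** 2
print(np.sqrt(np.clip(m2rho2, 0.0, None)))     # compare with Eq. (104), units of 1/rho
```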
## Appendix E Massive Modes: The Higgs Field
In the same line of reasoning, the Higgs field must also have a mass gap between the zero and massive modes. But the massive modes of the Higgs depend upon the choice of metric. Particularly, for the metric considered in this paper,
\[\mathbf{g}=\operatorname{sech}^{2l}\left(y\right)e^{2\mathrm{o}y}\left(\eta_{\mu \nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}+r^{2}\mathrm{d}\theta^{2}+\rho^{2} \mathrm{d}y^{2}\right). \tag{105}\]
and the gauge boson is once again
\[\mathsf{B}_{\theta}=-\mathsf{q}\tanh\left(y\right), \tag{106}\]
the bulk profile of the Higgs field follows from the equation
\[-\phi_{,yy}+\left[-2l\left(2l+1\right)\operatorname{sech}^{2}\left(y\right)- \left(8ol-ks\right)\tanh(y)+4l^{2}+4o^{2}+\frac{k^{2}s^{2}}{\lambda^{2}}\right] \phi=m^{2}\rho^{2}\phi \tag{107}\]
where \(\lambda=\frac{\mathsf{g}^{\prime}\rho\mathsf{q}}{r}\) and \(s=\frac{\lambda\rho}{r}\). The general solution of Eq. (107) takes the form
\[\phi_{kj}= c_{1}\operatorname{sech}^{q}\left(y\right)e^{py}{}_{2}F_{1}\left(q-2 l,2l+1+q;q-p+1;\frac{e^{-y}\operatorname{sech}(y)}{2}\right)\] \[+c_{2}\operatorname{sech}^{p}\left(y\right)e^{qy}\,{}_{2}F_{1} \left(p-2l,2l+1+p;p-q+1;\frac{e^{-y}\operatorname{sech}(y)}{2}\right), \tag{108}\]
where \(q=\frac{\sqrt{\sqrt{\epsilon^{2}-(8ol-ks)^{2}}-\epsilon}}{\sqrt{2}}\), \(p=\frac{\left|8ol-ks\right|}{2q}\) and \(\epsilon=m^{2}\rho^{2}-4l^{2}-4o^{2}-\frac{k^{2}s^{2}}{\lambda^{2}}\). If \(\epsilon\geq-\left|8ol-ks\right|\) then Eq. (108) corresponds to the propagating modes. Otherwise, for \(\epsilon<-\left|8ol-ks\right|\), the solution (108) leads to the bound states
\[\phi_{kj}(y)=ce^{\frac{\left|8ol-ks\right|}{2(2l-j)}y}\,\mathrm{sech}^{2l-j}\left(y\right)\,{}_{2}F_{1}\left(-j,4l+1-j;2l+1-j-\frac{\left|8ol-ks\right|}{2(2l-j)};\frac{e^{-y}\,\mathrm{sech}\left(y\right)}{2}\right), \tag{109}\]
associated with the mass eigenvalues
\[m^{2}\rho^{2}=4l^{2}+4o^{2}+\frac{k^{2}s^{2}}{\lambda^{2}}-(2l-j)^{2}-\frac{ \left(8ol-ks\right)^{2}}{4\left(2l-j\right)^{2}}, \tag{12}\]
where \(j\) is a natural number. In this way, there is always a mass gap between the zero and massive modes, be they discrete or continuous.
|
2309.12581 | Sampling-Frequency-Independent Universal Sound Separation | This paper proposes a universal sound separation (USS) method capable of
handling untrained sampling frequencies (SFs). The USS aims at separating
arbitrary sources of different types and can be the key technique to realize a
source separator that can be universally used as a preprocessor for any
downstream tasks. To realize a universal source separator, there are two
essential properties: universalities with respect to source types and recording
conditions. The former property has been studied in the USS literature, which
has greatly increased the number of source types that can be handled by a
single neural network. However, the latter property (e.g., SF) has received
less attention despite its necessity. Since the SF varies widely depending on
the downstream tasks, the universal source separator must handle a wide variety
of SFs. In this paper, to encompass the two properties, we propose an
SF-independent (SFI) extension of a computationally efficient USS network,
SuDoRM-RF. The proposed network uses our previously proposed SFI convolutional
layers, which can handle various SFs by generating convolutional kernels in
accordance with an input SF. Experiments show that signal resampling can
degrade the USS performance and the proposed method works more consistently
than signal-resampling-based methods for various SFs. | Tomohiko Nakamura, Kohei Yatabe | 2023-09-22T02:16:37Z | http://arxiv.org/abs/2309.12581v1 | # Sampling-Frequency-Independent Universal Sound Separation
###### Abstract
This paper proposes a universal sound separation (USS) method capable of handling untrained sampling frequencies (SFs). The USS aims at separating arbitrary sources of different types and can be the key technique to realize a source separator that can be universally used as a preprocessor for any downstream tasks. To realize a universal source separator, there are two essential properties: universalities with respect to source types and recording conditions. The former property has been studied in the USS literature, which has greatly increased the number of source types that can be handled by a single neural network. However, the latter property (e.g., SF) has received less attention despite its necessity. Since the SF varies widely depending on the downstream tasks, the universal source separator must handle a wide variety of SFs. In this paper, to encompass the two properties, we propose an SF-independent (SFI) extension of a computationally efficient USS network, SuDoRM-RF. The proposed network uses our previously proposed SFI convolutional layers, which can handle various SFs by generating convolutional kernels in accordance with an input SF. Experiments show that signal resampling can degrade the USS performance and the proposed method works more consistently than signal-resampling-based methods for various SFs.
Tomohiko Nakamura\({}^{\dagger}\) and Kohei Yatabe\({}^{\ddagger}\)

\({}^{\dagger}\)National Institute of Advanced Industrial Science and Technology (AIST), Tokyo 135-0064, Japan
\({}^{\ddagger}\)Tokyo University of Agriculture and Technology (TUAT), Tokyo 184-8588, Japan
Footnote †: This work was supported by JSPS KAKENHI under Grant JP23H03418 and JST ACT-X under Grant JPMJAX210G.
Index Terms: Universal sound separation, sampling-frequency-independent convolutional layer, deep neural networks
## 1 Introduction
Audio source separation is a technique of separating concurrent sources from their mixture and can be used for preprocessing of various audio signal processing tasks. Its performance has been greatly improved by the introduction of a deep neural network (DNN) [1, 2]. It has also expanded the range of sources that can be handled by a single source separator [3, 4]. These advances have opened the door to achieving one of the ultimate developmental goals in audio source separation: the realization of a source separator that can be used universally as a preprocessor for any downstream tasks.
To realize such a universal source separator, universality with respect to source types is crucial. In usual audio source separation tasks, the target source types are specified in advance: for example, different musical instrument sounds in music source separation [5, 6, 7, 8, 9, 10, 11, 12, 13], voices of different speakers in speech separation [14, 15, 16, 17, 18, 19, 20, 21], and singing voices of different singers in vocal ensemble separation [22, 23]. Different from these domain-specific tasks, a universal sound separation (USS) aims at separating arbitrary sounds of different types [3, 4]. That is, its purpose is to acquire the universality with respect to source types. Recent studies developed USS methods capable of handling weakly labeled data that contain labels of source types in each mixture but do not contain the source signals of the mixture [24, 25, 26]. These advances in USS have increased the variety of source types that can be handled by a single network.
Another important property to realize the universal source separator is universality with respect to recording conditions. Despite its necessity, it has received less attention than the universality with respect to source types. Sampling frequency (SF) is one of the essential recording conditions. The usable SFs depend on the specification of the recording devices. They also depend on the acoustic conditions of target tasks. Thus, the universal source separator must be able to handle various SFs with a single neural network. However, conventional audio source separation methods (including the USS methods) commonly assume that the SF is the same in the training and test stages. Owing to this assumption, they cannot directly handle untrained SFs and require additional preprocessing such as signal resampling. Furthermore, we previously found that signal resampling can degrade the separation performance in music source separation [13]. In fact, this degradation due to signal resampling can also occur in USS, as we will show later in Section 4. Thus, we should explore another way to develop a single network that encompasses the two properties.
In this paper, we propose a USS method capable of handling various SFs (Fig. 1(a)). We apply our previously proposed SFI convolutional layer [13] to SuDoRM-RF [27], one of the state-of-the-art USS networks. The SFI layer can generate the weights of a usual convolutional layer in accordance with the input SF, which enables the network to handle various SFs (including untrained SFs). A usual convolutional layer can be replaced with the SFI convolutional layer; thus, we can extend SuDoRM-RF to be universal for various SFs without losing the universality with respect to source types. Note that the effectiveness of the SFI layer was demonstrated only for music source separation [13]. This paper is the first to substantiate that signal resampling can degrade the USS performance and the SFI layer is more effective than signal resampling for handling various SFs in USS. We believe that the proposed method paves the way to realize a source separator that encompasses the universalities with respect to source types and recording conditions.
## 2 SuDoRM-RF: A State-of-the-Art USS Network
SuDoRM-RF [27] is one of the state-of-the-art networks for USS. In this section, we briefly show the network architecture of SuDoRM-RF and the loss function for handling a variable number of sources.
### Network Architecture
Fig. 1(b) shows the architecture of SuDoRM-RF. It is based on the widely used framework [15], which combines trainable analysis and
synthesis filterbanks with a mask predictor. The analysis and synthesis filterbanks are called the encoder and decoder, respectively.
Let \(\mathbb{R}\) and \(\mathbb{R}_{\geq 0}\) be the sets of the real and nonnegative numbers, respectively. The encoder consists of a \(1\times C\) 1D convolutional layer and a rectified linear unit nonlinearity. It converts \(\mathbf{x}\in\mathbb{R}^{L}\) into the pseudo time-frequency representation \(\mathbf{v}\in\mathbb{R}_{\geq 0}^{C\times T}\), where \(L\) is the signal length. The number of frames \(T\) is given as \(T=\lfloor(L+2P-K)/S+1\rfloor\), where \(K\), \(S\), and \(P\) are the kernel size, stride, and padding of the convolutional layer in the encoder, respectively.
The mask predictor2 transforms \(\mathbf{v}\) to the \(M\) masks \(\mathbf{\hat{v}}_{m}\in\mathbb{R}_{\geq 0}^{C\times T}\), where \(m\) is the output source index. \(M\) denotes the number of the output signals of the network and may be different from the number of sources present in the input mixture \(N(\leq M)\). Fig. 1(c) shows the architecture of the mask predictor. The first and last convolutional layers have a kernel size of \(1\) and a stride of \(1\). The main module of the mask predictor is the stack of \(B\) U-ConvBlocks. Each U-ConvBlock has the U-Net architecture of five levels and processes the input feature in multiple time resolutions by successive downsampling and upsampling. This characteristic can capture long-term temporal dependencies without significantly increasing the number of parameters. See [27] for the details of U-ConvBlock.
Footnote 2: In [27], the architecture that directly predicts the pseudo time-frequency representations of all sources was proposed to improve the separation performance. However, we experimentally observed that it made the training numerically unstable. Thus, we adopted the architecture that predicts the masks for the pseudo time-frequency representation.
The decoder is the \(CM\times M\) 1D transposed convolutional layer with a kernel size of \(K\) and a stride of \(S\). After concatenating \(\{\mathbf{v}\odot\mathbf{\hat{v}}_{m}\}_{m}\) along the channel axis, the decoder converts it into the output signals \(\mathbf{\hat{a}}_{m}\in\mathbb{R}^{L}\), where \(\odot\) is the elementwise multiplication.
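To make the tensor flow of the encoder, mask predictor, and decoder concrete, the following is a minimal PyTorch sketch; the stack of U-ConvBlocks is replaced by a two-layer placeholder, and all hyperparameter values are illustrative rather than those of the actual SuDoRM-RF configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, C, K, S, M, L = 8, 512, 21, 10, 4, 32000

encoder = nn.Conv1d(1, C, kernel_size=K, stride=S, padding=K // 2)
mask_net = nn.Sequential(                      # stand-in for the stack of U-ConvBlocks
    nn.Conv1d(C, C, kernel_size=1), nn.ReLU(),
    nn.Conv1d(C, M * C, kernel_size=1), nn.ReLU())
decoder = nn.ConvTranspose1d(M * C, M, kernel_size=K, stride=S, padding=K // 2)

x = torch.randn(batch, 1, L)                   # input mixtures
v = F.relu(encoder(x))                         # (batch, C, T)
masks = mask_net(v).view(batch, M, C, -1)      # (batch, M, C, T), nonnegative masks
masked = (masks * v.unsqueeze(1)).reshape(batch, M * C, -1)
s_hat = decoder(masked)                        # (batch, M, ~L) output signals
```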
### Loss Function
In a practical situation, the number of sources \(N\) is usually unknown and may vary mixture by mixture. SuDoRM-RF can handle a variable number of sources up to \(M\) by using a loss function similar to [4]. Let \(\mathcal{P}\) be the set of all assignments between the output and groundtruth signals, \(p\) be the element of \(\mathcal{P}\), and \(p(n)\) be the output channel index assigned to source \(n\) under assignment \(p\). The loss function is given as
\[\mathcal{L}= \begin{cases}\min_{p\in\mathcal{P}}(\mathcal{L}_{1,p}+\mathcal{L}_ {2,p})&(M>N)\\ \min_{p\in\mathcal{P}}\mathcal{L}_{1,p}&(M=N)\end{cases}, \tag{1}\]
\[\mathcal{L}_{1,p}=\frac{1}{N}\sum_{n=1}^{N}10\log_{10}\left(\frac{d_{n,p(n)}+\epsilon}{\|\mathbf{s}_{n}\|^{2}+\epsilon}\right), \tag{2}\]
\[\mathcal{L}_{2,p}=\frac{1}{M-N}\sum_{n=N+1}^{M}10\log_{10}\left(d_{n,p(n)}+\tau\|\mathbf{x}\|^{2}+\epsilon\right), \tag{3}\]
\[d_{n,p(n)}=\|\mathbf{s}_{n}-\hat{\mathbf{s}}_{p(n)}\|^{2}, \tag{4}\]
where \(\mathbf{s}_{n}\in\mathbb{R}^{L}\) is the groundtruth signal of source \(n\), and \(\epsilon\) and \(\tau\) are small positive constants that avoid division by zero. \(\mathcal{L}_{1,p}\) is the negative average signal-to-noise ratio (SNR) of the output signals under assignment \(p\), which drives the output signals to match the groundtruth signals. \(\mathcal{L}_{2,p}\) is the loss for the output signals that are not assigned to any groundtruth source; since the groundtruth for these slots is the zero vector, it drives the unassigned output signals toward zero.
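As a rough illustration of Eqs. (1)-(4), the following NumPy sketch evaluates the loss by brute-force enumeration of all assignments \(p\in\mathcal{P}\) (feasible here since \(M=4\)). This is not the authors' implementation, and the values of eps and tau are hypothetical.

```python
import numpy as np
from itertools import permutations

def uss_loss(s_hat, s, x, eps=1e-8, tau=1e-3):
    """Permutation-invariant loss of Eqs. (1)-(4) (minimal sketch).

    s_hat: (M, L) output signals; s: (N, L) groundtruth sources, N <= M;
    x: (L,) input mixture.
    """
    M, N = s_hat.shape[0], s.shape[0]
    best = np.inf
    for p in permutations(range(M)):  # all output/groundtruth assignments
        # d_{n, p(n)}; for unassigned slots n >= N the groundtruth is zero.
        d = lambda n: (np.sum((s[n] - s_hat[p[n]]) ** 2) if n < N
                       else np.sum(s_hat[p[n]] ** 2))
        L1 = np.mean([10 * np.log10((d(n) + eps) / (np.sum(s[n] ** 2) + eps))
                      for n in range(N)])  # negative average SNR, Eq. (2)
        L2 = 0.0 if M == N else np.mean(
            [10 * np.log10(d(n) + tau * np.sum(x ** 2) + eps)
             for n in range(N, M)])        # unassigned-output loss, Eq. (3)
        best = min(best, L1 + L2)          # min over assignments, Eq. (1)
    return best
```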
## 3 Proposed Method
We propose an SFI extension of SuDoRM-RF by introducing our previously proposed SFI layers. In this section, we review the SFI convolutional layer and apply it to SuDoRM-RF.
### SFI Convolutional Layer [13]
The SFI convolutional layer is an extension of a usual convolutional layer. Its idea is based on the fact that the weights of the usual convolutional layer can be interpreted as a collection of digital filters. This interpretation reveals that the weights inherently depend on the SF and must be constructed with respect to the input SF. To overcome this problem, we focused on analog-to-digital filter conversion, whereby a digital filter is designed from an analog filter. Since analog filters are SFI, we can use them as archetypes of the weights for all SFs and use this conversion to generate the weights in accordance with an input SF. We call these archetypes the latent analog filters.
Figure 1: Network architectures of (a) the proposed SFI version of SuDoRM-RF, (b) the original SuDoRM-RF, and (c) the mask predictor. "Conv1D", "ReLU", "Trans. Conv1D", and "GLN" denote the one-dimensional (1D) convolutional layer, rectified linear unit, 1D transposed convolutional layer, and global layer normalization layer, respectively. See Sections 2 and 3 for the other variables and modules in these figures.

Fig. 2 shows the SFI convolutional layer using the frequency-domain filter design method. The latent analog filters are given as continuous frequency responses for all input and output channel pairs. Since these responses are processed in the same manner, we omit the indices of the input and output channel pair. Let \(G(\omega;\theta)\) be the continuous-time frequency response, where \(\omega\in\mathbb{R}\) is the (unnormalized) angular frequency and \(\theta\) is the set of parameters of the latent analog filter. Given the input SF \(F_{s}\), the SFI layer computes the discrete-time impulse response \(\mathbf{b}\in\mathbb{R}^{K}\) whose discrete-time Fourier transform approximates \(G(\omega;\theta)\) for \(\omega\in[0,\pi F_{s}]\) in the least-squares sense:
\[\mathbf{b}=\operatorname*{argmin}_{\mathbf{b}^{\prime}\in\mathbb{R}^{K}}\|\mathbf{G}-\mathbf{D} \mathbf{b}^{\prime}\|^{2}, \tag{5}\]
where \(\mathbf{G}=[G(\omega_{1};\theta),\dots,G(\omega_{I};\theta)]^{\top}\in\mathbb{C}^{I}\). The sampled angular frequencies are given as \(\omega_{i}=\pi F_{s}(i-1)/(I-1)\), where \(i=1,\dots,I\) is the index of the sampled angular frequency. \(\mathbf{D}\) is the \(I\times K\) matrix whose \((i,k)\)th element is \(e^{\jmath\omega_{i}(k-K/2)/F_{s}}\), where \(\jmath\) is the imaginary unit. This problem can be solved analytically:
\[\mathbf{b}=\begin{bmatrix}\mathbf{D}^{\text{(re)}}\\ \mathbf{D}^{\text{(im)}}\end{bmatrix}^{\dagger}\begin{bmatrix}\mathbf{G}^{\text{(re)}}\\ \mathbf{G}^{\text{(im)}}\end{bmatrix}, \tag{6}\]
where the superscripts \(\dagger\), (re), and (im) denote the Moore-Penrose pseudoinverse of a matrix, the real part of a matrix or vector, and the imaginary part of a matrix or vector, respectively. By reversing \(\mathbf{b}\) of all input and output channel pairs in time, we obtain the weights of the usual convolutional layer. Owing to this weight generation, the SFI layer can generate consistent weights for various SFs.
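The weight generation of Eqs. (5)-(6) amounts to a complex least-squares fit. The NumPy sketch below is one possible realization under the indexing conventions stated above; the function and argument names are hypothetical, and G_resp stands for any latent analog filter, e.g., the modulated Gaussian of Eq. (8).

```python
import numpy as np

def sfi_weights(G_resp, Fs, K=240, I=960):
    """Generate a length-K kernel from a latent analog filter (sketch of Eqs. (5)-(6)).

    G_resp: callable returning the complex continuous frequency response G(omega).
    """
    i = np.arange(1, I + 1)
    omega = np.pi * Fs * (i - 1) / (I - 1)            # samples covering [0, pi * Fs]
    G = np.array([G_resp(w) for w in omega])          # sampled responses, shape (I,)
    k = np.arange(1, K + 1)
    D = np.exp(1j * np.outer(omega, k - K / 2) / Fs)  # (I, K) DTFT matrix
    # Stack real and imaginary parts and solve the least-squares problem (Eq. (6)).
    A = np.vstack([D.real, D.imag])
    y = np.concatenate([G.real, G.imag])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b[::-1]                                    # time-reverse to obtain conv weights
```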
In the test stage, we only need to generate the weights once for each \(F_{s}\) because the weight generation depends only on \(F_{s}\) and \(G(\omega;\theta)\). Thus, the SFI layer does not increase the computational cost except for the first weight generation. In addition, we can construct an SFI version of a transposed convolutional layer (SFI transposed convolutional layer) by replacing the usual convolutional layer with a usual transposed convolutional layer in the SFI convolutional layer.
### Application of SFI Layers to SuDoRM-RF
We can apply the SFI layers to SuDoRM-RF in a similar manner to the SFI network for music source separation [13]. Fig. 1(a) shows the proposed network architecture. The encoder uses the \(1\times C\) SFI convolutional layer with a kernel size of \(K\) and a stride of \(S\). The decoder is the \(C\times 1\) SFI transposed convolutional layer with a kernel size of \(K\) and a stride of \(S\). It is applied to \(\mathbf{v}\odot\mathbf{\hat{v}}_{m}\) for each \(m\). Although we could use a different SFI transposed convolutional layer for each \(m\), we experimentally observed that using the same SFI transposed convolutional layer for all \(m\) provided a higher separation performance. Thus, we adopted this configuration for the decoder.
The architecture of the mask predictor is the same as in SuDoRM-RF, and the time resolution of its input depends on the input SF. Thus, we use a method of adjusting \(K\) and \(S\) in accordance with the input SF [13]. This adjustment changes the values of \(K\) and \(S\) so that the time resolution of the pseudo time-frequency representation, measured in seconds, remains unchanged. For example, when we trained the network with \(K=240\) (5 ms) and \(S=120\) (2.5 ms) at \(F_{s}=48\) kHz, we set \(K=40\) (5 ms) and \(S=20\) (2.5 ms) at \(F_{s}=8\) kHz. Formally,
\[K^{\text{(target)}}=\frac{F_{s}^{\text{(target)}}}{F_{s}^{\text{(train)}}}K^{ \text{(train)}},\quad S^{\text{(target)}}=\frac{F_{s}^{\text{(target)}}}{F_{ s}^{\text{(train)}}}S^{\text{(train)}}, \tag{7}\]
where we use the superscripts (train) and (target) to denote the values for the training and test data, respectively. When either \(K^{\text{(target)}}\) or \(S^{\text{(target)}}\) becomes non-integer, we can use the algorithms for handling non-integer kernel sizes and strides in the SFI layers [28]. Thus, we can ensure that the entire network is SFI.
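A minimal sketch of the adjustment in Eq. (7); the assertion marks the non-integer case, which the actual method handles with the algorithms of [28] (the function name is hypothetical).

```python
def adjust_kernel_stride(K_train, S_train, fs_train, fs_target):
    """Eq. (7): keep the frame rate fixed in seconds across SFs."""
    ratio = fs_target / fs_train
    K_t, S_t = K_train * ratio, S_train * ratio
    # Non-integer results require the non-integer handling of [28].
    assert K_t.is_integer() and S_t.is_integer(), "use the algorithms of [28]"
    return int(K_t), int(S_t)

print(adjust_kernel_stride(240, 120, 48000, 8000))  # -> (40, 20), as in the text
```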
In summary, the proposed network works as follows: Given the input mixture, (i) it first generates the weights of the SFI layers using the input SF, (ii) then adjusts the kernel sizes and strides of the SFI layers in accordance with Eq. (7), and (iii) finally separates the mixture into \(M\) output signals. Steps (i) and (ii) can be omitted after the first weight generation whenever the input SF is kept unchanged.
As described in Section 3.1, the SFI layers have the same computational cost as their usual counterparts, except for the first weight generation in the test stage. Thus, we can construct the proposed SFI extension of SuDoRM-RF without sacrificing the computational efficiency, one of the advantages of SuDoRM-RF. In addition, we can use the loss function defined in Eq. (1) for the training because the SFI layers do not require any special constraints on the loss function. This allows the proposed network to inherit the capability of handling a variable number of sources.
## 4 Experiments
### Experimental Conditions
To evaluate the effectiveness of the proposed method, we conducted a USS experiment with a variable number of sources.
**Dataset**: The free universal sound separation (FUSS) dataset [4] has been widely used in USS studies, but the SF of its audio signals (16 kHz) is too low for evaluation over a wide range of SFs. Thus, we created a 48 kHz-sampled version of the FUSS dataset, which we call the _FUSS48k dataset_.
We synthesized the mixture signals of the FUSS48k dataset by modifying the implementation in the official repository of the FUSS dataset. We set the SF to 48 kHz and the duration of each mixture to 8 s. The maximum number of sources was set to four, and the number of sources \(N\) varied from one to four. Table 1 shows the number of mixtures for the training, validation, and test data. After the synthesis, we resampled the test data at 8, 12, ..., 44 kHz; these resampled versions were used for the evaluation together with the 48 kHz-sampled test data.
Table 1: Number of training, validation, and test mixtures in the FUSS48k dataset

| Split | \(N=1\) | \(N=2\) | \(N=3\) | \(N=4\) | Total |
| --- | --- | --- | --- | --- | --- |
| Training | 4992 | 4893 | 5072 | 5053 | 20000 |
| Validation | 254 | 253 | 229 | 264 | 1000 |
| Test | 237 | 249 | 262 | 252 | 1000 |
Figure 2: Architecture of SFI convolutional layer using frequency-domain digital filter design (adapted from Fig. 1(b) in [13]).
**Network**: We used the proposed network with \(M=4\) for all methods4. For the SFI layers, we set \(K=240\), \(S=120\), and \(I=960\). As the latent analog filters, we used the modulated Gaussian function (MGF) [13]:
Footnote 4: Since the MGF has only three parameters and the generated weights have far fewer degrees of freedom, it may limit the separation performance compared with the original SuDoRM-RF. We should confirm this impact on separation performance, although the scope of this paper is not to achieve state-of-the-art separation performance. To confirm it, we trained the original SuDoRM-RF by setting \(K=240\) and \(S=120\), which were the closest even values to those used for the FUSS dataset in [27]. However, all elements of the output signals of SuDoRM-RF were zeros after the first epoch of training. Although we trained the network with several different values of \(K\) and \(S\), this numerical instability was not resolved. We concluded that the original SuDoRM-RF did not work well for 48 kHz-sampled data, and we did not use it for the comparison.
\[G(\omega;\mu,\sigma,\varphi)=e^{-(\omega-\mu)^{2}/(2\sigma^{2})+\jmath\varphi}+e^{-(\omega+\mu)^{2}/(2\sigma^{2})-\jmath\varphi}, \tag{8}\]
where \(\mu\) is the center angular frequency, \(\sigma^{2}\) is the variance of the Gaussian, and \(\varphi\) is the initial phase. These parameters were defined for each input and output channel pair. They were initialized as in [13], except that \(\sigma^{2}\) was initialized to \((50\pi)^{2}\). The other parameters were the same as for SuDoRM-RF (\(C=512\) and \(B=16\)).
We used the same training setup as in [27]. The loss function was given by Eq. (1). The optimizer was Adam with an initial learning rate of \(1.0\times 10^{-3}\), which was multiplied by \(1/3\) every 10 epochs. Gradient clipping was applied so that the \(l_{2}\) norm of the gradient was at most five. The batch size was four and the number of epochs was 150. As data augmentation, we shuffled the source signals between samples in a minibatch and multiplied each source by a random gain.
**Compared methods**: The proposed method (_Proposed_) was compared with the two signal-resampling-based methods used in [13]. The signal-resampling-based methods resample the input mixture to 48 kHz, apply the trained model to the resampled mixture, and resample the output signals back to the input SF. For signal resampling, we used two different methods that prioritize resampling accuracy and fast computation, respectively: _Best Signal Resampling_ used the accurate but slow method, and _Fast Signal Resampling_ used the fast but less accurate method5. We stress again that all the methods used the same trained model and differ only in how they handle untrained SFs.
Footnote 5: As in [13], we used the resample function in the librosa library [29]. The res_type argument of this function was set to kaiser_best for Best Signal Resampling and kaiser_fast for Fast Signal Resampling.
**Evaluation metric**: We used the scale-invariant signal-to-distortion ratio (SI-SDR) for \(N=1\) and the SI-SDR improvement (\(\Delta\)SI-SDR) for \(N=2,3\), and 4. To reduce the dependency on parameter initialization, we trained the network with four different random seeds and computed the averages and standard errors of the metrics.
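For reference, a NumPy sketch of these metrics under the standard SI-SDR definition (this is the common textbook form, not necessarily the exact evaluation code used here):

```python
import numpy as np

def si_sdr(s_hat, s):
    """Scale-invariant signal-to-distortion ratio in dB."""
    alpha = np.dot(s_hat, s) / np.dot(s, s)   # optimal scaling of the target
    e_target = alpha * s
    return 10 * np.log10(np.sum(e_target ** 2) / np.sum((s_hat - e_target) ** 2))

def delta_si_sdr(s_hat, s, x):
    """SI-SDR improvement: estimate vs. the unprocessed mixture."""
    return si_sdr(s_hat, s) - si_sdr(x, s)
```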
### Results
Fig. 3 shows the separation performance of all methods. The standard errors for \(N=1\) were greater than those for the other \(N\); this is likely because the difficulty of reconstructing a source signal depends on the source type. As the SF decreased, the performance of the signal-resampling-based methods degraded for all \(N\). Fast Signal Resampling provided lower SI-SDRs and \(\Delta\)SI-SDRs than Best Signal Resampling, and the gap between the two methods widened as the SF decreased. These results show that signal resampling (especially with lower resampling accuracy) can degrade USS performance.
The proposed method provided comparable or higher performance on average than the signal-resampling-based methods for all \(N\). The improvement of Proposed over Best Signal Resampling increased as the test SF moved away from the trained SF. Although the performance gap between Proposed and Best Signal Resampling tends to shrink for greater \(N\), this is likely because separation generally becomes more difficult as the number of sources increases. These results show that the SFI layers and the adjustment method for \(K\) and \(S\) are effective for USS.
## 5 Conclusion
We proposed a USS method capable of handling various SFs by applying the SFI layers to SuDoRM-RF. The SFI layers generate the convolutional kernels in accordance with the input SF, which enables the network to handle various SFs. Once the convolutional kernels are generated, these layers work as their usual counterparts as long as the input SF is kept unchanged. Thus, the SFI layers can install SF universality into SuDoRM-RF without sacrificing its universality with respect to source types. Although the proposed network contains a non-SFI subnetwork (the mask predictor), the entire network can be made SFI by adequately adjusting the kernel sizes and strides of the SFI layers in accordance with the input SF. Experiments demonstrated that signal resampling can degrade USS performance and that the proposed method handles untrained SFs more effectively than the signal-resampling-based methods. We believe that the proposed method is an important step toward the realization of _literally universal_ sound separation.
Figure 3: Separation performance of the proposed and signal-resampling-based methods for each \(N\). Red dotted lines denote the trained SF. Error bars show standard errors.
2302.14282 | Dynamic locational marginal emissions via implicit differentiation | Locational marginal emissions rates (LMEs) estimate the rate of change in
emissions due to a small change in demand in a transmission network, and are an
important metric for assessing the impact of various energy policies or
interventions. In this work, we develop a new method for computing the LMEs of
an electricity system via implicit differentiation. The method is model
agnostic; it can compute LMEs for any convex optimization-based dispatch model,
including some of the complex dispatch models employed by system operators in
real electricity systems. In particular, this method lets us derive LMEs for
dynamic dispatch models, i.e., models with temporal constraints such as ramping
and storage. Using real data from the U.S. electricity system, we validate the
proposed method against a state-of-the-art merit-order-based method and show
that incorporating dynamic constraints improves model accuracy by 8.2%.
Finally, we use simulations on a realistic 240-bus model of WECC to demonstrate
the flexibility of the tool and the importance of incorporating dynamic
constraints. Namely, static LMEs and dynamic LMEs exhibit a normalized average
RMS deviation of 28.40%, implying dynamic constraints are essential to
accurately modeling emissions rates. | Lucas Fuentes Valenzuela, Anthony Degleris, Abbas El Gamal, Marco Pavone, Ram Rajagopal | 2023-02-28T03:32:57Z | http://arxiv.org/abs/2302.14282v3 | # Dynamic locational marginal emissions via implicit differentiation
###### Abstract
Locational marginal emissions rates (LMEs) estimate the rate of change in emissions due to a small change in demand in a transmission network, and are an important metric for assessing the impact of various energy policies or interventions. In this work, we develop a new method for computing the LMEs of an electricity system via implicit differentiation. The method is model agnostic; it can compute LMEs for any convex optimization-based dispatch model, including some of the complex dispatch models employed by system operators in real electricity systems. In particular, this method lets us derive LMEs for _dynamic_ dispatch models, i.e., models with temporal constraints such as ramping and storage. Using real data from the U.S. electricity system, we validate the proposed method against a state-of-the-art merit-order-based method and show that incorporating dynamic constraints improves model accuracy by 8.2%. Finally, we use simulations on a realistic 240-bus model of WECC to demonstrate the flexibility of the tool and the importance of incorporating dynamic constraints. Namely, static LMEs and dynamic LMEs exhibit a normalized average RMS deviation of 28.40%, implying dynamic constraints are essential to accurately modeling emissions rates.
## 1 Introduction
Policy-makers interested in decarbonizing the electricity grid require reliable emissions data in order to quantify the impact of a particular policy or intervention strategy. Similarly, grid operators conducting generation and transmission expansion studies [1, 2, 3, 4] are increasingly looking to reduce emissions [5, 6, 7, 8, 9] and require detailed information on the relationship between demand and emissions. The need for this data will only grow as various systems, such as transportation and heating, begin to electrify and place additional strain on the grid. Unfortunately, electricity systems depend on complex, interconnected transmission network structures with numerous operational constraints, making it difficult to attribute emissions to energy consumption at a particular time and place [10].
_Emissions rates_ are important metrics that quantify the amount of pollutant emissions, e.g., CO\({}_{2}\), SO\({}_{2}\), or NO\({}_{\rm x}\), attributable to the consumption of energy. Researchers and decision-makers often examine _average emissions rates_[11, 12, 13, 14, 15] which measure the average rate of emissions per MWh of energy consumed, and _marginal emissions rates_[16, 17, 18, 19, 20] (also known as marginal emission factors or marginal emission intensities), which measure the rate at which emissions increase or decrease given a marginal change in energy demand. While average emissions rates quantify emissions over long periods, marginal emissions rates better quantify the emissions impacts of small, local changes in demand, since only a few specific generators are expected to change production in response. This response to marginal changes can be estimated both at the network level--quantifying the aggregate sensitivity across many nodes in the network--or at specific locations. Indeed, these metrics vary on a node-by-node basis, as network constraints and the local energy mix dictate which generators are available to compensate for changes in demand. Hence, _locational_ marginal emissions
rates (LMEs) [21, 22] quantify the emission sensitivity at the nodal level, revealing the spatial heterogeneity in marginal emissions rates that emerges from network constraints. LMEs are the emissions-equivalent of locational marginal prices, which have been studied extensively in the power systems community due to their importance to electricity markets [23, 24, 25]. LMEs have been used to quantify the impacts of various policies on carbon emissions, e.g., increasing electric vehicle penetration [26, 27] and changing electric vehicle charging policies [28], and are published live in the PJM Interconnection [29], a major U.S. system operator. LMEs have also been used in transmission expansion studies [30, 9, 31]. In this application, the LMEs define the marginal emissions effect of offsetting demand at a particular node and can be viewed as the gradient of emissions with respect to net load in the planning optimization problem [9].
Empirical studies on marginal emissions rates in the U.S. and U.K. have used _regression-based approaches_ to estimate emissions rates across large geographical regions [32, 33, 16, 21]. These works leverage publicly available data and fit linear models to changes in emissions and net demand. The main benefit of these methods is that they do not require a model of the underlying electricity system. However, because of their inherent data requirements, these methods are difficult to extend to finer geographic resolutions and hypothetical electricity systems that lack preexisting data.
In contrast, _model-based approaches_ explicitly calculate LMEs using the underlying dispatch model. This calculation has been performed using marginal generator identification in merit-order models [34, 35], LMP-based approximations [36, 37, 38], or, in simple cases, explicit formulae [39, 40, 22]. Model-based methods are promising because dispatch in real-world electricity systems often involves solving an optimization problem. However, the models used in real-world systems are highly complex, which limits the applicability of the specific derivations in the aforementioned studies and highlights the need to incorporate model-specific constraints, e.g., ramping limits or storage, when calculating LMEs [35, 41].
For example, merit-order-based models usually neglect the impact of network constraints and focus only on generation cost to identify the marginal generator. LMP-based methods, which rely on matching LMPs with generation costs in order to identify the marginal generator, are not exact [37, 38], and the presence of complex coupling constraints would likely make identification even harder. On the other hand, analytical derivations, while exact, have so far only been conducted on dispatch models with a limited number of constraint types, e.g., transmission constraints. As far as we are aware, no previous study has exactly accounted for time dependencies such as energy storage when computing LMEs.
In this work, we address this limitation by developing a method that supports arbitrary convex optimization-based dispatch models.
### Contribution and outline
This paper makes three main contributions:
* We propose _a new method to compute LMEs_ in arbitrary convex optimization-based dispatch models. This is in contrast to standard regression-based methods that may have significant data requirements and previous model-based methods that have been derived for specific dispatch models. The method we propose generalizes previous analytical derivations [40, 22] and is generic, flexible, and suitable for real systems dispatched by grid operators.
* We use the proposed method to _derive LMEs in networks with dynamic constraints_, such as energy storage. As far as we are aware, this is the first method to calculate LMEs for dynamic network dispatch models, such as the standard dynamic economic dispatch problem.
* We _demonstrate the utility of computing LMEs in dynamic models_ using two different experimental setups. First, using a published dataset [34], we show that dynamic models more accurately represent the real-world relationship between demand and emissions compared to their static counterparts. Second, we use our method to study the impact of dynamic constraints on emissions rates using a realistic model of the Western United States transmission system. The dynamic LMEs are distinct from their static counterparts, demonstrating the importance of accurately including all relevant dynamic constraints when estimating emissions sensitivities.
The paper is structured as follows. In Section 2, we introduce the problem of computing LMEs in dynamic electricity networks. We show that this problem generalizes previous approaches [22, 40] to complex models with temporal constraints. We then show how to solve this problem using implicit differentiation in Section 3. Although we use this technique to compute LMEs for the model specified in Section 2, our technique generalizes to arbitrary convex optimization-based dispatch models, including those used by system operators in real world electricity markets. Lastly, we report simulation results on two datasets in Section 4. In the first experiment, we demonstrate the validity of our approach on real US electricity data and compare our results with an established method [34]. In particular, we show that a dynamic model with unit-commitment constraints more accurately models changes in emissions compared to its static counterpart. Second, we analyze a 240-bus model of the Western United States and show that, in the presence of grid-scale storage, computing LMEs dynamically is essential to accurately quantifying changes in emissions. We conclude and discuss future work in Section 5.
## 2 Problem formulation
In this section, we formulate the problem of computing the LMEs in a dynamically-constrained electricity system. First, Section 2.1 provides background information on the _dynamic dispatch problem_, a mathematical model for electricity networks with temporal constraints. We then describe our mathematical model for emissions and marginal emissions in static networks in Section 2.2, where we formally state the dynamic marginal emissions problem. Next, in Section 2.3, we describe two special cases of the dynamic marginal emissions problem that have been solved in previous work. Notably, both special cases are static, i.e., they do not incorporate any dynamic constraints. Finally, Section 2.4 gives three examples of dynamic devices, i.e., devices that cannot be represented in a static model.
### Dynamic dispatch model
In electricity systems, a _dispatch model_ is a mathematical model for deciding which electricity generators should be used to meet demand. Dispatch models are often formulated as convex optimization problems, where the variables are the amount of power produced by each generator, and the parameters include both the current electricity demand and the physical constraints on the system. When modeling emissions, past work has often considered _static_ dispatch models, i.e., models that only reflect a single instant in time [15, 22, 34, 40]. However, most real world electricity systems have _dynamic_ constraints--constraints that couple generator outputs across time periods. For example, a generator with ramping constraints can only change its power output by a small amount between successive time periods. In order to effectively model the impact of temporal constraints on emissions, we will study the _dynamic optimal power flow problem_,*
Footnote *: The dynamic (DC) optimal power flow problem has been well studied in the power systems community [42, 43] and is sometimes referred to as the dynamic economic dispatch problem.
\[\begin{array}{llll}\text{minimize}&\sum_{j=1}^{k}f_{j}(g_{j})\\ \text{subject to}&g_{j}\in\mathcal{G}_{j},&j\in[1:k],\\ &\mathbf{1}^{T}d_{t}=\mathbf{1}^{T}\tilde{g}_{t},&t\in[1:T],\\ &F(d_{t}-B\tilde{g}_{t})\leq u^{\max},&t\in[1:T],\end{array} \tag{1}\]
where the variable is \(G\in\mathbf{R}^{T\times k}\). In the above, we use \(g_{j}\) to refer to the \(j\)-th column of \(G\), and \(\tilde{g}_{t}\) to refer to the \(t\)-th row of \(G\). The matrix \(G\) represents the power output of \(k\) devices over \(T\) timesteps; each device can supply power, store power, or otherwise interact with the grid. The entry \(G_{tj}\) represents the output of device \(j\) at time \(t\).
Device Costs and Constraints. Each device \(j\) has three properties: a convex cost function \(f_{j}(g):\mathbf{R}^{T}\to\mathbf{R}\), a convex constraint set \(\mathcal{G}_{j}\subset\mathbf{R}^{T}\), and a location on the network. We model the device locations with a matrix \(B\in\{0,1\}^{n\times k}\) that maps each device to a particular node in the network, i.e., \(B_{ij}=1\) if device \(j\) is located at node \(i\), and \(B_{ij}=0\) otherwise. The objective of Problem (1) is to minimize the sum of the device costs, and the first constraint states that each device must stay within its constraint set.
Network Constraints. Problem (1) considers an electricity network with \(n\) nodes and \(m\) edges (transmission lines). Node \(i\in[1:n]\) has demand \((d_{t})_{i}\) at time \(t\in[1:T]\). The second constraint is that power must be balanced across the network, i.e., \(\mathbf{1}^{T}d_{t}=\mathbf{1}^{T}\tilde{g}_{t}\). Finally, the third constraint is that the power flowing across each transmission line is limited by its capacity. We define \(F\in\mathbf{R}^{m\times n}\) to be the _power flow distribution factor matrix_, where \(F_{\ell i}\) determines how a power injection at node \(i\) (withdrawn at node \(n\)) affects the power flow across line \(\ell\in[1:m]\). Because of thermal and voltage phase angle constraints, each line \(\ell\) can only transport up to \(u_{\ell}^{\max}\) units of power, modeled with the constraints \(F(d_{t}-B\tilde{g}_{t})\leq u^{\max}\).
Solution Map. Let \(D=(d_{1},\ldots,d_{T})\in\mathbf{R}^{Tn}\) be the concatenated vector of demand schedules and assume the solution to Problem (1) exists and is unique for all \(D\in\mathbf{R}^{Tn}\). Let \(G^{*}(D):\mathbf{R}^{Tn}\to\mathbf{R}^{T\times k}\) denote the optimal choice of \(G\) in Problem (1) as a function of \(D\). Because we assume the solution to Problem (1) exists uniquely for all \(D\), \(G^{*}\) is a well-defined function. We call \(G^{*}\) the _solution map_, and use the vector-valued function \(\tilde{g}_{t}^{*}(D):\mathbf{R}^{Tn}\to\mathbf{R}^{k}\) to denote the \(t\)-th row of \(G^{*}\). As we will see shortly, the solution map allows us to formalize the relationship between demand and emissions.
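For concreteness, the following CVXPY sketch instantiates Problem (1) with static generator devices and small hypothetical data; it is illustrative only, not the dispatch model of any real operator, and all names and values are assumptions.

```python
import cvxpy as cp
import numpy as np

T, n, m, k = 4, 3, 3, 3                      # periods, nodes, lines, devices
rng = np.random.default_rng(0)
d = rng.uniform(0.5, 1.0, (T, n))            # nodal demand schedules
B = np.eye(n)[:, :k]                         # device j sits at node j
F = rng.uniform(-0.5, 0.5, (m, n))           # power flow distribution factors
u_max = np.full(m, 5.0)                      # line capacities
g_max = np.full(k, 2.0)                      # generator capacities
b_cost = np.array([1.0, 2.0, 3.0])           # linear device costs

G = cp.Variable((T, k))                      # device outputs over time
constraints = [G >= 0, G <= np.tile(g_max, (T, 1))]
for t in range(T):
    constraints += [cp.sum(d[t]) == cp.sum(G[t]),       # power balance
                    F @ (d[t] - B @ G[t]) <= u_max]     # line limits
prob = cp.Problem(cp.Minimize(cp.sum(G @ b_cost)), constraints)
prob.solve()
G_star = G.value                             # the solution map evaluated at D
```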
### Locational marginal emissions
We model the emissions of generator \(i\) as a linear function of power output with rate \(c_{i}\), i.e., the total emissions at time \(t\) are \(c^{T}\tilde{g}_{t}\). Since the generator power outputs are determined by the dispatch model, the emissions at time \(t\) as a function of demand are \(E_{t}(D)=c^{T}\tilde{g}_{t}^{*}(D)\). The _total emissions_ over the entire time horizon are then \(E(D)=\sum_{t}E_{t}(D)\). Although we use a linear model throughout the remainder of the paper, it is straightforward to generalize all our results to nonlinear models. For example, if each generator has a nonlinear emissions function \(\gamma_{i}(g):\mathbf{R}\to\mathbf{R}\), then the total emissions at time \(t\) are \(E_{t}(D)=\sum_{i}\gamma_{i}((\tilde{g}_{t})_{i})\).
Problem statement. The LMEs \(\Lambda(D):\mathbf{R}^{Tn}\to\mathbf{R}^{Tn}\) are the marginal rate of change in total emissions given a marginal change in demand at a specific node and at a given time. In other words, the LMEs are the gradient of emissions with respect to demand, i.e., \(\Lambda(D)=\nabla E(D)\). The function \(\Lambda(D)\) is vector-valued, since changes in electricity consumption at different nodes and different times may have different impacts on emissions. As an illustration, we report a comparison between total emissions and LMEs for different values of demand at a given node in an arbitrary network (see Fig. 1). Locally, LMEs do indeed provide good approximations to the change in total emissions. It is, however, clear that these metrics are only locally valid and can sometimes be ill-defined, e.g., at points of non-differentiability of total emissions. The problem we study in this paper is how to compute \(\Lambda(D)\) when the solution maps \(\tilde{g}_{t}^{*}(D)\) are determined by the dynamic optimal power flow problem. As far as we are aware, no prior published results have shown how to compute LMEs for generic dynamic dispatch models.
### Special Case: Static Generators
When we restrict the devices (the functions \(f_{i}\) and sets \(\mathcal{G}_{i}\)) to be static generators, we recover previous analytical models [22, 39, 40]. The static generator device has constraint set \(\mathcal{G}=\{g\mid g^{\min}\leq g\leq g^{\max}\}\), where \(g^{\min},g^{\max}\in\mathbf{R}^{T}\), and cost function \(f(g)=\sum_{t}ag_{t}^{2}+bg_{t}\). The static generator could represent a traditional dispatchable generator, in which case \(g^{\max}=\alpha\mathbf{1}\), or a renewable resource, in which case the entries of \(g^{\max}\) may vary. Most importantly, the static generator has no temporal constraints: \(g_{t}\) is independent of the choice of \(g_{t^{\prime}}\) when \(t\neq t^{\prime}\). In a network with only static generator devices, the dynamic problem would simplify to \(T\) _static_ optimal power flow problems that can be solved independently. Moreover, if we remove the network constraints by setting \(F=0\), we recover the model used in [34] to empirically estimate emissions rates.
### Dynamic Devices
By addressing the dynamic optimal power flow problem in its full generality, our framework allows us to consider dynamic devices as well. These devices have temporal constraints, implying their actions at any
given time depend on their actions at other times. We give three examples below.
Ramp-constrained generators. Ramping constraints limit the rate at which a generator can change its output. These generators are modeled with the constraint set,

\[\mathcal{G}=\{g\mid g^{\min}\leq g\leq g^{\max},\ g_{t}-\rho\leq g_{t+1}\leq g_{t}+\rho,\ t\in[1:T-1]\},\]

where \(\rho\in\mathbf{R}\) is the ramping rate of the generator. Ramp-constrained generator devices have the same cost functions as static generator devices, \(f(g)=\sum_{t}ag_{t}^{2}+bg_{t}\). Ramping constraints are particularly useful in dynamic dispatch models with short time intervals, e.g., 15 minutes, and slow dispatching generators, like nuclear and hydro.
Storage devices. Storage devices include pumped-hydro resources, grid-scale batteries, and DER aggregators selling storage services to the grid. We define a storage device to have cost function \(f(g)=0\) and constraint set \(\mathcal{G}\) given by the set of \(g\in\mathbf{R}^{T}\) such that there exist \(s,\gamma,\delta\in\mathbf{R}^{T}\) satisfying,
\[\begin{array}{l}0\leq s\leq C,\qquad 0\leq\gamma\leq P,\qquad 0\leq \delta\leq P,\\ g_{t}=\delta_{t}-\gamma_{t},\qquad s_{t}=s_{t-1}+\eta\gamma_{t}-(1/\eta) \delta_{t},\end{array}\]
for \(t\in[1:T]\), where \(s_{0}=0\). Here \(C\in\mathbf{R}\) is the storage capacity, \(P\in\mathbf{R}\) is the maximum (dis)charge rate, and \(\eta\in(0,1]\) is the storage efficiency.
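A CVXPY sketch of this constraint set (variable and parameter names are hypothetical assumptions):

```python
import cvxpy as cp

def storage_device(T, C=10.0, P=5.0, eta=0.95):
    """Return the net output g of a storage device and its constraints."""
    g = cp.Variable(T)        # net injection: discharge minus charge
    s = cp.Variable(T)        # state of charge
    gam = cp.Variable(T)      # charging power (gamma)
    delta = cp.Variable(T)    # discharging power
    cons = [s >= 0, s <= C, gam >= 0, gam <= P,
            delta >= 0, delta <= P, g == delta - gam]
    s_prev = 0.0              # s_0 = 0
    for t in range(T):
        cons += [s[t] == s_prev + eta * gam[t] - (1 / eta) * delta[t]]
        s_prev = s[t]
    return g, cons
```

The returned pair can be dropped into a dispatch model: add cons to the constraint list and include g in the nodal power balance.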
Unit-commitment-constrained generators. Unit-commitment constraints are integer constraints specifying whether or not a generator is on. If generator \(i\) is on, it must produce at least a minimum power \(g_{i}^{\min}\) and stay on for a specified period of time. We model this by modifying the generator device constraint set to be \(\mathcal{G}=\mathcal{G}_{\text{static}}\cup\{0\}\), where \(\mathcal{G}_{\text{static}}\) is the constraint set of the equivalent static generator. Although the set \(\mathcal{G}\) is not convex, it can be made mixed-integer convex by introducing an integer variable \(z\in\{0,1\}\). Although the mixed-integer constraint makes (1) NP-hard, many good heuristics for solving mixed-integer convex programs are readily available through commercial solvers such as Gurobi [44].
## 3 Implicit differentiation-based LMEs
Figure 1: Illustration of marginal emissions rates. (Solid blue curve) Total emissions as a function of demand at a particular node. (Dashed red curves) The first-order approximations defined by the LMEs calculated at each red circle. The LMEs are the slopes of the dashed red curves.

In previous model-based studies, locational marginal emissions rates are derived by first calculating how generation changes with demand, and then multiplying the change in generation by the emissions rate of each generator. This is a manifestation of the chain rule, which states that the LMEs are \(\Lambda(D)=\sum_{t=1}^{T}J\tilde{g}_{t}^{*}(D)^{T}c\), where \(Jf(z)\in\mathbf{R}^{k\times n}\) denotes the Jacobian of a function \(f:\mathbf{R}^{n}\to\mathbf{R}^{k}\) evaluated at \(z\in\mathbf{R}^{n}\). Therefore, the main technical challenge when computing \(\Lambda(D)\) lies in computing an analytical expression for the Jacobians \(J\tilde{g}_{t}^{*}(D)\).
In previous studies that only consider static dispatch models, i.e., \(T=1\), one only needs to derive a single expression for \(J\tilde{g}_{1}^{*}(D)\in\mathbf{R}^{k\times n}\). In the general setting, the situation is much more complex--one must derive \(T\) Jacobians \(J\tilde{g}_{t}^{*}(D)\) of size \(k\times Tn\). Although deriving an analytical expression might be possible, we take a simpler and more powerful approach in this paper: we use the _implicit function theorem_ to compute the Jacobians \(J_{D}\tilde{g}_{t}^{*}(D)\). Our approach essentially generalizes the analytical derivations of [22, 40] to arbitrary convex optimization-based dispatch models, producing identical results in the simpler static setting.
**Theorem 1** (Implicit Function Theorem, [45]).: _Suppose \(K:\mathbf{R}^{n}\times\mathbf{R}^{r}\to\mathbf{R}^{r}\) is strictly differentiable at \((D_{0},x_{0})\in\mathbf{R}^{n}\times\mathbf{R}^{r}\) and \(K(D_{0},x_{0})=0\). Moreover, suppose \(J_{x}K(D_{0},x_{0})\) is nonsingular, where \(J_{z}f(z,y)\in\mathbf{R}^{k\times n}\) denotes the partial Jacobian of a function \(f:\mathbf{R}^{n}\times\mathbf{R}^{r}\to\mathbf{R}^{k}\) with respect to \(z\), evaluated at \((z,y)\). Then the solution mapping \(x^{*}(D)=\{x\in\mathbf{R}^{r}\mid K(D,x)=0\}\) is single-valued in a neighborhood around \((D_{0},x_{0})\) and strictly differentiable at \(D_{0}\) with Jacobian_
\[Jx^{*}(D_{0})=-J_{x}K(D_{0},x_{0})^{-1}J_{D}K(D_{0},x_{0}).\]
The implicit function theorem states that if a differentiable system of equations \(K(D,x)=0\) has a solution at \((D_{0},x_{0})\), and the corresponding partial Jacobian \(J_{x}K(D_{0},x_{0})\) is non-singular, then the solution map \(x^{*}(D)\) is a locally well-defined function with Jacobian given by Theorem 1.
In our setting, the solution map \(G^{*}(D)\) is not the solution to a system of equations, but is rather the solution to a convex optimization problem. From convex analysis, we know that \(G\) solves Problem 1 if and only if it solves the Karush-Kuhn-Tucker (KKT) equations \(K(D,G)=0\) [45, 46]. Therefore, we can apply the implicit function theorem to the KKT equations to compute \(JG^{*}(D)\). If we assume that the device objectives \(f_{j}\) are twice differentiable and that the device constraints \(\mathcal{G}_{j}\) can be parametrized via a set of twice differentiable inequalities, then the KKT equations are strictly differentiable, and \((D,G^{*}(D))\) satisfy the conditions of Theorem 1 (for more details on the implicit function theorem and its applications to optimization, we refer the reader to [45] and the references therein). By combining this with the chain rule, we can then derive marginal emissions rates for the dynamic optimal power flow problem specified by Problem (1). To summarize, we compute LMEs in two steps. First, we compute the gradient of emissions with respect to the device outputs, i.e., \(dE/d\tilde{g}_{t}=c\) in the case of linear emissions functions. Then, we multiply this by the Jacobian \(JG^{*}(D)\), which is computed using the implicit function theorem. The resulting metrics indicate the changes in emissions resulting from a marginal change in demand, as illustrated in Fig. 1.
Critically, this derivation works for any dynamic dispatch model that fits the form in Problem 1, i.e., regardless of the choice of device cost functions \(f_{j}\) and constraint sets \(\mathcal{G}_{j}\) (as long as they are convex and twice differentiable). When calculating the LMEs, different choices of device characteristics in Problem (1) only change the KKT operator \(K(D,G)\) and its corresponding partial Jacobians \(J_{G}K(D,G)\) and \(J_{D}K(D,G)\); however, the general derivation using the implicit function theorem remains unchanged. In practice, these Jacobians can either be derived analytically or computed using an automatic differentiation library, such as [47], given a programmatic specification \(f_{j}\) and \(\mathcal{G}_{j}\).
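To illustrate the mechanics, the NumPy sketch below applies the implicit function theorem to the KKT system of a small equality-constrained quadratic dispatch problem; active inequality constraints would enter the same block system. All data are hypothetical, and this is a sketch of the technique rather than the authors' implementation.

```python
import numpy as np

k, n = 4, 2                                   # devices, nodes
rng = np.random.default_rng(1)
Q = np.diag(rng.uniform(1.0, 2.0, k))         # strictly convex quadratic costs
b = rng.uniform(0.0, 1.0, k)                  # linear cost terms
A = rng.uniform(0.5, 1.5, (n, k))             # maps device outputs to nodal balances
d = np.array([1.0, 1.5])                      # nodal demands
c = np.array([0.9, 0.0, 0.5, 0.2])            # device emissions rates

# Dispatch step: solve the KKT equations K(d, (g, nu)) = 0 directly.
#   stationarity: Q g + b + A^T nu = 0,   feasibility: A g - d = 0.
KKT = np.block([[Q, A.T], [A, np.zeros((n, n))]])
sol = np.linalg.solve(KKT, np.concatenate([-b, d]))
g_star = sol[:k]

# Implicit function theorem: J(g, nu)(d) = -(J_z K)^{-1} J_d K, with J_d K = [0; -I].
J_dK = np.vstack([np.zeros((k, n)), -np.eye(n)])
J_sol = -np.linalg.solve(KKT, J_dK)           # (k + n) x n Jacobian of the solution map
lmes = J_sol[:k].T @ c                        # chain rule: LMEs = (dg*/dd)^T c
print(lmes)                                   # one marginal emissions rate per node
```

The same recipe extends to Problem (1): the KKT operator grows to include the network and device constraints, and the partial Jacobians can be assembled by an automatic differentiation library rather than by hand.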
_Remark:_ Since the dynamic OPF problem includes the static OPF problem and the economic dispatch problem as special cases, this work generalizes the derivations in [22, 40] and [34]. However, our method is by no means constrained to the dynamic dispatch model used in this paper; implicit differentiation can be used to derive the marginal emissions rates for any convex-optimization based dispatch model. Importantly, this includes many of the dispatch models used by system operators in practice, a point we revisit in Section 5.
### Complexity and Software Implementation
We implement our method in Julia [48] for all the aforementioned dispatch models and constraints. Our implementation is publicly available on GitHub in the package DynamicMarginalEmissions.jl. We use Convex.jl[49] to model the problem and solve it with external solvers, such as ECOS [50] or Gurobi [44]. For the large-scale network discussed in Sections 4.2 and 4.3 (i.e., \(n=240\) nodes, \(k=136\) generators,
\(m=448\) lines) and \(T=1\) time periods, our software package solves the dispatch problem and computes the resulting LMEs in just under a second on a modern laptop with a 2.3 GHz quad-core processor. For the same problem with \(T=24\) time periods, the same machine takes about two minutes to solve the problem and compute the LMEs.
Our software package offers a flexible interface and can be used to analyze networks with different physical characteristics (e.g., locations of generators, transmission line ratings) and constraints (e.g., ramping rates). After specifying the network parameters, one can compute the LMEs with a single function call. Because our implementation is open-source, reasonably fast, and easy to use, we believe it is of value to the broader community.
In general, we expect our method to scale well to realistic, large problems encountered by grid operators. Specifically, let \(z=T\cdot\max(m,n,k)\). Solving the dispatch problem itself requires \(O(z^{4})\) operations. Once the dispatch problem is solved, constructing and inverting the Jacobian to compute LMEs only requires \(O(z^{3})\) operations, which is negligible compared to the complexity of solving the dispatch problem itself. Since most grid operators must solve dispatch problems at regular intervals, e.g., every 15 minutes to clear the real-time market, computing LMEs can be viewed as a post-processing step requiring little additional compute time.
## 4 Simulation Results
In this section, we illustrate the applicability and utility of the suggested approach using two different simulation setups. First, in Section 4.1, we compute the LMEs for a static model and a dynamic model with unit-commitment constraints using real demand and generator data from the U.S. Western Interconnection [34]. We compare each model's LMEs to real changes in emissions and to estimates from a merit-order-based model [34]. Second, in Sections 4.2, 4.3 and 4.4, we illustrate the methodology on a recent reduced network model of the same region [51]. Using the original dataset, we highlight the geographic variance of marginal emissions across the network. Then, we investigate the potential impacts of hypothetically large penetrations of storage and renewable generation. We conclude by comparing the LMEs obtained from a static approximation to those of the dynamic model.
### Economic Dispatch in the Western United States
In our first experiment, we analyze electricity data from the U.S. Western Interconnection system in 2016. The Western Interconnection dataset is compiled in [34] and contains weekly generator capacities, generator costs, and generator emission rates for large (above 25 MW capacity) fossil fuel generators in the Western Interconnection, as well as hourly total demand and total carbon emissions. Because no transmission data is available, we consider models without transmission constraints. The LMEs for a static model, a dynamic model with unit commitment constraints, and a state-of-the-art static merit-order method are compared to the real hourly rate of change in emissions.
Models. We analyze two models, which we compare to a baseline. First, we analyze the results of the simple economic dispatch model (1), with linear costs \(f_{i}(g_{i})=b_{i}g_{i}\), where \(b_{i}\) is the marginal operation cost of generator \(i\). Second, we analyze a dynamic economic dispatch model with unit commitment constraints, over a time horizon of \(T=24\). The unit-commitment constraints are only applied to coal generators, all of which are given a minimum output of \(g_{i}^{\min}=0.4g_{i}^{\max}\). We benchmark our results against the _reduced-order dispatch model (RODM)_ described in [34]. The core of the RODM is a _merit order_-based dispatch process: generators are dispatched in ascending order of cost. After dispatching generators via the merit order, the _marginal generator_--the generator next in line to modify its output to meet an increase in demand--is identified to find the marginal emissions rate of the system. In [34], post-processing steps are applied to generate the marginal emission rates. Notably, when no post-processing is applied, the RODM is identical to the economic dispatch model in (1) with linear costs \(f_{i}(g_{i})=b_{i}g_{i}\).
Results. After generating LMEs \(\lambda_{t}\) for every hour \(t=1,\ldots,T\) of the year, where \(T=8760\), we compare the resulting LMEs to the actual hourly changes in emissions. Specifically, we compute the change in demand
\(\Delta d_{t}\) and change in emissions \(\Delta E_{t}\) for every hour of the year. Each model's estimated change in emissions is given by \(\Delta\hat{E}_{t}=\lambda_{t}\Delta d_{t}\). In order to compare the models, we compute the absolute error \(|\Delta\hat{E}_{t}-\Delta E_{t}|/Z\) of each model's estimate against the true hourly change in emissions at each timepoint, where errors are normalized by the mean absolute change in emissions \(Z=(1/T)\sum_{t=1}^{T}|\Delta E_{t}|\). We use absolute error, instead of square error, to minimize the effect of outliers.
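In code, this evaluation is a few lines (a sketch; array names are hypothetical):

```python
import numpy as np

def normalized_abs_errors(lmes, dd, dE):
    """Per-hour |model - data| errors, normalized by the mean absolute
    change in emissions, as described above."""
    dE_hat = lmes * dd                       # model-estimated changes in emissions
    Z = np.mean(np.abs(dE))                  # normalization constant
    return np.abs(dE_hat - dE) / Z
```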
A violin plot of absolute errors is displayed in Figure 2, Panel A. As expected, the economic dispatch model and the merit-order model from [34] perform similarly--the merit-order model only differs from the economic dispatch model in its post-processing. Notably, the unit-commitment model better models hourly changes in emissions than both the economic dispatch model and the merit-order model, reducing the mean absolute error by \(8.2\%\). We attribute this to the fact that the unit-commitment model accurately represents dynamic constraints that appear in real-world dispatch processes, namely that coal generators cannot rapidly turn on and off again.
LMEs as a function of demand are also reported in Panel B of Figure 2. Historical LMEs are computed as \(\lambda_{t}=\Delta E_{t}/(\Delta d_{t}+\epsilon)\), where \(\epsilon=0.5\) MWh is a small value used to avoid unreasonably large LMEs when \(\Delta d_{t}\) is small. Following a similar procedure to [34], the LMEs for the data and for each model are smoothed using the mean of a rolling window of \(20\%\) of the data. Shaded regions representing the interquartile range (IQR) of the data are also plotted to better understand the variance of each model. After averaging, the LMEs produced by the economic dispatch model most closely resemble the data. However, the variation is significantly reduced in the unit-commitment model, and the IQR most closely resembles that of the data compared to both other models.
### 240-bus Network Model
In this experiment, we study LMEs using a recent 240-bus network that models the Western United States in 2018 [51]. The dataset includes generator capacities, costs, and ramping rates; hourly demand and renewable generation for \(T=8760\) hours; and the efficiencies and capacities of four pumped hydro storage units. We solve the dispatch problem and compute hourly LMEs using a dynamic model with storage devices as described in Section 2. We use this experiment to demonstrate the impact of network constraints on the LMEs. Since the network has relatively little storage and only a few generators with ramping constraints, we do not analyze the impact of storage and dynamic constraints in this experiment.

Figure 2: (Panel A) Emissions error for the model from [34] (DA), the economic dispatch model (ED), and the unit-commitment model (UC), normalized by the mean absolute change in emissions. The ED model, effectively the same as the DA model without post-processing, performs similarly to DA. The UC model reduces the error by \(8.2\%\) compared to the DA model, suggesting unit commitment more accurately represents the real-world dispatch process. (Panel B) LMEs as a function of total demand. Emissions rates are smoothed using the mean of a rolling window of \(20\%\) of the data. Shaded regions represent the interquartile range (IQR), i.e., the middle \(50\%\) of the data, of the rolling window.
We report the distribution of daily total emissions in the left frame of Figure 3, Panel A, and the distribution of LMEs at 6pm in August in the left frame of Panel B. We observe that, on average, the distribution of nodal LMEs is narrowly concentrated around its mode. However, we also note that the distribution of LMEs has relatively long tails; although most of the LMEs are close to the mode, a few have drastically different values. In Panel C, we display a map of LMEs at 6pm in August (averaged over days of the month), illustrating the large geographic variation in emissions rates. Panel C demonstrates how transmission constraints can create large discrepancies in LMEs even within small geographic regions. For example, despite their geographic proximity, the California Bay Area differs significantly from the Sacramento area. The local diversity of the LMEs emphasizes the importance of modeling the network when computing emissions rates: a small geographic change can cause large differences in emissions rates.
### High Renewable Scenario in the 240-bus Network
To illustrate the impact of grid-level changes on emissions, a high-renewable version of the 240-bus network is presented in this section. Specifically, we uniformly scale renewable generation in the original 2018 model so that renewable generators meet 27% of total demand (compared to 13.5% originally). We also add 15 GW of 4-hour battery storage distributed among the ten largest renewable nodes proportional to generation capacity. These batteries have a round trip efficiency of 89.8% with symmetric charging and discharging efficiencies and are constrained to start and end each day with 50% of their total capacity. As in Section 2, we assume the grid operator manages these batteries to minimize the overall cost of electricity. The right frames of Panels A and B in Figure 3 show the distribution of total emissions and LMEs in an identical manner to the 2018 case. Similarly, Panel D displays a map of average nodal LMEs akin to Panel C. The 2018 and the high renewable scenarios differ in several ways. First, as expected, _total_ emissions decrease significantly in the high renewable scenario. The changes to the locational _marginal_ emissions rates, on the other hand, vary significantly from node to node. For example, adding renewable generation and storage causes LMEs to decrease at nodes in southern California, but to increase at nodes in Oregon and Washington. In general, the distribution of nodal LMEs exhibits high variance and displays two modes, in contrast with the 2018 case.‡ Overall, the changes in LMEs are complex and unintuitive: because of the presence of storage, nodal LMEs depend not only on non-local generation and transmission constraints, but also on those of every other time period. We believe this complexity is one reason grid operators should use an exact method for calculating LMEs (instead of relying, for example, on network-wide heuristics).
Footnote ‡: We observe these differences for most hours of the day, but only display results for 6pm for concision.
### Comparison Between Static and Dynamic LMEs
In order to demonstrate the value of explicitly integrating dynamic constraints, we compare the true dynamic LMEs to the analogous "static LMEs" that arise from eliminating dynamic constraints. Specifically, we consider how the LMEs would differ between static and dynamic models with the exact same loads and device outputs. Since static models cannot incorporate dynamic devices, we first solve the dynamic dispatch problem, then fix the battery charging and discharging schedules and consider them as parameters to a series of independent static problems, eliminating any dynamic constraints between subsequent timesteps. We then compute the LMEs of the resulting model, which now only has static devices and constraints.
More formally, consider the dynamic optimal power flow problem in (1), with the devices ordered so that the first \(k_{1}\) devices are static and the remaining \(k_{2}\) devices are dynamic. After solving the dynamic problem with \(k=k_{1}+k_{2}\) devices to obtain device outputs \(G^{*}\) and LMEs \(\Lambda^{*}\), we solve the static problem,
\[\begin{array}{ll}\text{minimize}&\sum_{j=1}^{k_{1}}f_{j}(g_{j})\\ \text{subject to}&g_{j}\in\mathcal{G}_{j},&j\in[1:k_{1}],\\ &g_{j}=g_{j}^{*},&j\in[k_{1}+1:k],\\ &\mathbf{1}^{T}d_{t}=\mathbf{1}^{T}\tilde{g}_{t},&t\in[1:T],\\ &F(d_{t}-B\tilde{g}_{t})\leq u^{\max},&t\in[1:T],\end{array} \tag{2}\]
where the variable is again \(G\in\mathbf{R}^{T\times k}\). Since the schedules of the \(k_{2}\) dynamic devices are fixed, \(\tilde{g}_{t}\) is independent of \(\tilde{g}_{t^{\prime}}\) for \(t\neq t^{\prime}\), and Problem (2) can be decomposed into \(T\) independent optimization problems, if desired. We compute the resulting 'static' LMEs \(\Lambda^{\rm static}\) from solving Problem (2), and compare them to \(\Lambda^{*}\). In theory, the difference between the LMEs of a dynamic model and its static approximation can be arbitrarily large, as seen in the following example.

Figure 3: (Panel A) Distribution (over days of the year) of network-wide, total daily emissions for both the 2018 case and the high renewable scenario. The mean of the distribution is denoted with a horizontal black line. (Panel B) Distribution (over nodes and days of the month) of LMEs during August at 6pm, both for the 2018 case and the high renewable scenario. The mean of the distribution is again denoted with a horizontal black line. (Panel C) A map of nodal LMEs through the 240-bus WECC network during the same time period (averaged over the month). (Panel D) Same as Panel C, but for the high renewable scenario with 15 GW of 4-hour storage.
_Example:_ Consider a single-node network with \(T=2\) timesteps and \(k=3\) devices. The first device is a gas generator with constraint set \(\mathcal{G}_{1}=[0,10]\times[0,10]\) (i.e., the generator has capacity \(10\) in both time periods), per-unit cost \(1\), and emissions rate \(c_{1}=500\). The second device is a solar panel with constraint set \(\mathcal{G}_{2}=[0,10]\times\{0\}\) (i.e., the generator has capacity \(10\) in the first period and no capacity in the second period), per-unit cost \(0.1\), and emissions rate \(c_{2}=0\). Finally, the third device is a battery with constraint set \(\mathcal{G}_{3}\) specified by Section 2.4, with capacity \(C=10\), charging rate \(P=10\), and efficiency \(\eta=1\). Assume a constant demand schedule \(d=(1,1)\). The economic dispatch will result in the following device outputs: \(g_{1}=(0,0)\), \(g_{2}=(2,0)\), and \(g_{3}=(-1,1)\), i.e., the solar panel will produce two units of power, storing one unit in the battery to serve the second time period, and curtail the remaining \(8\) units of its capacity. The dynamic LMEs are naturally \(\Lambda(D)=(0,0)\): if we had slightly higher demand in either period, the solar panel would curtail its output less to meet demand. Now, if we fix the battery charging schedule to \(g_{3}=(-1,1)\), solving the static problem gives the same device outputs \(g_{1}=(0,0)\) and \(g_{2}=(2,0)\). However, the resulting static LMEs are \(\Lambda(D)=(0,500)\), a drastically different result. This is because the static approximation fixes the battery schedule, so changes in demand during period two must be met by a change in the gas generator output.
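This toy example can be checked numerically, e.g., with a small CVXPY model and finite differences on the period-two demand (a sketch under the stated parameters; the proposed method would instead differentiate the KKT system exactly):

```python
import cvxpy as cp
import numpy as np

def toy_emissions(d, fixed_batt=None):
    """Total emissions of the two-period toy network at demand d.
    If fixed_batt is given, the battery schedule is frozen (static approximation)."""
    gas = cp.Variable(2)    # per-unit cost 1, emissions rate 500
    sol = cp.Variable(2)    # per-unit cost 0.1, zero emissions
    ch, dis = cp.Variable(2), cp.Variable(2)
    s = cp.Variable(2)      # battery state of charge, eta = 1, s_0 = 0
    cons = [gas >= 0, gas <= 10, sol >= 0, sol <= np.array([10.0, 0.0]),
            ch >= 0, ch <= 10, dis >= 0, dis <= 10, s >= 0, s <= 10,
            s[0] == ch[0] - dis[0], s[1] == s[0] + ch[1] - dis[1],
            gas + sol + dis - ch == d]          # single-node power balance
    if fixed_batt is not None:
        cons += [dis - ch == fixed_batt]        # freeze the dynamic device
    cp.Problem(cp.Minimize(cp.sum(gas) + 0.1 * cp.sum(sol)), cons).solve()
    return 500 * np.sum(gas.value)

h = 1e-3
d0, d1 = np.array([1.0, 1.0]), np.array([1.0, 1.0 + h])
lme_dyn = (toy_emissions(d1) - toy_emissions(d0)) / h               # approx. 0
batt = np.array([-1.0, 1.0])
lme_stat = (toy_emissions(d1, batt) - toy_emissions(d0, batt)) / h  # approx. 500
```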
The toy example above demonstrates that dynamic and static LMEs can differ significantly in theory. We verify that this occurs in practice using the 240-bus network from Section 4.4, where we use the same procedure to compute dynamic LMEs and their static approximations.
We report these differences in Figure 4, Panel A, where we display the distribution across all nodes and all days of the year of the root mean squared (RMS) deviation between the vector of daily emissions rates for the static model, \(\lambda_{\rm static}\in\mathbf{R}^{24}\), and the dynamic model, \(\lambda_{\rm dynamic}\in\mathbf{R}^{24}\). The average RMS deviation (normalized by the median LME) is \(28.40\%\), indicating that the static and dynamic models yield significantly different results. In Panel B, we illustrate the static and dynamic LMEs for three randomly sampled days. While static LMEs are very good approximations in some instances (e.g., morning hours in top-left), they deviate significantly in others. These results suggest that ignoring dynamic constraints and simply computing static LMEs is not sufficient to model emissions in dynamic networks: explicitly computing dynamic LMEs is essential to understanding emissions rates in systems with significant dynamic constraints, such as large grid storage capacity.

Figure 4: (Panel A) Distribution of root mean squared (RMS) deviation between nodal LMEs produced by the static model and the dynamic model, normalized by the median LME. The average RMS deviation between the static and dynamic model is \(28.40\%\). (Panel B) Hourly time series of static LMEs (blue) and dynamic LMEs (yellow) during three sample days.
## 5 Conclusion
In this paper, we introduce a novel method for computing locational marginal emissions rates using implicit differentiation. We use this method to compute the LMEs of dynamic dispatch models, i.e., dispatch problems containing temporal constraints.
Using real WECC electricity and emissions data, we find that incorporating these dynamic constraints improves model accuracy by 8.2%. Finally, we observe that dynamic LMEs are difficult to approximate with their static counterparts: in a synthetic approximation of the WECC network, static and dynamic LMEs have a normalized average RMS deviation of 28.40%. Since flexible loads and energy storage are expected to play a large role in future grids, we believe incorporating dynamic constraints will be essential to accurately modeling LMEs.
The method presented in this paper generalizes previous methods to arbitrary convex optimization-based dispatch models. Since many system operators use convex optimization-based dispatch models in day-ahead and real-time electricity markets [52], they could use this method to publish marginal emissions factors in real time. Although these models can be notably more complex than those analyzed in academic research, the proposed method can compute marginal emissions factors for any such model, as long as they can be represented as convex optimization programs. Moreover, by leveraging automatic differentiation software and optimization modeling languages [53], the system operator would only need to specify the objective and constraints of their dispatch problem. LMEs could then be published alongside LMPs to provide real time emissions information to electricity market participants. This could be helpful, for example, to a large internet company choosing to reduce emissions by directing internet traffic to servers in low emitting regions, a problem considered in [54], or more generally to operators wanting to define optimal load management strategies [38].
Finally, we comment on three directions for future work. First, our experimental results indicate that LMEs in dynamic models often display complex behaviors and are difficult to interpret due to temporal and network constraints. Deciphering the mechanisms underlying the structure of the LMEs in different settings would be useful in communicating these results and translating them into grid planning or policy decisions. Second, we note that computing LMEs in large networks could be computationally intensive. Exploiting network structure and using distributed computation could yield significant performance gains. Third, our paper shows how to compute LMEs when the full network model is available. In some cases, however, the network model may be unavailable to the interested party. Understanding how to estimate the parameters of the electricity network from publicly available data (using the methods developed in [55], for example) and then deriving marginal emissions factors from the learned model is an interesting area of research.
## Acknowledgements
The authors thank Liang Min and Ines Azevedo for their valuable comments and suggestions.
Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. |
2309.07832 | VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline
Reinforcement Learning | We present VAPOR, a novel method for autonomous legged robot navigation in
unstructured, densely vegetated outdoor environments using offline
Reinforcement Learning (RL). Our method trains a novel RL policy using an
actor-critic network and arbitrary data collected in real outdoor vegetation.
Our policy uses height and intensity-based cost maps derived from 3D LiDAR
point clouds, a goal cost map, and processed proprioception data as state
inputs, and learns the physical and geometric properties of the surrounding
obstacles such as height, density, and solidity/stiffness. The fully-trained
policy's critic network is then used to evaluate the quality of dynamically
feasible velocities generated from a novel context-aware planner. Our planner
adapts the robot's velocity space based on the presence of entrapment inducing
vegetation, and narrow passages in dense environments. We demonstrate our
method's capabilities on a Spot robot in complex real-world outdoor scenes,
including dense vegetation. We observe that VAPOR's actions improve success
rates by up to 40%, decrease the average current consumption by up to 2.9%, and
decrease the normalized trajectory length by up to 11.2% compared to existing
end-to-end offline RL and other outdoor navigation methods. | Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Mohamed Elnoor, Dinesh Manocha | 2023-09-14T16:21:27Z | http://arxiv.org/abs/2309.07832v2 | VAPOR: Legged Robot Navigation in Unstructured Outdoor Environments using Offline Reinforcement Learning
###### Abstract
We present VAPOR, a novel method for autonomous legged robot navigation in unstructured, densely vegetated outdoor environments using offline Reinforcement Learning (RL). Our method trains a novel RL policy using an actor-critic network and arbitrary data collected in real outdoor vegetation. Our policy uses height and intensity-based cost maps derived from 3D LiDAR point clouds, a goal cost map, and processed proprioception data as state inputs, and learns the physical and geometric properties of the surrounding obstacles such as height, density, and solidity/stiffness. The fully-trained policy's critic network is then used to evaluate the quality of dynamically feasible velocities generated from a novel context-aware planner. Our planner adapts the robot's velocity space based on the presence of entrapment inducing vegetation, and narrow passages in dense environments. We demonstrate our method's capabilities on a Spot robot in complex real-world outdoor scenes, including dense vegetation. We observe that VAPOR's actions improve success rates by up to 40%, decrease the average current consumption by up to 2.9%, and decrease the normalized trajectory length by up to 11.2% compared to existing end-to-end offline RL and other outdoor navigation methods. Code implementation is available here.
## I Introduction
Autonomous robot navigation in complex outdoor scenes is an essential capability for many applications, including precision agriculture [1], search and rescue operations in forested environments [2], reconnaissance [3], etc. There are two major challenges in navigating such scenarios. Firstly, the robot must perceive and differentiate non-solid/pliable obstacles (e.g., tall grass) from solid/non-pliable obstacles (e.g., trees) [4]. Pliable obstacles can be safely traversed through, whereas non-pliable obstacles must be avoided. Secondly, apart from avoiding collisions, the robot also faces challenges such as narrow passages, and scenarios where the vegetation could wrap/attach onto the robot and entrap it. The robot's navigation must be capable of handling such adverse situations.
To address the perceptual challenges outdoors, methods based on image classification [4], semantic segmentation [5], and anomaly detection [6] using supervised learning have been employed. However, such works require extensive manual annotation and labeling to identify traversable terrain during training. Such models also may not align with the actual traversability capabilities of different robots due to varying dynamic constraints. This restricts the robot's navigation and could lead to highly conservative, meandering trajectories [7], or freezing behaviors [8]. Imitation learning techniques have also been proposed for outdoor navigation, but the resulting models may not generalize well [9].
Conversely, outdoor navigation methods based on online reinforcement learning (RL) [14] do not require human labeling since they are trained using a robot's active interactions with a simulated environment. Nevertheless, such models exhibit severe performance degradation during real-world deployment due to sim-to-real transfer issues [15]. Training complex models using RL requires high-fidelity simulations, which may not be available, especially for complex scenarios. To alleviate such shortcomings, offline-RL [16] methods have been proposed, where a model is trained using data collected in real-world environments, reducing the sim-to-real transfer issues.
However, none of the existing methods for outdoor navigation have accounted for the constraints that complex real-world scenes impose on the robot's velocities. For instance, while traversing through tall grass and bushes, angular motions could cause the vegetation to easily wrap around the robot, restricting its motion. Furthermore, in cases where solid obstacles create narrow passages, the robot would have to rotate/reorient itself to maneuver through the narrow free space. Therefore, the robot's executable actions must be adapted based on the environment.
Fig. 1: Legged robot trajectories generated when navigating through complex outdoor vegetation using VAPOR, end-to-end CQL-SAC [10], IQL [11], BCO [12], VERN [4], and DWA [13]. VAPOR identifies the entrapment in vines and uses holonomic actions to minimize the instability and current consumption, instead of the excessive angular actions taken by the other methods, which result in further entrapment. Hence, in this scenario, VAPOR moves backward with minimal angular motion to reduce entanglement with vines.
**Main contributions:** To address these challenges, we propose VAPOR, an offline RL-based trajectory evaluation model combined with a context-aware planner designed to generate dynamically feasible velocities to operate a legged robot in challenging outdoor scenes. VAPOR's offline RL formulation allows it to be trained using data collected in the real world [17] that is automatically compiled for training, alleviating sim-to-real transfer issues. The novel components of our work are:
* We propose a novel offline RL-based actor-critic network to learn a Q-function to evaluate a legged robot's candidate actions and velocities in terms of their ability to reach the goal, avoid solid, non-pliable vegetation, and other desirable behaviors. The network consists of spatial and channel attention layers to learn the spatial correlations in the input observation space. Our model is trained using real-world data collected in dense environments that is automatically compiled into states and actions between randomly chosen start and goal states. This alleviates the sim-to-real transfer issues prevalent in existing RL methods and results in an improvement of up to 40% in terms of success rate.
* A novel observation space to sense dense vegetation, consisting of robot-centric height and intensity cost maps obtained by processing lidar point clouds, a goal map indicating the distance and direction to the goal, and proprioceptive signals from the legged robot's joints to indicate its stability. The height and intensity maps accurately represent the height and solidity (or, inversely, the pliability) of the surrounding vegetation. The goal map and proprioception aid with spatially correlating the vegetation's properties with the robot's intended motion direction and stability during training.
* A novel context-aware motion planner that switches between (1) a holonomic velocity space to minimize the risk of entrapment in vegetation, assessed from proprioceptive signals, and (2) a non-holonomic velocity space to navigate narrow passages between solid, non-pliable vegetation. Further, it generates dynamically feasible, smooth candidate actions/velocities to be assessed by VAPOR's Q-function. VAPOR is evaluated on a real Boston Dynamics Spot robot in unstructured outdoor scenes.
## II Related Work
In this section, we discuss the existing literature on vegetation perception in outdoor environments, and offline RL methods used for navigation. Finally, we discuss the existing holonomic planning methods.
### _Outdoor Vegetation Perception_
Navigating robots in outdoor environments, particularly through vegetation, is a challenging task that has received increasing attention in recent years [4, 18, 19]. Existing approaches tackle this issue using various sensory modalities and learning techniques. For instance, [20] adopts a self-supervised approach to estimate the support surface in vegetation employing 3D point clouds and RGB images. Despite their promising results, the system requires manual labeling, which could be time-consuming and less scalable for real-world deployments. In [4], the authors use RGB images and 2D Lidar to create traversability cost maps in dense vegetation environments. Stone et al. [21] use an infrared (IR) sensor and an RGB camera for vegetation detection. While effective in certain conditions, these camera-based methods are often vulnerable to environmental factors such as changing lighting and motion blur, thereby limiting their robustness. Iberraken et al. [22] demonstrate the use of a 2D LiDAR to navigate through structured vineyard fields.
Some recent works have shifted their focus from external sensors to proprioceptive modalities to perceive vegetation [23, 24]. While proprioception offers reliable feedback about the robot's internal state, it inherently lacks the capability for look-ahead predictions before traversing a given terrain, especially in the absence of exteroceptive sensors. Our work combines exteroception (3D point clouds) with proprioception for robust and efficient navigation through outdoor vegetation.
### _Offline RL based Robot Navigation_
Reinforcement Learning (RL) has been fundamental to robot navigation [14, 25, 26, 27], providing methods for autonomous decision-making based on interaction with the environment. However, traditional online RL often falls short in situations where real-time data collection is impractical or where the lack of realistic simulation increases the sim-to-real gap [28], e.g., when navigating through dense vegetation or hazardous, complex terrains. On the other hand, offline RL has emerged as a promising alternative, designed to optimize policies based on pre-collected datasets. Among the foundational works in offline RL is the study by Levine et al. [16], which outlines the key methodologies and challenges. Methods based on Imitation Learning (IL) have also leveraged collected data [29, 30]. However, IL is often restricted by the limitations of the human operator (expert) who collected the data, meaning it cannot generally surpass the operator's performance. In contrast, offline RL aims to optimize the behavioral policy based on the dataset, offering the potential for more generalized and sometimes superior strategies [11, 31, 17]. Kostrikov et al. [11] introduce Implicit Q-Learning (IQL), which implicitly estimates the value function without querying the Q function of unseen actions. While IQL shows promise, it struggles in tasks with long planning horizons. Shah et al. [17] mitigate this limitation by combining IQL with topological graphs. Nevertheless, their method primarily relies on RGB images, which are susceptible to lighting changes, motion blur, etc.
On the other hand, [10] developed Conservative Q-Learning (CQL) to enhance the robustness of the learned policy. This method lower-bounds the true value of its learned Q function. Following its superior performance with complex data distributions, we extend this method by employing data from a 3D LiDAR and a legged robot's joint encoders.
### _Holonomic Planning_
Traditional robotic planning often focuses on non-holonomic planners, largely because many robots, including wheeled robots, inherently possess non-holonomic constraints [32, 33, 34, 35]. Conversely, robots with higher degrees of freedom (e.g., legged or manipulator robots) can benefit from holonomic planning methods [36, 37]. However, such planners lack the ability to adapt the robot's velocity space based on environmental constraints, especially in dense vegetation.
## III VAPOR: Vegetation-Aware Planning using Offline Reinforcement Learning
### _Preliminaries_
We mathematically formulate our navigation problem as a Markov Decision Process (MDP) with continuous states and actions. Our MDP can be defined as \(\mathcal{M}:=\{\mathcal{S},\mathcal{A},\mathbb{P},r,\gamma\}\), where \(\mathcal{S},\mathcal{A}\) denote state and action spaces, \(\mathbb{P}(s^{\prime}|s,a)\) represents the state transition dynamics between current state \(s\), action \(a\), and next state \(s^{\prime}\). \(r(s,a)\) is the reward function, and \(\gamma\in(0,1)\) denotes the discount factor. The objective of RL is to learn a policy \(\pi_{\theta}(a|s)\) parameterized by \(\theta\) that maximizes the discounted cumulative reward return.
Offline RL particularly aims to learn policies from existing data sets instead of explicitly interacting with the environment. Hence, for a dataset \(\mathcal{D}=\{(s_{j},a_{j},r_{j},s^{\prime}_{j})|s_{j},s^{\prime}_{j}\in \mathcal{S};a_{j}\in\mathcal{A};j=1,2,..,N\}\), offline RL algorithms attempt to learn a policy \(\pi_{\theta}(a|s)\) that maximizes the discounted reward return \(R_{t}=\sum_{k=t}^{T}\gamma^{(k-t)}r_{k}(s_{k},a_{k})\) at time step \(t\). However, leveraging the standard RL algorithms for offline RL leads to poor performance due to overfitting and distributional shifts [16]. In particular, the existing value-based off-policy RL methods such as Q learning typically overestimate the value function predictions for unseen outcomes, which results in erroneous and overly optimistic estimations [38]. To mitigate this issue, Conservative Q Learning (CQL) [10] regularizes the Q-values during training to learn conservative and lower-bound estimates of the value function. Hence, in this work, we incorporate CQL with Soft Actor-Critic (SAC) [39] as our base offline RL algorithm.
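For concreteness, the sketch below shows a simplified single-critic version of the conservative objective (assuming PyTorch; `q_net`, `target_q`, and `policy` are placeholder modules, not code from this paper). The full CQL-SAC objective additionally samples actions from the current policy with importance weights and includes SAC's entropy terms.

```python
# A simplified single-critic CQL(H) loss (a sketch under the above assumptions).
import torch
import torch.nn.functional as F

def cql_critic_loss(q_net, target_q, policy, batch, gamma=0.99, alpha_cql=1.0, n_rand=8):
    s, a, r, s_next = batch
    with torch.no_grad():                         # standard Bellman target (entropy omitted)
        a_next, _ = policy.sample(s_next)         # `policy.sample` is a placeholder
        target = r + gamma * target_q(s_next, a_next).squeeze(-1)
    q_data = q_net(s, a).squeeze(-1)
    bellman = F.mse_loss(q_data, target)
    # Conservative term: push Q down on sampled actions and up on dataset actions,
    # which lower-bounds the learned value function.
    a_rand = torch.empty(n_rand, *a.shape, device=a.device).uniform_(-1.0, 1.0)
    q_rand = torch.stack([q_net(s, a_rand[k]).squeeze(-1) for k in range(n_rand)])
    conservative = (torch.logsumexp(q_rand, dim=0) - q_data).mean()
    return bellman + alpha_cql * conservative
```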
Hereafter, we use \(j,k\) as indices. Vectors are represented in bold, lower case letters. All positions, velocities, and forces are represented w.r.t. a rigid frame attached to the robot \(R\) (indicated in superscript) or relative to a cost map. The robot frame's \(x,y,z\) directions point forward, leftward, and upward respectively.
### _Dataset Generation_
Our raw training data is collected by teleoperating a legged robot equipped with a \(360^{\circ}\) 3D LiDAR, and joint encoders for \(\sim 4\) hours. We collect raw 3D point clouds, robot's odometry, joint positions and velocities on the legs, and joint actuator current as the robot moves in random trajectories in vegetation including grass, bushes, and trees with varying density. Hence, the raw data set does not have any goal-conditioning or goal-reaching policy.
To create goal-conditioned data set \(\mathcal{D}\) with a series of \(\{s_{j},a_{j},r_{j},s^{\prime}_{j}\}\), we consider random trajectory segments from the raw dataset, i.e., we select a random state as the initial position and a future sample in the same raw trajectory as the goal. This subsequent goal sample is selected such that it is \(\sim 8-20\) meters away from the robot's initial position, and our processed dataset \(\mathcal{D}=\{(s_{j},a_{j},r_{j},s^{\prime}_{j})\,|\,j=1,2,..,N\}\) is obtained. We explain the details of the state observations, actions, and reward formulation in the sub-sections below.
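A sketch of this relabeling procedure is given below (plain Python; `make_state`, `recover_action`, and `reward` are hypothetical helpers standing in for the state construction and the reward formulation of the following sub-sections).

```python
# A sketch of turning raw teleoperated trajectories into goal-conditioned
# (s, a, r, s') tuples. `make_state`, `recover_action`, and `reward` are
# hypothetical helpers, not names from the paper.
import numpy as np

def relabel_goals(raw_traj, odom, rng, d_min=8.0, d_max=20.0):
    """raw_traj[j]: sensor data at step j; odom[j]: 2D robot position at step j."""
    dataset = []
    j = 0
    while j < len(raw_traj) - 1:
        # Pick a future sample of the same trajectory, 8-20 m away, as the goal.
        dists = np.linalg.norm(odom[j:] - odom[j], axis=1)
        candidates = np.flatnonzero((dists >= d_min) & (dists <= d_max))
        if len(candidates) == 0:
            break
        g = j + rng.choice(candidates)
        for k in range(j, min(g, len(raw_traj) - 1)):
            s = make_state(raw_traj[k], goal=odom[g])        # cost maps + S_p
            s_next = make_state(raw_traj[k + 1], goal=odom[g])
            a = recover_action(raw_traj[k])                  # executed (vx, vy, wz)
            r = reward(s, a, goal=odom[g])                   # Eq. (5)
            dataset.append((s, a, r, s_next))
        j = g
    return dataset
```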
### _State Observations from Multi-sensor Data_
Our state observations \(s\in\mathcal{S}\) are obtained by pre-processing the raw sensory data collected from both the exteroceptive (point clouds) and proprioceptive (joint positions, forces) sensors of the robot. We denote the entire point cloud as \(\mathbf{P}\), where each point \(\mathbf{p}_{j}=(x_{j},y_{j},z_{j},i_{j})\), with \(x_{j},y_{j},z_{j}\in\mathbb{R}\) and \(i_{j}\in[0,i_{m}]\), contains the reflected point's 3D location relative to the robot and its intensity. Proprioceptive sensing is obtained from the robot's joint positions \(h_{1}^{x/y},h_{2}^{x/y},h_{3}^{x/y},h_{4}^{x/y}\), force feedback \(f_{1},f_{2},f_{3},f_{4}\), and the battery's current consumption \(I_{b}\).
We preprocess the aforementioned sensory data to generate two types of state observations: 1) \(S_{e}\): a set of robot-centric cost maps that reflect the solidity and height of the surrounding objects, and the distance to the goal, using exteroceptive sensors; 2) \(S_{p}\): a vector that quantifies the robot's stability using proprioception. Hence, our final state observations are \(s=[S_{e},S_{p}]\in\mathcal{S}\).
#### III-C1 Layered Cost Maps from Exteroception
Navigation in outdoor vegetation requires sensing the height and solidity of the vegetation in the robot's vicinity. Moreover, spatial information of the goal location is necessary to perform successful goal-reaching tasks. Hence, we propose three robot-centric 2D cost maps, intensity cost map \((\mathcal{C}_{i})\), height cost map \((\mathcal{C}_{h})\), and goal cost map \((\mathcal{C}_{g})\), to represent the solidity and height of the surrounding objects/vegetation, and distance and direction to the goal respectively.
All three cost maps \(\mathcal{C}_{i},\mathcal{C}_{h}\), and \(\mathcal{C}_{g}\) are \(n\times n\) matrices with the robot positioned at the center \((n/2,n/2)\), as depicted in Fig. 3. Each element of each cost map satisfies \(\mathcal{C}_{i,h,g}(l,m)\in[0,100]\ \forall\,l,m=0,1,\ldots,n-1\). A grid \((l,m)\) in a cost map is related to a grid \(grid_{l,m}^{R}\) of physical \((x,y)\) locations relative to the robot as,

\[\begin{split}(x,y)&\in grid_{l,m}^{R},\\ grid_{l,m}^{R}&=[x_{l,m},x_{l,m}+\beta]\times[y_{l,m},y_{l,m}+\beta],\\ x_{l,m}&=\left\lfloor\left(l-\frac{n}{2}\right)\cdot\beta\right\rfloor\ \ \text{and}\ \ y_{l,m}=\left\lfloor\left(m-\frac{n}{2}\right)\cdot\beta\right\rfloor,\end{split} \tag{1}\]

where \(\beta\) is the side length of a square-shaped grid \(grid_{l,m}^{R}\) in meters.

Fig. 2: Overall system architecture of VAPOR, which uses a height and intensity cost map generated from 3D lidar, a goal cost map, and proprioception data from the robot as state inputs to train an Actor-Critic offline RL policy. Then, the fully trained critic network is used to evaluate the dynamically feasible actions generated by a planner. The planner uses instability detection from proprioception and the intensity map to switch between a holonomic and non-holonomic action space to reduce the risk of entrapment.
**Intensity Cost Map \((\mathcal{C}_{i})\):** We employ the point cloud intensity values [40, 41, 42] to construct an intensity cost map \(\mathcal{C}_{i}\). The LiDAR's reflectance power (i.e., intensity) is directly proportional to the solidity of the corresponding objects. Hence, we observe that grass, bushes, and trees result in distinct intensities (see Fig. 3). We calculate elements of \(\mathcal{C}_{i}\) as,
\[\mathcal{C}_{i}(l,m)=\frac{\sum_{x_{j}}\sum_{y_{j}}i_{j}}{\beta^{2}}\ \ \forall\mathbf{p}_{j}\in\mathbf{P}\text{ and }x_{j},y_{j}\in grid_{l,m}^{R}. \tag{2}\]
**Height Cost Map \((\mathcal{C}_{h})\):** We generate \(\mathcal{C}_{h}\) to represent the maximum heights of the objects in each grid location \(grid_{l,m}^{R}\). To this end, element \((l,m)\) of \(\mathcal{C}_{h}\) is obtained by,
\[\mathcal{C}_{h}(l,m)=max(z_{j})\ \ \forall\mathbf{p}_{j}\in\mathbf{P}\text{ and }x _{j},y_{j}\in grid_{l,m}^{R}, \tag{3}\]
where higher values in \(\mathcal{C}_{h}(l,m)\) indicate taller objects.
**Goal Cost Map \((\mathcal{C}_{g})\):** Each location \((l,m)\) in the goal cost map represents \(grid_{l,m}^{R}\)'s distance to the goal \((x_{g}^{R},y_{g}^{R})\). Its value is calculated as,
\[\mathcal{C}_{g}(l,m)=\frac{\alpha_{g}\cdot\left(\sqrt{(x_{g}^{R}-x_{l,m})^{2} +(y_{g}^{R}-y_{l,m})^{2}}\right)}{d_{tot}}, \tag{4}\]
where \(d_{tot}\) is the total distance to the goal from the robot's starting position and \(\alpha_{g}\in\mathbb{R}\) is a tunable weight parameter.
Finally, we obtain our state observation from the exteroception \(S_{e}\) by concatenating the derived cost maps. Hence, \(S_{e}=\{\mathcal{C}_{i},\mathcal{C}_{h},\mathcal{C}_{g}\}\) of shape \(n\times n\times 3\).
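A rough sketch of assembling \(S_{e}\) from one LiDAR sweep follows (plain numpy; the grid size \(n\), cell size \(\beta\), the inversion of Eq. (1), and the omission of the final rescaling of each map to \([0,100]\) are all simplifying assumptions).

```python
# A sketch of the cost-map construction, Eqs. (2)-(4). Grid size n and cell size
# beta are illustrative values; rescaling of each map to [0, 100] is omitted.
import numpy as np

def build_S_e(points, goal_xy, d_tot, n=64, beta=0.25, alpha_g=1.0):
    """points: (N, 4) array of (x, y, z, intensity) in the robot frame.
    Returns the stacked n x n x 3 exteroceptive observation S_e."""
    C_i = np.zeros((n, n)); C_h = np.zeros((n, n))
    # Invert Eq. (1): map each point to its robot-centric grid cell.
    l = np.floor(points[:, 0] / beta).astype(int) + n // 2
    m = np.floor(points[:, 1] / beta).astype(int) + n // 2
    ok = (l >= 0) & (l < n) & (m >= 0) & (m < n)
    for li, mi, z, inten in zip(l[ok], m[ok], points[ok, 2], points[ok, 3]):
        C_i[li, mi] += inten / beta**2            # Eq. (2): summed intensity per area
        C_h[li, mi] = max(C_h[li, mi], z)         # Eq. (3): max height in the cell
    # Eq. (4): normalized distance from each cell center to the goal.
    xs = (np.arange(n) - n // 2) * beta
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    C_g = alpha_g * np.hypot(goal_xy[0] - X, goal_xy[1] - Y) / d_tot
    return np.stack([C_i, C_h, C_g], axis=-1)
```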
#### III-C2 Stability Observation from Proprioception
To estimate the robot's stability in vegetation, we incorporate data acquired from the robot's joint positions, forces, and battery current for proprioceptive sensing. To this end, we process the raw proprioceptive data \(H_{prop}=[h_{1}^{x/y},h_{2}^{x/y},h_{3}^{x/y},h_{4}^{x/y},f_{1/2/3/4},I_{b}]\), as performed in [23]. Principal Component Analysis (PCA) is then applied to the processed data to reduce its dimensions to two primary axes. Subsequently, we extract the variances (\(\sigma_{PC1}^{2}\) and \(\sigma_{PC2}^{2}\)) of the dimension-reduced data along the principal components, and define our resulting proprioceptive state observation vector as \(S_{p}=[\sigma_{PC1}^{2},\sigma_{PC2}^{2}]\), where \(\sigma_{PC1}^{2},\sigma_{PC2}^{2}\in\mathbb{R}^{+}\). We observe that highly stable terrains such as asphalt lead to lower variances, while unstable terrains lead to higher values.
Lastly, we derive our final state observation as \(s=[S_{e},S_{p}]\) by combining both exteroceptive and proprioceptive state observations.
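The proprioceptive observation can be reproduced in a few lines, as in the sketch below (the window length and channel ordering are assumptions):

```python
# A sketch of the stability observation S_p via PCA. The 13 channels are
# assumed to be [8 joint positions, 4 foot forces, battery current].
import numpy as np

def stability_observation(prop_window):
    """prop_window: (T, 13) window of processed proprioceptive data.
    Returns S_p, the variances along the two leading principal axes."""
    X = prop_window - prop_window.mean(axis=0)       # center the window
    # Principal axes via SVD; singular values give the component variances.
    _, sing, _ = np.linalg.svd(X, full_matrices=False)
    var = sing**2 / (len(X) - 1)
    return np.array([var[0], var[1]])                # (sigma_PC1^2, sigma_PC2^2)
```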
### _Offline Reinforcement Learning Using CQL-SAC_
Our network architecture is based on CQL-SAC [10] that incorporates two critic networks and an actor-network. The policy actor-network (i.e., \(\pi_{\theta}(a|s)\)) estimates the parameters \(\theta\) of the policy distribution, which provides the conditional probability of taking action \(a\) given the state observation \(s\). In our context, this policy distribution is Gaussian parameterized by the mean \(\mu_{\theta}\) and standard deviation \(\sigma_{\theta}\). Further, the two critic networks are Q networks \(\big{(}\)i.e., \(Q_{1}(s,a;\pi_{\theta}),Q_{2}(s,a;\pi_{\theta})\big{)}\) that uses state-action pairs \(s,a\) as inputs to estimate the expectation of the value function. We design the actor and critic networks as follows.
#### III-D1 Actor and Critic Networks
Our actor and critic network architecture with layer dimensions is presented in Fig. 4. In both networks, we use two separate network branches to process the exteroceptive \(S_{e}\) and proprioceptive \(S_{p}\) observations in our input state \(s\). We highlight the use of spatial and channel attention networks in the exteroception branch. Spatial attention blocks encode spatial neighborhood properties in individual cost maps and channel attention helps learn the correlations between the features between the cost maps. The outputs from the two branches are concatenated and processed using several linear layers to obtain the end-to-end action outputs.
Since the critic networks take both the action and state inputs, we use an additional branch to process the action by passing it through two linear layers before concatenating it with the state observation branches. All the hidden layers in the network are followed by \(ReLU\) activation.
Fig. 4: Actor and Critic network architectures of our method. We incorporate two separate branches to process exteroception \(S_{e}\), and proprioception \(S_{p}\) observations. We use spatial and channel attention to encode correlation among the layered cost maps in \(S_{e}\).
Fig. 3: Robot-centric cost map state observations from exteroceptive sensing from Scenario 3 in Fig. 5: **[Left]** Point cloud-based intensity cost map \(\mathcal{C}_{i}\) that indicates the density of the surrounding objects using lidar reflectance; **[Center]** Height cost map \(\mathcal{C}_{h}\) that represents the maximum height of the objects derived from the point cloud; **[Right]** Goal cost map \(\mathcal{C}_{g}\) that indicates the distance to the goal from the robot’s neighborhood. Light colors indicate higher costs and dark areas represent lower costs.
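The attention layers themselves are not fully specified above; the following CBAM-style block is one plausible reading (a PyTorch sketch, not the paper's exact architecture):

```python
# A CBAM-style sketch of combined channel and spatial attention over the
# three stacked cost maps. This is an illustrative reading, not the paper's code.
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                    # x: (B, 3, n, n) stacked cost maps
        # Channel attention: weight each cost map by globally pooled statistics,
        # learning correlations between the intensity, height, and goal layers.
        w = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3))))
        x = x * w.unsqueeze(-1).unsqueeze(-1)
        # Spatial attention: weight grid cells by cross-channel mean/max features,
        # encoding spatial neighborhood properties of the maps.
        feats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(feats))
```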
#### III-D2 Reward Functions
The reward function is formulated to obtain robot actions that lead to the desired navigation behavior. In this work, we are primarily interested in three navigation behaviors: 1) goal reaching; 2) avoiding dense/solid objects while navigating through pliable vegetation; and 3) minimizing the overall energy consumption. We introduce three reward terms \(r_{goal},r_{veg}\), and \(r_{energy}\) to achieve the aforementioned behaviors. Hence, the total reward \(r_{tot}\) obtained by the robot for a given sample is calculated as,
\[r_{tot}=\beta_{1}r_{goal}+\beta_{2}r_{veg}+\beta_{3}r_{energy}, \tag{5}\]
where \(\beta_{1},\beta_{2},\beta_{3}\) are tunable parameters to weigh the reward terms. We design \(r_{goal}\) based on the robot's current distance \(d_{g}\) to the goal to encourage moving towards the goal. Hence,
\[r_{goal}=\frac{\lambda_{1}d_{tot}}{d_{g}}\mathds{1}_{\{d_{g}>d_{th}\}}+ \lambda_{2}\mathds{1}_{\{d_{g}\leq d_{th}\}}, \tag{6}\]
where \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) are adjustable parameters, \(\mathds{1}\) is an indicator function, and \(d_{th}\) is the goal reaching threshold.
The vegetation reward \(r_{veg}\) is a penalty for actions that navigate the robot into nearby dense vegetation (i.e., the higher the density, the lower the reward). To this end, we consider three circular neighborhoods with radii \(0.5,1.5\) and \(2.5\) meters centered at the robot. Let \(A_{1},A_{2}\) and \(A_{3}\) denote the sets of grids corresponding to these neighborhoods in the intensity cost map \(\mathcal{C}_{i}\). Then, \(r_{veg}\) is calculated as,
\[r_{veg}=-\sum_{k=1,2,3}\bigg{(}\frac{\eta_{k}}{|A_{k}|}\sum_{l,m\in A_{k}} \mathcal{C}_{i}(l,m)\bigg{)}, \tag{7}\]
where the tunable parameters are set such that \(\eta_{1}>\eta_{2}>\eta_{3}\in\mathbb{R}\) to ensure higher penalties for the dense vegetation in the robot's nearby vicinity. \(|A_{k}|\) denotes cardinality of the set \(A_{k}\).
We incorporate \(r_{energy}\) to penalize actions consuming high amounts of energy (proportional to the current \(I_{b}\)) during navigation. We calculate \(r_{energy}\) as,
\[r_{energy}=-\epsilon I_{b}, \tag{8}\]
where \(\epsilon\in\mathbb{R}^{+}\) is a weight parameter.
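Putting Eqs. (5)-(8) together, the reward for one sample can be evaluated as in the sketch below (all weight values are placeholders, and the three neighborhoods \(A_{k}\) are taken as disks in the intensity map):

```python
# A sketch of the total reward, Eq. (5); all weights here are placeholder values.
import numpy as np

def total_reward(C_i, d_g, d_tot, I_b, beta_w=(1.0, 1.0, 1.0),
                 lambdas=(1.0, 10.0), d_th=0.5, etas=(3.0, 2.0, 1.0),
                 radii=(0.5, 1.5, 2.5), cell=0.25, eps=0.1):
    n = C_i.shape[0]
    # Goal reward, Eq. (6): bonus on reaching, otherwise inverse-distance shaping.
    r_goal = lambdas[1] if d_g <= d_th else lambdas[0] * d_tot / d_g
    # Vegetation penalty, Eq. (7): mean intensity in three disks around the robot.
    xs = (np.arange(n) - n // 2) * cell
    R = np.hypot(*np.meshgrid(xs, xs, indexing="ij"))
    r_veg = -sum(eta * C_i[R <= rad].mean() for eta, rad in zip(etas, radii))
    # Energy penalty, Eq. (8): proportional to the battery current draw.
    r_energy = -eps * I_b
    b1, b2, b3 = beta_w
    return b1 * r_goal + b2 * r_veg + b3 * r_energy
```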
#### III-D3 Critic Networks for State-action Evaluation
Even though we train an end-to-end navigation policy using CQL-SAC on our data set \(\mathcal{D}\), we do not use the actions from the trained policy \(\pi_{\theta}(a|s)\) in the actor network for navigation. Instead, we leverage the Q-function \(Q(s,a)\) learned by a critic network to evaluate the quality of the set of actions generated by a context-aware planner. Intuitively, \(Q(s,a)\) indicates how well the action leads to desirable behaviors imposed by the reward function. Since CQL-SAC includes two critic networks and learned Q-functions (\(Q_{1}(s,a;\pi_{\theta})\) and \(Q_{2}(s,a;\pi_{\theta})\)), we choose the critic network with the lowest training loss. We refer to its Q-function as \(Q_{min}(s,a;\pi_{\theta})\) from here on.
### _Context-Aware Planning_
To generate dynamically feasible candidate actions to be evaluated using \(Q_{min}(s,a;\pi_{\theta})\), we formulate a novel context-aware planner. An action \(a\in\mathcal{A}\) for our robot can be denoted as \(a=(v_{x},v_{y},\omega_{z})\). The planner uses a 3-dimensional velocity space (\(V_{s}\subset\mathcal{A}\)) defined as \(V_{s}=\{(v_{x},v_{y},\omega_{z})\,|\,-v_{max}\leq v_{x},v_{y}\leq v_{max},\ -\omega_{max}\leq\omega_{z}\leq\omega_{max}\}\). Here, \(v_{x}\) and \(v_{y}\) denote the linear velocities along the robot's x and y directions respectively, and \(\omega_{z}\) represents the angular velocity about the vertical z-axis. \(v_{max}\) and \(\omega_{max}\) are the maximum linear and angular velocity limits. Additionally, the planner uses the set of reachable/dynamically feasible velocities from the current velocities within an interval \(\Delta t\), based on acceleration limits, as \(V_{r}=[v_{x}-\dot{v}_{max}\Delta t,v_{x}+\dot{v}_{max}\Delta t]\times[v_{y}-\dot{v}_{max}\Delta t,v_{y}+\dot{v}_{max}\Delta t]\times[\omega_{z}-\dot{\omega}_{max}\Delta t,\omega_{z}+\dot{\omega}_{max}\Delta t]\). Here, \(\dot{v}_{max}\) and \(\dot{\omega}_{max}\) are the robot's maximum linear and angular acceleration limits.
The risk of entrapment in dense vegetation is exacerbated when the robot performs angular motions, because they aid the vegetation in helically twirling onto its legs (intuitively similar to rotating a fork in spaghetti). Therefore, in such scenarios, the robot's angular motion must be restricted. On the other hand, in scenarios with narrow passages, the rectangular robot must be capable of performing angular motions to traverse through them. Such behaviors are also desirable when the robot is equipped with a sensor with a limited FOV that needs to be pointed in a specific direction. To accommodate both scenarios, we restrict \(V_{s}\) based on the following condition:
\[\text{C}:\ \sqrt{\sigma_{PC1}^{2}+\sigma_{PC2}^{2}}>\Gamma\ \ \text{and}\ \ \mathcal{C}_{i}(l,m)\in[0.5i_{m},0.75i_{m}]\ \ \forall\,l,m\in A_{2},\] \[V_{s}=\begin{cases}\{(v_{x},v_{y},0)\},&\text{if C is True},\\ \{(v_{x},0,\omega_{z})\},&\text{otherwise},\end{cases} \tag{9}\]
where \(v_{x},v_{y}\in[-v_{max},v_{max}]\), and \(\omega_{z}\in[-\omega_{max},\omega_{max}]\). The corresponding \(V_{r}\) is calculated from the restricted \(V_{s}\) by omitting either \(v_{y}\) or \(\omega_{z}\) based on the environment. The best action \(a^{*}\) for the robot to execute given the current state \(s\) can then be found as,
\[a^{*}=\operatorname*{argmax}_{a_{k}\in V_{r}}(Q_{min}(s,a_{k})). \tag{10}\]
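The switching logic of Eq. (9) and the action selection of Eq. (10) can be sketched as follows (the candidate-grid resolution and threshold values are placeholders, and `q_min` stands for the trained critic \(Q_{min}\)):

```python
# A sketch of the context-aware planner, Eqs. (9)-(10). Grid resolution and the
# thresholds Gamma and i_m are placeholder values, not the paper's settings.
import numpy as np
from itertools import product

def plan_action(q_min, s, v_now, sigma_pc, C_i, A2_mask, i_m, gamma_th=1.0,
                v_max=1.0, w_max=0.5, acc=0.5, dt=0.1, n_grid=7):
    vx0, vy0, wz0 = v_now
    # Condition C: high proprioceptive variance and mid-range intensities in A_2.
    in_band = (C_i[A2_mask] >= 0.5 * i_m) & (C_i[A2_mask] <= 0.75 * i_m)
    entrapment = np.sqrt(sigma_pc[0] + sigma_pc[1]) > gamma_th and np.all(in_band)

    def reachable(v0, lim):                  # velocities reachable within dt (V_r)
        lo, hi = max(-lim, v0 - acc * dt), min(lim, v0 + acc * dt)
        return np.linspace(lo, hi, n_grid)

    if entrapment:                           # holonomic: suppress rotation
        candidates = [(vx, vy, 0.0) for vx, vy in
                      product(reachable(vx0, v_max), reachable(vy0, v_max))]
    else:                                    # non-holonomic: suppress lateral motion
        candidates = [(vx, 0.0, wz) for vx, wz in
                      product(reachable(vx0, v_max), reachable(wz0, w_max))]
    # Eq. (10): pick the candidate with the highest learned Q-value.
    return max(candidates, key=lambda a: float(q_min(s, np.array(a))))
```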
## IV Results and Analysis
### _Implementation_
Our CQL-SAC offline RL policy is implemented using PyTorch and our model is trained on a workstation with an Intel Xeon 3.6 GHz processor and an Nvidia Titan GPU. For real-time deployment and inference, we use the Spot robot from Boston Dynamics equipped with a VLP16 Velodyne LiDAR, an onboard Intel NUC 11, which includes an Intel i7 CPU and an NVIDIA RTX 2060 GPU.
### _Comparison Methods and Evaluation Metrics_
We compare our method's navigation performance with the recent offline RL algorithms CQL-SAC (our end-to-end policy) [10] and IQL [11]; BCO [12], an autonomous imitation learning approach; VERN [4], an outdoor vegetation navigation algorithm; and the Dynamic Window Approach (DWA) [13], a classical model-based navigation approach that uses 2D LiDAR scans. We train all the aforementioned offline RL comparison methods on our data set \(\mathcal{D}\) using network architectures similar to ours for a fair comparison. We further perform two ablation studies, VAPOR w/o Proprioception and VAPOR w/o Attention, to highlight the benefits of our approach. Our metrics for evaluation are:
**Success Rate** - The number of times the robot reached its goal while avoiding collisions with _solid and dense vegetation_ over the total number of attempts.
**Avg. Current Consumption** - The average battery current consumption during a navigation task (i.e., \(\sum_{traj}I_{b}\)) in Amperes (A).
**Normalized Traj. Length** - The robot's trajectory length normalized using the straight-line distance to the goal for both successful and unsuccessful trajectories.
### _Testing Scenarios_
We compare our method's navigation performance in real-world outdoor test scenarios that are not included in the training data set. At least 10 trials are conducted in each scenario.

* **Scenario 1**: Narrow passages between shrubs and trees on a mulch surface.
* **Scenario 2**: Dense bushes that lead to entrapment, sparse grass, and trees.
* **Scenario 3**: Thin grass, shrubs, and trees with narrow openings under low-light conditions.
* **Scenario 4**: Dense grass, fallen branches, vines, and trees.
### _Analysis and Comparison_
We evaluate our method's navigation performance qualitatively in Fig. 5 and quantitatively in Table I. Scenario 4 is presented in Fig. 1. We observe that VAPOR demonstrates the highest success rate compared to the other methods in all four scenarios, which include diverse and unseen vegetation. Since the data set does not include expert demonstrations specifically collected with the behaviors imposed by the reward functions, behavioral cloning with BCO [12] shows the lowest success rate, due to its attempt to imitate the data set trajectories without knowledge of the rewards.
| **Methods** | **Inference Time (ms)** |
| --- | --- |
| VERN [4] | 84.612 |
| BCO [12] | 3.622 |
| IQL [11] | 3.951 |
| VAPOR w/o Attention | 8.820 |
| VAPOR (Ours) | 8.934 |

TABLE II: Inference time comparison between our method and others when executing on the robot's onboard computer. IQL [11], BCO [12], and VAPOR w/o attention have the lowest inference times since they use the same network backbone. However, their navigation performance is significantly lower, as shown in Table I. VERN [4] has the highest inference time due to its computationally heavy backbone. In contrast, VAPOR has a lightweight network that can execute in real time while providing accurate predictions.
Fig. 5: Trajectories generated when navigating through complex outdoor vegetation using various comparison methods. The trajectories drawn from the robot’s rear indicate that it has moved backward. VAPOR is able to use holonomic action in dense vegetation and vines (Scenarios 2 and 4) to reduce the risk of entrapment while others use angular velocities to reach the goal which results in instability and navigation failures. In the presence of narrow spaces in Scenarios 1 and 3, VAPOR uses non-holonomic actions to navigate through.
| **Metrics** | **Methods** | **Scenario 1** | **Scenario 2** | **Scenario 3** | **Scenario 4** |
| --- | --- | --- | --- | --- | --- |
| Success Rate (%) | DWA [13] | 30 | 0 | 0 | 20 |
| | VERN [4] | 60 | **70** | 10 | 40 |
| | BCO [12] | 10 | 0 | 0 | 10 |
| | IQL [11] | 40 | 30 | 40 | 20 |
| | CQL-SAC [10] | 50 | 60 | 50 | 50 |
| | VAPOR w/o Proprioception | 50 | 40 | 50 | 30 |
| | VAPOR w/o Attention | 60 | 50 | 40 | 60 |
| | VAPOR (ours) | **80** | **70** | **60** | **70** |
| Avg. Current Consumption (A) | DWA [13] | 7.158 | 7.428 | 7.260 | 7.502 |
| | VERN [4] | 6.937 | 7.457 | 6.993 | 7.423 |
| | BCO [12] | 6.681 | 7.153 | 6.637 | 7.391 |

TABLE I: Navigation performance comparison of our method against the other methods in the four test scenarios.
In contrast, offline RL methods such as IQL and CQL-SAC attempt to perform the navigation tasks at a reasonable success rate. Even though VERN demonstrates the second-best success rate in Scenarios 1 and 2, it performs poorly under the low-light conditions in Scenario 3 and with the leaf-covered trees in Scenario 4, due to erroneous vegetation classification from its vision-based system. DWA freezes in the tall and dense vegetation in Scenarios 2 and 3, identifying such regions as obstacles from the 2D LiDAR scan.
**Benefits of Proprioception:** We observe that VAPOR's performance in terms of success rate and current consumption degrades in dense vegetation without the proprioceptive state observations. Further, our planner uses proprioception to restrict the angular velocities during entrapment in Scenarios 2 and 4, which leads to a higher success rate and lower current consumption than VAPOR without proprioception. VERN and DWA lead to entrapment in Scenarios 2 and 4 due to the lack of vegetation awareness from proprioception. Moreover, in stable conditions such as Scenario 3, our planner uses angular velocities to move between the trees that create a narrow passage.
**Benefits of Attention:** Our method without attention demonstrates a relatively low success rate, high power consumption, and longer trajectory lengths, particularly due to the lack of feature-encoding capabilities between the cost map inputs compared to when spatial and channel attention are included. We observe that VAPOR without attention deviates from the goal in some trials due to the lack of spatially-aware encoding from the goal cost map.
**End-to-end RL vs Ours:** We observe that end-to-end RL policies generate dynamically infeasible actions for the robot's motors, even though the actions reflect the behavior imposed by the rewards (see Fig. 6). This leads to jerky motion due to motor vibrations (see Fig. 5) and high avg. current consumption. In contrast, VAPOR's planner ensures that the actions are dynamically feasible, which results in lower current consumption than all end-to-end RL models.
**Inference Time:** VAPOR has a lightweight network that can execute in real time (\(\sim 112\) Hz) on the robot's onboard computer while providing accurate predictions, as shown in Tables II and I. Vision-based methods such as VERN [4] have a significantly higher inference time due to their computationally heavy backbones. In contrast, VAPOR incorporates relatively low-dimensional state inputs that can represent a \(360^{\circ}\) view of the robot's vicinity and a lightweight network to obtain comparable or better navigation performance.
## V Conclusions, Limitations and Future Work
We present VAPOR, an offline RL-based method for legged robot navigation in outdoor vegetation. Our method uses randomly collected real-world data to train a navigation policy that can reach local goals while avoiding dense and solid vegetation. Instead of using end-to-end actions from the policy, its fully trained critic network is used to evaluate dynamically feasible actions generated by a planner. The planner is capable of adaptively switching between holonomic and non-holonomic actions to minimize entrapment in unstructured vegetation. We deploy our method on a Boston Dynamics Spot robot and evaluate it in real outdoor vegetation to demonstrate its benefits.
Our method has a few limitations. Our planner cannot provide any theoretical guarantees on the behavior, since the state-action evaluations are obtained from a Q function trained on a data set. Even though our method generalizes well compared to vision-based and supervised learning methods, large data sets are required for training. Further, our method cannot detect thin poles or string fences due to the low resolution of the lidar and the lack of scene awareness.
|
2309.11936 | Partial continuum limit of the 2D Hubbard model | An effective quantum field theory of the 2D Hubbard model on a square lattice
near half-filling is presented and studied. This effective model describes
so-called nodal and antinodal fermions, and it is derived from the lattice
model using a certain partial continuum limit. It is shown that the nodal
fermions can be bosonized, which leads to spin-charge separation and a 2D
analogue of a Wess-Zumino-Witten model. A bosonization formula for the nodal
fermion field operator is obtained, and an exactly solvable model of
interacting 2D fermions is identified. Different ways of treating the antinodal
fermions are also proposed. | Jonas de Woul, Edwin Langmann | 2023-09-21T09:50:46Z | http://arxiv.org/abs/2309.11936v1 | # Partial continuum limit of the 2D Hubbard model
###### Abstract
An effective quantum field theory of the 2D Hubbard model on a square lattice near half-filling is presented and studied. This effective model describes so-called nodal- and antinodal fermions, and it is derived from the lattice model using a certain partial continuum limit. It is shown that the nodal fermions can be bosonized, which leads to spin-charge separation and a 2D analogue of a Wess-Zumino-Witten model. A bosonization formula for the nodal fermion field operator is obtained, and an exactly solvable model of interacting 2D fermions is identified. Different ways of treating the antinodal fermions are also proposed.
**Remark added on September 20, 2023:**_This paper was included in the PhD thesis of the first author, who defended his PhD on December 16, 2011 (this thesis: "Fermions in two dimensions and exactly solvable models," is available on [http://kth.diva-portal.org/](http://kth.diva-portal.org/)). We planned to publish this paper, but for some reason or another this did not happen then._
_We make this paper available on the arXiv in the form it was on August 14, 2012 (this is a slight update of the version that appeared in the above-mentioned PhD thesis)._
## 1 Introduction
Advancing our computational understanding of the Hubbard model [1, 2, 3] is an important but challenging problem in the theory of many-electron systems. As one of _the_ minimal models for strongly correlated electrons, its ground state is believed to describe various charge-ordered-, magnetic- and superconducting phases for different parameter values and spatial dimensionality [4, 5]. The Hamiltonian can be represented as
\[H_{\rm Hub}=-\sum_{\alpha=\uparrow,\downarrow}\sum_{i,j}t_{ij}c_{i,\alpha}^{ \dagger}c_{j,\alpha}+U\sum_{i}n_{i,\uparrow}n_{i,\downarrow} \tag{1}\]
with operators \(c_{i,\alpha}^{\dagger}\) and \(c_{i,\alpha}\) describing the creation- and annihilation of a fermion with spin projection \(\alpha\) at lattice site \(i\), \(n_{i,\alpha}=c_{i,\alpha}^{\dagger}c_{i,\alpha}\) the corresponding density operators, \(U\geq 0\)
the strength of the screened Coulomb repulsion, and \(t_{ij}\) the hopping matrix elements. Of particular interest for the high-Tc problem of the cuprate superconductors [6, 7, 8] is the two-dimensional (2D) model on a square lattice, which is the focus of the present paper. At half-filling and sufficiently large \(U\), there is by now compelling evidence that the model is a Mott insulator [9] with strong antiferromagnetic correlations, as seen for example in rigorous Hartree-Fock- [10, 11] and quantum Monte Carlo studies [12, 13]. Less is known away from half-filling. Numerical Hartree-Fock studies find a plethora of inhomogeneous solutions like polarons, different types of domain walls or stripes, vortex-like structures and ferromagnetic domains; see [14] and references therein. Furthermore, renormalization group studies at weak coupling show Fermi-liquid behavior far from half-filling [15], and strong tendencies towards antiferromagnetism and \(d\)-wave superconductivity close to half-filling [16, 17, 18, 19, 20]; similar results are obtained from quantum cluster methods [21, 22]. Still, few definitive conclusions can be drawn for arbitrary coupling strength.
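As a concrete illustration of (1), the model can be diagonalized exactly on very small lattices. The sketch below (plain Python/numpy, not part of the original presentation) builds the two-site Hamiltonian in a Jordan-Wigner representation and reproduces the well-known half-filled ground state energy \(E_{0}=\big(U-\sqrt{U^{2}+16t^{2}}\,\big)/2\).

```python
# Exact diagonalization of the two-site Hubbard model, Eq. (1) (a numpy sketch).
# Mode ordering: (1,up), (1,down), (2,up), (2,down).
import numpy as np

I2 = np.eye(2)
a = np.array([[0.0, 1.0], [0.0, 0.0]])     # single-mode annihilation operator
Z = np.diag([1.0, -1.0])                   # Jordan-Wigner sign-string factor

def mode_op(k, n_modes=4):
    """Annihilation operator for mode k, with the fermionic sign string."""
    ops = [Z] * k + [a] + [I2] * (n_modes - k - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

t, U = 1.0, 4.0
c = [mode_op(k) for k in range(4)]
n = [ck.conj().T @ ck for ck in c]
H = -t * sum(c[i].conj().T @ c[j] + c[j].conj().T @ c[i]
             for i, j in [(0, 2), (1, 3)])  # hopping between the two sites, both spins
H += U * (n[0] @ n[1] + n[2] @ n[3])        # on-site repulsion

# Restrict to the half-filled (N = 2) sector and compare with the known formula.
sector = np.flatnonzero(np.diag(sum(n)).round() == 2)
E0 = np.linalg.eigvalsh(H[np.ix_(sector, sector)]).min()
print(E0, (U - np.sqrt(U**2 + 16 * t**2)) / 2)   # both are approximately -0.8284
```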
This level of uncertainty may be contrasted with the corresponding situation in one dimension. The 1D Hubbard model with nearest-neighbor hopping is integrable and can be solved exactly using Bethe ansatz; see [23] and references therein. More general 1D lattice models of fermions can be successfully studied using numerical methods, e.g. the density matrix renormalization group [24]. An alternative approach is to perform a particular continuum limit away from half-filling that leads to a simplified model that can be studied by analytical methods. This limit involves linearising the tight-binding band relation at the non-interacting Fermi surface points and "filling up the infinite Dirac sea of negative energy states". For spinless fermions one obtains the (Tomonaga-)Luttinger model [25, 26], which can be solved using bosonization [27]; in particular, all thermodynamic Green's functions can be computed [28, 29, 30, 31, 32, 33, 34]. Generalizing to arbitrary interacting fermion models away from half-filling leads to the notion of the Luttinger liquid [35] - the universality class of gapless Fermi systems in one dimension (see e.g. [36] for review). Furthermore, spinfull systems like the 1D Hubbard model can be studied using both abelian- and non-abelian bosonization, with the latter leading to a Wess-Zumino-Witten-type (WZW) model [37, 38]. We note that bosonization has a rigorous mathematical foundation, see e.g. [39, 40].
The idea of applying bosonization methods in dimensions higher than one goes back to pioneering work of Luther [41], and was popularized by Anderson's suggestion that the Hubbard model on a square lattice might have Luttinger-liquid behavior away from half-filling [42]. Consider for example a gapless system with a square Fermi surface. Let \(k_{\parallel}\) and \(k_{\perp}\) denote fermion momenta parallel and perpendicular, respectively, to a face of the square. Following [41], one would treat \(k_{\parallel}\) as a flavor index, extend \(k_{\perp}\) to be unbounded, and fill up the Dirac sea such that all states \(k_{\perp}<0\) are filled. The system can then be bosonized by the same methods used in one dimension. Unfortunately, in this approach only density operators with momentum exchange in the perpendicular direction behave as bosons, while operators with exchange in the parallel direction do not have simple commutation relations. Yet, Mattis [43] proposed a 2D model of spinless fermions with density-density interactions, containing momentum exchange in all directions, that he claimed was solvable using bosonization. The Hamiltonian of Mattis' model had a kinetic energy term with a linear tight-binding band relation on each face of a square Fermi surface, and with a constant Fermi velocity \(v_{F}\) along each face. Mattis rewrote the kinetic energy as a quadratic expression in densities using a generalized Kronig identity, and the Hamiltonian was then
diagonalized by a Bogoliubov transformation.
The exact solubility of Mattis' model can be understood in light of more recent work of Luther [44] in which he studied a model of electrons with linear band relations on a square Fermi surface: A notable difference to the 1D case is the huge freedom one has in choosing the accompanying flavor indices when bosonizing. In particular, one may do a Fourier transformation in the \(k_{\parallel}\)-direction and then bosonize using a new index flavor \(x_{\parallel}\). In this way, Luther obtained density operators that indeed satisfy 2D boson commutation relations. The price one has to pay for solubility is that \(v_{F}\) needs to be constant on each face, i.e. it cannot depend on \(k_{\parallel}\). The properties of Luther's model were further investigated in [45, 46]. We also mention Haldane's phenomenological approach to bosonization in higher dimensions [47], which has been further pursued by various groups [48, 49, 50], and functional integral approaches to bosonization [51, 52]; none of these will be followed here.
Returning to the 2D Hubbard model, consider momentarily the half-filled square lattice with nearest-neighbor (nn) hopping only. The tight-binding band relation relevant in this case is1\(\epsilon({\bf k})=-2t[\cos(k_{1})+\cos(k_{2})]\), which gives a square (non-interacting) Fermi surface at half-filling. The functional form of \(\epsilon({\bf k})\) varies significantly over this surface: In the so-called _nodal_ regions of the Brillouin zone near the midpoints \((\pm\pi/2,\pi/2)\) and \((\pi/2,\pm\pi/2)\) of the four faces, the band relation is well represented by a linear approximation in the perpendicular direction to each face. In contrast, at the corner points \((\pm\pi,0)\) and \((0,\pm\pi)\) in the so-called _antinodal_ regions, \(\epsilon({\bf k})\) has saddle points. This makes taking a constant Fermi velocity along each face a questionable approximation. Furthermore, we know that the van-Hove singularities associated with these saddle points, and the nesting of the Fermi surface, give various ordering instabilities that can lead to gaps [53]. Of course, going away from half-filling or including further neighbor hopping can bend the Fermi surface away from these points. Moreover, even if the concept of a Fermi surface survives at intermediate- to strong coupling, the interaction is likely to renormalize the surface geometry [54]. Nonetheless, the fermion degrees of freedom in the nodal- and antinodal regions are likely to play very different roles for the low-energy physics of the Hubbard model.
Footnote 1: We write \({\bf k}=(k_{1},k_{2})\) for fermion momenta, \(t>0\) is the nn hopping constant, and we set the lattice constant \(a=1\) in this section.
In this paper we develop a scheme that improves the bosonization treatments of the 2D Hubbard model mentioned above. The basic idea is to treat nodal- and antinodal degrees of freedom using differing methods. To be specific, we perform a certain _partial_ continuum limit that only involves the nodal fermions and that makes them amenable to bosonization, while allowing to treat the antinodal fermions by conventional methods like a mean-field- or random phase approximation. This is an extension of our earlier work on the so-called 2D \(t\)-\(t^{\prime}\)-\(V\) model of interacting spinless fermions [55, 56, 57, 58]. In the spinless case, the partial continuum limit gives a natural 2D analogue of the Luttinger model consisting of nodal fermions coupled to antinodal fermions [55, 56]. This effective model is a quantum field theory (QFT) model (by this, we mean that the model has an infinite number of degrees of freedom) and, as such, requires short- and long distance regularizations [56, 58]. These regularizations are provided by certain length scale parameters \(\tilde{a}\) (proportional to the lattice constant) and \(L\) (the linear size of the lattice). After bosonizing the nodal fermions, one can integrate them out exactly using functional integrals, thus leading to an effective model
of antinodal fermions only [56]. It was shown in [57] that this antinodal model allows for a mean field phase corresponding to charge ordering (charge-density-wave), such that the antinodal fermions are gapped and the total filling of the system is near, but not equal to, half-filling. In this _partially gapped phase_ the low-energy properties of the system are governed by the nodal part of the effective Hamiltonian. This nodal model is exactly solvable: the Hamiltonian can be diagonalised and all fermion correlation functions can be computed by analytical methods [58]. One finds, for example, that the fermion two-point functions have algebraic decay with non-trivial exponents for intermediate length- and time scales. The purpose of this paper is to extend the above analysis to fermions with spin. In the main text we explain the ideas and present our results, emphasizing the differences with the spinless case. Details and technicalities (which are important in applications of our method) are deferred to appendices. One important feature of our method is its flexibility. To emphasize this, the results in the appendices are given for an extended Hubbard model that also includes a nn repulsive interaction.
In Section 2, we summarize our results by giving a formal2 description of the effective QFT model that we obtain. We then outline how the partial continuum limit is done for the 2D Hubbard model in Section 3. In Section 4, we define the nodal part of the effective model and show how it can be bosonized by operator methods. We also identify an exactly solvable model of interacting fermions in 2D. In Section 5, we include the antinodal fermions in the analysis and discuss how different effective actions may be obtained by integrating out either the nodal- or the antinodal fermions. The final section contains a discussion of our results. Computational details, including formulas relating the Hubbard model parameters to the parameters of the effective QFT model, are given in Appendices A-D.
Footnote 2: By “formal” we mean that details of the short- and long distance regularizations needed to make these models well-defined are ignored; these details are spelled out in other parts of the paper.
_Notation_: For any vector \(\mathbf{u}\in\mathbb{R}^{2}\), we write either \(\mathbf{u}=(u_{1},u_{2})\) or \(\mathbf{u}=u_{+}\mathbf{e}_{+}+u_{-}\mathbf{e}_{-}\), with \(u_{\pm}\stackrel{{\mbox{\tiny def}}}{{=}}(u_{1}\pm u_{2})/\sqrt{2}\) and \(\mathbf{e}_{\pm}\stackrel{{\mbox{\tiny def}}}{{=}}(1,\pm 1)/\sqrt{2}\). We denote the Pauli matrices by \(\sigma^{i}\), \(i=1,2,3\), the \(2\times 2\) unit matrix as \(\sigma^{0}\), and \(\sigma^{\pm}=(\sigma^{1}\pm\mathrm{i}\sigma^{2})/2\). Spin quantum numbers are usually written as \(\uparrow,\downarrow\), but sometimes also as \(\pm\). We write \(h.c.\) for the hermitian conjugate. Fermion- and boson normal ordering of an operator \(A\) are written \(:\!A\!:\) and \(\stackrel{{\times}}{{\times}}A\stackrel{{\times}}{{\times}}\), respectively.
### A 2D analogue of a Wess-Zumino-Witten model
As will be shown, the nodal part of the full effective Hamiltonian (see below) has a contribution formally given by (we suppress all UV regularizations in this section)
\[\begin{split} H=\int\mathrm{d}^{2}x\,\Big{(}v_{F}& \sum_{\alpha=\uparrow,\downarrow}\sum_{r,s=\pm}:\!\psi_{r,s,\alpha}^{\dagger}( \mathbf{x})(-\mathrm{i}r\partial_{s})\psi_{r,s,\alpha}(\mathbf{x})\!:+g\big{(} \sum_{r,s=\pm}J^{0}_{r,s}J^{0}_{r,s}\\ &+\sum_{s=\pm}J^{0}_{+,s}J^{0}_{-,s}+\sum_{r,r^{\prime}=\pm}J^{0} _{r,+}J^{0}_{r^{\prime},-}-\sum_{s=\pm}\mathbf{J}_{+,s}\cdot\mathbf{J}_{-,s}- \sum_{r,r^{\prime}=\pm}\mathbf{J}_{r,+}\cdot\mathbf{J}_{r^{\prime},-}\big{)} \Big{)}\end{split} \tag{2}\]
with \(\partial_{\pm}=\partial/\partial x_{\pm}\) and \(x_{\pm}\) Cartesian coordinates of \(\mathbf{x}\). The fermion field operators \(\psi_{r,s,\alpha}(\mathbf{x})\) obey canonical anticommutator relations \(\{\psi_{r,s,\alpha}(\mathbf{x}),\psi_{r^{\prime},s^{\prime},\alpha^{\prime}} ^{\dagger}(\mathbf{y})\}=\delta_{r,r^{\prime}}\delta_{s,s^{\prime}}\delta_{ \alpha,\alpha^{\prime}}\delta(\mathbf{x}-\mathbf{y})\), etc., and \(r,s=\pm\) are certain flavor indices. The coupling constant \(g\) is proportional to \(U\). Furthermore,
\[\begin{split} J^{0}_{r,s}(\mathbf{x})&=\sum_{\alpha }:\!\psi_{r,s,\alpha}^{\dagger}(\mathbf{x})\psi_{r,s,\alpha}(\mathbf{x})\!:\\ \mathbf{J}_{r,s}(\mathbf{x})&=\sum_{\alpha,\alpha^{ \prime}}:\!\psi_{r,s,\alpha}^{\dagger}(\mathbf{x})\boldsymbol{\sigma}_{\alpha,\alpha^{\prime}}\psi_{r,s,\alpha^{\prime}}(\mathbf{x})\!:,\qquad\boldsymbol {\sigma}=(\sigma^{1},\sigma^{2},\sigma^{3})\end{split} \tag{3}\]
are 2D (fermion normal-ordered) density- and (rescaled) spin operators for which the non-trivial commutation relations are given by (again formally)
\[\begin{split}\big{[}J^{0}_{r,s}(\mathbf{x}),J^{0}_{r,s}(\mathbf{ y})\big{]}=& r\frac{1}{\pi\tilde{a}\mathrm{i}}\partial_{s}\delta\left(\mathbf{x}- \mathbf{y}\right)\\ \big{[}J^{i}_{r,s}(\mathbf{x}),J^{j}_{r,s}(\mathbf{y})\big{]}=& 2 \mathrm{i}\sum_{k}\epsilon_{ijk}J^{k}_{r,s}(\mathbf{x})\delta\left( \mathbf{x}-\mathbf{y}\right)+r\frac{1}{\pi\tilde{a}\mathrm{i}}\delta_{i,j} \partial_{s}\delta\left(\mathbf{x}-\mathbf{y}\right)\end{split}. \tag{4}\]
We also set \(\mathbf{S}_{r,s}(\mathbf{x})=\mathbf{J}_{r,s}(\mathbf{x})/2\). We find by using a particular Sugawara construction that the Hamiltonian in (2) separates into a sum of independent density- and spin parts (spin-charge separation)
\[H=H_{C}+H_{\mathbf{S}} \tag{5}\]
with
\[\begin{split} H_{C}&=\frac{v_{F}}{2}\int\mathrm{d}^{ 2}x\,\pi\tilde{a}\stackrel{{\times}}{{\times}}\Bigl{(}\sum_{r,s} \bigl{(}(1+2\gamma)J^{0}_{r,s}J^{0}_{r,s}+\gamma J^{0}_{r,s}J^{0}_{-r,s}\bigr{)} +2\gamma\sum_{r,r^{\prime}}J^{0}_{r,+}J^{0}_{r^{\prime},-}\Bigr{)}\stackrel{{ \times}}{{\times}}\\ H_{\mathbf{S}}&=\frac{v_{F}}{2}\int\mathrm{d}^{2}x\, \pi\tilde{a}\stackrel{{\times}}{{\times}}\Bigl{(}\sum_{r,s} \bigl{(}\mathbf{J}_{r,s}\cdot\mathbf{J}_{r,s}/3-\gamma\mathbf{J}_{r,s}\cdot \mathbf{J}_{-r,s}\bigr{)}-2\gamma\sum_{r,r^{\prime}}\mathbf{J}_{r,+}\cdot \mathbf{J}_{r^{\prime},-}\Bigr{)}\stackrel{{\times}}{{\times}} \end{split} \tag{6}\]
and with a dimensionless coupling constant \(\gamma\geq 0\) proportional to \(g\). As is evident from the multiple occurrences of the short-distance scale \(\tilde{a}\) in (4) and (6), a proper quantum field theory limit \(\tilde{a}\to 0^{+}\) of the effective model can possibly make sense only after certain non-trivial multiplicative renormalizations of observables (and implementing a UV regularization on the Hamiltonian). The algebra in (4) and the Sugawara construction leading to (5)-(6) can naturally be interpreted as giving a WZW-type model in two spatial dimensions.
### The full nodal-antinodal model
The full effective Hamiltonian of the nodal-antinodal system is given by
\[H_{eff}=H_{n}+H_{a}+H_{na} \tag{7}\]
with the terms on the right hand side corresponding to a pure nodal part (\(n\)), a pure antinodal part (\(a\)), and a nodal-antinodal interaction (\(na\)), respectively. We find that
\[H_{n}=H+g_{n}^{P}\int\mathrm{d}^{2}x\,\sum_{r,r^{\prime},s=\pm}P_{r,s}^{\dagger}(\mathbf{x})\cdot P_{r^{\prime},-s}(\mathbf{x}), \tag{8}\]
with \(H\) defined in (2),
\[\begin{split} H_{a}=\int\mathrm{d}^{2}x\,\sum_{r=\pm}& \Big{(}\sum_{\alpha}:\!\psi_{r,0,\alpha}^{\dagger}(\mathbf{x})\big{(}rc_{F} \partial_{+}\partial_{-}+c^{\prime}_{F}(\partial_{+}^{2}+\partial_{-}^{2})- \mu_{0}\big{)}\psi_{r,0,\alpha}(\mathbf{x})\!:\\ &\qquad\qquad+g_{a}^{C}J_{r,0}J_{r,0}+\tilde{g}_{a}^{C}J_{r,0}J_ {-r,0}+g_{a}^{S}\mathbf{S}_{r,0}\cdot\mathbf{S}_{-r,0}+g_{a}^{P}P_{r,0}^{ \dagger}\cdot P_{-r,0}\Big{)}\end{split}, \tag{9}\]
and
\[H_{na}=\int\mathrm{d}^{2}x\,\sum_{r,r^{\prime},s=\pm}\big{(}g_{na}^{C}J_{r,s} J_{r^{\prime},0}+g_{na}^{S}\mathbf{S}_{r,s}\cdot\mathbf{S}_{r^{\prime},0}+g_{ na}^{P}(P_{r,s}^{\dagger}\cdot P_{r^{\prime},0}+h.c.)/2\big{)} \tag{10}\]
(the coupling constants are defined in terms of the original Hubbard model parameters in Appendix B). While the definitions of the density- and spin operators for the antinodal fermions in (9) are similar to (3), we note that there are no anomalous (Schwinger) terms in their commutation relations (cf. (4)). The operators \(P_{r,s}^{\mu}\) in (8)-(10) are certain pairing bilinears given by
\[\begin{split} P_{r,s}^{0}(\mathbf{x})&=\frac{1}{2} \sum_{\alpha}\psi_{r_{s},s,\alpha}(\mathbf{x})\psi_{r,s,\alpha}(\mathbf{x})\\ P_{r,s}^{i}(\mathbf{x})&=\frac{1}{2}\sum_{\alpha, \alpha^{\prime}}\psi_{r_{s},s,\alpha}(\mathbf{x})\sigma_{\alpha,\alpha^{ \prime}}^{i}\psi_{r,s,\alpha^{\prime}}(\mathbf{x})\qquad(i=1,2,3)\end{split} \tag{11}\]
with the flavor index \(r_{s}\equiv-r\) and \(r_{s}\equiv r\) for nodal- (\(s=\pm\)) and antinodal (\(s=0\)) fermions, respectively. We note that pairing nodal fermions with opposite flavor (chirality) index \(r\) is compatible with pairing momenta \(\mathbf{k}\) with \(-\mathbf{k}\) in the Brillouin zone. The same holds true for antinodal fermions with equal flavor index \(r\).
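To spell out the last remark (a short check we add for the reader, using the reference momenta \(\mathbf{K}_{r,s}\) introduced in Section 3): the nodal points satisfy \(\mathbf{K}_{-r,s}=-\mathbf{K}_{r,s}\) for \(s=\pm\), while the antinodal points satisfy \(2\mathbf{K}_{r,0}\in(2\pi/a)\mathbb{Z}^{2}\), so that

\[(\mathbf{K}_{-r,s}+\mathbf{k}_{1})+(\mathbf{K}_{r,s}+\mathbf{k}_{2})=\mathbf{k}_{1}+\mathbf{k}_{2},\qquad(\mathbf{K}_{r,0}+\mathbf{k}_{1})+(\mathbf{K}_{r,0}+\mathbf{k}_{2})\equiv\mathbf{k}_{1}+\mathbf{k}_{2}\]

(the second equality modulo a reciprocal lattice vector); the bilinears in (11) thus pair states whose total momentum \(\mathbf{k}_{1}+\mathbf{k}_{2}\) is small near the Fermi arcs.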
One can use abelian bosonization to rewrite the nodal part of the effective model in terms of boson fields corresponding to charge- and spin degrees of freedom. If one truncates (2) by only keeping the third spin components in the spin rotation invariant interaction, the remaining part becomes quadratic in these boson fields and can thus be diagonalised by a Bogoliubov transformation. This diagonalisation requires that
\[0\leq\gamma<1/3 \tag{12}\]
which translates into constraints on the original Hubbard parameters; one finds that \(U/t\) must be bounded from above by a value between ten and twenty. Furthermore, the other spin components and the nodal pairing bilinears in (11) can be written in terms of exponentials of the charge- and spin boson fields (cf. bosonization of the 1D Hubbard model; see e.g. [59]).
Partial continuum limit
Our partial continuum limit of the Hubbard model near half-filling is similar to the one done in [56] for a lattice model of spinless fermions. In this section, we outline the main steps in this derivation; technical details are given in Appendix B.
We consider the two-dimensional Hubbard model with nearest- (nn) and next-nearest neighbor (nnn) hopping on a square lattice with lattice constant \(a\) and \((L/a)^{2}\) lattice sites. The Hamiltonian is defined as (equivalent to (1) up to a chemical potential term)
\[H_{\rm Hubb}=\sum_{\alpha=\uparrow,\downarrow}\sum_{{\bf k}\in BZ}\left( \epsilon({\bf k})-\mu\right)\hat{c}_{\alpha}^{\dagger}({\bf k})\hat{c}_{\alpha }({\bf k})+\frac{U}{2}\left(\frac{a}{L}\right)^{2}\sum_{{\bf p}}\hat{\rho}(-{ \bf p})\hat{\rho}({\bf p}) \tag{13}\]
with the fermion operators normalized such that \(\{\hat{c}_{\alpha}({\bf k}),\hat{c}_{\alpha^{\prime}}^{\dagger}({\bf k}^{ \prime})\}=\delta_{{\bf k},{\bf k}^{\prime}}\delta_{\alpha,\alpha^{\prime}}\),
\[\epsilon({\bf k})=-2t\left[\cos\left(k_{1}a\right)+\cos\left(k_{2}a\right) \right]-4t^{\prime}\cos\left(k_{1}a\right)\cos\left(k_{2}a\right) \tag{14}\]
the tight-binding band relation, and
\[\hat{\rho}({\bf p})=\sum_{\alpha=\uparrow,\downarrow}\sum_{{\bf k}_{1}{\bf k }_{2}\in BZ}\sum_{{\bf n}\in{\mathbb{Z}}^{2}}\hat{c}_{\alpha}^{\dagger}({\bf k }_{1})\hat{c}_{\alpha}({\bf k}_{2})\delta_{{\bf k}_{1}+{\bf p}+2\pi{\bf n}/a,{ \bf k}_{2}} \tag{15}\]
Fourier-transformed density operators. We assume that the parameters satisfy the constraints \(|t^{\prime}|\leq t/2\) and \(U\geq 0\). The average number of fermions per site, or _filling factor_, is denoted by \(\nu\). Note that \(0\leq\nu\leq 2\), with _half-filling_ corresponding to \(\nu=1\).
We choose to classify one-particle degrees of freedom with momenta \({\bf k}\) according to the functional form of \(\epsilon({\bf k})\) in (14), as discussed in the introduction. This enables us to disentangle fermions that (presumably) play different roles for the low-energy physics of the model. To this end, we introduce eight non-overlapping regions in momentum space identified by pairs of indices \((r,s)\), with \(r=\pm\) and \(s=0,\pm,2\); see the patchwork of rectangles in Figure 1. These regions are defined such that their union is the (first) Brillouin zone, modulo translations of individual momenta by a reciprocal lattice vector. We define the eight regions mathematically by associating to each one a fixed point \({\bf K}_{r,s}\) and a momentum set \(\Lambda_{r,s}^{*}\), such that every momentum in the (first) Brillouin zone can be written uniquely as \({\bf K}_{r,s}+{\bf k}\) (modulo reciprocal lattice vectors) for some pair of flavor indices \((r,s)\) and momentum \({\bf k}\in\Lambda_{r,s}^{*}\). The relative size of each region is parameterized by a variable \(0\leq\kappa\leq 1\). The precise definitions of the sets \(\Lambda_{r,s}^{*}\) are given in Appendix B and are further discussed in [56].
The eight regions correspond to three classes of fermion degrees of freedom. We let \(s=0\) label so-called antinodal fermions and define \({\bf K}_{+,0}\stackrel{{\rm def}}{{=}}(\pi/a,0)\) and \({\bf K}_{-,0}\stackrel{{\rm def}}{{=}}(0,\pi/a)\). Similarly, we let \(s=\pm\) label so-called nodal fermions and define \({\bf K}_{r,s}=(rQ/a,rsQ/a)\) with a parameter \(Q\) close, but not equal, to \(\pi/2\). To get a simple geometry, it is useful to also introduce so-called _in-_ and _out_ fermions labelled by \(s=2\). The corresponding points are \({\bf K}_{-,2}=(0,0)\) (in) and \({\bf K}_{+,2}=(\pi/a,\pi/a)\) (out), i.e. the center and corners of the Brillouin zone. In the following, one can equally well think of the in- and out fermions as belonging
to the nodal fermions. We also define new fermion operators \(\hat{c}^{(\dagger)}_{r,s,\alpha}({\bf k})=\hat{c}^{(\dagger)}_{\alpha}({\bf K}_{r,s}+{\bf k})\) such that the Hubbard Hamiltonian in (13) can be represented as
\[H_{\rm Hubb}=H^{(0)}_{\rm Hubb}+H^{(1)}_{\rm Hubb} \tag{16}\]
with
\[H^{(0)}_{\rm Hubb}=\sum_{\alpha=\uparrow,\downarrow}\sum_{r=\pm}\sum_{s=0,\pm, 2}\sum_{{\bf k}\in\Lambda^{*}_{r,s}}\left(\epsilon({\bf K}_{r,s}+{\bf k})-\mu+ U/2\right)\hat{c}^{\dagger}_{r,s,\alpha}({\bf k})\hat{c}_{r,s,\alpha}({\bf k}) \tag{17}\]
the free part, and
\[\begin{split} H^{(1)}_{\rm Hubb}=U\left(\frac{a}{L}\right)^{2} \sum_{r_{j},s_{j}}\sum_{{\bf k}_{j}\in\Lambda^{*}_{r_{j},s_{j}}}\sum_{{\bf n} \in\mathbb{Z}^{2}}\delta_{{\bf K}_{r_{1},s_{1}}-{\bf K}_{r_{2},s_{2}}+{\bf K} _{r_{3},s_{3}}-{\bf K}_{r_{4},s_{4}}+{\bf k}_{1}-{\bf k}_{2}+{\bf k}_{3}-{\bf k }_{4},2\pi{\bf n}/a}\\ \times\hat{c}^{\dagger}_{r_{1},s_{1},\uparrow}({\bf k}_{1})\hat{c }_{r_{2},s_{2},\uparrow}({\bf k}_{2})\hat{c}^{\dagger}_{r_{3},s_{3},\downarrow} ({\bf k}_{3})\hat{c}_{r_{4},s_{4},\downarrow}({\bf k}_{4})\end{split} \tag{18}\]
the interaction part.
We will assume that there exists some underlying Fermi surface dominating the low-energy physics of the interacting model near half-filling, and that this surface has "flat parts" that can be approximated by a straight line segment or _Fermi arc_ in each nodal region. Furthermore, we assume that the parameter \(Q\) is such that each \({\bf K}_{r,s=\pm}\) lies on this underlying Fermi surface (\(Q\) is the analogue of \(k_{F}\) in the corresponding 1D model). We make no assumption on the geometry of the Fermi surface in the antinodal regions.
Figure 1: Partition of non-equivalent momenta into eight disjoint regions (rectangles), whose union under suitable translations by reciprocal lattice vectors is the first Brillouin zone. The regions are labelled by pairs of indices \((r,s)\), with \(s=0\) corresponding to antinodal fermions, \(s=\pm\) to nodal fermions, and \(s=2\) to in- or out fermions. The dashed curve is a superimposed non-interacting Fermi surface corresponding to \(t=1\), \(t^{\prime}=-0.2\) and \(\mu=-0.51(1)\). We set the lattice constant \(a=1\).
In the following, we concentrate on that part of (17)-(18) that only involves the nodal fermions (\(s=\pm\)); the end result for the effective nodal Hamiltonian is given in the next section, while the inclusion of antinodal fermions is discussed in Section 5. In Appendix B, the approximations introduced below (except for the continuum limit) are also applied to the antinodal (and in- and out) fermions in order to highlight similarities and differences between the fermions. In the appendices, we also include a nn interaction in the lattice Hamiltonian.
We expand the tight-binding band relations \(\epsilon({\bf K}_{r,s}+{\bf k})\) for the nodal fermions as
\[\epsilon({\bf K}_{r,s}+{\bf k})=\epsilon({\bf K}_{r,s})+\varepsilon_{r,s}({\bf k })+O(|a{\bf k}|^{2}),\qquad r,s=\pm \tag{19}\]
with
\[\varepsilon_{r,s}({\bf k})=v_{F}rk_{s},\qquad v_{F}=2\sqrt{2}\sin(Q)\left[t+2t ^{\prime}\cos(Q)\right]a \tag{20}\]
and where we use coordinates \(k_{\pm}=(k_{1}\pm k_{2})/\sqrt{2}\). Our first approximation is to only keep terms up to linear order in \(|a{\bf k}|\).
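As a sanity check on the expansion (19)-(20), the exact band difference can be compared with the linearized form numerically; the following sketch is our own illustration, with arbitrary sample parameters:

```python
import numpy as np

t, tp, a, Q = 1.0, -0.2, 1.0, np.pi/2 - 0.1     # sample parameters (our choice)

def eps(k):
    """Tight-binding band relation, Eq. (14)."""
    k1, k2 = k
    return (-2*t*(np.cos(k1*a) + np.cos(k2*a))
            - 4*tp*np.cos(k1*a)*np.cos(k2*a))

vF = 2*np.sqrt(2)*np.sin(Q)*(t + 2*tp*np.cos(Q))*a   # Eq. (20)

r, s = +1, +1
K = np.array([r*Q/a, r*s*Q/a])                       # nodal point K_{r,s}
k = 1e-3*np.array([1.0, 1.0])/np.sqrt(2)             # small momentum step
ks = (k[0] + s*k[1])/np.sqrt(2)                      # component k_s

print(eps(K + k) - eps(K))   # exact difference ...
print(vF*r*ks)               # ... agrees with eps_{r,s}(k) up to O(|ak|^2)
```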
The interaction in the Hubbard Hamiltonian consists of those scattering processes \(({\bf k}_{2},{\bf k}_{4})\rightarrow({\bf k}_{1},{\bf k}_{3})\) that conserve overall momenta (up to reciprocal lattice vectors). When writing the Hubbard Hamiltonian in terms of the operators \(\hat{c}^{(\dagger)}_{r,s,\alpha}({\bf k})\), conservation of momenta corresponds to the following requirement
\[({\bf K}_{r_{1},s_{1}}+{\bf k}_{1})-({\bf K}_{r_{2},s_{2}}+{\bf k}_{2})+({\bf K }_{r_{3},s_{3}}+{\bf k}_{3})-({\bf K}_{r_{4},s_{4}}+{\bf k}_{4})\in(2\pi/a) \mathbb{Z}^{2} \tag{21}\]
with \({\bf k}_{j}\in\Lambda^{*}_{r_{j},s_{j}}\). The next approximation is to reduce the number of (nodal) interaction terms in the Hubbard Hamiltonian by imposing the additional constraint
\[{\bf K}_{r_{1},s_{1}}-{\bf K}_{r_{2},s_{2}}+{\bf K}_{r_{3},s_{3}}-{\bf K}_{r_{ 4},s_{4}}\in(2\pi/a)\mathbb{Z}^{2} \tag{22}\]
for interaction terms that we keep. If all momenta lie strictly on a Fermi arc, the constraint (22) follows from momentum conservation. All possible combinations of \((r_{j},s_{j})\) satisfying this constraint when \(Q\neq\pi/2\) are given in Table 1 in Appendix B. If \(Q=\pi/2\) there are additional (and potentially gap-inducing) umklapp processes; it is tempting to identify this value with the half-filled model.
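The surviving flavor combinations can also be enumerated by brute force; the short sketch below (our own illustration; cf. Table 1 in Appendix B) tests the constraint (22) for all \((r_{j},s_{j})\), with the reference momenta of Section 3 and a generic \(Q\neq\pi/2\):

```python
import numpy as np
from itertools import product

a, Q = 1.0, np.pi/2 - 0.1        # generic Q != pi/2 (our sample value)

def K(r, s):
    """Reference momenta: antinodal (s = 0), nodal (s = +1, -1),
    and in-/out (s = 2) regions."""
    if s == 0:
        return np.array([np.pi/a, 0.0]) if r > 0 else np.array([0.0, np.pi/a])
    if s == 2:
        return np.array([np.pi/a, np.pi/a]) if r > 0 else np.array([0.0, 0.0])
    return np.array([r*Q/a, r*s*Q/a])

def satisfies_22(idx):
    (r1, s1), (r2, s2), (r3, s3), (r4, s4) = idx
    d = (K(r1, s1) - K(r2, s2) + K(r3, s3) - K(r4, s4))/(2*np.pi/a)
    return np.allclose(d, np.round(d))   # lies in (2 pi / a) Z^2 ?

flavors = list(product([+1, -1], [0, +1, -1, 2]))
kept = [idx for idx in product(flavors, repeat=4) if satisfies_22(idx)]
print(len(kept), "of", len(flavors)**4, "flavor combinations satisfy (22)")
```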
Obvious solutions to the constraint in (22) are to set either \((r_{1},s_{1})=(r_{2},s_{2})\) and \((r_{3},s_{3})=(r_{4},s_{4})\), or \((r_{1},s_{1})=(r_{4},s_{4})\) and \((r_{2},s_{2})=(r_{3},s_{3})\). These combinations naturally lead to the definition of density- and spin operators \(\hat{\rho}_{r,s}\) and \(\hat{S}^{i}_{r,s}\), \(i=1,2,3\), corresponding to each pair of flavor indices. For example, the nodal density operators are
\[\hat{\rho}_{r,s}({\bf p})=\sum_{\alpha=\uparrow,\downarrow}\sum_{{\bf k}_{1},{\bf k}_{2}\in\Lambda^{*}_{r,s}}\hat{c}^{\dagger}_{r,s,\alpha}({\bf k}_{1})\hat{c}_{r,s,\alpha}({\bf k}_{2})\delta_{{\bf k}_{1}+{\bf p},{\bf k}_{2}}. \tag{23}\]
The interaction terms in the truncated Hubbard Hamiltonian with the above combinations for \((r_{j},s_{j})\) are products of these bilinears, i.e. \(\hat{\rho}_{r,s}\hat{\rho}_{r^{\prime},s^{\prime}}\) and \(\hat{\bf S}_{r,s}\cdot\hat{\bf S}_{r^{\prime},s^{\prime}}\). The constraint in (22) also allows for interaction terms involving pairing bilinears of the form \(\hat{\psi}^{(\dagger)}\hat{\psi}^{(\dagger)}\). We define associated pairing operators denoted by \(\hat{P}^{\mu}_{r,s}\), \(\mu=0,1,2,3\), and write these interaction terms as \(\hat{P}^{\dagger}_{r,s}\cdot\hat{P}_{r^{\prime},s^{\prime}}\) with \(\hat{P}_{r,s}=(\hat{P}^{0}_{r,s},\hat{P}^{1}_{r,s},\hat{P}^{2}_{r,s},\hat{P}^{ 3}_{r,s})\).
The components of the momenta in the nodal sets \(\Lambda^{*}_{r,s=\pm}\) are restricted by cutoffs proportional to the inverse lattice constant. Our partial continuum limit for the nodal fermions involves removing the cutoff in the directions orthogonal to each Fermi arc. To this end, we normal-order the kinetic part and the bilinears in the truncated interaction with respect to a state \(\Omega\) (the Dirac sea) in which all momenta up to the Fermi arcs in the nodal regions are occupied.
Consider now region \((+,+)\) in Figure 1. After removing the cutoff in the \(k_{+}\)-direction, it would be possible to bosonize the nodal fermions by treating \(k_{+}\) as an unbounded 1D chain of momenta and \(k_{-}\) as a flavor index labelling each chain. However, as discussed in the introduction, this does not lead to simple bosonic commutation relations for (23); only densities with momentum exchange in the \(k_{+}\)-direction would behave as bosons, and one cannot treat momentum exchange between fermions on different chains. Instead, it is more fruitful to first do a Fourier transformation (change of basis) in the \(k_{-}\)-direction and then bosonize the fermions using a new flavor index \(x_{-}\) [44, 56]. If one also removes the cutoff in the \(k_{-}\)-direction, the commutation relations of the (normal-ordered and rescaled) densities in (23) become those of 2D bosons. However, this limit is delicate, as the (normal-ordered) Hamiltonian would then no longer be bounded from below; see the next section.
A mathematically more sound way to proceed is to keep the cutoff and instead modify the nodal density operators in (23); we define the normal-ordered density operators
\[\hat{J}^{0}_{r,s=\pm}({\bf p})=\sum_{\alpha}\sum_{{\bf k}_{1},{\bf k}_{2}\in \Lambda^{*}_{s}}:\hat{c}^{\dagger}_{r,s,\alpha}({\bf k}_{1})\hat{c}_{r,s, \alpha}({\bf k}_{2})\colon\sum_{n\in\mathbb{Z}}\delta_{{\bf k}_{1}+{\bf p}+2 \pi n{\bf e}_{-s}/\tilde{a},{\bf k}_{2}} \tag{24}\]
with \({\bf e}_{-s}\) a unit vector in the direction of the Fermi arc. Here \(\tilde{a}=\sqrt{2}a/(1-\kappa)\) with the length of each Fermi arc given by \(2\pi/\tilde{a}\). This operator is obtained from (23) by adding "umklapp terms" corresponding to \(n\neq 0\). As shown in [58], it is possible to send \(\tilde{a}\to 0^{+}\) on the level of correlation functions. We do a similar regularization for the spin operators. With this, one obtains our effective nodal Hamiltonian; see Equation (31) in the next section.
## 4 Nodal fermions
We formulate the nodal part of the effective QFT model obtained from our partial continuum limit of the 2D Hubbard model near half-filling. We also show that the nodal fermions can be bosonized using exact methods. Some of these results are straightforward generalisations of the corresponding ones obtained for the so-called _Mattis model_ in [58], and in those instances we will be rather brief in the presentation. Further mathematical details are also given in Appendix C. In this section, the flavor indices are always \(r,s=\pm\).
### The nodal Hamiltonian
We rescale the nodal fermion operators by setting \(\hat{\psi}_{r,s,\alpha}({\bf k})=L/(2\pi)\hat{c}_{r,s,\alpha}({\bf k})\) such that
\[\{\hat{\psi}_{r,s,\alpha}({\bf k}),\hat{\psi}^{\dagger}_{r^{\prime},s^{\prime },\alpha^{\prime}}({\bf k}^{\prime})\}=[L/(2\pi)]^{2}\delta_{r,r^{\prime}} \delta_{s,s^{\prime}}\delta_{\alpha,\alpha^{\prime}}\delta_{{\bf k},{\bf k}^{ \prime}},\quad\{\hat{\psi}_{r,s,\alpha}({\bf k}),\hat{\psi}_{r^{\prime},s^{ \prime},\alpha^{\prime}}({\bf k}^{\prime})\}=0. \tag{25}\]
The momenta \({\bf k}\) are in the (unbounded) sets
\[\Lambda_{s}^{*}=\left\{{\bf k}\in\frac{2\pi}{L}\Big{(}\mathbb{Z}+\frac{1}{2} \Big{)}^{2}\ :\ -\frac{\pi}{\tilde{a}}\leq k_{-s}<\frac{\pi}{\tilde{a}}\right\}. \tag{26}\]
The nodal part of the effective model is obtained from a Dirac vacuum \(\Omega\) satisfying
\[\hat{\psi}_{r,s,\alpha}({\bf k})\Omega=\hat{\psi}_{r,s,\alpha}^{\dagger}(-{ \bf k})\Omega=0,\quad\mbox{for all }{\bf k}\in\Lambda_{s}^{*}\ \mbox{ such that }\ rk_{s}>0 \tag{27}\]
with \(\langle\Omega,\Omega\rangle=1\). The specific choice of filling for the antinodal fermion states in \(\Omega\) is unimportant; we assume for simplicity that no state is occupied. We also introduce ordinary fermion normal-ordering with respect to \(\Omega\) such that \(:\!{\cal O}\!:\!={\cal O}\!-\langle\Omega,{\cal O}\Omega\rangle\) for fermion bilinears \({\cal O}=\hat{\psi}_{r,s,\alpha}^{\dagger}({\bf k})\hat{\psi}_{r^{\prime},s^{ \prime},\alpha^{\prime}}({\bf k}^{\prime})\).
We define the following nodal bilinear operators
\[\hat{J}_{r,s}^{\mu}({\bf p}) =\sum_{\alpha,\beta}\,\sum_{{\bf k}_{1},{\bf k}_{2}\in\Lambda_{s }^{*}}\sum_{n\in\mathbb{Z}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\,:\!\hat{\psi}_{ r,s,\alpha}^{\dagger}({\bf k}_{1})\sigma_{\alpha,\beta}^{\mu}\hat{\psi}_{r,s, \beta}({\bf k}_{2})\!:\delta_{{\bf k}_{1}+{\bf p},{\bf k}_{2}+2\pi n{\bf e}_{- s}/\tilde{a}} \tag{28}\] \[\hat{P}_{r,s}^{\mu}({\bf p}) =\frac{1}{2}\sum_{\alpha,\beta}\,\sum_{{\bf k}_{1},{\bf k}_{2}\in \Lambda_{r,s}^{*}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\hat{\psi}_{-r,s,\alpha}({ \bf k}_{1})\sigma_{\alpha,\beta}^{\mu}\hat{\psi}_{r,s,\beta}({\bf k}_{2}) \delta_{{\bf k}_{1}+{\bf k}_{2},{\bf p}} \tag{29}\]
with \(r,s=\pm\) and \(\mu=0,1,2,3\); here \(\sigma^{i}\), \(i=1,2,3\), are the Pauli matrices, \(\sigma_{\alpha,\beta}^{0}=\delta_{\alpha,\beta}\) and the momenta \({\bf p}\) are in the set
\[\tilde{\Lambda}_{s}^{*}=\left\{{\bf p}\in\mathbb{R}^{2}\ :\ p_{\pm}\in(2\pi/L)\mathbb{Z},\ -\pi/\tilde{a}\leq p_{-s}<\pi/\tilde{a}\right\}. \tag{30}\]
Spin operators are given by the simple rescaling \(\hat{S}_{r,s}^{i}=\hat{J}_{r,s}^{i}/2\). We note that removing the cutoff in the summation of momenta \({\bf k}_{1},{\bf k}_{2}\) in (29) would lead to ill-defined operators [39]. For example, acting with such operators on \(\Omega\) would result in a state of infinite norm.
The nodal part of the effective Hamiltonian is now defined as
\[\begin{split}& H_{n}=H+U\sum_{r,r^{\prime},s=\pm}\,\sum_{{\bf p}} \Bigl{(}\frac{a}{L}\Bigr{)}^{2}\chi({\bf p})\hat{P}_{r,s}^{\dagger}({\bf p}) \cdot\hat{P}_{r^{\prime},-s}({\bf p})\\ & H=H_{0}+H_{1}\end{split} \tag{31}\]
with
\[H_{0}=v_{F}\sum_{\alpha=\pm}\sum_{r,s=\pm}\sum_{{\bf k}\in\Lambda_{s}^{*}} \Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}rk_{s}:\hat{\psi}_{r,s,\alpha}^{\dagger}({ \bf k})\hat{\psi}_{r,s,\alpha}({\bf k}): \tag{32}\]
the free part, and
\[\begin{split} H_{1}=\frac{U}{2}\sum_{{\bf p}}\Bigl{(}\frac{a}{L} \Bigr{)}^{2}\chi({\bf p})\Bigl{(}\sum_{s=\pm}\bigl{(}\sum_{r=\pm}\hat{J}_{r,s} ^{0\dagger}\hat{J}_{r,s}^{0}+\hat{J}_{+,s}^{0\dagger}\hat{J}_{-,s}^{0}-\hat{ \bf J}_{+,s}^{\dagger}\cdot\hat{\bf J}_{-,s}\bigr{)}\\ +\sum_{r,r^{\prime}=\pm}\bigl{(}\hat{J}_{r,+}^{0\dagger}\hat{J}_{ r^{\prime},-}^{0}-\hat{\bf J}_{r,+}^{\dagger}\cdot\hat{\bf J}_{r^{\prime},-} \bigr{)}\Bigr{)}\end{split} \tag{33}\]
the density- and spin interaction part; here \(\hat{\bf J}_{r,s}=(\hat{J}_{r,s}^{1},\hat{J}_{r,s}^{2},\hat{J}_{r,s}^{3})\) and we suppress common arguments of \({\bf p}\). Furthermore, we have introduced a cutoff function for possible momentum exchange in the interaction by
\[\chi({\bf p})=\begin{cases}1&\text{if }-\pi/\tilde{a}\leq p_{\pm}<\pi/\tilde{a} \cr 0&\text{otherwise}\end{cases}. \tag{34}\]
The nodal Hamiltonian in (31) contains different types of scattering processes. Terms involving the bilinears in (28) correspond to processes for which both fermions remain near the same Fermi arc, and for which their spin projection may or may not be reversed. In contrast, terms involving (29) are such that both fermions are scattered from one Fermi arc to another. As we will see below, these latter terms cannot be easily analyzed using our methods.
We also summarize our conventions for Fourier transforms of nodal operators (similar expressions can be found in [58]). Define nodal fermion field operators by
\[\psi_{r,s,\alpha}({\bf x})=\frac{1}{2\pi}\sum_{{\bf k}\in\Lambda_{s}^{*}} \Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\hat{\psi}_{r,s,\alpha}({\bf k})\,{\rm e}^{{ \rm i}{\bf k}\cdot{\bf x}}\qquad(s=\pm), \tag{35}\]
with "positions" \({\bf x}\) in
\[\Lambda_{s}=\bigl{\{}{\bf x}\in\mathbb{R}^{2}\,:\,x_{s}\in\mathbb{R},\;x_{-s} \in\tilde{a}\mathbb{Z},\;-L/2\leq x_{\pm}<L/2\bigr{\}} \tag{36}\]
and which obey the anticommutation relations
\[\{\psi_{r,s,\sigma}({\bf x}),\psi_{r^{\prime},s^{\prime},\sigma^{\prime}}^{ \dagger}({\bf y})\}=\delta_{r,r^{\prime}}\delta_{s,s^{\prime}}\delta_{\sigma, \sigma^{\prime}}\tilde{\delta}_{s}({\bf x}-{\bf y}),\qquad\tilde{\delta}_{s}( {\bf x})=\delta(x_{s})\frac{1}{\tilde{a}}\delta_{x_{-s},0}. \tag{37}\]
The (regularized) Fourier transforms of the nodal density- and spin operators in (28) are defined as
\[J_{r,s}^{\mu}({\bf x};\epsilon)=\sum_{{\bf p}\in\tilde{\Lambda}_{s}^{*}}\frac {1}{L^{2}}\hat{J}_{r,s}^{\mu}({\bf p})\,{\rm e}^{{\rm i}{\bf p}\cdot{\bf x}- \epsilon|p_{s}|/2},\qquad J_{r,s}^{\mu}({\bf x})=\lim_{\epsilon\to 0^{+}}J_{r,s}^{ \mu}({\bf x};\epsilon) \tag{38}\]
with \(\epsilon>0\) infinitesimal. Using these operators, it is for example possible to rewrite \(H_{0}\) in (32) in "position" space and thus obtain a well-defined regularised expression replacing the free part of (2)
\[H_{0}=v_{F}\sum_{\alpha=\pm}\sum_{r,s=\pm}\int_{\Lambda_{s}}\mathrm{d}^{2}x\,:\!\psi_{r,s,\alpha}^{\dagger}(\mathbf{x})(-\mathrm{i}r\partial_{s})\psi_{r,s,\alpha}(\mathbf{x})\!: \tag{39}\]

where the integral sign is shorthand for

\[\int_{\Lambda_{s}}\mathrm{d}^{2}x\stackrel{{\mbox{\tiny def}}}{{=}}\tilde{a}\sum_{x_{-s}\in\Lambda_{\mathrm{1D}}}\,\int_{-L/2}^{L/2}\mathrm{d}x_{s} \tag{40}\]

with the 1D lattice

\[\Lambda_{\mathrm{1D}}=\{x\in\tilde{a}\mathbb{Z}\,:\,-L/2\leq x<L/2\}. \tag{41}\]
### Bosonization
The presence of the Dirac vacuum satisfying (27) leads to anomalous commutator relations [27] for the fermion bilinears in (28) (see Appendix C for proof)
\[\begin{split}\Big{[}\hat{J}^{0}_{r,s}(\mathbf{p}),\hat{J}^{0}_{r,s}(\mathbf{p}^{\prime})\Big{]}&=r\frac{4\pi p_{s}}{\tilde{a}}\Big{(}\frac{L}{2\pi}\Big{)}^{2}\delta_{\mathbf{p}+\mathbf{p}^{\prime},\mathbf{0}}\\ \Big{[}\hat{J}^{i}_{r,s}(\mathbf{p}),\hat{J}^{j}_{r,s}(\mathbf{p}^{\prime})\Big{]}&=2\mathrm{i}\sum_{k=1}^{3}\epsilon_{ijk}\hat{J}^{k}_{r,s}(\mathbf{p}+\mathbf{p}^{\prime})+r\frac{4\pi p_{s}}{\tilde{a}}\Big{(}\frac{L}{2\pi}\Big{)}^{2}\delta_{\mathbf{p}+\mathbf{p}^{\prime},\mathbf{0}}\delta_{i,j}\end{split} \tag{42}\]
with all other commutators vanishing; \(\epsilon_{ijk}\) is the totally antisymmetric tensor and \(\epsilon_{123}=1\). Furthermore, \(\hat{J}^{\mu}_{r,s}(\mathbf{p})\Omega=0\) for all \(\mathbf{p}\) such that \(rp_{s}\geq 0\). Using (42) together with (38), one obtains the commutation relations in (4) (everywhere replacing \(\delta(\mathbf{x})\) with \(\tilde{\delta}_{s}(\mathbf{x})\) defined in (37)).
We introduce spin-dependent densities,
\[\hat{J}_{r,s,\uparrow}(\mathbf{p})=\big{(}\hat{J}^{0}_{r,s}(\mathbf{p})+\hat {J}^{3}_{r,s}(\mathbf{p})\big{)}/2,\qquad\hat{J}_{r,s,\downarrow}(\mathbf{p}) =\big{(}\hat{J}^{0}_{r,s}(\mathbf{p})-\hat{J}^{3}_{r,s}(\mathbf{p})\big{)}/2, \tag{43}\]
which by (42) satisfy the commutation relations
\[\Big{[}\hat{J}_{r,s,\alpha}(\mathbf{p}),\hat{J}_{r,s,\alpha^{\prime}}(\mathbf{ p}^{\prime})\Big{]}=r\delta_{\alpha,\alpha^{\prime}}\frac{2\pi p_{s}}{\hat{a}} \Big{(}\frac{L}{2\pi}\Big{)}^{2}\delta_{\mathbf{p}+\mathbf{p}^{\prime},\mathbf{ 0}}. \tag{44}\]
It follows that the rescaled densities
\[b_{s,\alpha}(\mathbf{p})=\begin{cases}-\frac{\mathrm{i}}{L}\sqrt{\frac{2\pi \tilde{a}}{|p_{s}|}}\hat{J}_{+,s,\alpha}(\mathbf{p})&\text{if $p_{s}>0$}\\ \frac{\mathrm{i}}{L}\sqrt{\frac{2\pi\tilde{a}}{|p_{s}|}}\hat{J}_{-,s,\alpha}( \mathbf{p})&\text{if $p_{s}<0$}\end{cases} \tag{45}\]
obey the defining relations of 2D boson creation- and annihilation operators,
\[[b_{s,\alpha}(\mathbf{p}),b^{\dagger}_{s^{\prime},\alpha^{\prime}}(\mathbf{p} ^{\prime})]=\delta_{s,s^{\prime}}\delta_{\alpha,\alpha^{\prime}}\delta_{ \mathbf{p},\mathbf{p}^{\prime}},\qquad[b_{s,\alpha}(\mathbf{p}),b_{s^{\prime}, \alpha^{\prime}}(\mathbf{p}^{\prime})]=0,\qquad b_{s,\alpha}(\mathbf{p}) \Omega=0. \tag{46}\]
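As a quick consistency check (added here for convenience), inserting (45) into (44) for \(p_{s},p^{\prime}_{s}>0\), and using \(\hat{J}_{r,s,\alpha}(\mathbf{p})^{\dagger}=\hat{J}_{r,s,\alpha}(-\mathbf{p})\), gives

\[\big[b_{s,\alpha}(\mathbf{p}),b^{\dagger}_{s,\alpha}(\mathbf{p}^{\prime})\big]=\frac{2\pi\tilde{a}}{L^{2}\sqrt{|p_{s}||p^{\prime}_{s}|}}\big[\hat{J}_{+,s,\alpha}(\mathbf{p}),\hat{J}_{+,s,\alpha}(-\mathbf{p}^{\prime})\big]=\frac{2\pi\tilde{a}}{L^{2}|p_{s}|}\,\frac{2\pi p_{s}}{\tilde{a}}\Big{(}\frac{L}{2\pi}\Big{)}^{2}\delta_{\mathbf{p},\mathbf{p}^{\prime}}=\delta_{\mathbf{p},\mathbf{p}^{\prime}},\]

with the case \(p_{s},p^{\prime}_{s}<0\) treated analogously using \(r=-\); the remaining relations in (46) follow in the same way.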
The boson operators in (45) are defined for momenta \(\mathbf{p}\in\tilde{\Lambda}^{*}_{s}\) such that \(p_{s}\neq 0\); we denote this set as \(\hat{\Lambda}^{*}_{s}\) (see (89)). Corresponding to momenta with \(p_{s}=0\), we also introduce so-called zero mode operators, or simply _zero modes_, \(N_{r,s,\alpha}(x)\) with \(x\in\Lambda_{\mathrm{1D}}\) (see (41)); their definition is given in Appendix C. To complete the bosonization of the nodal fermions, we also need the so-called _Klein factors_ \(R_{r,s,\alpha}(x)\) conjugate to the zero modes. These are sometimes called _charge shift-_ or _ladder operators_ [60] as they raise or lower the number of fermions (with flavor indices \((r,s,\alpha,x)\)) by one when acting on the Dirac vacuum. The Klein factors, together with the boson operators introduced above, span the nodal part of the fermion Fock space when acting on the Dirac vacuum; see Appendix C for details. This enables us to express nodal operators in terms of Klein factors and density operators; in particular, the fermion field operator in (35) has the form
\[\psi_{r,s,\alpha}(\mathbf{x})\sim\frac{1}{\sqrt{2\pi\tilde{a}\epsilon}}R_{r,s,\alpha}(x_{-s})^{-r}\exp\Bigl{(}r\frac{\tilde{a}}{2\pi}\sum_{\mathbf{p}\in\hat{\Lambda}^{*}_{s}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\frac{1}{p_{s}}\hat{J}_{r,s,\alpha}(\mathbf{p})\,\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\mathbf{x}}\,\mathrm{e}^{-\epsilon|p_{s}|/2}\Bigr{)} \tag{47}\]
with \(\epsilon\to 0^{+}\) implicit; precise statements are given in Appendix C.
We define _boson normal-ordering_ with respect to the Dirac vacuum \(\Omega\) such that
\[\stackrel{{\times}}{{\times}}\hat{J}^{\mu}_{r,s}(\mathbf{p})\hat{J}^{\mu}_{r^{\prime},s^{\prime}}(\mathbf{p}^{\prime})\stackrel{{\times}}{{\times}}\;\stackrel{{\mbox{\tiny def}}}{{=}}\;\begin{cases}\hat{J}^{\mu}_{r,s}(\mathbf{p})\hat{J}^{\mu}_{r^{\prime},s^{\prime}}(\mathbf{p}^{\prime})&\text{if }rp_{s}<0\\ \hat{J}^{\mu}_{r^{\prime},s^{\prime}}(\mathbf{p}^{\prime})\hat{J}^{\mu}_{r,s}(\mathbf{p})&\text{if }rp_{s}\geq 0\end{cases}\qquad(\mu=0,1,2,3) \tag{48}\]
(analogous expressions hold for \(\hat{J}_{r,s,\alpha}\)). Then the following operator identities hold true
\[\begin{split}\sum_{\alpha=\uparrow,\downarrow}\sum_{\mathbf{k} \in\Lambda_{s}^{*}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}rk_{s}:\!\hat{\psi}^{ \dagger}_{r,s,\alpha}\hat{\psi}_{r,s,\alpha}\!:&=\tilde{a}\pi\sum_{ \alpha=\uparrow,\downarrow}\sum_{\mathbf{p}\in\hat{\Lambda}_{s}^{*}}\frac{1 }{L^{2}}\mathop{\times}\limits^{\times}\hat{J}^{\dagger}_{r,s,\alpha}\hat{J} _{r,s,\alpha}\mathop{\times}\limits^{\times}\\ &=\frac{\tilde{a}\pi}{2}\sum_{\mathbf{p}\in\hat{\Lambda}_{s}^{*}} \frac{1}{L^{2}}\mathop{\times}\limits^{\times}\Bigl{(}\hat{J}^{0\dagger}_{r,s }\hat{J}^{0}_{r,s}+\frac{1}{3}\hat{\mathbf{J}}^{\dagger}_{r,s}\cdot\hat{ \mathbf{J}}_{r,s}\Bigr{)}\mathop{\times}\limits^{\times}\end{split} \tag{49}\]
with the momentum sets defined in (26) and (30). The first identity is an application of the Kronig identity, while the second is a Sugawara construction; see Appendix C.
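For orientation, we recall the 1D prototype of the first identity: schematically, for a single right-moving branch with chiral densities \(\hat{\rho}(p)\) and zero-mode contributions lumped into \(z.m.\),

\[\sum_{k}k:\!c^{\dagger}(k)c(k)\!:\;=\;\frac{\pi}{L}\sum_{p}\stackrel{{\times}}{{\times}}\hat{\rho}(-p)\hat{\rho}(p)\stackrel{{\times}}{{\times}}+\,z.m.\]

Equation (49) is its 2D analogue; the extra factor \(\tilde{a}\) accounts for the sum over transverse chains.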
### An exactly solvable model of 2D electrons
We discuss the bosonization of the nodal Hamiltonian in (31) using the results obtained above. Inserting the last expression of (49) into (32) gives (5) with (cf. (6))
\[H_{C}=\frac{v_{F}\pi\tilde{a}}{2}\sum_{\mathbf{p}}\frac{1}{L^{2}}\stackrel{{\times}}{{\times}}\Bigl{(}\sum_{r,s=\pm}\bigl{(}(1+2\gamma\chi(\mathbf{p}))\,\hat{J}^{0\dagger}_{r,s}\hat{J}^{0}_{r,s}+\gamma\chi(\mathbf{p})\hat{J}^{0\dagger}_{r,s}\hat{J}^{0}_{-r,s}\bigr{)}+2\gamma\chi(\mathbf{p})\sum_{r,r^{\prime}=\pm}\hat{J}^{0\dagger}_{r,+}\hat{J}^{0}_{r^{\prime},-}\Bigr{)}\stackrel{{\times}}{{\times}} \tag{50}\]

and

\[H_{\mathbf{S}}=\frac{v_{F}\pi\tilde{a}}{2}\sum_{\mathbf{p}}\frac{1}{L^{2}}\stackrel{{\times}}{{\times}}\Bigl{(}\sum_{r,s=\pm}\bigl{(}\hat{\mathbf{J}}^{\dagger}_{r,s}\cdot\hat{\mathbf{J}}_{r,s}/3-\gamma\chi(\mathbf{p})\hat{\mathbf{J}}^{\dagger}_{r,s}\cdot\hat{\mathbf{J}}_{-r,s}\bigr{)}-2\gamma\chi(\mathbf{p})\sum_{r,r^{\prime}=\pm}\hat{\mathbf{J}}^{\dagger}_{r,+}\cdot\hat{\mathbf{J}}_{r^{\prime},-}\Bigr{)}\stackrel{{\times}}{{\times}} \tag{51}\]
and with the dimensionless coupling constant
\[\gamma=\frac{a^{2}U}{2\pi\tilde{a}v_{F}}. \tag{52}\]
We emphasize that this does not imply (exact) spin-charge separation of the nodal Hamiltonian; there is also the second term on the right hand side of (31) that does not have a simple bosonized form (although it can indeed be expressed in terms of Klein factors, density- and spin operators using Proposition C.3 in Appendix C).
A complete analysis of (50)-(51) will not be attempted in the present paper. Instead, we will focus in the remainder of this section on the "abelian" part of \(H\) obtained by breaking
manifest spin rotation invariance, and which we denote by \(H_{M}\) due to its similarity to the so-called Mattis Hamiltonian in [58]. More specifically, we write
\[\begin{split} H=H_{M}-\frac{U}{4}\sum_{\mathbf{p}}\Bigl{(}\frac{a} {L}\Bigr{)}^{2}\chi(\mathbf{p})\Bigl{(}&\sum_{s=\pm}\bigl{(} \hat{J}^{+}_{+,s}(-\mathbf{p})\hat{J}^{-}_{-,s}(\mathbf{p})+h.c.\bigr{)}\\ &+\sum_{r,r^{\prime}=\pm}\bigl{(}\hat{J}^{+}_{r,+}(-\mathbf{p}) \hat{J}^{-}_{r^{\prime},-}(\mathbf{p})+h.c.\bigr{)}\Bigr{)}\end{split} \tag{53}\]
with the raising- and lowering operators defined as usual, \(\hat{J}^{\pm}_{r,s}=\bigl{(}\hat{J}^{1}_{r,s}\pm{\rm i}\hat{J}^{2}_{r,s}\bigr{)}/2\), and where \(H_{M}\) only depends on \(\hat{J}^{0}_{r,s}\) and \(\hat{J}^{3}_{r,s}\). Using results from Section 4.2, it is possible to write the Hamiltonian \(H_{M}\) in terms of free bosons. Define
\[\begin{split}\hat{\Phi}_{C;s}(\mathbf{p})\stackrel{{ \text{\tiny def}}}{{=}}&\sqrt{\frac{\tilde{a}}{8\pi}} \frac{1}{{\rm i}p_{s}}\Bigl{(}\hat{J}^{0}_{+,s}(\mathbf{p})+\hat{J}^{0}_{-,s}( \mathbf{p})\Bigr{)},\ \ \ \ \ \hat{\Pi}_{C;s}(\mathbf{p})\stackrel{{ \text{\tiny def}}}{{=}}&\sqrt{\frac{\tilde{a}}{8\pi}} \Bigl{(}-\hat{J}^{0}_{+,s}(\mathbf{p})+\hat{J}^{0}_{-,s}(\mathbf{p})\Bigr{)} \\ \hat{\Phi}_{S;s}(\mathbf{p})\stackrel{{\text{\tiny def }}}{{=}}&\sqrt{\frac{\tilde{a}}{8\pi}}\frac{1}{{\rm i}p_{s}} \Bigl{(}\hat{J}^{3}_{+,s}(\mathbf{p})+\hat{J}^{3}_{-,s}(\mathbf{p})\Bigr{)},\ \ \ \ \ \hat{\Pi}_{S;s}(\mathbf{p})\stackrel{{\text{\tiny def }}}{{=}}&\sqrt{\frac{\tilde{a}}{8\pi}}\Bigl{(}-\hat{J}^{3}_{+,s}( \mathbf{p})+\hat{J}^{3}_{-,s}(\mathbf{p})\Bigr{)}\end{split} \tag{54}\]
for \(s=\pm\) and 2D momenta \(\mathbf{p}\in\hat{\Lambda}_{s}^{*}\). It follows that these obey the defining relations of 2D neutral bosons, i.e.
\[[\hat{\Phi}_{X;s}(\mathbf{p}),\hat{\Pi}^{\dagger}_{X^{\prime};s^{\prime}}( \mathbf{p}^{\prime})]={\rm i}\delta_{X,X^{\prime}}\delta_{s,s^{\prime}}\Bigl{(} \frac{L}{2\pi}\Bigr{)}^{2}\delta_{\mathbf{p},\mathbf{p}^{\prime}} \tag{55}\]
(all other commutators vanishing) and
\[\hat{\Pi}^{\dagger}_{X;s}(\mathbf{p})=\hat{\Pi}_{X;s}(-\mathbf{p}),\qquad\hat {\Phi}^{\dagger}_{X;s}(\mathbf{p})=\hat{\Phi}_{X;s}(-\mathbf{p}), \tag{56}\]
where we have used the symbolic notation \(X,X^{\prime}=C,S\). Furthermore, applying the first equality in (49) to (32), together with (54), allows us to write
\[H_{M}=H_{C}+H_{S} \tag{57}\]
with
\[H_{C}=\frac{v_{F}}{2}\sum_{s=\pm}\sum_{\mathbf{p}\in\hat{\Lambda}_{s}^{*}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\stackrel{{\times}}{{\times}}\Bigl{(}\bigl{(}1+\gamma\chi(\mathbf{p})\bigr{)}\hat{\Pi}^{\dagger}_{C;s}\hat{\Pi}_{C;s}+\bigl{(}1+3\gamma\chi(\mathbf{p})\bigr{)}p_{s}^{2}\hat{\Phi}^{\dagger}_{C;s}\hat{\Phi}_{C;s}+2\gamma p_{+}p_{-}\chi(\mathbf{p})\hat{\Phi}^{\dagger}_{C;s}\hat{\Phi}_{C;-s}\Bigr{)}\stackrel{{\times}}{{\times}}+z.m. \tag{58}\]

and

\[H_{S}=\frac{v_{F}}{2}\sum_{s=\pm}\sum_{\mathbf{p}\in\hat{\Lambda}_{s}^{*}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\stackrel{{\times}}{{\times}}\Bigl{(}\bigl{(}1+\gamma\chi(\mathbf{p})\bigr{)}\hat{\Pi}^{\dagger}_{S;s}\hat{\Pi}_{S;s}+\bigl{(}1-\gamma\chi(\mathbf{p})\bigr{)}p_{s}^{2}\hat{\Phi}^{\dagger}_{S;s}\hat{\Phi}_{S;s}-2\gamma p_{+}p_{-}\chi(\mathbf{p})\hat{\Phi}^{\dagger}_{S;s}\hat{\Phi}_{S;-s}\Bigr{)}\stackrel{{\times}}{{\times}}+z.m. \tag{59}\]
with \(z.m.\) denoting terms involving zero mode operators; a complete solution including the zero modes is given in Appendix C.3.
The charge- and spin parts of \(H_{M}\) in (57) each have the same structure as the bosonized Hamiltonian of the so-called _Mattis model_ of spinless fermions studied in [58] (compare Equation (3.3) in [58] with (58) and (59)). As for the Mattis Hamiltonian, the right hand
side of (57) can be diagonalised by a Bogoliubov transformation into a sum of decoupled harmonic oscillators and zero mode terms. To this end, define
\[b_{C;s}(\mathbf{p})=\big{(}b_{s,\uparrow}(\mathbf{p})+b_{s,\downarrow}(\mathbf{p })\big{)}/\sqrt{2},\qquad b_{S;s}(\mathbf{p})=\big{(}b_{s,\uparrow}(\mathbf{p} )-b_{s,\downarrow}(\mathbf{p})\big{)}/\sqrt{2}. \tag{60}\]
The Hamiltonian in (57) can then be diagonalized by a unitary operator \(\mathcal{U}\) as follows (see Theorem C.5 in Appendix C.3)
\[\mathcal{U}^{\dagger}H_{M}\mathcal{U}=\sum_{s=\pm}\sum_{\mathbf{p}\in\hat{ \Lambda}_{s}^{*}}\Big{(}\omega_{C;s}(\mathbf{p})b_{C;s}^{\dagger}(\mathbf{p}) b_{C;s}(\mathbf{p})+\omega_{S;s}(\mathbf{p})b_{S;s}^{\dagger}(\mathbf{p})b_{S;s}( \mathbf{p})\Big{)}+\mathcal{E}^{(0)}+z.m. \tag{61}\]
with
\[\omega_{C;\pm}(\mathbf{p})=\begin{cases}\tilde{v}_{F}^{C}\sqrt{\frac{1}{2} \Big{(}|\mathbf{p}|^{2}\pm\sqrt{|\mathbf{p}|^{4}-A_{C}\big{(}2p_{+}p_{-}\big{)} ^{2}}\,\,\Big{)}}&\text{if}\ \,\,\gamma\chi(\mathbf{p})p_{+}p_{-}\neq 0\\ v_{F}\sqrt{\big{(}1+2\gamma\chi(\mathbf{p})\big{)}^{2}-\big{(}\gamma\chi( \mathbf{p})\big{)}^{2}}|p_{\pm}|&\text{if}\ \,\,\gamma\chi(\mathbf{p})p_{+}p_{-}=0\end{cases} \tag{62}\]
\[A_{C}=1-\big{[}2\gamma/(1+3\gamma)\big{]}^{2},\qquad\tilde{v}_{F}^{C}=v_{F} \sqrt{\big{(}1+2\gamma\big{)}^{2}-\gamma^{2}} \tag{63}\]
and
\[\omega_{S;\pm}(\mathbf{p})=\begin{cases}\tilde{v}_{F}^{S}\sqrt{\frac{1}{2} \Big{(}|\mathbf{p}|^{2}\pm\sqrt{|\mathbf{p}|^{4}-A_{S}\big{(}2p_{+}p_{-}\big{)} ^{2}}\,\,\Big{)}}&\text{if}\ \,\,\gamma\chi(\mathbf{p})p_{+}p_{-}\neq 0\\ v_{F}\sqrt{1-\big{(}\gamma\chi(\mathbf{p})\big{)}^{2}}|p_{\pm}|&\text{if}\ \,\, \gamma\chi(\mathbf{p})p_{+}p_{-}=0\end{cases} \tag{64}\]
\[A_{S}=1-\big{[}2\gamma/(1-\gamma)\big{]}^{2},\qquad\tilde{v}_{F}^{S}=v_{F} \sqrt{1-\gamma^{2}} \tag{65}\]
the boson dispersion relations, and
\[\mathcal{E}^{(0)}=\frac{1}{2}\sum_{s=\pm}\sum_{\mathbf{p}\in\hat{\Lambda}_{s}^ {*}}\big{(}\omega_{C;s}(\mathbf{p})+\omega_{S;s}(\mathbf{p})-2v_{F}|p_{s}| \big{)} \tag{66}\]
the groundstate energy of \(H_{M}\). This is well-defined if (12) is fulfilled. For the special case \(t^{\prime}=0\), \(\kappa=1/2\), and \(Q\to\pi/2\), one obtains the upper bound \(U/t<16\pi/3\).
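The quoted bound, together with the dispersion relations (62)-(65), is easy to evaluate numerically; the following sketch is our own illustration (sample parameters, and \(\chi(\mathbf{p})=1\) is assumed throughout):

```python
import numpy as np

t, tp, kappa, Q, a = 1.0, 0.0, 0.5, np.pi/2, 1.0      # special case quoted above

atil = np.sqrt(2)*a/(1 - kappa)                       # short-distance scale
vF   = 2*np.sqrt(2)*np.sin(Q)*(t + 2*tp*np.cos(Q))*a  # Eq. (20)

gamma = lambda U: a**2*U/(2*np.pi*atil*vF)            # Eq. (52)
print(gamma(16*np.pi/3))                              # -> 1/3, saturating (12)

def omega(p, g, branch=+1, channel="C"):
    """Bogoliubov dispersions (62)-(65) for generic p; chi(p) = 1 assumed."""
    pp, pm = (p[0] + p[1])/np.sqrt(2), (p[0] - p[1])/np.sqrt(2)
    if channel == "C":
        A, vt = 1 - (2*g/(1 + 3*g))**2, vF*np.sqrt((1 + 2*g)**2 - g**2)
    else:
        A, vt = 1 - (2*g/(1 - g))**2, vF*np.sqrt(1 - g**2)
    P2 = pp**2 + pm**2
    return vt*np.sqrt(0.5*(P2 + branch*np.sqrt(P2**2 - A*(2*pp*pm)**2)))

g = gamma(4.0)                                        # sample value U = 4t
print(omega((0.3, 0.1), g), omega((0.3, 0.1), g, -1, "S"))
```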
In principle, one can now obtain the complete solution for the model defined by \(H_{M}\) by stepwise generalizing the results given in [58] to the present case. For example, all correlation functions of nodal fermion operators in the thermal equilibrium state obtained from \(H_{M}\) can be computed exactly by analytical methods. Furthermore, as shown in [58], zero modes do not contribute to correlation functions in the thermodynamic limit \(L\to\infty\) (much like in 1D). The only exception are the Klein factors that need to be handled with some care; see Section 3.3 in [58].
## 5 Integrating out degrees of freedom
Up to now, we have studied the part that involves only nodal fermions in the effective Hamiltonian (7). Below, we will propose different ways of also including the antinodal fermions in the analysis.
### Integrating out nodal fermions
The nodal- and antinodal fermions couple through various types of scattering processes in the Hubbard interaction (18) that we cannot treat in full generality. A simple approximation is to also introduce the constraint in (22) for nodal-antinodal processes. This leads to an effective interaction involving nodal- and antinodal bilinears of the same form as in (31); we refer to Appendix B for details. If we truncate this interaction further by only keeping terms involving the nodal bilinears \(J^{0}_{r,s}\) and \(J^{3}_{r,s}\), it is possible to integrate out the bosonized nodal fermions using a functional integral representation of the partition function. We set (cf. (153); note the abuse of notation for the left hand side)
\[H_{na}=\frac{U}{2}\sum_{r,r^{\prime},s=\pm}\sum_{\mathbf{p}}\Bigl{(}\frac{a}{L }\Bigr{)}^{2}\chi(\mathbf{p})\Bigl{(}\hat{J}^{0}_{r,s}(-\mathbf{p})\hat{J}^{0} _{r^{\prime},0}(\mathbf{p})-\hat{J}^{3}_{r,s}(-\mathbf{p})\hat{J}^{3}_{r^{ \prime},0}(\mathbf{p})\Bigr{)} \tag{67}\]
with the antinodal bilinears (\(\mu=0,1,2,3\))
\[\hat{J}^{\mu}_{r,0}(\mathbf{p})=\sum_{\alpha,\beta=\uparrow,\downarrow}\sum_{ \mathbf{k}_{1},\mathbf{k}_{2}\in\Lambda^{*}_{0}}\hat{c}^{\dagger}_{r,0,\alpha} (\mathbf{k}_{1})\sigma^{\mu}_{\alpha,\beta}\hat{c}_{r,0,\beta}(\mathbf{k}_{2} )\delta_{\mathbf{k}_{1}+\mathbf{p},\mathbf{k}_{2}} \tag{68}\]
and we can write \(\Lambda^{*}_{0}=\Lambda^{*}_{r,0}\) for the antinodal momenta. Using (54),
\[H_{na}=\frac{U}{2}\sqrt{\frac{2}{\pi\tilde{a}}}\sum_{r,s=\pm}\sum_{\mathbf{p}\in\hat{\Lambda}^{*}_{s}}\Bigl{(}\frac{a}{L}\Bigr{)}^{2}2\pi\mathrm{i}p_{s}\chi(\mathbf{p})\Bigl{(}\hat{J}^{0}_{r,0}(-\mathbf{p})\hat{\Phi}_{C;s}(\mathbf{p})-\hat{J}^{3}_{r,0}(-\mathbf{p})\hat{\Phi}_{S;s}(\mathbf{p})\Bigr{)}+z.m. \tag{69}\]
where \(z.m.\) denote terms involving zero mode operators; we will assume throughout this section that their contribution to the functional integral becomes irrelevant in the thermodynamic limit \(L\to\infty\).
The functional integration of the nodal bosons is done exactly as in [56] (see Section 6.3 and Appendix C) with the only difference that we now have two independent boson fields instead of one. Performing the (Gaussian) integrals for the fields \(\hat{\Pi}_{C;s}(\tau,\mathbf{p})\) and \(\hat{\Pi}_{S;s}(\tau,\mathbf{p})\) yields an action that is at most quadratic in \(\hat{\Phi}_{C;s}(\tau,\mathbf{p})\) and \(\hat{\Phi}_{S;s}(\tau,\mathbf{p})\). The interaction between the nodal boson fields and the antinodal fermions, which is linear in the former, can then be removed by completing a square. This leads to the induced action
\[S^{\prime}_{a}=\sum_{n\in\mathbb{Z}}\sum_{r,r^{\prime}=\pm}\sum_{\mathbf{p}} \frac{1}{L^{2}}\left(\hat{v}_{C}(\omega_{n},\mathbf{p})\hat{J}^{0\dagger}_{r,0} \hat{J}^{0}_{r^{\prime},0}+\hat{v}_{S}(\omega_{n},\mathbf{p})\hat{J}^{3\dagger }_{r,0}\hat{J}^{3}_{r^{\prime},0}\right) \tag{70}\]
contributing to the full antinodal action; we write \(\hat{J}^{\mu\dagger}_{r,0}=\hat{J}^{\mu}_{r,0}(-\omega_{n},-\mathbf{p})\) with boson Matsubara frequencies \(\omega_{n}=2\pi n/\beta\). The induced density-density interaction potential is found to be
\[\hat{v}_{C}(\omega_{n},\mathbf{p})=-\frac{a^{4}U^{2}}{8\pi\tilde{a}v_{F}}\sum _{s=\pm}\frac{W_{C;s}(\mathbf{p})}{\omega_{n}^{2}+\omega_{C;s}(\mathbf{p})^{2 }}\chi(\mathbf{p}) \tag{71}\]
with
\[W_{C;\pm}(\mathbf{p})=v_{F}^{2}\left(1+\gamma\right)\left(\left|\mathbf{p} \right|^{2}\pm\frac{\left(p_{+}^{2}-p_{-}^{2}\right)^{2}+\sqrt{1-A_{C}}\left(2 p_{+}p_{-}\right)^{2}}{\sqrt{\left|\mathbf{p}\right|^{4}-A_{C}\left(2p_{+}p_{-} \right)^{2}}}\right) \tag{72}\]
(see also definitions (62)-(63)). Likewise, the induced spin-spin interaction potential is
\[\hat{v}_{S}(\omega_{n},\mathbf{p})=-\frac{a^{4}U^{2}}{8\pi\tilde{a}v_{F}}\sum_{s= \pm}\frac{W_{S;s}(\mathbf{p})}{\omega_{n}^{2}+\omega_{S;s}(\mathbf{p})^{2}} \chi(\mathbf{p}) \tag{73}\]
with (see (64)-(65))
\[W_{S;\pm}(\mathbf{p})=v_{F}^{2}\left(1-\gamma\right)\Biggl{(}\left|\mathbf{p} \right|^{2}\pm\frac{\left(p_{+}^{2}-p_{-}^{2}\right)^{2}-\sqrt{1-A_{S}}\left(2 p_{+}p_{-}\right)^{2}}{\sqrt{\left|\mathbf{p}\right|^{4}-A_{S}\left(2p_{+}p_{-} \right)^{2}}}\Biggr{)}. \tag{74}\]
We note that the functional forms of the induced potentials (71) and (73) are identical to that of the induced potential found for the spinless model; cf. Equation (86) in [56].
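For concreteness, the induced potential is straightforward to evaluate; the sketch below (our own illustration, with arbitrary sample parameters, \(\chi(\mathbf{p})=1\), and generic \(\mathbf{p}\) with \(p_{+}p_{-}\neq 0\)) computes \(\hat{v}_{C}(\omega_{n},\mathbf{p})\) and confirms that it is attractive:

```python
import numpy as np

t, tp, kappa, Q, a, U = 1.0, 0.0, 0.5, np.pi/2 - 0.05, 1.0, 4.0  # sample values
atil = np.sqrt(2)*a/(1 - kappa)
vF   = 2*np.sqrt(2)*np.sin(Q)*(t + 2*tp*np.cos(Q))*a
g    = a**2*U/(2*np.pi*atil*vF)                       # gamma, Eq. (52)

def v_C(wn, p):
    """Induced density-density potential, Eqs. (71)-(72)."""
    pp, pm = (p[0] + p[1])/np.sqrt(2), (p[0] - p[1])/np.sqrt(2)
    A  = 1 - (2*g/(1 + 3*g))**2                       # A_C, Eq. (63)
    vt = vF*np.sqrt((1 + 2*g)**2 - g**2)              # tilde-v_F^C, Eq. (63)
    P2 = pp**2 + pm**2
    X  = np.sqrt(P2**2 - A*(2*pp*pm)**2)
    out = 0.0
    for sgn in (+1, -1):
        W  = vF**2*(1 + g)*(P2 + sgn*((pp**2 - pm**2)**2
                                      + np.sqrt(1 - A)*(2*pp*pm)**2)/X)
        w2 = vt**2*0.5*(P2 + sgn*X)                   # omega_{C;sgn}(p)^2
        out += W/(wn**2 + w2)
    return -a**4*U**2/(8*np.pi*atil*vF)*out

print(v_C(0.0, (0.3, 0.1)))   # negative, i.e. attractive, at zero frequency
```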
Furthermore, in the derivation above, we have been rather nonchalant in treating different momentum domains. In particular, in (70)_ff_ one should in principle be more careful to distinguish between different cases when components of \(\mathbf{p}\) are zero or not. We assume this becomes irrelevant in the thermodynamic limit.
The analysis above leads to an effective antinodal action that breaks spin rotation invariance. It is possible to go beyond this abelian treatment by recalling the commutation relations in (4). Rescaling the nodal operators \(\tilde{J}^{i}_{r,s}(\mathbf{x})\stackrel{{\mathrm{def}}}{{=}} \sqrt{\tilde{a}}J^{i}_{r,s}(\mathbf{x})\), one sees that the first term on the right hand side of the commutator
\[\Bigl{[}\tilde{J}^{i}_{r,s}(\mathbf{x}),\tilde{J}^{j}_{r,s}(\mathbf{y})\Bigr{]} =2\mathrm{i}\sqrt{\tilde{a}}\sum_{k}\epsilon_{ijk}\tilde{J}^{k}_{r,s}( \mathbf{x})\delta_{s}\left(\mathbf{x}-\mathbf{y}\right)+r\frac{1}{\pi\mathrm{ i}}\delta_{i,j}\partial_{s}\delta_{s}\left(\mathbf{x}-\mathbf{y}\right) \tag{75}\]
is of lower order in \(\tilde{a}\) than the second term. This suggests, at least within the functional integral framework, treating the three components approximately as mutually commuting (bosonic) fields, which makes it possible to integrate out the nodal fermions while still preserving spin rotation invariance. Results are given in Appendix D.
### Integrating out antinodal fermions
Another interesting possibility is to integrate out the antinodal fermions and obtain an effective action involving only nodal fermions. To do this in a systematic manner, it is useful to return to the representation (17)-(18) of the Hubbard Hamiltonian. The corresponding action for the pure antinodal part can then be written (\(\mathbf{p}\in(2\pi/L)\mathbb{Z}^{2}\), \(-\pi/a\leq p_{1,2}<\pi/a\))
\[\begin{split} S_{a}&=\sum_{\alpha=\uparrow,\downarrow}\sum_{r=\pm}\sum_{\mathbf{k}\in\Lambda_{0}^{\ast}}\int_{0}^{\beta}\mathrm{d}\tau\,\bar{\psi}_{r,0,\alpha}(\tau,\mathbf{k})\left(\partial_{\tau}+\epsilon(\mathbf{K}_{r,0}+\mathbf{k})-\mu\right)\psi_{r,0,\alpha}(\tau,\mathbf{k})\\ &\quad+\frac{U}{4}\sum_{r_{j}=\pm}\sum_{\mathbf{p}}\Bigl{(}\frac{a}{L}\Bigr{)}^{2}\int_{0}^{\beta}\mathrm{d}\tau\,\bigl{(}\rho_{r_{1}r_{2}}^{a}(\tau,\mathbf{p})\rho_{r_{3}r_{4}}^{a}(\tau,-\mathbf{p})-J_{r_{1}r_{2}}^{3,a}(\tau,\mathbf{p})J_{r_{3}r_{4}}^{3,a}(\tau,-\mathbf{p})\bigr{)}\end{split} \tag{76}\]
with Grassmann fields \(\psi_{r,0,\alpha}(\tau,\mathbf{k})\), Matsubara time \(\tau\in[0,\beta)\) and bilinears
\[\begin{split}\rho_{r_{1}r_{2}}^{a}(\tau,\mathbf{p})&=\sum_{\alpha}\sum_{\mathbf{n}\in\mathbb{Z}^{2}}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}\in\Lambda_{0}^{\ast}}\delta_{\mathbf{K}_{r_{1},0}-\mathbf{K}_{r_{2},0}+\mathbf{k}_{1}-\mathbf{k}_{2},\mathbf{p}+2\pi\mathbf{n}/a}\bar{\psi}_{r_{1},0,\alpha}(\tau,\mathbf{k}_{1})\psi_{r_{2},0,\alpha}(\tau,\mathbf{k}_{2})\\ J_{r_{1}r_{2}}^{3,a}(\tau,\mathbf{p})&=\sum_{\alpha,\alpha^{\prime}}\sum_{\mathbf{n}\in\mathbb{Z}^{2}}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}\in\Lambda_{0}^{\ast}}\delta_{\mathbf{K}_{r_{1},0}-\mathbf{K}_{r_{2},0}+\mathbf{k}_{1}-\mathbf{k}_{2},\mathbf{p}+2\pi\mathbf{n}/a}\bar{\psi}_{r_{1},0,\alpha}(\tau,\mathbf{k}_{1})\sigma_{\alpha\alpha^{\prime}}^{3}\psi_{r_{2},0,\alpha^{\prime}}(\tau,\mathbf{k}_{2})\end{split}. \tag{77}\]
The nodal action \(S_{n}\) has the same form with corresponding bilinears \(\rho^{n}_{r_{1}r_{2}}(\tau,{\bf p})\) and \(J^{3,n}_{r_{1}r_{2}}(\tau,{\bf p})\). The action for the nodal-antinodal interaction in (18) has a more complicated form. A simple approximation is to truncate it by only keeping the terms
\[S_{na}=\frac{U}{2}\sum_{r_{j}=\pm}\sum_{\bf p}\Bigl{(}\frac{a}{L}\Bigr{)}^{2} \int_{0}^{\beta}{\rm d}\tau\,\bigl{(}\rho^{a}_{r_{1}r_{2}}(\tau,{\bf p})\rho^{ n}_{r_{3}r_{4}}(\tau,-{\bf p})-J^{3,a}_{r_{1}r_{2}}(\tau,{\bf p})J^{3,n}_{r_{3}r_{4}}( \tau,-{\bf p})\bigr{)}\,. \tag{78}\]
We define the full action as \(S=S_{n}+S_{a}+S_{na}\). The interaction terms can then be decoupled by introducing two Hubbard-Stratonovich (HS) fields \(\phi_{0}(\tau,{\bf p})\) and \(\phi_{S}(\tau,{\bf p})\) such that
\[\begin{split} S=\sum_{\alpha}\sum_{r=\pm}\sum_{s=0,\pm}\sum_{{ \bf k}\in\Lambda^{*}_{r,s}}\int\limits_{0}^{\beta}{\rm d}\tau\,\bar{\psi}_{r,s,\alpha}\,(\partial_{\tau}+\varepsilon({\bf K}_{r,s}+{\bf k})-\mu)\,\psi_{r,s,\alpha}+\frac{U}{4}\sum_{\bf p}\Bigl{(}\frac{a}{L}\Bigr{)}^{2}\int\limits_{0} ^{\beta}{\rm d}\tau\\ \times\Bigl{(}\hat{\phi}_{0}^{\dagger}\hat{\phi}_{0}+\hat{\phi}_{ S}^{\dagger}\hat{\phi}_{S}-2\sum_{r_{1}r_{2}}\big{(}{\rm i}\hat{\phi}_{0}^{ \dagger}(\rho^{a}_{r_{1}r_{2}}+\rho^{n}_{r_{1}r_{2}})+\hat{\phi}_{S}^{\dagger} (J^{3,a}_{r_{1}r_{2}}+J^{3,n}_{r_{1}r_{2}})\big{)}\Bigr{)}\end{split} \tag{79}\]
with \(\phi_{0}^{\dagger}(\tau,{\bf p})=\phi_{0}(\tau,-{\bf p})\), etc. There are several ways of decoupling the interaction using HS fields; our choice is such that, if the nodal fermions are ignored in (79), a saddle-point analysis reproduces the correct Hartree-Fock equations for the antinodal fermions when spin rotation invariance is broken; see [61] for further discussion of this point. Integrating out the antinodal Grassmann fields in (79) gives a term \(-{\rm Tr}\ln G^{-1}\), where
\[\begin{split} G^{-1}_{\mathbf{k},r,\alpha;\mathbf{k}^{\prime},r^{\prime},\alpha^{\prime}}=&\,(\partial_{\tau}+\epsilon(\mathbf{K}_{r,0}+\mathbf{k})-\mu)\,\delta_{\mathbf{k},\mathbf{k}^{\prime}}\delta_{r,r^{\prime}}\delta_{\alpha,\alpha^{\prime}}-\frac{U}{2}\,\Bigl{(}\frac{a}{L}\Bigr{)}^{2}\\ &\times\Bigl{(}\mathrm{i}\hat{\phi}_{0}(\tau,\mathbf{K}_{r^{\prime},0}-\mathbf{K}_{r,0}+\mathbf{k}^{\prime}-\mathbf{k})+\hat{\phi}_{S}(\tau,\mathbf{K}_{r^{\prime},0}-\mathbf{K}_{r,0}+\mathbf{k}^{\prime}-\mathbf{k})\sigma^{3}_{\alpha\alpha^{\prime}}\Bigr{)}\end{split}. \tag{80}\]
If we expand this term to quadratic order in the HS fields, we can integrate out these fields and obtain an effective action of nodal fermions. This action can then be analyzed using the same partial continuum limit as in Section 3.
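For orientation, the decoupling used in (79) is, channel by channel, the elementary Gaussian identity underlying the HS transformation; schematically, for a single mode and an infinitesimal time slice \(\Delta\tau\),

\[\mathrm{e}^{-\Delta\tau\frac{U}{4}\rho^{2}}\propto\int\mathrm{d}\phi_{0}\,\mathrm{e}^{-\Delta\tau\frac{U}{4}\phi_{0}^{2}-\mathrm{i}\Delta\tau\frac{U}{2}\phi_{0}\rho},\qquad\mathrm{e}^{\,\Delta\tau\frac{U}{4}(J^{3})^{2}}\propto\int\mathrm{d}\phi_{S}\,\mathrm{e}^{-\Delta\tau\frac{U}{4}\phi_{S}^{2}-\Delta\tau\frac{U}{2}\phi_{S}J^{3}},\]

which matches the \(\phi^{\dagger}\phi\) terms and the \(\mathrm{i}\phi_{0}^{\dagger}\rho\) and \(\phi_{S}^{\dagger}J^{3}\) couplings in (79); the imaginary coupling in the density channel reflects the repulsive sign of the \(\rho\rho\) term in the action.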
## 6 Discussion
In this paper, we have related an effective QFT model of interacting electrons to the 2D Hubbard model near half-filling. The model consists of so-called nodal- and antinodal fermions and is obtained by performing a certain partial continuum limit in the lattice model. We have shown that the nodal part can be studied using bosonization methods in the Hamiltonian framework. Important results are a formula expressing the nodal fermion field operator in terms of Klein factors and density operators, and a 2D extension of the Sugawara construction. We identified a QFT model of 2D electrons (defined by the Hamiltonian in (57)) that can be solved exactly by bosonization. We also obtained a 2D analogue of a Wess-Zumino-Witten model, which we believe is simpler to analyse than the corresponding one in 1D due to different scaling behavior.
The antinodal fermions can be studied on different levels of sophistication. As in [57], we can do a local-time approximation in the induced antinodal action obtained by integrating out the bosonized nodal fermions. The antinodal fermions can then be studied using
ordinary mean field theory. Due to the close similarity to the spinless case [57], we are likely to find a mean field phase near half-filling in which the antinodal fermions are gapped and have an antiferromagnetic ordering. In this partially gapped phase, the low-energy physics of the effective QFT model would then be governed by the nodal fermions alone.
If the antinodal fermions are gapless, they will contribute to the low-energy physics. As we have proposed above, a crude way to incorporate their effect is to apply a Hubbard-Stratonovich (HS) transformation and then expand the resulting action in powers of the HS fields. This allows us to derive an effective nodal action that becomes Gaussian after bosonization. The study of this action is left to future work.
## Acknowledgments
We thank Pedram Hekmati, Vieri Mastropietro, Manfred Salmhofer, Chris Varney and Mats Wallin for useful discussions. This work was supported by the Goran Gustafsson Foundation and the Swedish Research Council (VR) under contract no. 621-2010-3708.
## Appendix A Index sets
The following index sets are used throughout the paper and are collected here for easy reference (\(s=\pm\))
\[\Lambda=\{\mathbf{x}\in a\mathbb{Z}^{2}\,:\,-L/2\leq x_{\pm}<L/2\} \tag{81}\] \[\Lambda_{s}=\left\{\mathbf{x}\in\mathbb{R}^{2}\,:\,x_{s}\in \mathbb{R},\;x_{-s}\in\tilde{a}\mathbb{Z},\;-L/2\leq x_{\pm}<L/2\right\}\] (82) \[\Lambda_{\mathrm{1D}}=\{x\in\tilde{a}\mathbb{Z}\;:\;-L/2\leq x<L/2\}\] (83) \[\Lambda^{*}=\{\mathbf{k}\in\mathbb{R}^{2}\,:\,k_{\pm}\in\left(2 \pi/L\right)(\mathbb{Z}+1/2)\}\] (84) \[\Lambda^{*}_{s}=\{\mathbf{k}\in\Lambda^{*}\;:\;-\pi/\tilde{a} \leq k_{-s}<\pi/\tilde{a}\}\] (85) \[\Lambda^{*}_{0}=\left\{\mathbf{k}\in\Lambda^{*}\,:\,|k_{\pm}+\pi /L|<\kappa\pi/(\sqrt{2}a)\right\}\] (86) \[\tilde{\Lambda}^{*}=\{\mathbf{p}\in\mathbb{R}^{2}\,:\,p_{\pm}\in \left(2\pi/L\right)\mathbb{Z}\}\] (87) \[\tilde{\Lambda}^{*}_{s}=\left\{\mathbf{p}\in\tilde{\Lambda}^{*} \,:\;-\pi/\tilde{a}\leq p_{-s}<\pi/\tilde{a}\right\}\] (88) \[\hat{\Lambda}^{*}_{s}=\left\{\mathbf{p}\in\tilde{\Lambda}^{*}_{s }\,:\;p_{s}\neq 0\right\}\] (89) \[\tilde{\Lambda}^{*}_{\mathrm{1D}}=\left\{p\in\left(2\pi/L\right) \mathbb{Z}\,:\,-\pi/\tilde{a}\leq p<\pi/\tilde{a}\right\}\] (90) \[\hat{\Lambda}^{*}_{\mathrm{1D}}=\left\{p\in\tilde{\Lambda}^{*}_{ \mathrm{1D}}\,:\,p\neq 0\right\} \tag{91}\]
## Appendix B Derivation of the effective QFT model
We summarise technical details for the derivation of the nodal-antinodal model from the 2D Hubbard model near half-filling; see also [56, 57] for further explanations.
### The extended Hubbard model
To emphasize the generality of the derivation, we will in this and the following appendices include a nearest-neighbor interaction in the lattice Hamiltonian. We thus consider an extended Hubbard model of itinerant spin-\(1/2\) fermions on a diagonal square lattice \(\Lambda\) with lattice constant \(a\) and \((L/a)^{2}\) lattice sites (see (81)). The model is defined by fermion creation- and annihilation operators \(\psi_{\alpha}^{(\dagger)}({\bf x})\), with \({\bf x}\in\Lambda\) and spin \(\alpha=\pm\), acting on a fermion Fock space with vacuum \(|0\rangle\) and \(\psi_{\alpha}({\bf x})|0\rangle=0\). The fermion operators satisfy antiperiodic boundary conditions and are normalized such that \(\{\psi_{\alpha}({\bf x}),\psi_{\alpha^{\prime}}^{\dagger}({\bf y})\}=\delta_{ \alpha,\alpha^{\prime}}\delta_{{\bf x},{\bf y}}/a^{2}\), etc. The Hamiltonian is
\[H_{\rm Hubb}\stackrel{{\rm def}}{{=}}\sum_{\alpha=\pm}\sum_{{\bf x },{\bf y}\in\Lambda}a^{4}\left(-T({\bf x}-{\bf y})-\mu\delta_{{\bf x},{\bf y}}/ a^{2}\right)\psi_{\alpha}^{\dagger}({\bf x})\psi_{\alpha}({\bf y})+\sum_{{\bf x },{\bf y}\in\Lambda}a^{4}u({\bf x}-{\bf y})\rho({\bf x})\rho({\bf y}) \tag{92}\]
with non-zero hopping matrix elements \(T({\bf x}-{\bf y})\) equal to \(t/a^{2}>0\) for nn sites and \(t^{\prime}/a^{2}\) for nnn sites, and non-zero interaction matrix elements \(u({\bf x}-{\bf y})\) equal to \(U/2\) for on-site and \(V/2\) for nn sites; the (local) density operators are \(\rho({\bf x})\stackrel{{\rm def}}{{=}}\sum_{\alpha}\psi_{\alpha}^ {\dagger}({\bf x})\psi_{\alpha}({\bf x})\). We will assume that the Hubbard parameters satisfy the constraints
\[|t^{\prime}|\leq t/2,\qquad U\geq 4V\geq 0. \tag{93}\]
We define Fourier-transformed fermion creation- and annihilation operators by
\[\hat{\psi}_{\alpha}({\bf k})\stackrel{{\rm def}}{{=}}\frac{1}{2 \pi}\sum_{{\bf x}\in\Lambda}a^{2}\psi_{\alpha}({\bf x})\,{\rm e}^{-{\rm i}{\bf k }\cdot{\bf x}}\qquad({\bf k}\in\Lambda^{*}) \tag{94}\]
such that \(\{\hat{\psi}_{\alpha}({\bf k}),\hat{\psi}_{\alpha^{\prime}}^{\dagger}({\bf k} ^{\prime})\}=[L/(2\pi)]^{2}\delta_{{\bf k},{\bf k}^{\prime}}\delta_{\alpha, \alpha^{\prime}}\), etc. Note that the fermion operators in Section 3 are related to these as \(\hat{c}_{\alpha}({\bf k})=(2\pi/L)\hat{\psi}_{\alpha}({\bf k})\). The Fourier-transformed density operators are
\[\hat{\rho}({\bf p})\stackrel{{\rm def}}{{=}}\sum_{{\bf x}\in \Lambda}a^{2}\rho({\bf x})\,{\rm e}^{-{\rm i}{\bf p}\cdot{\bf x}}=\sum_{\alpha =\uparrow,\downarrow}\sum_{{\bf k}_{1}{\bf k}_{2}\in BZ}\sum_{{\bf n}\in \mathbb{Z}^{2}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\hat{\psi}_{\alpha}^{\dagger} ({\bf k}_{1})\hat{\psi}_{\alpha}({\bf k}_{2})\delta_{{\bf k}_{1}+{\bf p}+2\pi {\bf n}/a,{\bf k}_{2}} \tag{95}\]
where \({\bf p}\in\tilde{\Lambda}^{*}\); the last sum in (95) accounts for umklapp terms and
\[BZ\stackrel{{\rm def}}{{=}}\{{\bf k}\in\Lambda^{*}\,:\,-\pi/a \leq k_{1,2}<\pi/a\} \tag{96}\]
is the Brillouin zone. The Hamiltonian is written in terms of these latter operators as (the second sum is such that \({\bf p}\in(2\pi/L)\mathbb{Z}^{2}\), \(-\pi/a\leq p_{1,2}<\pi/a\))
\[H_{\rm Hubb}=\sum_{\alpha=\pm}\sum_{{\bf k}\in BZ}\Bigl{(}\frac{2\pi}{L} \Bigr{)}^{2}\left(\epsilon({\bf k})-\mu\right)\hat{\psi}_{\alpha}^{\dagger}({ \bf k})\hat{\psi}_{\alpha}({\bf k})+\sum_{{\bf p}}\Bigl{(}\frac{2\pi}{L} \Bigr{)}^{2}\hat{u}({\bf p})\hat{\rho}(-{\bf p})\hat{\rho}({\bf p}) \tag{97}\]
with the tight-binding band relation in (14) and
\[\hat{u}({\bf p})=a^{2}\Bigl{(}U/2+V\bigl{[}\cos\left(ap_{1}\right)+\cos\left(ap _{2}\right)\bigr{]}\Bigr{)}/\left(2\pi\right)^{2} \tag{98}\]
the interaction potential. With the chosen normalization for \(\hat{\psi}_{\alpha}^{(\dagger)}({\bf k})\), the fermion number operator is given by
\[N=\hat{\rho}({\bf 0})=\sum_{\alpha=\pm}\sum_{{\bf k}\in BZ}\left(\frac{2\pi}{L} \right)^{2}\hat{\psi}_{\alpha}^{\dagger}({\bf k})\hat{\psi}_{\alpha}({\bf k}). \tag{99}\]
We recall that, under a particle-hole transformation
\[{\cal W}_{ph}\hat{\psi}_{\alpha}({\bf k}){\cal W}_{ph}^{\dagger}\stackrel{{ \mbox{\tiny def}}}{{=}}\hat{\psi}_{\alpha}^{\dagger}\left(-{\bf k}+(\pi,\pi) \,/a\right), \tag{100}\]
the filling \(\nu\) is mapped to \(2-\nu\), while the Hamiltonian in (97) transforms as
\[\begin{array}{l}{\cal W}_{ph}H_{\rm Hubb}(t,t^{\prime},\mu,U,V){\cal W}_{ph} ^{\dagger}=H_{\rm Hubb}(t,-t^{\prime},2(U+4V)-\mu,U,V)\\ \hskip 113.811024pt+2\left(U+4V-\mu\right)\left(L/a\right)^{2}\end{array} \tag{101}\]
where the notation \(H_{\rm Hubb}(t,t^{\prime},\mu,U,V)\) has been used for the right hand side of (97).
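The identity (101) boils down to a property of the band function alone. As an independent sanity check (ours, not part of the text), the sympy sketch below verifies \(\epsilon_{t,t^{\prime}}\left((\pi,\pi)/a-{\bf k}\right)=-\epsilon_{t,-t^{\prime}}({\bf k})\), assuming that (14) is the standard nearest- and next-nearest-neighbor band \(\epsilon({\bf k})=-2t[\cos(ak_{1})+\cos(ak_{2})]-4t^{\prime}\cos(ak_{1})\cos(ak_{2})\); since (14) is stated in the main text, this form is an assumption here.

```python
import sympy as sp

t, tp, a, k1, k2 = sp.symbols('t tp a k1 k2', real=True)

def eps(q1, q2, tprime):
    # Assumed nn + nnn tight-binding band (our reading of Eq. (14)).
    return -2*t*(sp.cos(a*q1) + sp.cos(a*q2)) - 4*tprime*sp.cos(a*q1)*sp.cos(a*q2)

# Band identity behind (101): eps_{t,t'}((pi,pi)/a - k) = -eps_{t,-t'}(k),
# i.e. the particle-hole map flips the sign of t' only.
lhs = eps(sp.pi/a - k1, sp.pi/a - k2, tp)
rhs = -eps(k1, k2, -tp)
assert sp.simplify(sp.expand_trig(lhs - rhs)) == 0
```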
### Eight-flavor representation of the Hamiltonian
We rewrite the Hubbard Hamiltonian in terms of nodal-, antinodal-, in- and out fermions. To this end, we let \({\cal I}\) be the index set of the eight pairs of flavor indices \((r,s)\), with \(r=\pm\) and \(s=0,\pm,2\). The momentum regions are defined as (\(r=\pm\))
\[\begin{array}{l}\Lambda_{r,0}^{*}\stackrel{{\mbox{\tiny def }}}{{=}}\left\{{\bf k}\in\Lambda^{*}\,:\,|k_{\pm}+\pi/L|<\kappa\pi/(\sqrt{2}a) \right\}\\ \Lambda_{r,2}^{*}\stackrel{{\mbox{\tiny def}}}{{=}}\left\{{\bf k }\in\Lambda^{*}\,:\,|k_{\pm}+\pi/L|<\pi/\tilde{a}\right\}\\ \Lambda_{r,s=\pm}^{*}\stackrel{{\mbox{\tiny def}}}{{=}}\left\{{ \bf k}\in\Lambda^{*}\,:\,\left|k_{s}+\frac{\pi}{L}+r\frac{2Q-\pi}{\sqrt{2}a} \right|<\frac{\kappa\pi}{\sqrt{2}a},\,\left|k_{-s}+\frac{\pi}{L}\right|<\frac {\pi}{\tilde{a}}\right\}\end{array} \tag{102}\]
with the parameters
\[\kappa\in(2\sqrt{2}a/L)(\mathbb{N}+1/2),\qquad Q\in(\sqrt{2}\pi a/L)\mathbb{N},\qquad\tilde{a}\stackrel{{\mbox{\tiny def}}}{{=}}\sqrt{2}a/(1-\kappa) \tag{103}\]
satisfying the geometric constraints
\[Q\neq\pi/2,\qquad\pi(1-\kappa)/2<Q<\pi(1+\kappa)/2,\qquad 0<\kappa<1. \tag{104}\]
The relative numbers of momenta in these regions, defined as \(f_{r,s}\stackrel{{\mbox{\tiny def}}}{{=}}\sum_{{\bf k}\in\Lambda_{r,s}^{*}}(a/L)^{2}\), are
\[f_{r,0}=\kappa^{2}/2,\qquad f_{r,2}=(1-\kappa)^{2}/2,\qquad f_{r,s=\pm}=\kappa (1-\kappa)/2. \tag{105}\]
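As a quick consistency check (ours, not in the text), the fractions in (105) add up to one, i.e. the eight momentum regions tile the Brillouin zone; a minimal sympy verification:

```python
import sympy as sp

kappa = sp.symbols('kappa', positive=True)

# Eq. (105): two antinodal (s=0), two in/out (s=2), and four nodal regions.
total = 2*(kappa**2/2) + 2*((1 - kappa)**2/2) + 4*(kappa*(1 - kappa)/2)

assert sp.simplify(total - 1) == 0  # the regions exhaust the Brillouin zone
```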
We also set
\[\begin{array}{l}{\bf K}_{-,2}\stackrel{{\mbox{\tiny def}}}{{=}} (0,0),\qquad\quad{\bf K}_{+,2}\stackrel{{\mbox{\tiny def}}}{{=}} \left(\pi/a,\pi/a\right),\qquad{\bf K}_{r,s=\pm}\stackrel{{\mbox{ \tiny def}}}{{=}}\left(rQ/a,rsQ/a\right)\\ {\bf K}_{+,0}\stackrel{{\mbox{\tiny def}}}{{=}}\left(\pi/a,0\right), \qquad{\bf K}_{-,0}\stackrel{{\mbox{\tiny def}}}{{=}}\left(0,\pi/a \right),\end{array} \tag{106}\]
and define new fermion operators by
\[\hat{\psi}_{r,s,\alpha}^{(\dagger)}({\bf k})\stackrel{{\mbox{ \tiny def}}}{{=}}\hat{\psi}_{\alpha}^{(\dagger)}({\bf K}_{r,s}+2\pi{\bf n}/a+{ \bf k})\qquad({\bf k}\in\Lambda_{r,s}^{*}) \tag{107}\]
with \({\bf n}\in\mathbb{Z}^{2}\) such that \({\bf K}_{r,s}+2\pi{\bf n}/a+{\bf k}\in BZ\). They satisfy the anticommutation relations
\[\{\hat{\psi}_{r,s,\alpha}({\bf k}),\hat{\psi}^{\dagger}_{r^{\prime},s^{\prime}, \alpha^{\prime}}({\bf k}^{\prime})\}=[L/(2\pi)]^{2}\delta_{r,r^{\prime}}\delta _{s,s^{\prime}}\delta_{\alpha,\alpha^{\prime}}\delta_{{\bf k},{\bf k}^{\prime} },\quad\{\hat{\psi}_{r,s,\alpha}({\bf k}),\hat{\psi}_{r^{\prime},s^{\prime}, \alpha^{\prime}}({\bf k}^{\prime})\}=0. \tag{108}\]
In terms of these operators, the kinetic part of the Hubbard Hamiltonian in (97) can be written
\[H^{(0)}_{\rm Hubb}=\sum_{\alpha=\pm}\sum_{(r,s)\in\mathcal{I}}\sum_{{\bf k} \in\Lambda^{*}_{r,s}}\biggl{(}\frac{2\pi}{L}\biggr{)}^{2}\left[\epsilon({\bf K }_{r,s}+{\bf k})-\mu\right]\hat{\psi}^{\dagger}_{r,s,\alpha}({\bf k})\hat{\psi }_{r,s,\alpha}({\bf k}) \tag{109}\]
and similarly for the interaction part
\[H^{(1)}_{\rm Hubb}=\sum_{\alpha,\alpha^{\prime}=\pm}\sum_{(r_{ j},s_{j})\in\mathcal{I}}\sum_{{\bf k}\in\Lambda^{*}_{r_{j},s_{j}}}\biggl{(} \frac{2\pi}{L}\biggr{)}^{8}\hat{v}(K_{1},K_{2},K_{3},K_{4}) \tag{110}\] \[\times\hat{\psi}^{\dagger}_{r_{1},s_{1},\alpha}({\bf k}_{1})\hat {\psi}_{r_{2},s_{2},\alpha}({\bf k}_{2})\hat{\psi}^{\dagger}_{r_{3},s_{3}, \alpha^{\prime}}({\bf k}_{3})\hat{\psi}_{r_{4},s_{4},\alpha^{\prime}}({\bf k} _{4})\]
with \(K_{j}\) short for \({\bf K}_{r_{j},s_{j}}+{\bf k}_{j}\), and
\[\hat{v}(K_{1},K_{2},K_{3},K_{4})= \hat{u}({\bf K}_{r_{1},s_{1}}-{\bf K}_{r_{2},s_{2}}+{\bf k}_{1}- {\bf k}_{2}) \tag{111}\] \[\times\sum_{{\bf n}\in\mathbb{Z}^{2}}\biggl{(}\frac{L}{2\pi} \biggr{)}^{2}\delta_{{\bf K}_{r_{1},s_{1}}-{\bf K}_{r_{2},s_{2}}+{\bf K}_{r_{3 },s_{3}}-{\bf K}_{r_{4},s_{4}}+{\bf k}_{1}-{\bf k}_{2}+{\bf k}_{3}-{\bf k}_{4},2\pi{\bf n}/a}.\]
Setting \(V=0\) gives (17)-(18). The fermion number operator can be expressed as
\[N=\sum_{(r,s)\in\mathcal{I}}N_{r,s},\qquad N_{r,s}\stackrel{{ \mbox{\tiny def}}}{{=}}\sum_{\alpha=\pm}\sum_{{\bf k}\in\Lambda^{*}_{r,s}} \biggl{(}\frac{2\pi}{L}\biggr{)}^{2}\hat{\psi}^{\dagger}_{r,s,\alpha}({\bf k}) \hat{\psi}_{r,s,\alpha}({\bf k}). \tag{112}\]
Note that the mapping from the representation in (97) to the one in (109)-(111) is exact.
Under the particle-hole transformation defined in (100), the fermion operators in the eight-flavor representation transform as
\[\mathcal{W}_{ph}\hat{\psi}_{r,0,\alpha}({\bf k})\mathcal{W}^{ \dagger}_{ph}=\hat{\psi}^{\dagger}_{-r,0,\alpha}(-{\bf k}),\qquad\mathcal{W}_ {ph}\hat{\psi}_{r,2,\alpha}({\bf k})\mathcal{W}^{\dagger}_{ph}=\hat{\psi}^{ \dagger}_{-r,2,\alpha}(-{\bf k}) \tag{113}\]
and
\[\mathcal{W}_{ph}\hat{\psi}_{r,s,\alpha}({\bf k})\mathcal{W}^{ \dagger}_{ph}=\hat{\psi}^{\dagger}_{r,s,\alpha}(-{\bf k}+\left(\pi,\pi\right)/a- 2{\bf K}_{r,s}+2\pi{\bf n}/a),\qquad(s=\pm) \tag{114}\]
where \({\bf n}\in\mathbb{Z}^{2}\) is such that \(\left(-{\bf k}+\left(\pi,\pi\right)/a-2{\bf K}_{r,s}+2\pi{\bf n}/a\right)\in \Lambda^{*}_{r,s}\).
### Simplified matrix elements
Expanding the tight-binding band relation to lowest non-trivial order around the points \({\bf K}_{r,s}\), \((r,s)\in\mathcal{I}\), leads to
\[\epsilon({\bf K}_{r,s}+{\bf k})=\epsilon({\bf K}_{r,s})+\varepsilon_{r,s}({\bf k })+\ldots\qquad({\bf k}\in\Lambda^{*}_{r,s}) \tag{115}\]
with constants
\[\epsilon({\bf K}_{r,0})=4t^{\prime},\qquad\epsilon({\bf K}_{r,\pm})=-4\cos(Q) \left[t+t^{\prime}\cos(Q)\right],\qquad\epsilon({\bf K}_{r,2})=4(rt-t^{\prime}) \tag{116}\]
and effective band relations
\[\varepsilon_{r,0}({\bf k})=-rc_{F}k_{+}k_{-}-c_{F}^{\prime}|{\bf k}|^{2},\quad \varepsilon_{r,\pm}({\bf k})=rv_{F}k_{\pm},\quad\varepsilon_{r,2}({\bf k})=(- rc_{F}/2+c_{F}^{\prime})\,|{\bf k}|^{2} \tag{117}\]
where
\[v_{F}\stackrel{{\mbox{\tiny def}}}{{=}}2\sqrt{2}\sin(Q)\left[t+2 t^{\prime}\cos(Q)\right]a,\qquad c_{F}\stackrel{{\mbox{\tiny def}}}{{=}}2ta^{2}, \qquad c_{F}^{\prime}\stackrel{{\mbox{\tiny def}}}{{=}}2t^{ \prime}a^{2}. \tag{118}\]
Thus, the nodal fermions have (approximately) a linear-, the antinodal fermions a hyperbolic-, and the in- and out fermions a parabolic band relation.
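The expansions (115)-(118) can be verified symbolically. The sketch below again assumes the nn + nnn form of (14) and takes the rotated components \(k_{\pm}=(k_{1}\pm k_{2})/\sqrt{2}\) (our reading of the diagonal-lattice conventions); both are assumptions, not statements from this appendix.

```python
import sympy as sp

t, tp, a, Q, kp, km, s = sp.symbols('t tp a Q kp km s', real=True)

def eps(q1, q2):
    # Assumed nn + nnn tight-binding band (our reading of Eq. (14)).
    return -2*t*(sp.cos(a*q1) + sp.cos(a*q2)) - 4*tp*sp.cos(a*q1)*sp.cos(a*q2)

# Rotated momentum components, assuming k_(+/-) = (k1 +/- k2)/sqrt(2):
q1 = (kp + km)/sp.sqrt(2)
q2 = (kp - km)/sp.sqrt(2)

# Antinodal point K_{+,0} = (pi/a, 0): Taylor-expand to total order 2 in k.
e_anti = eps(sp.pi/a + s*q1, s*q2).series(s, 0, 3).removeO().subs(s, 1)
cF, cFp = 2*t*a**2, 2*tp*a**2                                  # Eq. (118)
assert sp.simplify(e_anti - (4*tp - cF*kp*km - cFp*(kp**2 + km**2))) == 0

# Nodal point K_{+,+} = (Q/a, Q/a): constant (116) and slope v_F (117)-(118).
e_nod = eps(Q/a + q1, Q/a + q2)
vF = 2*sp.sqrt(2)*sp.sin(Q)*(t + 2*tp*sp.cos(Q))*a
assert sp.simplify(e_nod.subs({kp: 0, km: 0}) + 4*sp.cos(Q)*(t + tp*sp.cos(Q))) == 0
assert sp.simplify(sp.diff(e_nod, kp).subs({kp: 0, km: 0}) - vF) == 0
```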
**Approximation B.1**.: Replace the tight-binding band relations \(\epsilon({\bf K}_{r,s}+{\bf k})\) in (109) by \(\epsilon({\bf K}_{r,s})+\varepsilon_{r,s}({\bf k})\) defined in Equations (116)-(118).
We note that this approximation is only crucial for the nodal fermions and is not done for the other fermions in the main text.
**Approximation B.2**.: Simplify the interaction vertex in (111) by replacing the right-hand side with
\[\hat{u}({\bf K}_{r_{1},s_{1}}-{\bf K}_{r_{2},s_{2}})\Big{(}\frac{L}{2\pi} \Big{)}^{2}\delta_{{\bf k}_{1}-{\bf k}_{2}+{\bf k}_{3}-{\bf k}_{4},{\bf 0}}\, \sum_{{\bf n}\in\mathbb{Z}^{2}}\delta_{{\bf K}_{r_{1},s_{1}}-{\bf K}_{r_{2},s_{ 2}}+{\bf K}_{r_{3},s_{3}}-{\bf K}_{r_{4},s_{4}},2\pi{\bf n}/a}. \tag{119}\]
In addition to the added constraint (22), this involves expanding the interaction vertex in (111) as
\[\hat{u}({\bf K}_{r,s}-{\bf K}_{r^{\prime},s^{\prime}}+{\bf k}-{\bf k}^{\prime} )=\hat{u}({\bf K}_{r,s}-{\bf K}_{r^{\prime},s^{\prime}})+O\left(|a({\bf k}-{ \bf k}^{\prime})|\right) \tag{120}\]
and then only keeping the lowest-order term; this approximation is not needed if \(V=0\). Again, Approximation B.2 is only crucial for scattering processes involving nodal fermions and will not be done for processes involving only antinodal fermions in the main text.
The constraint imposed by the second Kronecker delta in (119) reduces the number of terms in the original Hubbard interaction: of the 4096 possible combinations of pairs \((r_{j},s_{j})\), 512 yield a non-zero interaction vertex if \(Q=\pi/2\), and 196 if \(Q\neq\pi/2\). The combinations of \((r_{j},s_{j})\in{\cal I}\) for which (119) is non-zero when \(Q\neq\pi/2\) are collected in Table 1; a brute-force check of these counts is sketched below.
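These counts are easy to reproduce numerically. The following sketch (ours) enumerates all \(8^{4}=4096\) flavor combinations, tests the second Kronecker delta in (119) using the points (106) with \(a=1\), and should recover 196 for generic \(Q\) and 512 for \(Q=\pi/2\):

```python
import itertools, math

def K(r, s, Q, a=1.0):
    # Fermi-surface points from Eq. (106); s in {0, '+', '-', 2}, r = +1/-1.
    if s == 0:
        return (math.pi/a, 0.0) if r > 0 else (0.0, math.pi/a)
    if s == 2:
        return (math.pi/a, math.pi/a) if r > 0 else (0.0, 0.0)
    sgn = 1 if s == '+' else -1
    return (r*Q/a, r*sgn*Q/a)

def count(Q):
    flavors = [(r, s) for r in (+1, -1) for s in (0, '+', '-', 2)]
    n = 0
    for f1, f2, f3, f4 in itertools.product(flavors, repeat=4):
        ok = True
        for i in range(2):  # both momentum components, modulo 2*pi/a
            d = K(*f1, Q)[i] - K(*f2, Q)[i] + K(*f3, Q)[i] - K(*f4, Q)[i]
            if min(d % (2*math.pi), -d % (2*math.pi)) > 1e-9:
                ok = False
                break
        n += ok
    return n

print(count(1.1))        # generic Q != pi/2: expected 196
print(count(math.pi/2))  # Q = pi/2:          expected 512
```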
Define interaction coefficients \(v_{r,s,r^{\prime},s^{\prime}}\stackrel{{\mbox{\tiny def}}}{{=}} a^{2}\hat{u}({\bf K}_{r,s}-{\bf K}_{r^{\prime},s^{\prime}})/(2\pi)^{2}\), with
\[\begin{array}{ll}v_{+,0,-,0}=v_{+,2,-,2}=U/2-2V,&\quad v_{+,\pm,-,\pm}=U/2+V \cos{(2Q)}\\ v_{r,0,r^{\prime},\pm}=v_{r,0,r^{\prime},2}=U/2,&\quad v_{r,\pm,r^{\prime}, \mp}=U/2+V\left(1+\cos{(2Q)}\right)\\ v_{r,\pm,r^{\prime},2}=U/2-r^{\prime}2V\cos{(Q)}\,,&\quad v_{r,s,r,s}=U/2+2V \end{array} \tag{121}\]
for \(r,r^{\prime}=\pm\). Introducing Approximations B.1 and B.2 into Equations (109)-(111) leads to the truncated Hamiltonian
\[\tilde{H}_{\rm Hubb}=\tilde{H}_{\rm Hubb}^{(0)}+\tilde{H}_{\rm Hubb}^{(1)} \tag{122}\]
with
\[\tilde{H}_{\rm Hubb}^{(0)}=\sum_{\alpha=\pm}\sum_{(r,s)\in{\cal I}}\sum_{{\bf k }\in\Lambda_{r,s}^{*}}\Big{(}\frac{2\pi}{L}\Big{)}^{2}\left(\varepsilon_{r,s}( {\bf k})-\left[\mu-\epsilon({\bf K}_{r,s})\right]\right)\hat{\psi}_{r,s,\alpha }^{\dagger}({\bf k})\hat{\psi}_{r,s,\alpha}({\bf k}) \tag{123}\]
and
\[\begin{split}\tilde{H}_{\text{Hubb}}^{(1)}=\sum_{\begin{subarray}{c}(r,s),(r^{\prime},s^{\prime})\in\mathcal{I}\\ (r,s)\neq(r^{\prime},s^{\prime})\end{subarray}}f_{r,s}& v_{r,s,r^{\prime},s^{\prime}}N_{r^{\prime},s^{\prime}}+\sum_{(r,s),(r^{\prime},s^{\prime})\in\mathcal{I}}\sum_{\mathbf{p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\Big{(}g_{r,s,r^{\prime},s^{\prime}}^{C}\hat{\rho}_{r,s}^{\dagger}\hat{\rho}_{r^{\prime},s^{\prime}}\\ &+g_{r,s,r^{\prime},s^{\prime}}^{S}\hat{\mathbf{S}}_{r,s}^{\dagger}\cdot\hat{\mathbf{S}}_{r^{\prime},s^{\prime}}+g_{r,s,r^{\prime},s^{\prime}}^{P}\hat{P}_{r,s}^{\dagger}\cdot\hat{P}_{r^{\prime},s^{\prime}}\Big{)}+\tilde{H}_{rem}\end{split} \tag{124}\]
where \(\tilde{H}_{rem}\) contains interaction terms between in- or out fermions and nodal- or antinodal fermions (including e.g. the last three lines in Table 1),
\[\hat{\rho}_{r,s}(\mathbf{p})\stackrel{{\text{\tiny def }}}{{=}}\sum_{\alpha=\pm}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}\in\Lambda^{*}_{r, s}}\Big{(}\frac{2\pi}{L}\Big{)}^{2}\hat{\psi}_{r,s,\alpha}^{\dagger}(\mathbf{k}_{1}) \hat{\psi}_{r,s,\alpha}(\mathbf{k}_{2})\delta_{\mathbf{k}_{1}+\mathbf{p}, \mathbf{k}_{2}} \tag{125}\] \[\hat{S}_{r,s}^{i}(\mathbf{p})\stackrel{{\text{\tiny def }}}{{=}}\frac{1}{2}\sum_{\alpha,\beta=\pm}\sum_{\mathbf{k}_{1},\mathbf{k}_{2} \in\Lambda^{*}_{r,s}}\Big{(}\frac{2\pi}{L}\Big{)}^{2}\hat{\psi}_{r,s,\alpha}^{ \dagger}(\mathbf{k}_{1})\sigma_{\alpha,\beta}^{i}\hat{\psi}_{r,s,\beta}( \mathbf{k}_{2})\delta_{\mathbf{k}_{1}+\mathbf{p},\mathbf{k}_{2}} \tag{126}\]
such that \(\hat{\mathbf{S}}_{r,s}^{\dagger}\cdot\hat{\mathbf{S}}_{r^{\prime},s^{\prime} }=\sum_{i=1}^{3}\hat{S}_{r,s}^{i}(-\mathbf{p})\hat{S}_{r^{\prime},s^{\prime}} ^{i}(\mathbf{p})\), and
\[\hat{P}_{r,s}^{\mu}(\mathbf{p})\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{2}\sum_{\alpha,\beta=\pm}\sum_{\mathbf{k}_{1}\in\Lambda^{*}_{r,s}}\sum_{\mathbf{k}_{2}\in\Lambda^{*}_{r_{s},s}}\Big{(}\frac{2\pi}{L}\Big{)}^{2}\hat{\psi}_{r,s,\alpha}(\mathbf{k}_{1})\sigma_{\alpha,\beta}^{\mu}\hat{\psi}_{r_{s},s,\beta}(\mathbf{k}_{2})\delta_{\mathbf{k}_{1}+\mathbf{k}_{2},\mathbf{p}} \tag{127}\]
where \(r_{s}\equiv r\) for \(s=0,2\) (antinodal-, in-, and out fermions), \(r_{s}\equiv-r\) for \(s=\pm\) (nodal fermions), and \(\hat{P}_{r,s}^{\dagger}\cdot\hat{P}_{r^{\prime},s^{\prime}}=\sum_{\mu=0}^{3}[P _{r,s}^{\mu}(\mathbf{p})]^{\dagger}P_{r^{\prime},s^{\prime}}^{\mu}(\mathbf{p})\). The coupling constants are
\[\begin{split} g_{r,s,r^{\prime},s^{\prime}}^{C}\stackrel{{ \text{\tiny def}}}{{=}}& a^{2}\left[\delta_{r,r^{\prime}} \delta_{s,s^{\prime}}v_{r,s,r^{\prime},s^{\prime}}+(1-\delta_{r,r^{\prime}} \delta_{s,s^{\prime}})\left(v_{r,s,r,s}-v_{r,s,r^{\prime},s^{\prime}}/2\right) \right]\\ g_{r,s,r^{\prime},s^{\prime}}^{S}\stackrel{{\text{ \tiny def}}}{{=}}& 2a^{2}\left(\delta_{r,r^{\prime}}\delta_{s,s^{\prime}}-1\right)v_{r,s,r^{ \prime},s^{\prime}}\\ g_{r,s,r^{\prime},s^{\prime}}^{P}\stackrel{{\text{ \tiny def}}}{{=}}& 2a^{2}\Big{[}\delta_{s,-s^{\prime}}\left(\delta_{s,+}+\delta_{s,-} \right)+\delta_{r,-r^{\prime}}\delta_{s,s^{\prime}}\left(\delta_{s,0}+\delta_ {s,2}\right)\\ &+\left(\delta_{s,+}+\delta_{s,-}\right)\delta_{s^{\prime},0}+ \delta_{s,0}\left(\delta_{s^{\prime},+}+\delta_{s^{\prime},-}\right)\Big{]}v _{r,s,r^{\prime},s^{\prime}}\end{split}. \tag{128}\]
\begin{table}
\begin{tabular}{|c|c|c|c||c||c||} \hline \(r_{1},s_{1}\) & \(r_{2},s_{2}\) & \(r_{3},s_{3}\) & \(r_{4},s_{4}\) & Restrictions & \# \\ \hline \(r,s\) & \(r,s\) & \(r,s\) & \(r,s\) & \(s=0,\pm,2\) & 8 \\ \(r,s\) & \(r,s\) & \(r^{\prime},s^{\prime}\) & \(r^{\prime},s^{\prime}\) & \((r,s)\neq(r^{\prime},s^{\prime})\), \(s,s^{\prime}=0,\pm,2\) & 56 \\ \(r,s\) & \(r^{\prime},s^{\prime}\) & \(r^{\prime},s^{\prime}\) & \(r,s\) & \((r,s)\neq(r^{\prime},s^{\prime})\), \(s,s^{\prime}=0,\pm,2\) & 56 \\ \(r,s\) & \(r^{\prime},s^{\prime}\) & \(-r,s\) & \(-r^{\prime},s^{\prime}\) & \((s,s^{\prime})=(\pm,\mp),(0,2),(2,0)\) & 16 \\ \(r,s\) & \(-r,s\) & \(r,s\) & \(-r,s\) & \(s=0,2\) & 4 \\ \(r,s\) & \(r^{\prime},s^{\prime}\) & \(r,s\) & \(-r^{\prime},s^{\prime}\) & \(s=0,2\), \(s^{\prime}=\pm\) & 16 \\ \(r,s\) & \(r^{\prime},s^{\prime}\) & \(-r,s\) & \(r^{\prime},s^{\prime}\) & \(s=\pm\), \(s^{\prime}=0,2\) & 16 \\ \(r,s\) & \(r^{\prime},s^{\prime}\) & \(r,s\) & \(r^{\prime},s^{\prime}\) & \((s,s^{\prime})=(0,2),(2,0)\) & 8 \\ \(r,s\) & \(r^{\prime},s^{\prime}\) & \(-r^{\prime},s^{\prime}\) & \(-r,s\) & \((s,s^{\prime})=(0,2),(2,0)\) & 8 \\ \(r,s\) & \(-r,s\) & \(r^{\prime},s^{\prime}\) & \(-r^{\prime},s^{\prime}\) & \((s,s^{\prime})=(0,2),(2,0)\) & 8 \\ \hline \end{tabular}
\end{table}
Table 1: List of all combinations for \((r_{j},s_{j})\in\mathcal{I}\) that satisfy the constraint in (22) when \(Q\neq\pi/2\); \(r,r^{\prime}=\pm\). The rightmost column indicates the total number of terms corresponding to each line; they sum up to 196.
### Normal-ordering
Depending on the filling of the system, different degrees of freedom will be important for the low-energy physics. To distinguish these, we define a reference state (Fermi sea)
\[\Omega\stackrel{{\mbox{\tiny def}}}{{=}}\prod_{\alpha=\pm}\prod_{{ \bf k}\in{\cal S}}\psi_{\alpha}^{\dagger}({\bf k})|0\rangle \tag{129}\]
with the set \({\cal S}\subset BZ\) chosen such that one of the following three cases hold: (I) all nodal-, antinodal- and out fermion states are unoccupied with the filling \(\nu\ll 1\), (II) all in-, nodal- and antinodal fermion states are occupied with \(\nu\gg 1\), or (III) all in fermion states are occupied and all out fermion states are unoccupied with \(\nu\approx 1\).
The filling factors of the state (129) for different fermion flavors are defined as \(\nu_{r,s}\stackrel{{\mbox{\tiny def}}}{{=}}(a/L)^{2}\langle \Omega,N_{r,s}\Omega\rangle\) with the total filling \(\nu=\sum_{(r,s)\in{\cal I}}\nu_{r,s}\). By fermion normal-ordering the bilinears in (125)-(127) with respect to (129), one finds (\(i=1,2,3\) and \(\mu=0,1,2,3\))
\[\hat{J}_{r,s}^{0}\stackrel{{\mbox{\tiny def}}}{{=}}\,:\!\hat{\rho}_{r,s}\!:\,=\hat{\rho}_{r,s}-(L/a)^{2}\nu_{r,s}\delta_{{\bf p},{\bf 0}},\qquad:\!\hat{S}_{r,s}^{i}\!:\,=\hat{S}_{r,s}^{i},\qquad:\!\hat{P}_{r,s}^{\mu}\!:\,=\hat{P}_{r,s}^{\mu} \tag{130}\]
where \(\hat{J}_{r,s}^{0}=\hat{J}_{r,s}^{0}({\bf p})\), etc. We note that the terms in \(\tilde{H}_{rem}\) are automatically normal-ordered.
**Approximation B.3**.: Drop all normal-ordered interaction terms between in- and out fermions, and between in- or out fermions and nodal- or antinodal fermions.
This approximation leads to the following (eight-flavor) Hamiltonian consisting of decoupled in-, out-, and nodal-antinodal fermions
\[\begin{split}& H_{8\mbox{-}f}\stackrel{{\mbox{\tiny def}}}{{=}}{\cal E}+\sum_{\alpha=\pm}\sum_{(r,s)\in{\cal I}}\sum_{{\bf k}\in\Lambda_{r,s}^{*}}\left(\frac{2\pi}{L}\right)^{2}[\varepsilon_{r,s}({\bf k})-\mu_{r,s}]\,:\!\hat{\psi}_{r,s,\alpha}^{\dagger}({\bf k})\hat{\psi}_{r,s,\alpha}({\bf k})\!:\\ &+\sum_{r,r^{\prime},s,s^{\prime}}\sum_{{\bf p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\Big{(}g_{r,s,r^{\prime},s^{\prime}}^{C}\hat{J}_{r,s}^{0\dagger}\hat{J}_{r^{\prime},s^{\prime}}^{0}-g_{r,s,r^{\prime},s^{\prime}}^{S}\hat{\bf S}_{r,s}^{\dagger}\cdot\hat{\bf S}_{r^{\prime},s^{\prime}}+g_{r,s,r^{\prime},s^{\prime}}^{P}\hat{P}_{r,s}^{\dagger}\cdot\hat{P}_{r^{\prime},s^{\prime}}\Big{)}\\ &+\sum_{r}\sum_{{\bf p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\Big{(}g_{r,2,r,2}^{C}\hat{J}_{r,2}^{0\dagger}\hat{J}_{r,2}^{0}-g_{r,2,r,2}^{S}\hat{\bf S}_{r,2}^{\dagger}\cdot\hat{\bf S}_{r,2}+g_{r,2,r,2}^{P}\hat{P}_{r,2}^{\dagger}\cdot\hat{P}_{r,2}\Big{)}\end{split} \tag{131}\]
with the sums in the second line such that \(s,s^{\prime}=0,\pm\) (the nodal- and antinodal interaction terms),
\[\mu_{r,s}=\mu-\epsilon({\bf K}_{r,s})-\sum_{\begin{subarray}{c}(r^{\prime},s^{\prime})\in{\cal I}\\ (r^{\prime},s^{\prime})\neq(r,s)\end{subarray}}\left(f_{r^{\prime},s^{\prime}}-\nu_{r^{\prime},s^{\prime}}\right)v_{r,s,r^{\prime},s^{\prime}}-2\nu v_{r,s,r,s} \tag{132}\]
the effective chemical potentials, and
\[\begin{split}&{\cal E}=\sum_{\alpha=\pm}\sum_{(r,s)\in{\cal I}} \sum_{{\bf k}\in\Lambda_{r,s}^{\star}}\left(\frac{2\pi}{L}\right)^{2}[ \epsilon({\bf K}_{r,s})+\varepsilon_{r,s}({\bf k})]\,\langle\Omega,\hat{\psi}_ {r,s,\alpha}^{\dagger}({\bf k})\hat{\psi}_{r,s,\alpha}({\bf k})\Omega\rangle+{ \cal E}_{1}\\ &\Big{(}\frac{a}{L}\Big{)}^{2}{\cal E}_{1}\stackrel{{ \mbox{\tiny def}}}{{=}}-\mu\nu+(U/2+2V)\nu^{2}+\sum_{ \begin{subarray}{c}(r,s),(r^{\prime},s^{\prime})\in{\cal I}\\ (r,s)\neq(r^{\prime},s^{\prime})\end{subarray}}\nu_{r,s}\left(f_{r^{\prime},s ^{\prime}}-\nu_{r^{\prime},s^{\prime}}/2\right)v_{r,s,r^{\prime},s^{\prime}} \end{split} \tag{133}\]
an additive energy constant.
### Partial continuum limit near half-filling
In this paper, we will concentrate on the nearly half-filled regime for which the in- and out fermions can be ignored in (131) (corresponding to case (III) above). To this end, we choose the momentum set \(\mathcal{S}\) in (129) such that
\[\hat{\psi}_{-,2,\alpha}^{\dagger}(\mathbf{k})\Omega=0\quad\text{for all } \mathbf{k}\in\Lambda_{-,2}^{*},\qquad\hat{\psi}_{+,2,\alpha}(\mathbf{k})\Omega=0 \quad\text{for all }\mathbf{k}\in\Lambda_{+,2}^{*} \tag{134}\]
for the in- and out fermions,
\[\hat{\psi}_{r,0,\alpha}(\mathbf{k})\Omega=0\quad\text{for all }\mathbf{k}\in\Lambda_{r,0}^{*}. \tag{135}\]
for the antinodal fermions, and
\[\hat{\psi}_{r,s,\alpha}^{\dagger}(\mathbf{k})\Omega=\hat{\psi}_{r,s,\alpha}(- \mathbf{k})\Omega=0\qquad\text{for all }\mathbf{k}\in\Lambda_{r,s}^{*}\,:\,rk_{s}\leq\sqrt{2}\left(Q_{0}-Q \right)/a \tag{136}\]
for the nodal fermions (\(s=\pm\)); the parameter \(Q_{0}\) satisfies the same requirements as \(Q\) in (103)-(104). With this, the filling factors of (129) become
\[\nu_{r,0}=0,\quad\nu_{r,\pm}=\left(1-\kappa\right)\left(2Q_{0}/\pi-1+\kappa \right)/2,\quad\nu_{-,2}=\left(1-\kappa\right)^{2},\quad\nu_{+,2}=0 \tag{137}\]
such that the total filling is \(\nu=1-\kappa^{2}+2\left(1-\kappa\right)\left(2Q_{0}/\pi-1\right)\).
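A short symbolic check (ours) that the flavor fillings (137) indeed add up to the stated total:

```python
import sympy as sp

kappa, Q0 = sp.symbols('kappa Q0', positive=True)

# Eq. (137): four nodal flavors, one filled in-fermion band, the rest empty.
nu_nodal = (1 - kappa)*(2*Q0/sp.pi - 1 + kappa)/2
nu = 4*nu_nodal + (1 - kappa)**2

target = 1 - kappa**2 + 2*(1 - kappa)*(2*Q0/sp.pi - 1)
assert sp.simplify(nu - target) == 0
```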
The chemical potential \(\mu\) is fixed such that \(\varepsilon_{r,s}(\mathbf{k})-\mu_{r,s}=0\) for \(s=\pm\) and momenta \(\mathbf{k}\) satisfying \(\mathbf{k}+\mathbf{K}_{r,s}=\left(rQ_{0}/a,rsQ_{0}/a\right)\), i.e.
\[v_{F}\sqrt{2}\left(Q_{0}-Q\right)/a-\mu_{r,s}=0. \tag{138}\]
This is equivalent to requiring that the underlying Fermi surface corresponding to (129) crosses the points \(\left(rQ_{0}/a,rsQ_{0}/a\right)\). One finds
\[\begin{split}\mu=v_{F}\sqrt{2}\left(Q_{0}-Q\right)/a-4t\cos \left(Q\right)-4t^{\prime}\!\cos^{2}\left(Q\right)+U/2-4VC\cos\left(Q\right)\\ +\left(1-\kappa\right)\left(2Q_{0}/\pi-1\right)\left(U/4+V\right) +\left(U/2+4V\right)\nu\end{split} \tag{139}\]
with
\[C\overset{\text{\tiny def}}{=}\left(1-\kappa\right)\cos\left(Q\right)\left( 2Q_{0}/\pi-1\right)+\left(1-\kappa\right)^{2}/2. \tag{140}\]
Likewise, the energy constant \(\mathcal{E}_{1}\) in (133) becomes
\[\begin{split}\left(\frac{a}{L}\right)^{2}\!\mathcal{E}_{1}=& -\mu\nu+\left(U+4V\right)\nu^{2}/2-4VC^{2}+U\kappa^{2}\left(1- \kappa\right)\left(2Q_{0}/\pi-1\right)\\ &+U\left(1-\kappa\right)\left(2\kappa^{3}+\kappa+1\right)/4+V \kappa^{2}(1-\kappa)^{2}\left(4\!\cos^{2}\left(Q\right)-1\right)\\ &+\left(V-3U/4\right)\left(1-\kappa\right)^{2}\!\left(2Q_{0}/\pi -1\right)^{2}\end{split} \tag{141}\]
Within a mean field approximation, one can fix the parameter \(Q\) using (138) and imposing the self-consistency condition \(Q=Q_{0}\); the reader is referred to [57] for details. In the following, we simplify the presentation by taking \(Q=Q_{0}\) at the outset (thus setting \(\mu_{r,s=\pm}=0\)) at the cost of keeping \(Q\) as a free parameter.
**Approximation B.4**.: Drop all terms in (131) involving in- and out fermions.
We regularize the interaction in (131) using the cutoff functions (see Section 5.4 in [56] for further discussion of these functions)
\[\chi_{s}(\mathbf{p})\stackrel{{\mbox{\tiny{\rm def}}}}{{=}}\begin{cases} 1&\text{if }\left|p_{s}\right|<\kappa\pi/(\sqrt{2}a)\text{ and }\left|p_{-s}\right|<\pi/\tilde{a}\\ 0&\text{otherwise}\end{cases} \tag{142}\]
for \(s=\pm\) and \(\mathbf{p}\in\tilde{\Lambda}^{*}\); we use a somewhat simplified cutoff function in the main text.
**Approximation B.5**.: Replace all nodal operators \(\hat{J}^{0}_{r,s}(\mathbf{p})\), \(\hat{S}^{i}_{r,s}(\mathbf{p})\), and \(\hat{P}^{\mu}_{r,s}(\mathbf{p})\) (\(s=\pm\)) in (131) by the operators \(\chi_{s}(\mathbf{p})\hat{J}^{0}_{r,s}(\mathbf{p})\), \(\chi_{s}(\mathbf{p})\hat{S}^{i}_{r,s}(\mathbf{p})\), and \(\chi_{s}(\mathbf{p})\hat{P}^{\mu}_{r,s}(\mathbf{p})\).
With this, the UV cutoff can be partly removed for the nodal fermions:
**Approximation B.6**.: Replace the nodal momentum sets \(\Lambda^{*}_{r,s=\pm}\) in (102) by \(\Lambda^{*}_{s=\pm}\) in (85).
We will use the same notation for the reference state in (129) defined before taking the partial continuum limit, and the Dirac vacuum obtained after the limit.
In order to facilitate the bosonization of the nodal Hamiltonian (see the discussion in Section 3), we need to add certain umklapp terms to the nodal density- and spin bilinears.
**Approximation B.7**.: Replace the fermion normal-ordered nodal density- and spin operators in (125)-(126) (using (130)) by
\[\hat{J}^{0}_{r,s}(\mathbf{p}) \stackrel{{\mbox{\tiny{\rm def}}}}{{=}}\sum_{\alpha }\sum_{\mathbf{k}_{1},\mathbf{k}_{2}\in\Lambda^{*}_{s}}\Bigl{(}\frac{2\pi}{L} \Bigr{)}^{2}:\!\hat{\psi}^{\dagger}_{r,s,\alpha}(\mathbf{k}_{1})\hat{\psi}_{r, s,\alpha}(\mathbf{k}_{2})\!:\sum_{n\in\mathbb{Z}}\delta_{\mathbf{k}_{1}+ \mathbf{p},\mathbf{k}_{2}+2\pi n\mathbf{e}_{-s}/\tilde{a}} \tag{143}\] \[\hat{S}^{i}_{r,s}(\mathbf{p}) \stackrel{{\mbox{\tiny{\rm def}}}}{{=}}\frac{1}{2} \sum_{\alpha,\beta}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}\in\Lambda^{*}_{s}} \Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}:\!\hat{\psi}^{\dagger}_{r,s,\alpha}(\mathbf{ k}_{1})\sigma^{i}_{\alpha,\beta}\hat{\psi}_{r,s,\beta}(\mathbf{k}_{2})\!:\sum_{n\in \mathbb{Z}}\delta_{\mathbf{k}_{1}+\mathbf{p},\mathbf{k}_{2}+2\pi n\mathbf{e}_ {-s}/\tilde{a}}. \tag{144}\]
After applying all this to (131), the effective Hamiltonian of the coupled system of nodal- and antinodal fermions becomes
\[H_{eff}\stackrel{{\mbox{\tiny{\rm def}}}}{{=}}H_{n}+H_{a}+H_{na}+ \mathcal{E} \tag{145}\]
with the nodal part of (145) given by
\[H_{n}=H+g^{P}_{n}\sum_{r,r^{\prime},s=\pm}\sum_{\mathbf{p}\in \tilde{\Lambda}^{*}}\frac{1}{L^{2}}\chi_{+}(\mathbf{p})\chi_{-}(\mathbf{p}) \hat{P}^{\dagger}_{r,s}(\mathbf{p})\cdot\hat{P}_{r^{\prime},-s}(\mathbf{p}) \tag{146}\] \[H=H_{0}+H_{1}\]
with
\[H_{0}=v_{F}\sum_{\alpha=\pm}\sum_{r,s=\pm}\sum_{\mathbf{k}\in\Lambda^{*}_{s}} \Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}rk_{s}:\hat{\psi}^{\dagger}_{r,s,\alpha}( \mathbf{k})\hat{\psi}_{r,s,\alpha}(\mathbf{k}): \tag{147}\]
the free part, and
\[\begin{split} H_{1}=\sum_{\mathbf{p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\Big{(}\sum_{s=\pm}\chi_{s}(\mathbf{p})\big{(}\sum_{r=\pm}g^{C}_{0}\hat{J}^{0\dagger}_{r,s}\hat{J}^{0}_{r,s}+g^{C}_{1}\hat{J}^{0\dagger}_{+,s}\hat{J}^{0}_{-,s}+g^{S}_{1}\hat{\mathbf{S}}^{\dagger}_{+,s}\cdot\hat{\mathbf{S}}_{-,s}\big{)}\\ +\chi_{+}(\mathbf{p})\chi_{-}(\mathbf{p})\sum_{r,r^{\prime}=\pm}\big{(}g^{C}_{2}\hat{J}^{0\dagger}_{r,+}\hat{J}^{0}_{r^{\prime},-}+g^{S}_{2}\hat{\mathbf{S}}^{\dagger}_{r,+}\cdot\hat{\mathbf{S}}_{r^{\prime},-}\big{)}\Big{)}\end{split} \tag{148}\]
the density- and spin interaction part. The coupling constants are
\[\begin{split} g_{0}^{C}&=a^{2}\left(U/2+2V\right),& g_{n}^{P}=a^{2}\left(U+2V\left(1+\cos\left(2Q\right)\right)\right)\\ g_{1}^{C}&=a^{2}\left(U/2+2V\left(2-\cos\left(2Q \right)\right)\right),& g_{2}^{C}=a^{2}\left(U/2+V\left(3-\cos \left(2Q\right)\right)\right)\\ g_{1}^{S}&=-a^{2}\left(2U+8V\cos\left(2Q\right) \right),& g_{2}^{S}=-a^{2}\left(2U+4V\left(1+\cos\left(2Q\right) \right)\right)\end{split} \tag{149}\]
The antinodal part of (145) is given by
\[\begin{split} H_{a}=&\sum_{\alpha=\pm}\sum_{r=\pm} \sum_{\mathbf{k}\in\Lambda_{0}^{*}}\left(\frac{2\pi}{L}\right)^{2}\left( \varepsilon_{r,0}(\mathbf{k})-\mu_{0}\right):\!\hat{\psi}_{r,0,\alpha}^{ \dagger}(\mathbf{k})\hat{\psi}_{r,0,\alpha}(\mathbf{k})\!:\\ &+\sum_{r=\pm}\sum_{\mathbf{p}\in\bar{\Lambda}^{*}}\frac{1}{L^{2 }}\Big{(}g_{a}^{C}\hat{J}_{r,0}^{0\dagger}\hat{J}_{r,0}^{0}+\tilde{g}_{a}^{C} \hat{J}_{r,0}^{0\dagger}\hat{J}_{-r,0}^{0}+g_{a}^{S}\hat{\mathbf{S}}_{r,0}^{ \dagger}\cdot\hat{\mathbf{S}}_{-r,0}+g_{a}^{P}\hat{P}_{r,0}^{\dagger}\cdot\hat {P}_{-r,0}\Big{)}\end{split} \tag{150}\]
with
\[\mu_{0}=\mu-4t^{\prime}-U/2+\left(U/4+V\right)\kappa^{2}-\left(U/2+4V\right)\nu \tag{151}\]
the effective antinodal chemical potential, and
\[\begin{split} g_{a}^{C}&=a^{2}\left(U/2+2V\right), \quad\quad\tilde{g}_{a}^{C}=a^{2}\left(U/4+3V\right)\\ g_{a}^{S}&=-a^{2}\left(U-4V\right),\quad\quad g_{a}^{P }=a^{2}\left(U-4V\right)\end{split} \tag{152}\]
the coupling constants. Note that we can write \(\mu_{0}\stackrel{{\mathrm{\tiny def}}}{{=}}\mu_{r,0}\) since the right hand side of (151) is independent of \(r\), and similarly \(\Lambda_{0}^{*}\stackrel{{\mathrm{\tiny def}}}{{=}}\Lambda_{r,0}^ {*}\).
Finally, the nodal fermions couple to the antinodal fermions through the following contribution to the effective Hamiltonian in (145) (note the abuse of duplicate notation in (69) and (153))
\[H_{na}=H_{na}^{{}^{\prime}}+\frac{g_{na}^{P}}{2}\sum_{r,r^{\prime},s=\pm}\sum_ {\mathbf{p}\in\bar{\Lambda}^{*}}\frac{1}{L^{2}}\chi_{s}(\mathbf{p})\Big{(} \hat{P}_{r,s}^{\dagger}\cdot\hat{P}_{r^{\prime},0}+\hat{P}_{r^{\prime},0}^{ \dagger}\cdot\hat{P}_{r,s}\Big{)} \tag{153}\]
with
\[H_{na}^{{}^{\prime}}=\sum_{r,r^{\prime},s=\pm}\sum_{\mathbf{p}\in\bar{\Lambda }^{*}}\frac{1}{L^{2}}\chi_{s}(\mathbf{p})\Big{(}g_{na}^{C}\hat{J}_{r,s}^{0 \dagger}\hat{J}_{r^{\prime},0}^{0}+g_{na}^{S}\hat{\mathbf{S}}_{r,s}^{\dagger} \cdot\hat{\mathbf{S}}_{r^{\prime},0}\Big{)} \tag{154}\]
the density- and spin interaction part, and
\[g_{na}^{C}=a^{2}\left(U/2+4V\right),\quad\quad g_{na}^{S}=-2a^{2}U,\quad\quad g _{na}^{P}=2a^{2}U \tag{155}\]
the coupling constants.
## Appendix C Bosonization of nodal fermions - additional details
We collect without proofs some known results on non-abelian bosonization (Appendix C.1); the reader is referred to Chapter 15 in [62] and references therein for further discussion. The notation used here is the same as that in Appendix A of [58]. We also give the precise results on the bosonization of the nodal fermions (Appendices C.2-C.3).
### Non-abelian bosonization
Let \(r,r^{\prime}=\pm\) be chirality indices, \(A,A^{\prime}\in{\cal I}\) flavor indices with \({\cal I}\) some index set to be specified later, \(\alpha,\alpha^{\prime}=\pm\) spin indices, and \(k,k^{\prime}\in(2\pi/L)({\mathbb{Z}}+1/2)\) 1D Fourier modes. We consider fermion operators \(c^{(\dagger)}_{r,A,\alpha}(k)\) defined on a fermion Fock space \({\cal F}\) with normalized vacuum state \(\Omega\) (Dirac sea) such that
\[\{c_{r,A,\alpha}(k),c^{\dagger}_{r^{\prime},A^{\prime},\alpha^{\prime}}(k^{ \prime})\}=\delta_{r,r^{\prime}}\delta_{\alpha,\alpha^{\prime}}\delta_{A,A^{ \prime}}\delta_{k,k^{\prime}},\quad\{c_{r,A,\alpha}(k),c_{r^{\prime},A^{\prime },\alpha^{\prime}}(k^{\prime})\}=0 \tag{156}\]
and
\[c_{r,A,\alpha}(k)\Omega=c^{\dagger}_{r,A,\alpha}(-k)\Omega=0\quad\mbox{ for all }\ k\ \mbox{ such that }\ rk>0. \tag{157}\]
For \(p\in(2\pi/L){\mathbb{Z}}\) and \(\mu=0,1,2,3\), let
\[\hat{j}^{\mu}_{r,A}(p)\stackrel{{\mbox{\tiny{\rm def}}}}{{=}} \sum_{\alpha,\alpha^{\prime}}\sum_{k\in\frac{2\pi}{L}({\mathbb{Z}}+\frac{1}{2} )}:\!c^{\dagger}_{r,A,\alpha}(k-p)\sigma^{\mu}_{\alpha,\alpha^{\prime}}c_{r,A, \alpha^{\prime}}(k)\!: \tag{158}\]
with the colons denoting fermion normal ordering. These are well-defined operators on \({\cal F}\) satisfying the commutation relations
\[\begin{split}\big{[}\hat{j}^{0}_{r,A}(p),\hat{j}^{0}_{r^{\prime},A^{\prime}}(p^{\prime})\big{]}&=2\delta_{r,r^{\prime}}\delta_{A,A^{\prime}}r\frac{Lp}{2\pi}\delta_{p+p^{\prime},0}\\ \big{[}\hat{j}^{0}_{r,A}(p),\hat{j}^{i}_{r^{\prime},A^{\prime}}(p^{\prime})\big{]}&=0\\ \big{[}\hat{j}^{i}_{r,A}(p),\hat{j}^{j}_{r^{\prime},A^{\prime}}(p^{\prime})\big{]}&=2\delta_{r,r^{\prime}}\delta_{A,A^{\prime}}\Big{(}\sum_{k=1}^{3}\mathrm{i}\epsilon_{ijk}\hat{j}^{k}_{r,A}(p+p^{\prime})+r\delta_{i,j}\frac{Lp}{2\pi}\delta_{p+p^{\prime},0}\Big{)}\end{split} \tag{159}\]
and
\[\hat{j}^{\mu}_{r,A}(p)^{\dagger}=\hat{j}^{\mu}_{r,A}(-p),\qquad\hat{j}^{\mu}_{ r,A}(p)\Omega=0\quad\mbox{ for all }\ p\ \mbox{ such that }\ rp\geq 0. \tag{160}\]
Define also (generators of the Virasoro algebra [62])
\[\hat{L}_{r,A}(p)\stackrel{{\mbox{\tiny{\rm def}}}}{{=}}\sum_{ \alpha}\sum_{k\in\frac{2\pi}{L}({\mathbb{Z}}+\frac{1}{2})}r(k-p/2):\!c^{ \dagger}_{r,A,\alpha}(k-p)c_{r,A,\alpha}(k)\!:\qquad(p\in(2\pi/L){\mathbb{Z}}) \tag{161}\]
such that
\[\sum_{A\in{\cal I}}\Big{[}\hat{L}_{+,A}(0)+\hat{L}_{-,A}(0)\Big{]}=\sum_{A\in{\cal I}}\sum_{r=\pm}\sum_{\alpha=\pm}\sum_{k\in\frac{2\pi}{L}({\mathbb{Z}}+\frac{1}{2})}rk:\!c^{\dagger}_{r,A,\alpha}(k)c_{r,A,\alpha}(k)\!: \tag{162}\]
is proportional to an ordinary 1D (massless) Dirac Hamiltonian. The operators in (161) satisfy the commutation relations
\[\Big{[}\hat{L}_{r,A}(p),\hat{L}_{r,A}(p^{\prime})\Big{]}=r(p-p^{ \prime})\hat{L}_{r,A}(p+p^{\prime})+2\frac{2\pi}{L}\delta_{p+p^{\prime},0} \frac{1}{12}rp\Big{[}\Big{(}\frac{Lp}{2\pi}\Big{)}^{2}\!-\!1\Big{]} \tag{163}\] \[\Big{[}\hat{L}_{r,A}(p),\hat{j}^{\mu}_{r,A}(p^{\prime})\Big{]}=- rp^{\prime}\hat{j}^{\mu}_{r,A}(p+p^{\prime})\]
and \(\hat{L}_{r,A}(p)\Omega=0\) if \(rp\geq 0\). The following operator identity holds true (the Sugawara construction)
\[\hat{L}_{r,A}(p)=\frac{1}{4}\sum_{p^{\prime}\in\frac{2\pi}{L}{\mathbb{Z}}}\frac{2\pi}{L}\overset{\times}{\times}\Big{[}\hat{j}^{0}_{r,A}(p-p^{\prime})\hat{j}^{0}_{r,A}(p^{\prime})+\frac{1}{3}\sum_{i=1}^{3}\hat{j}^{i}_{r,A}(p-p^{\prime})\hat{j}^{i}_{r,A}(p^{\prime})\Big{]}\overset{\times}{\times} \tag{164}\]
with \(\overset{\times}{\times}\cdot\overset{\times}{\times}\) denoting boson normal ordering as in (48).
### Bosonization identities for the nodal fermions
The unspecified flavor index set \(\mathcal{I}\) in Appendix C.1 is now defined as
\[\mathcal{I}\stackrel{{\text{\tiny{def}}}}{{=}}\{(s,x)\,:\,s=\pm,\,x \in\Lambda_{\text{1D}}\}. \tag{165}\]
We can then represent the nodal fermion operators as
\[\hat{\psi}_{r,s,\alpha}(\mathbf{k})=\frac{L}{2\pi}\sqrt{\frac{\tilde{a}}{L}} \sum_{x\in\Lambda_{\text{1D}}}c_{r,s,x,\alpha}(k_{s})\,\mathrm{e}^{-\mathrm{i }k_{-s}x}\qquad(\mathbf{k}=k_{+}\mathbf{e}_{+}+k_{-}\mathbf{e}_{-}) \tag{166}\]
such that (156) and (157) are equivalent to (25) and (27).
**Proposition C.1**.: _The operators in (28) are well-defined operators on the fermion Fock space obeying the commutation relations (\(\mathbf{p}\in\tilde{\Lambda}^{*}\))_
\[\begin{split}&\Big{[}\hat{J}_{r,s}(\mathbf{p}),\hat{J}_{r,s}( \mathbf{p}^{\prime})\Big{]}=r\frac{4\pi p_{s}}{\tilde{a}}\Big{(}\frac{L}{2\pi} \Big{)}^{2}\sum_{n\in\mathbb{Z}}\delta_{\mathbf{p}+\mathbf{p}^{\prime},2\pi n \mathbf{e}_{-s}/\tilde{a}}\\ &\Big{[}\hat{S}^{i}_{r,s}(\mathbf{p}),\hat{S}^{j}_{r,s}(\mathbf{ p}^{\prime})\Big{]}=\mathrm{i}\sum_{k=1}^{3}\epsilon_{ijk}\hat{S}^{k}_{r,s}( \mathbf{p}+\mathbf{p}^{\prime})+\delta_{i,j}r\frac{\pi p_{s}}{\tilde{a}}\Big{(} \frac{L}{2\pi}\Big{)}^{2}\sum_{n\in\mathbb{Z}}\delta_{\mathbf{p}+\mathbf{p}^ {\prime},2\pi n\mathbf{e}_{-s}/\tilde{a}}\end{split} \tag{167}\]
_with all other commutators vanishing. Moreover,_
\[\hat{J}_{r,s}(\mathbf{p})^{\dagger}=\hat{J}_{r,s}(-\mathbf{p}),\quad\hat{S}^{ i}_{r,s}(\mathbf{p})^{\dagger}=\hat{S}^{i}_{r,s}(-\mathbf{p}) \tag{168}\]
_and_
\[\hat{J}_{r,s}(\mathbf{p})\Omega=0,\quad\hat{S}^{i}_{r,s}(\mathbf{p})\Omega=0, \qquad\forall\mathbf{p}\in\tilde{\Lambda}^{*}\,\text{ such that }\,rp_{s}\geq 0. \tag{169}\]
Proof.: Using (166) we can write the nodal density- and spin operators in terms of the operators in (158) as
\[\begin{split}&\hat{J}_{r,s}(\mathbf{p})=\sum_{x\in\Lambda_{\text{ 1D}}}\hat{J}^{0}_{r,s,x}(p_{s})\,\mathrm{e}^{-\mathrm{i}p_{-s}x}\\ &\hat{S}^{i}_{r,s}(\mathbf{p})=\frac{1}{2}\sum_{x\in\Lambda_{ \text{1D}}}\hat{J}^{i}_{r,s,x}(p_{s})\,\mathrm{e}^{-\mathrm{i}p_{-s}x}\qquad( \mathbf{p}=p_{+}\mathbf{e}_{+}+p_{-}\mathbf{e}_{-}).\end{split} \tag{170}\]
The results stated in the proposition now follow by applying Equations (159)-(160).
We define zero mode operators by
\[\hat{N}_{r,s,\alpha}(p_{-s})\stackrel{{\text{\tiny{def}}}}{{=}} \sqrt{\frac{\tilde{a}}{2\pi}}\left.\hat{J}_{r,s,\alpha}(\mathbf{p})\right|_{p_ {s}=0}\qquad(p_{-s}\in\tilde{\Lambda}^{*}_{\text{1D}}) \tag{171}\]
and their Fourier-transform
\[N_{r,s,\alpha}(x)\stackrel{{\text{\tiny{def}}}}{{=}}\sqrt{2\pi \tilde{a}}\sum_{p\in\tilde{\Lambda}^{*}_{\text{1D}}}\frac{1}{L}\hat{N}_{r,s, \alpha}(p)\,\mathrm{e}^{\mathrm{i}px}\qquad(x\in\Lambda_{\text{1D}}). \tag{172}\]
When rewriting the nodal Hamiltonian in bosonized form in the next section, the following linear combinations of zero mode operators will be useful
\[\begin{split}& Q_{C;r,s}(x)\stackrel{{\text{\tiny def }}}{{=}}\frac{1}{\sqrt{2}}\sum_{\alpha=\pm}\bigl{[}N_{+,s,\alpha}(x)+rN_{-,s, \alpha}(x)\bigr{]}\\ & Q_{S;r,s}(x)\stackrel{{\text{\tiny def}}}{{=}} \frac{1}{\sqrt{2}}\sum_{\alpha=\pm}\alpha\bigl{[}N_{+,s,\alpha}(x)+rN_{-,s, \alpha}(x)\bigr{]}\end{split}\qquad(x\in\Lambda_{\text{1D}}). \tag{173}\]
We also define \(\hat{Q}_{C;r,s}(p)\) and \(\hat{Q}_{S;r,s}(p)\), \(p\in\tilde{\Lambda}_{\text{1D}}^{*}\), in a similar way (replace \(N_{r,s,\alpha}(x)\) with \(\hat{N}_{r,s,\alpha}(p)\) on the right hand sides above).
**Lemma C.2**.: **(a)** _There exist unitary operators \(R_{r,s,\alpha}(x)\) on the fermion Fock space commuting with all boson operators in (45) and satisfying the commutation relations_
\[\begin{split}&[N_{r,s,\alpha}(x),R_{r^{\prime},s^{\prime},\alpha^{ \prime}}(x^{\prime})]=r\delta_{r,r^{\prime}}\delta_{s,s^{\prime}}\delta_{ \alpha,\alpha^{\prime}}\delta_{x,x^{\prime}}R_{r,s,\alpha}(x),\\ &\{R_{r,s,\alpha}(x),R_{r^{\prime},s^{\prime},\alpha^{\prime}}(x^ {\prime})^{\dagger}\}=2\delta_{r,r^{\prime}}\delta_{s,s^{\prime}}\delta_{ \alpha,\alpha^{\prime}}\delta_{x,x^{\prime}}.\end{split} \tag{174}\]
**(b)** _Let \(\mathcal{Q}\) be the set of all pairs \((\mathbf{n},\boldsymbol{\nu})\) with_
\[\mathbf{n}=\{n_{s,\alpha}(\mathbf{p})\}_{s,\alpha=\pm,\,\mathbf{p}\in\tilde{ \Lambda}_{s}^{*}}\,,\qquad\boldsymbol{\nu}=\{\nu_{r,s,\alpha}(x)\}_{r,s, \alpha=\pm,\,x\in\Lambda_{\text{1D}}} \tag{175}\]
_and integers \(\nu_{r,s,\alpha}(x)\) and \(n_{s,\alpha}(\mathbf{p})\geq 0\) such that_
\[\sum_{\alpha=\pm}\sum_{r,s=\pm}\sum_{x\in\Lambda_{\text{1D}}}\nu_{r,s,\alpha}( x)^{2}<\infty,\qquad\sum_{\alpha=\pm}\sum_{s=\pm}\sum_{\mathbf{p}\in\tilde{ \Lambda}_{s}^{*}}|p_{s}|n_{s,\alpha}(\mathbf{p})<\infty. \tag{176}\]
_Then the states_
\[\eta_{\mathbf{n},\boldsymbol{\nu}}\stackrel{{\text{\tiny def}}}{{=}} \Bigl{(}\prod_{\alpha=\pm}\prod_{s=\pm}\prod_{\mathbf{p}\in\tilde{\Lambda}_{s} ^{*}}\frac{b_{s,\alpha}^{\dagger}(\mathbf{p})^{n_{s,\alpha}(\mathbf{p})}}{ \sqrt{n_{s,\alpha}(\mathbf{p})!}}\Bigr{)}\Bigl{(}\prod_{\alpha=\pm}\prod_{r,s= \pm}\prod_{x\in\Lambda_{\text{1D}}}R_{r,s,\alpha}(x)^{\nu_{r,s,\alpha}(x)} \Bigr{)}\Omega, \tag{177}\]
_with \((\mathbf{n},\boldsymbol{\nu})\in\mathcal{Q}\), provide a complete orthonormal basis in the fermion Fock space._
**(c)** _The states \(\eta_{\mathbf{n},\boldsymbol{\nu}}\) are common eigenstates of the operators \(N_{r,s,\alpha}(x)\) and \(b_{s,\alpha}^{\dagger}(\mathbf{p})b_{s,\alpha}(\mathbf{p})\) with eigenvalues \(\nu_{r,s,\alpha}(x)\) and \(n_{s,\alpha}(\mathbf{p})\), respectively._
(_Proof:_ See the proof of Lemma 2.1 in [58].)
**Proposition C.3**.: _For \(r,s=\pm\), \(\alpha=\pm\), \(\mathbf{x}\in\Lambda_{s}\), and \(\epsilon>0\), the operator_
\[\begin{split}\psi_{r,s,\alpha}(\mathbf{x};\epsilon)\stackrel{{\text{\tiny def}}}{{=}}&\frac{1}{\sqrt{2\pi\tilde{a}\epsilon}}\,\mathrm{e}^{\mathrm{i}r\pi x_{s}N_{r,s,\alpha}(x_{-s})/L}R_{r,s,\alpha}(x_{-s})^{-r}\,\mathrm{e}^{\mathrm{i}r\pi x_{s}N_{r,s,\alpha}(x_{-s})/L}\\ &\times\exp\Bigl{(}r\frac{\tilde{a}}{2\pi}\sum_{\mathbf{p}\in\hat{\Lambda}^{*}_{s}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}\frac{1}{p_{s}}\hat{J}_{r,s,\alpha}(\mathbf{p})\,\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\mathbf{x}}\,\mathrm{e}^{-\epsilon|p_{s}|/2}\Bigr{)}\end{split} \tag{178}\]
_is such that \(\sqrt{2\pi\tilde{a}\epsilon}\psi_{r,s,\alpha}(\mathbf{x};\epsilon)\) is a unitary operator on the fermion Fock space, and_
\[\hat{\psi}_{r,s,\alpha}(\mathbf{k})=\lim_{\epsilon\to 0^{+}}\frac{1}{2\pi}\sum_{x_{-s} \in\Lambda_{\text{1D}}}\tilde{a}\int\limits_{-L/2}^{L/2}\mathrm{d}x_{s}\,\psi_{ r,s,\alpha}(\mathbf{x};\epsilon)\,\mathrm{e}^{-\mathrm{i}\mathbf{k}\cdot\mathbf{x}}. \tag{179}\]
(_Proof:_ See the proof of Proposition 2.2 in [58].)
The operator in (178) yields a regularized version of the operator-valued distribution defined by the Fourier transform in (35). This regularization is useful when computing correlation functions involving nodal operators (see [58] for further discussion of this).
The following proposition is key in bosonizing the nodal part of the effective Hamiltonian:
**Proposition C.4**.: _The following operator identities hold true_
\[\begin{split}\sum_{\alpha=\pm}\sum_{\mathbf{k}\in\Lambda_{s}^{*}}\Bigl{(}\frac{2\pi}{L}\Bigr{)}^{2}rk_{s}:\hat{\psi}_{r,s,\alpha}^{\dagger}(\mathbf{k})\hat{\psi}_{r,s,\alpha}(\mathbf{k}):&=\tilde{a}\pi\sum_{\alpha=\pm}\sum_{\mathbf{p}\in\tilde{\Lambda}_{s}^{*}}\frac{1}{L^{2}}\overset{\times}{\times}\hat{J}_{r,s,\alpha}^{\dagger}\hat{J}_{r,s,\alpha}\overset{\times}{\times}\\ &=\tilde{a}\pi\sum_{\mathbf{p}\in\tilde{\Lambda}_{s}^{*}}\frac{1}{L^{2}}\overset{\times}{\times}\Bigl{(}\frac{1}{2}\hat{J}_{r,s}^{\dagger}\hat{J}_{r,s}+\frac{2}{3}\hat{\mathbf{S}}_{r,s}^{\dagger}\cdot\hat{\mathbf{S}}_{r,s}\Bigr{)}\overset{\times}{\times}\end{split} \tag{180}\]
_with all three expressions defining self-adjoint operators on the fermion Fock space._
Proof.: See the proof of Proposition 2.1 in [58] for the first equality. The second equality is obtained using (161) and (164) for the special case \(p=0\), together with relations (166) and (170).
### Bosonization of the nodal Hamiltonian
We write out the bosonization of the nodal Hamiltonian in (146)-(148) obtained from the extended Hubbard model. Using Proposition C.4, we find
\[H_{C}=\frac{v_{F}\pi\tilde{a}}{2}\overset{\times}{\times}\Big{(}\sum_{r,s=\pm}\sum_{\mathbf{p}\in\tilde{\Lambda}_{s}^{*}}\frac{1}{L^{2}}\left(\left(1+\gamma_{0}^{C}\chi_{s}(\mathbf{p})\right)\hat{J}_{r,s}^{0\dagger}\hat{J}_{r,s}^{0}+\gamma_{1}^{C}\chi_{s}(\mathbf{p})\hat{J}_{r,s}^{0\dagger}\hat{J}_{-r,s}^{0}\right)+\sum_{\mathbf{p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\gamma_{2}^{C}\chi_{+}(\mathbf{p})\chi_{-}(\mathbf{p})\sum_{r,r^{\prime}=\pm}\hat{J}_{r,+}^{0\dagger}\hat{J}_{r^{\prime},-}^{0}\Big{)}\overset{\times}{\times} \tag{181}\]

\[H_{\mathbf{S}}=2v_{F}\pi\tilde{a}\overset{\times}{\times}\Big{(}\sum_{r,s=\pm}\sum_{\mathbf{p}\in\tilde{\Lambda}_{s}^{*}}\frac{1}{L^{2}}\left(\hat{\mathbf{S}}_{r,s}^{\dagger}\cdot\hat{\mathbf{S}}_{r,s}/3+\gamma_{1}^{S}\chi_{s}(\mathbf{p})\hat{\mathbf{S}}_{r,s}^{\dagger}\cdot\hat{\mathbf{S}}_{-r,s}\right)+\sum_{\mathbf{p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\gamma_{2}^{S}\chi_{+}(\mathbf{p})\chi_{-}(\mathbf{p})\sum_{r,r^{\prime}=\pm}\hat{\mathbf{S}}_{r,+}^{\dagger}\cdot\hat{\mathbf{S}}_{r^{\prime},-}\Big{)}\overset{\times}{\times} \tag{182}\]
and where the (dimensionless) coupling constants are defined as (see also (149))
\[\gamma_{0}^{C}\stackrel{{\text{\tiny def}}}{{=}}\frac{2g_{0}^{C} }{v_{F}\pi\tilde{a}},\quad\gamma_{1}^{C}\stackrel{{\text{\tiny def }}}{{=}}\frac{g_{1}^{C}}{v_{F}\pi\tilde{a}},\quad\gamma_{2}^{C}\stackrel{{ \text{\tiny def}}}{{=}}\frac{2g_{2}^{C}}{v_{F}\pi\tilde{a}},\quad\gamma_{1}^ {S}\stackrel{{\text{\tiny def}}}{{=}}\frac{g_{1}^{S}}{4v_{F}\pi \tilde{a}},\quad\gamma_{2}^{S}\stackrel{{\text{\tiny def}}}{{=}} \frac{g_{2}^{S}}{2v_{F}\pi\tilde{a}}. \tag{183}\]
We assume these satisfy
\[\left|\gamma_{1}^{C}\right|<\left|1+\gamma_{0}^{C}\right|,\quad\left|\gamma_{2} ^{C}\right|<\left|1+\gamma_{0}^{C}+\gamma_{1}^{C}\right|,\quad\left|\gamma_{1}^ {S}\right|<1,\quad\left|\gamma_{2}^{S}\right|<\left|1+\gamma_{1}^{S}\right|, \tag{184}\]
which implies the constraint
\[\frac{\left(3U+4V\left[1+2\cos\left(2Q\right)\right]\right)\left(1-\kappa\right)} {8\pi\sin(Q)\left[t+2t^{\prime}\cos(Q)\right]}<1. \tag{185}\]
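For orientation (our numerical example, not from the text), the left-hand side of (185) stays well below one for representative parameters, e.g. \(t=1\), \(t^{\prime}=0\), \(U=4t\), \(V=0\), \(\kappa=0.2\) and \(Q=0.55\pi\), which are consistent with (93) and (104):

```python
import math

def lhs_185(t, tprime, U, V, kappa, Q):
    # Left-hand side of (185); the stricter bound (214) in Appendix D
    # replaces the right-hand side 1 by 1/3.
    num = (3*U + 4*V*(1 + 2*math.cos(2*Q)))*(1 - kappa)
    den = 8*math.pi*math.sin(Q)*(t + 2*tprime*math.cos(Q))
    return num/den

val = lhs_185(1.0, 0.0, 4.0, 0.0, 0.2, 0.55*math.pi)
print(val)  # ~0.39: (185) holds here, while the stricter (214) would not
```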
As in Section 4.3, we write
\[\begin{split} H=H_{M}+\frac{1}{2}\sum_{\mathbf{p}\in\tilde{ \Lambda}^{*}}\frac{1}{L^{2}}\Big{(}g_{1}^{S}&\sum_{s=\pm}\chi_{ s}(\mathbf{p})\big{(}\hat{S}_{+,s}^{+}(-\mathbf{p})\hat{S}_{-,s}^{-}(\mathbf{p})+h.c.\big{)}\\ &+g_{2}^{S}\sum_{r,r^{\prime}=\pm}\chi_{+}(\mathbf{p})\chi_{-}( \mathbf{p})\big{(}\hat{S}_{r,+}^{+}(-\mathbf{p})\hat{S}_{r^{\prime},-}^{-}( \mathbf{p})+h.c.\big{)}\Big{)}\end{split} \tag{186}\]
with
\[H_{M}=H_{C}+H_{S} \tag{187}\]
and
(188)
and where (\(X=C,S\) and \(\gamma_{0}^{S}\equiv 0\))
\[\begin{split} H_{X;z.m}=&\frac{v_{F}}{2}\Big{(}\frac{2\pi}{L}\Big{)}^{2}\Big{[}\sum_{s=\pm}\sum_{\mathbf{p}\in\tilde{\Lambda}_{s}^{*}}\overset{\times}{\times}\big{(}\hat{\Xi}_{X;s}^{\dagger}\hat{\Phi}_{X;s}+\hat{\Phi}_{X;s}^{\dagger}\hat{\Xi}_{X;s}\big{)}\overset{\times}{\times}\\ &+\frac{1}{2}\sum_{r,s=\pm}\sum_{p\in\tilde{\Lambda}_{\mathrm{1D}}^{*}}\big{(}1+\gamma_{0}^{X}+r\gamma_{1}^{X}\big{)}\hat{Q}_{X;r,s}^{\dagger}\hat{Q}_{X;r,s}+\gamma_{2}^{X}\hat{Q}_{X;+,+}(0)\hat{Q}_{X;+,-}(0)\Big{]}\end{split} \tag{190}\]
\[\hat{\Xi}_{X;s}(\mathbf{p})\stackrel{{\text{\tiny def}}}{{=}}-\frac{1}{\sqrt{2}}\gamma_{2}^{X}\,\mathrm{i}p_{s}\,\chi(\mathbf{p})\,\hat{Q}_{X;+,-s}(p_{s})\,\delta_{p_{-s},0} \tag{191}\]
denote terms involving zero mode operators; we have used the cutoff function in (34) for simplicity.
**Theorem C.5**.: _There exists a unitary operator \(\mathcal{U}\) diagonalizing the Hamiltonian in (187) as follows:_
\[\mathcal{U}^{\dagger}H_{M}\mathcal{U}=\sum_{s=\pm}\sum_{\mathbf{p}\in\tilde{ \Lambda}_{s}^{*}}\Big{(}\omega_{C;s}(\mathbf{p})b_{C;s}^{\dagger}(\mathbf{p}) b_{C;s}(\mathbf{p})+\omega_{S;s}(\mathbf{p})b_{S;s}^{\dagger}(\mathbf{p})b_{S;s}( \mathbf{p})\Big{)}+\tilde{H}_{Q}+\mathcal{E}^{(0)} \tag{192}\]
_with_
\[\omega_{C;\pm}(\mathbf{p})=\begin{cases}\tilde{v}_{F}^{C}\sqrt{\frac{1}{2}\Big{(}|\mathbf{p}|^{2}\pm\sqrt{|\mathbf{p}|^{4}-A_{C}\big{(}2p_{+}p_{-}\big{)}^{2}}\,\Big{)}}&\text{if }\,\gamma_{2}^{C}\chi(\mathbf{p})p_{+}p_{-}\neq 0\\ v_{F}\sqrt{\big{(}1+\gamma_{0}^{C}\chi(\mathbf{p})\big{)}^{2}-\big{(}\gamma_{1}^{C}\chi(\mathbf{p})\big{)}^{2}}\,|p_{\pm}|&\text{if }\,\gamma_{2}^{C}\chi(\mathbf{p})p_{+}p_{-}=0\end{cases} \tag{193}\]
\[A_{C}\stackrel{{\mbox{\tiny{\it def}}}}{{=}}1-\big{[}\gamma_{2}^{C}/(1+\gamma_{0}^{C}+\gamma_{1}^{C})\big{]}^{2},\qquad\tilde{v}_{F}^{C}\stackrel{{\mbox{\tiny{\it def}}}}{{=}}v_{F}\sqrt{\big{(}1+\gamma_{0}^{C}\big{)}^{2}-\big{(}\gamma_{1}^{C}\big{)}^{2}} \tag{194}\]
_and_
\[\omega_{S;\pm}({\bf p})=\begin{cases}\tilde{v}_{F}^{S}\sqrt{\frac{1}{2}\Big{(} |{\bf p}|^{2}\pm\sqrt{|{\bf p}|^{4}-A_{S}\big{(}2p_{+}p_{-}\big{)}^{2}}\,\, \Big{)}}&\mbox{ if }\ \gamma_{2}^{S}\chi({\bf p})p_{+}p_{-}\neq 0\\ v_{F}\sqrt{1-\big{(}\gamma_{1}^{S}\chi({\bf p})\big{)}^{2}}|p_{\pm}|&\mbox{ if }\ \gamma_{2}^{S}\chi({\bf p})p_{+}p_{-}=0\end{cases} \tag{195}\]
\[A_{S}\stackrel{{\mbox{\tiny{\it def}}}}{{=}}1-\big{[}\gamma_{2}^ {S}/(1+\gamma_{1}^{S})\big{]}^{2},\qquad\tilde{v}_{F}^{S}\stackrel{{ \mbox{\tiny{\it def}}}}{{=}}v_{F}\sqrt{1-\big{(}\gamma_{1}^{S}\big{)}^{2}} \tag{196}\]
_the boson dispersion relations,_
\[\begin{split}&\tilde{H}_{Q}=\frac{v_{F}\pi}{2L}\Bigg{(}\sum_{s} \sum_{x}\Bigl{[}\big{(}1+\gamma_{0}^{C}+\gamma_{1}^{C}\big{)}\,A_{C}Q_{C;+,s}( x)^{2}+\big{(}1+\gamma_{0}^{C}-\gamma_{1}^{C}\big{)}\,Q_{C;-,s}(x)^{2}\Bigr{]}\\ &+\frac{\tilde{a}}{L}\sum_{s}\Biggl{[}\frac{\big{(}\gamma_{2}^{C} \big{)}^{2}}{1+\gamma_{0}^{C}+\gamma_{1}^{C}}\Big{(}\sum_{x}Q_{C;+,s}(x)\Big{)} ^{2}+\gamma_{2}^{C}\Big{(}\sum_{x}Q_{C;+,s}(x)\Big{)}\Big{(}\sum_{x}Q_{C;+,-s} (x)\Big{)}\Biggr{]}\\ &+\sum_{s}\sum_{x}\Bigl{[}\big{(}1+\gamma_{1}^{S}\big{)}\,A_{S}Q_{ S;+,s}(x)^{2}+\big{(}1-\gamma_{1}^{S}\big{)}\,Q_{S;-,s}(x)^{2}\Bigr{]}\\ &+\frac{\tilde{a}}{L}\sum_{s}\Biggl{[}\frac{\big{(}\gamma_{2}^{S} \big{)}^{2}}{1+\gamma_{1}^{S}}\Big{(}\sum_{x}Q_{S;+,s}(x)\Big{)}^{2}+\gamma_{2 }^{S}\Big{(}\sum_{x}Q_{S;+,s}(x)\Big{)}\Big{(}\sum_{x}Q_{S;+,-s}(x)\Big{)} \Biggr{]}\Biggr{)}\end{split} \tag{197}\]
_the part involving only zero mode operators (the sums are over \(s=\pm\) and \(x\in\Lambda_{1D}\)), and_
\[{\cal E}^{(0)}=\frac{1}{2}\sum_{s=\pm}\sum_{{\bf p}\in\hat{\Lambda}_{s}^{*}} \bigl{(}\omega_{C;s}({\bf p})+\omega_{S;s}({\bf p})-2v_{F}|p_{s}|\bigr{)} \tag{198}\]
_the groundstate energy of \(H_{M}\)._
(_Proof:_ See the proof of Theorem 3.1 in [58].)
Note that (184) are necessary and sufficient constraints on the coupling constants in order for \({\cal E}^{(0)}\) to be well-defined and finite. One finds that the constraints on \(\gamma_{i}^{C}\), \(i=0,1,2\), are always satisfied, while those on \(\gamma_{i}^{S}\), \(i=1,2\), are fulfilled if (185) holds.
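Both observations can be probed numerically. The sketch below (ours) samples parameters subject to (93) and (104), checks the charge-sector bounds in (184) via (149) and (183), and confirms that the resulting \(A_{C}\in[0,1]\) keeps the dispersion (193) real:

```python
import math, random

random.seed(0)
a = 1.0
for _ in range(5000):
    # Sample parameters subject to (93) and the geometry constraints (104).
    t = random.uniform(0.1, 2.0)
    tp = random.uniform(-t/2, t/2)
    U = random.uniform(0.0, 10.0)
    V = random.uniform(0.0, U/4)
    kappa = random.uniform(0.01, 0.99)
    Q = random.uniform(math.pi*(1 - kappa)/2, math.pi*(1 + kappa)/2)

    atil = math.sqrt(2)*a/(1 - kappa)
    vF = 2*math.sqrt(2)*math.sin(Q)*(t + 2*tp*math.cos(Q))*a
    c = vF*math.pi*atil

    g0C = a**2*(U/2 + 2*V)                         # Eq. (149)
    g1C = a**2*(U/2 + 2*V*(2 - math.cos(2*Q)))
    g2C = a**2*(U/2 + V*(3 - math.cos(2*Q)))
    gam0, gam1, gam2 = 2*g0C/c, g1C/c, 2*g2C/c     # Eq. (183)

    # Charge-sector constraints in (184): always satisfied.
    assert abs(gam1) < abs(1 + gam0)
    assert abs(gam2) < abs(1 + gam0 + gam1)

    # Under (184), A_C of (194) lies in [0,1]; together with
    # |p|^4 >= (2 p_+ p_-)^2 this keeps omega_{C;+-} in (193) real.
    AC = 1 - (gam2/(1 + gam0 + gam1))**2
    assert 0 <= AC <= 1
    pp, pm = random.uniform(-3, 3), random.uniform(-3, 3)
    assert (pp**2 + pm**2)**2 - AC*(2*pp*pm)**2 >= -1e-12

print('charge-sector checks passed at all sampled points')
```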
## Appendix D Functional integration of nodal bosons
We give the results for the induced antinodal action obtained from the effective model in Appendix B. We truncate the nodal Hamiltonian in (146) by only keeping \(H_{M}\) (cf. (186)), and then perform a similar truncation in the nodal-antinodal interaction (154); we write
\[H_{na}^{{}^{\prime}}=H_{na}^{{}^{\prime}(0)}+\frac{1}{2}\sum_{r,r^{\prime},s= \pm}\sum_{{\bf p}\in\hat{\Lambda}^{*}}\frac{1}{L^{2}}\chi({\bf p})g_{na}^{S} \Bigl{(}\hat{S}_{r,s}^{+}(-{\bf p})\hat{S}_{r^{\prime},0}^{-}({\bf p})+h.c. \Bigr{)} \tag{199}\]
(using the simplified cutoff in (34)). From (54), we find
\[H_{na}^{{}^{\prime}(0)}=\sqrt{\frac{2}{\pi\tilde{a}}}\sum_{r,s=\pm}\sum_{{\bf p}\in\hat{\Lambda}^{*}_{s}}\frac{1}{L^{2}}2\pi{\rm i}p_{s}\chi({\bf p})\Big{(}g_{na}^{C}\hat{J}_{r,0}^{0\dagger}\hat{\Phi}_{C;s}+\frac{g_{na}^{S}}{2}\hat{S}_{r,0}^{3\dagger}\hat{\Phi}_{S;s}\Big{)}+z.m. \tag{200}\]
The induced action becomes after integrating out the nodal bosons
\[S_{ind}^{(0)}\stackrel{{\mbox{\tiny def}}}{{=}}\sum_{n\in\mathbb{Z}}\sum_{r,r^{\prime}=\pm}\sum_{{\bf p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}}\left(\hat{v}_{C}(\omega_{n},{\bf p})\hat{J}_{r,0}^{0\dagger}\hat{J}_{r^{\prime},0}^{0}+\hat{v}_{S}(\omega_{n},{\bf p})(\hat{S}_{r,0}^{3})^{\dagger}\hat{S}_{r^{\prime},0}^{3}\right) \tag{201}\]
with the density-density interaction potential
\[\hat{v}_{C}(\omega_{n},{\bf p})=-\frac{\left(g_{na}^{C}\right)^{2}}{2\pi \tilde{a}v_{F}}\sum_{s=\pm}\frac{W_{C;s}({\bf p})}{\omega_{n}^{2}+\omega_{C;s} ({\bf p})^{2}}\chi({\bf p}) \tag{202}\]
where
\[W_{C;\pm}({\bf p})\stackrel{{\mbox{\tiny def}}}{{=}}v_{F}^{2} \left(1+\gamma_{0}^{C}-\gamma_{1}^{C}\right)\left(\left|{\bf p}\right|^{2}\pm \frac{\left(p_{+}^{2}-p_{-}^{2}\right)^{2}+\sqrt{1-A_{C}}\left(2p_{+}p_{-} \right)^{2}}{\sqrt{\left|{\bf p}\right|^{4}-A_{C}\left(2p_{+}p_{-}\right)^{2} }}\right) \tag{203}\]
(see also definitions (193)-(194)). Likewise, the induced spin-spin interaction potential is
\[\hat{v}_{S}(\omega_{n},{\bf p})=-\frac{\left(g_{na}^{S}\right)^{2}}{8\pi \tilde{a}v_{F}}\sum_{s=\pm}\frac{W_{S;s}({\bf p})}{\omega_{n}^{2}+\omega_{S; s}({\bf p})^{2}}\chi({\bf p}) \tag{204}\]
with (see (195)-(196))
\[W_{S;\pm}({\bf p})\stackrel{{\mbox{\tiny def}}}{{=}}v_{F}^{2} \left(1-\gamma_{1}^{S}\right)\left(\left|{\bf p}\right|^{2}\pm\frac{\left(p_{ +}^{2}-p_{-}^{2}\right)^{2}-\sqrt{1-A_{S}}\left(2p_{+}p_{-}\right)^{2}}{\sqrt{ \left|{\bf p}\right|^{4}-A_{S}\left(2p_{+}p_{-}\right)^{2}}}\right) \tag{205}\]
(the sign discrepancy between the numerators of (203) and (205) is due to the fact that \(\gamma_{2}^{C}\geq 0\) while \(\gamma_{2}^{S}\leq 0\)).
We also give the result when treating the nodal spin operators \(\hat{S}_{r,s}^{i}\) as mutually commuting (to lowest order in \(\tilde{a}\)). Let
\[\hat{\Phi}_{i;s}({\bf p})\stackrel{{\mbox{\tiny def}}}{{=}}\! \sqrt{\frac{\tilde{a}}{2\pi}}\frac{1}{{\rm i}p_{s}}\Big{(}\hat{S}_{+,s}^{i}({ \bf p})+\hat{S}_{-,s}^{i}({\bf p})\Big{)},\ \ \ \ \hat{\Pi}_{i;s}({\bf p})\stackrel{{\mbox{\tiny def}}}{{=}}\! \sqrt{\frac{\tilde{a}}{2\pi}}\Big{(}\!-\hat{S}_{+,s}^{i}({\bf p})+\hat{S}_{-,s} ^{i}({\bf p})\Big{)} \tag{206}\]
with \(i=1,2,3\), \(s=\pm\), and \({\bf p}\in\hat{\Lambda}_{s}^{*}\); we note that \(\hat{\Phi}_{3;s}\equiv\hat{\Phi}_{S;s}\) and \(\hat{\Pi}_{3;s}\equiv\hat{\Pi}_{S;s}\) (cf. (54)). Similar to \(H_{S}\) in (189), we can express \(H_{\bf S}\) in (182) in terms of these operators as
\[\begin{split} H_{\bf S}=&\frac{v_{F}}{2}\sum_{i=1}^{3}\sum_{s=\pm}\sum_{{\bf p}\in\hat{\Lambda}_{s}^{*}}\Big{(}\frac{2\pi}{L}\Big{)}^{2}\overset{\times}{\times}\!\Big{(}\big{[}1/3-\gamma_{1}^{S}\chi({\bf p})\big{]}\hat{\Pi}_{i;s}^{\dagger}\hat{\Pi}_{i;s}\\ &+\big{[}1/3+\gamma_{1}^{S}\chi({\bf p})\big{]}p_{s}^{2}\hat{\Phi}_{i;s}^{\dagger}\hat{\Phi}_{i;s}+\gamma_{2}^{S}p_{+}p_{-}\chi({\bf p})\hat{\Phi}_{i;s}^{\dagger}\hat{\Phi}_{i;-s}\Big{)}\!\overset{\times}{\times}+z.m.\end{split} \tag{207}\]
Likewise, the density- and spin part of the nodal-antinodal interaction given in (154) can be written as
\[H_{na}^{{}^{\prime}}=\sqrt{\frac{2}{\pi\tilde{a}}}\sum_{r,s=\pm}\sum_{\mathbf{p}\in\hat{\Lambda}^{*}_{s}}\frac{1}{L^{2}}2\pi\mathrm{i}p_{s}\chi(\mathbf{p})\Big{(}g_{na}^{C}\hat{J}_{r,0}^{0\dagger}\hat{\Phi}_{C;s}+\frac{g_{na}^{S}}{2}\sum_{i=1}^{3}(\hat{S}_{r,0}^{i})^{\dagger}\hat{\Phi}_{i;s}\Big{)}+z.m. \tag{208}\]
The induced action is then
\[S_{ind}\stackrel{{\mbox{\tiny def}}}{{=}}\sum_{n\in\mathbb{Z}} \sum_{r,r^{\prime}=\pm}\sum_{\mathbf{p}\in\tilde{\Lambda}^{*}}\frac{1}{L^{2}} \left(\hat{v}_{C}(\omega_{n},\mathbf{p})\hat{\mathcal{J}}_{r,0}^{\dagger}\hat{ \mathcal{J}}_{r^{\prime},0}+\hat{v}_{\mathbf{S}}(\omega_{n},\mathbf{p})\hat{ \mathbf{S}}_{r,0}^{\dagger}\cdot\hat{\mathbf{S}}_{r^{\prime},0}\right) \tag{209}\]
where the spin-spin interaction potential is now given by
\[\hat{v}_{\mathbf{S}}(\omega_{n},\mathbf{p})=-\frac{\left(g_{na}^{S}\right)^{2 }}{8\pi\tilde{a}v_{F}}\sum_{s=\pm}\frac{W_{\mathbf{S};s}(\mathbf{p})}{\omega_ {n}^{2}+\omega_{\mathbf{S};s}(\mathbf{p})^{2}}\chi(\mathbf{p}) \tag{210}\]
with
\[\begin{split}&\omega_{\mathbf{S};\pm}(\mathbf{p})\stackrel{{ \mbox{\tiny def}}}{{=}}\tilde{v}_{F}^{\mathbf{S}}\sqrt{\frac{1}{2} \Big{(}|\mathbf{p}|^{2}\pm\sqrt{|\mathbf{p}|^{4}-A_{\mathbf{S}}\big{(}2p_{+}p _{-}\big{)}^{2}}\,\,\Big{)}}\\ & A_{\mathbf{S}}\stackrel{{\mbox{\tiny def}}}{{=}}1- \left[\gamma_{2}^{S}/(1/3+\gamma_{1}^{S})\right]^{2},\qquad\tilde{v}_{F}^{ \mathbf{S}}\stackrel{{\mbox{\tiny def}}}{{=}}v_{F}\sqrt{(1/3)^{2 }-\left(\gamma_{1}^{S}\right)^{2}}\end{split} \tag{211}\]
(note that this differs from (195)-(196)), and
\[W_{\mathbf{S};\pm}(\mathbf{p})\stackrel{{\mbox{\tiny def}}}{{=}} v_{F}^{2}\left(1/3-\gamma_{1}^{S}\right)\left(\left|\mathbf{p}\right|^{2}\pm \frac{\left(p_{+}^{2}-p_{-}^{2}\right)^{2}-\sqrt{1-A_{\mathbf{S}}}\left(2p_{+} p_{-}\right)^{2}}{\sqrt{\left|\mathbf{p}\right|^{4}-A_{\mathbf{S}}\left(2p_{+}p_{-} \right)^{2}}}\right). \tag{212}\]
For (210)_ff_ to be well-defined, we need to impose somewhat stricter conditions on the coupling constants
\[\left|\gamma_{1}^{S}\right|<1/3,\qquad\left|\gamma_{2}^{S}\right|<\left|1/3+ \gamma_{1}^{S}\right| \tag{213}\]
which translates into (cf. (185))
\[\frac{\left(3U+4V\left[1+2\cos\left(2Q\right)\right]\right)\left(1-\kappa \right)}{8\pi\sin(Q)\left[t+2t^{\prime}\cos(Q)\right]}<\frac{1}{3}. \tag{214}\]
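To make the conditions above concrete, here is a minimal numerical sketch (Python); the coupling values \(\gamma_{1}^{S}\), \(\gamma_{2}^{S}\) and \(v_{F}\) below are illustrative assumptions, not values derived in the text. It checks conditions (213) and evaluates the dispersion \(\omega_{\mathbf{S};\pm}(\mathbf{p})\) of (211):

```python
import numpy as np

# Illustrative (assumed) coupling values; not derived in the text
g1, g2, vF = 0.1, 0.2, 1.0

# Conditions (213): |gamma_1^S| < 1/3 and |gamma_2^S| < |1/3 + gamma_1^S|
assert abs(g1) < 1 / 3 and abs(g2) < abs(1 / 3 + g1)

A_S = 1.0 - (g2 / (1 / 3 + g1)) ** 2            # A_S of Eq. (211)
vF_S = vF * np.sqrt((1 / 3) ** 2 - g1 ** 2)     # renormalized velocity of Eq. (211)

def omega_S(p_plus, p_minus, sign):
    """Dispersion omega_{S;+/-}(p) of Eq. (211)."""
    p2 = p_plus ** 2 + p_minus ** 2
    root = np.sqrt(p2 ** 2 - A_S * (2.0 * p_plus * p_minus) ** 2)
    return vF_S * np.sqrt(0.5 * (p2 + sign * root))

# evaluate on a few sample momenta
for pp, pm in [(0.1, 0.0), (0.1, 0.1), (0.3, 0.2)]:
    print(pp, pm, omega_S(pp, pm, +1), omega_S(pp, pm, -1))
```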
|
2305.06147 | CQSumDP: A ChatGPT-Annotated Resource for Query-Focused Abstractive
Summarization Based on Debatepedia | Debatepedia is a publicly available dataset consisting of arguments and
counter-arguments on controversial topics that has been widely used for the
single-document query-focused abstractive summarization task in recent years.
However, it has been recently found that this dataset is limited by noise and
even most queries in this dataset do not have any relevance to the respective
document. In this paper, we present a methodology for cleaning the Debatepedia
dataset by leveraging the generative power of large language models to make it
suitable for query-focused abstractive summarization. More specifically, we
harness the language generation capabilities of ChatGPT to regenerate its
queries. We evaluate the effectiveness of the proposed ChatGPT annotated
version of the Debatepedia dataset using several benchmark summarization models
and demonstrate that the newly annotated version of Debatepedia outperforms the
original dataset in terms of both query relevance as well as summary generation
quality. We will make this annotated and cleaned version of the dataset
publicly available. | Md Tahmid Rahman Laskar, Mizanur Rahman, Israt Jahan, Enamul Hoque, Jimmy Huang | 2023-03-31T15:39:54Z | http://arxiv.org/abs/2305.06147v1 | CQSumDP: A ChatGPT-Annotated Resource for Query-Focused Abstractive Summarization Based on Debatepedia
###### Abstract
Debatepedia is a publicly available dataset consisting of arguments and counter-arguments on controversial topics that has been widely used for the single-document query-focused abstractive summarization task in recent years. However, it has been recently found that this dataset is limited by noise and even most queries in this dataset do not have any relevance to the respective document. In this paper, we present a methodology for cleaning the Debatepedia dataset by leveraging the generative power of large language models to make it suitable for query-focused abstractive summarization. More specifically, we harness the language generation capabilities of ChatGPT to regenerate its queries. We evaluate the effectiveness of the proposed ChatGPT annotated version of the Debatepedia dataset using several benchmark summarization models and demonstrate that the newly annotated version of Debatepedia outperforms the original dataset in terms of both query relevance as well as summary generation quality. We will make this annotated and cleaned version of the dataset publicly available.
## 1 Introduction
Abstractive summarization is a natural language processing technique that involves generating a concise and coherent summary of a longer piece of text while preserving its most important information (Yao et al., 2017). Query-focused abstractive summarization is a specific type of abstractive summarization that generates a summary of the given text that is tailored to a specific query or topic of interest (Baumel et al., 2018; Goodwin et al., 2020; Su et al., 2020; Xu and Lapata, 2021; Laskar et al., 2020, 2020, 2022d). In other words, the summary is focused on answering a specific question or addressing a particular topic, rather than providing a general overview of the text. One widely used dataset for this task is the Debatepedia1 dataset that consists of arguments and counter-arguments on controversial topics (Nema et al., 2017).
Footnote 1: [https://github.com/PrekshaNema25/DiversityBasedAttentionMechanism](https://github.com/PrekshaNema25/DiversityBasedAttentionMechanism)
The query-focused summarization of argumentative text is a challenging task that has gained increasing attention in recent years due to its potential applications in various domains, such as policy-making, journalism, and legal reasoning (Nema et al., 2017; Laskar et al., 2020). However, it has been recently found that the quality of the Debatepedia dataset, which is widely used for the query-focused abstractive summarization task, is limited by noise, with many of the queries in this dataset having no relevance to the source document (Laskar et al., 2020). Since Debatepedia is a rich source of argumentative text on controversial topics that can serve as a valuable resource for developing and evaluating summarization models, in this paper we present a novel methodology to annotate the Debatepedia dataset to make it a useful resource for query-focused abstractive summarization. Our data annotation approach leverages the language modeling (Radford et al., 2019) capabilities of ChatGPT2, a large pre-trained language model (Devlin et al., 2018; Brown et al., 2020; Ouyang et al., 2022) that has shown an impressive capability of generating fluent and coherent text (Qin et al., 2023; Bang et al., 2023; Yang et al., 2023; Kuzman et al., 2023; Gao et al., 2023; Wang et al., 2023; Zhou et al., 2023; Kocon et al., 2023; Kocmi and Federmann, 2023). Using ChatGPT as the annotator, we regenerate the queries in the Debatepedia dataset to remove the noise in this dataset. We validate the effectiveness of our methodology by conducting extensive experiments on our newly constructed dataset that leverages ChatGPT as the annotator. Our major contributions in this paper
are summarized below:
* We proposed a novel methodology for cleaning and annotation of the Debatepedia dataset using a large language model, i.e., ChatGPT to improve its suitability for query-focused abstractive summarization. This paper also opens up a promising avenue to utilize ChatGPT as the annotator for other tasks beyond text summarization that can significantly reduce the overall cost of data annotation.
* We conducted extensive experiments using benchmark summarization models on our ChatGPT-annotated cleaned version of Debatepedia for Query-Focused Abstractive Summarization and observe that it outperforms the original dataset in terms of both query relevance and summary generation quality.
* Our annotated dataset will be made publicly available such that it can serve as a valuable resource for further research on query-focused abstractive summarization.
## 2 Related Work
Query-focused abstractive summarization using neural models has gained increasing attention in recent years Baumel et al. (2018); Laskar et al. (2022). The recent success of transformer-based encoder-decoder models Liu and Lapata (2019); Lewis et al. (2019); Raffel et al. (2019); Zhang et al. (2019) on generic3 abstractive summarization has also inspired researchers to utilize such models for query-based abstractive summarization Goodwin et al. (2020); Vig et al. (2021); Laskar et al. (2020, 2020), leading to state-of-the-art performance in benchmark query-based summarization and answer generation datasets, such as DUC4 Feigenblat et al. (2017); Roitman et al. (2020); Xu and Lapata (2021, 2020), AQuaMuSe Kulkarni et al. (2020), QMSum Zhong et al. (2021), WikiHowQA Deng et al. (2019), PubMedQA Jin et al. (2019), MediQA Savery et al. (2020), MSMARCO Wang et al. (2018), Debatepedia Nema et al. (2017), etc. Though some studies Abdullah and Chali (2020) also attempted to generate the queries in generic summarization datasets (e.g., CNNDM Nallapati et al. (2016)) using the source document and the reference summary to enable such datasets for query-focused summarization, we find that these queries are generated by directly extracting words or tokens from the reference summaries. As a result, the summarization models have unexpected access to the keywords in the gold reference summaries.
Footnote 3: In Generic Abstractive Summarization, the summaries are generated based on only the given source document.
Footnote 4: [https://duc.nist.gov/data.html](https://duc.nist.gov/data.html)
Among the datasets mentioned above, DUC and AQuaMuSe require generating summaries from multiple documents, usually from the news domain. The QMSum dataset is proposed for query-based meeting summarization, while WikiHowQA is constructed from the WikiHow knowledgebase and used for answer summary generation for questions that start with "How to". Meanwhile, PubMedQA and MediQA datasets are constructed from the biomedical domain. One notable exception among these datasets is the Debatepedia dataset since it requires generating abstractive summaries from a short document containing argumentative text. None of the other datasets mentioned above addressed the issue of generating query-based summaries from documents containing arguments and counter-arguments. This makes Debatepedia a great resource for researchers to develop methods to summarize a short document containing argumentative text for the given query.
However, it has been found recently that many samples in the Debatepedia dataset are not actually query-oriented Laskar et al. (2022). Moreover, it was also observed that fine-tuning pre-trained neural models on this dataset without incorporating the query could achieve performance almost similar to that of query-focused summarization models Laskar et al. (2022). Thus, there remains a scarcity of datasets specifically tailored for creating condensed summaries of argumentative texts that are relevant to a single query.
To address the above issue, in this work, we seek to clean the Debatepedia dataset to make it usable for query-focused single document abstractive summarization of argumentative text. For that purpose, we propose a novel methodology that leverages the text generation capability of prompt-based language models Liu et al. (2023); Ouyang et al. (2022); Brown et al. (2020). To this end, we utilize ChatGPT, a powerful generative Large Language Model (LLM) developed by OpenAI5 which has received a lot of attention recently due to its impressive
language generation capability - ensuring high fluency, coherence, and grammatical correctness in its generated texts [14]. Generative LLMs like ChatGPT [15, 16, 17, 18, 19, 20, 21, 22, 23] that leverage the prompt-based learning mechanism have obtained impressive performance in few-shot and zero-shot learning scenarios, inspiring researchers to also explore some new applications of these models, such as data annotation [20, 21]. In this paper, we also harness the text generation power of ChatGPT to fix the queries in the Debatepedia dataset to construct a cleaned version of the dataset that could be used for query-focused abstractive summarization of argumentative text. With extensive experiments, we validate that our proposed cleaned version of the Debatepedia dataset overcomes the limitations of the existing noisy version of this dataset.
## 3 Debatepedia Dataset Limitations
Debatepedia is a publicly available dataset of arguments and counter-arguments on debate topics, proposed by Nema et al. [16]. It contains 13,573 query-document-summary pairs. The average number of words per document, summary, and query in the Debatepedia dataset is 66.4, 11.16, and 9.97, respectively. The dataset covers a wide range of topics, such as politics, sports, and technology, and has been extensively used in recent years to build query-based summarization models for argumentative text.
However, the quality of Debatepedia as a dataset for query-based summarization has lots of limitations (see Table 1 for some examples), as it has been found recently that many queries in this dataset are not relevant to the document [15]. Based on a randomly sampled 100 instances, it has been found in a recent study [15] that:
* 52% of the queries in this dataset have no relevance to the documents or the summaries, as demonstrated in Table 1.
* 70% of the queries are close-ended (i.e., Yes/No type) questions (see Example 4 in Table 1).
* Though many queries in this dataset are relevant to the documents, the summaries are more generic due to the short document length.
_Example 1: Query having no relevance with the document and the summary._

**Document:** Business schools might improve your quantitative presentation and communication skills. It might but get you thinking about ethical and strategy. But two years of case studies aren't go to turn you into a leader if you weren't died one. There's no learning charisma persausiveness elegance or gut instinct.

**Reference Summary:** PhD will not improve cm factors of leaders.

_Example 2: One word summary having no relevance with the query or document._

**Query:** Education : do child benefit from watching tv?

**Document:** by watching news child can learn about geography politics advances in science – everything simply and later explained. furthermore child learn about real-life situation that happens on everyday basis which will benefit them in the future.

**Reference Summary:** News.

_Example 3: The length of the summary is longer than the document with the query being irrelevant._

**Query:** activists : where do the keys activists and organizations stand?

**Document:** see an analyses of the article...

**Reference Summary:** philip martin of berkeley davis and michael teitelbaum the mirage of mexican guest workers now/dec # foreign affairs.

_Example 4: More of a close-ended question._

**Query:** friendships : does twitter harms relationships?

**Document:** twitter helps those stay in touches no matter how far they may be from each other.

**Reference Summary:** long-distance friendships.

Table 1: Some examples demonstrating the limitations in the Debatepedia dataset.
Note that the documents in this dataset contain only 66.4 words on average.
In addition, many instances in this dataset contain only a one-word summary (see Example 2 in Table 1) for a given query, appearing both in the training and evaluation sets, which may also help the model memorize such words for similar queries during the training phase. These issues may lead to an unexpected increase in the ROUGE score when the model starts reproducing those words in the summary during the evaluation phase. Furthermore, we also find some instances where the summary is longer than the document, which usually happens for short documents (see Example 3 in Table 1).
To address these limitations, we propose a methodology for cleaning the Debatepedia dataset by leveraging ChatGPT as the data annotator to regenerate the queries. In the following, we describe our methodology.
## 4 Our Annotation Methodology
The recently released ChatGPT model has demonstrated impressive performance in solving a wide range of problems, from generating fluent and coherent summaries of documents to solving mathematical problems, along with challenging information retrieval tasks such as open-domain question answering, neural machine translation, and writing programming solutions Qin et al. (2023); Guo et al. (2023). In this work, we leverage ChatGPT as the annotator to fix the issues in the Debatepedia dataset so that it can be used for query-focused abstractive summarization. We denote our ChatGPT annotated cleaned dataset for **Q**uery Focused Abstractive **Summ**arization based on **D**ebatepedia as the **CQSumDP** dataset.
As demonstrated in the previous section, the Debatepedia dataset has several limitations, containing noisy and irrelevant content (e.g., queries/documents/summaries). To address these issues, we first clean the Debatepedia dataset to sample relevant instances from it. Our objective here is to ensure that the selected samples are more relevant for query-focused summarization. Afterward, the sampled instances are used for data annotation using ChatGPT. Below we first describe our data sampling technique, followed by our approach of using ChatGPT as the annotator to construct the CQSumDP dataset.
### Cleaned Data Sampling
Our data sampling strategy to use a cleaned version of the dataset for query-focused abstractive summarization is as follows:
* We set a minimum threshold of 75 words for the length of each selected document. This is because for smaller documents, the reference summaries are mainly overall generic summaries of the document, where the additional query does not help. By excluding these smaller documents with a threshold, we can ensure that the reference summaries are more query-focused. Furthermore, setting the threshold at 75 words also helps us address the noisy cases in the Debatepedia dataset where the reference summary is longer than the document.
* As demonstrated in Section 3, many summaries in the Debatepedia dataset are very short (some consist of a single word), so we exclude instances where the summary is shorter than 5 words. This helps us clean the dataset in a way such that instead of having a dataset with very short answers, we propose a dataset consisting of concise but coherent and fluent summaries. This keeps the dataset more relevant to summarization instead of close-ended question answering. A minimal sketch of these two filtering rules is shown below.
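The two rules above amount to a simple length filter. The field names and toy records in the following sketch are illustrative assumptions, not the actual data format:

```python
# Minimal sketch of the two cleaning rules above; field names are illustrative.
MIN_DOC_WORDS = 75      # rule 1: exclude documents shorter than 75 words
MIN_SUMMARY_WORDS = 5   # rule 2: exclude summaries shorter than 5 words

def keep_sample(sample: dict) -> bool:
    doc_len = len(sample["document"].split())
    sum_len = len(sample["summary"].split())
    return doc_len >= MIN_DOC_WORDS and sum_len >= MIN_SUMMARY_WORDS

# toy records standing in for the raw Debatepedia instances
raw_debatepedia = [
    {"query": "q1", "document": "word " * 100, "summary": "a concise five word summary"},
    {"query": "q2", "document": "too short", "summary": "news"},
]
cleaned = [s for s in raw_debatepedia if keep_sample(s)]
print(len(cleaned))  # -> 1
```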
### Using ChatGPT for Data Annotation
As LLMs like ChatGPT have the impressive capability to solve tasks based on the given prompt Qin et al. (2023); Guo et al. (2023), we manually construct a prompting template that asks ChatGPT to generate the query for a given document-summary pair. Prompt learning is a technique where a machine learning model is trained to complete a task based on the prompted input Liu et al. (2023); Sanh et al. (2021). This approach involves presenting the model with a prompt (i.e., a partial input), and the model is then tasked with generating the complete output. Prompt learning has become increasingly popular due to its ability to generate highly accurate results with very little data. It is also highly flexible, as it allows the user to modify the prompt
to achieve the desired result. We show an example prompt in Figure 1 where ChatGPT is asked to generate a query that is relevant to the given document-summary pair.
The ChatGPT version that we used for data annotation was based on the version that was last released6 on January 30th. We choose ChatGPT over other text generation models due to its impressive capability of generating high quality responses (Qin et al., 2023; Guo et al., 2023) while being free to use (in contrast to other powerful OpenAI models that require a paid API subscription). One key reason ChatGPT generates human-like responses is that it was trained using the reinforcement learning from human feedback technique (Qin et al., 2023; Guo et al., 2023; Ouyang et al., 2022). In this technique, the model generates a response to a user's input, and then humans provide feedback on the quality and appropriateness of the response. This helps the model generate human-like responses while ensuring high accuracy, appropriateness, and fluency. For these reasons, we use ChatGPT for data annotation.
Footnote 6: [https://help.openai.com/en/articles/6825453-chatgpt-release-notes](https://help.openai.com/en/articles/6825453-chatgpt-release-notes)
Though prior research has demonstrated that many queries in the Debatepedia dataset have no relevance to the document (Laskar et al., 2022), no major issues have been reported for the summaries in the Debatepedia dataset. Thus, we use both the document and the summary as input to ChatGPT, since we have already cleaned the Debatepedia dataset by removing noisy instances where the summary is very short or exceeds the document length. While we could ask ChatGPT to generate a query followed by a query-based summary by giving only the document in the input prompt, we did not do so because ChatGPT tends to generate longer summaries (Qin et al., 2023); we therefore use both the document and the summary as input to regenerate only the queries in the Debatepedia dataset. This also allows us to use the original gold reference summaries in our proposed CQSumDP dataset without any modification.
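For illustration, here is a minimal sketch of how such a per-sample prompt string could be assembled. The template wording below is an assumption, since the exact Figure 1 prompt is not reproduced in the text, and the paper used the free ChatGPT interface rather than an API:

```python
# Hypothetical prompt template: the exact wording of the Figure 1 prompt is an
# assumption made here for illustration, not the paper's verbatim prompt.
PROMPT_TEMPLATE = (
    "Given the following document and its summary, generate a short query "
    "that is relevant to both.\n\n"
    "Document: {document}\n\nSummary: {summary}\n\nQuery:"
)

def build_prompt(document: str, summary: str) -> str:
    return PROMPT_TEMPLATE.format(document=document, summary=summary)

# toy example (illustrative strings)
print(build_prompt("twitter helps those stay in touch no matter how far...",
                   "long-distance friendships"))
```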
A total of 5914 samples were annotated using ChatGPT. After the data annotation is completed, we create the training, validation, and test sets based on the split provided by Nema et al. (2017) for the original version of the Debatepedia dataset7. As we construct a cleaned version of the dataset by excluding noisy instances, the number of samples in each split in our cleaned version of the dataset is smaller than in the original one.
| Split | Total Number of Samples | Avg. Query Length | Avg. Document Length | Avg. Summary Length |
| --- | --- | --- | --- | --- |
| Training | 5212 | 11.64 | 106.82 | 9.77 |
| Validation | 301 | 11.54 | 107.22 | 9.62 |
| Test | 401 | 11.90 | 104.75 | 9.77 |

Table 2: Data distribution on each split (train/valid/test) in our cleaned annotated version of Debatepedia: The CQSumDP Dataset.
Figure 1: Our Input Prompt to ChatGPT for Query Generation
The overall statistics of our cleaned, annotated version of the Debatepedia dataset (the CQSumDP dataset) are shown in Table 2.
## 5 Experimental Settings
In this section, we present our experimental settings. Below, we first describe the models we use to evaluate our ChatGPT annotated cleaned version of the Debatepedia dataset, the CQSumDP dataset, followed by our model implementation details. To keep the experimental comparisons fair, we only use the cleaned samples of both versions of the dataset (i.e., 5914 cleaned samples, with 5212, 301, and 401 instances in the training, validation, and test sets respectively, as described in Section 4). From now on, we refer to the version of the Debatepedia dataset that has the original queries but only contains our sampled 5914 instances as _Original Debatepedia_.
### Models
To evaluate the effectiveness of our ChatGPT annotated CQSumDP dataset, we fine-tune some state-of-the-art pre-trained sequence-to-sequence models (Lewis et al., 2019; Raffel et al., 2019; Zhang et al., 2019; Goodwin et al., 2020). For this purpose, we concatenate the query with the document and give it as input to these models to generate the query-focused abstractive summaries, as this approach has recently shown impressive performance in the query-focused abstractive summarization task (Laskar et al., 2022). We describe these models below:
**BART (Bidirectional and Auto-Regressive Transformer):** BART (Lewis et al., 2019) is a pre-trained sequence-to-sequence model based on the encoder-decoder architecture that was pre-trained on a large amount of diverse text data using the denoising auto-encoding technique to recover the original form of a corrupted document. The pre-training involved various objectives such as rotating the document, permuting sentences, infilling text, masking tokens, and deleting tokens. We use the pre-trained BART model since fine-tuning this model was found to be very effective on a wide range of language generation tasks, including abstractive summarization.
**T5 (Text-to-Text Transfer Transformer):** The T5 model (Raffel et al., 2019) is a transformer-based encoder-decoder model. Unlike traditional BERT-based models that classify input text into a specific category, the T5 model treats all tasks such as text classification, question answering, neural machine translation, and text summarization as a sequence-to-sequence problem
| Model | Training Dataset | Evaluation Dataset | ROUGE 1 | ROUGE 2 | ROUGE L |
| --- | --- | --- | --- | --- | --- |
| BART-Base | CQSumDP | MS-MARCO | 44.01 | 26.95 | 38.34 |
| Pegasus-Base | CQSumDP | MS-MARCO | 50.34 | 33.07 | 45.80 |
| T5-Base | CQSumDP | MS-MARCO | 48.90 | 28.66 | 43.84 |
| BART-Base | Original Debatepedia | MS-MARCO | 43.09 | 23.72 | 37.90 |
| Pegasus-Base | Original Debatepedia | MS-MARCO | 46.94 | 29.24 | 42.42 |
| T5-Base | Original Debatepedia | MS-MARCO | 47.85 | 27.89 | 42.81 |
| BART-Base | MS-MARCO | CQSumDP | 28.42 | 10.30 | 23.56 |
| BART-Base | MS-MARCO | Original Debatepedia | 23.56 | 7.38 | 20.88 |

Table 4: Domain generalization performance of different models trained on the respective versions (CQSumDP and Original) of the Debatepedia dataset and evaluated on the MS-MARCO dataset, as well as trained on MS-MARCO and evaluated on the CQSumDP and Original versions of the Debatepedia dataset.
| Model | Dataset | ROUGE 1 | ROUGE 2 | ROUGE L |
| --- | --- | --- | --- | --- |
| BART-Base | CQSumDP | 42.26 | 22.45 | 38.84 |
| Pegasus-Base | CQSumDP | 36.01 | 16.30 | 32.59 |
| T5-Base | CQSumDP | 39.95 | 21.24 | 36.79 |
| BART-Base | Original Debatepedia | 39.97 | 21.50 | 36.87 |
| Pegasus-Base | Original Debatepedia | 29.70 | 11.91 | 26.77 |
| T5-Base | Original Debatepedia | 37.68 | 18.92 | 34.49 |

Table 3: Performance of different models trained and evaluated on the respective versions of the Debatepedia dataset.
using various pre-training objectives. After pre-training, the model is fine-tuned to generate the output for a given input sequence in the required task, leading to impressive performance gain on many downstream summarization datasets.
**Pegasus (Pre-training with Extracted Gap-sentences for Abstractive Summarization):** Pegasus (Zhang et al., 2019) is a transformer-based pre-trained encoder-decoder model for abstractive summarization. Its pre-training objective involves generating summary-like text from an input document. To achieve this, the PEGASUS model first selects and masks some sentences from the input document(s). It then concatenates these selected sentences to create a pseudo-summary. The model uses different approaches to select these sentences, such as randomly selecting a certain number of sentences, selecting the first few sentences, or computing the ROUGE-1 score between each sentence and the rest of the document to choose the top-scoring sentences. This pseudo-summary is then used for self-supervised learning. By pre-training on large datasets using this approach, the model achieves impressive fine-tuning performance on downstream summarization datasets.
### Implementation
We use the HuggingFace8 (Wolf et al., 2019) library to implement the baseline models for performance evaluation. Similar to the prior work, we concatenate the query with the document to give as input to the pre-trained baselines (i.e., BART, Pegasus, T5). The pre-trained model is then fine-tuned using \(4\) NVIDIA V100 GPUs. The training batch size for BART was set to \(16\), while it was set to \(4\) for Pegasus and T5. The other hyperparameters were similar for all models, with the learning rate set to \(2e-3\) and the maximum input (i.e., the concatenated query and document) sequence length being \(150\) tokens. The minimum and maximum target (i.e., the generated summary) sequence lengths were \(5\) and \(25\), respectively. A total of \(10\) epochs were run to fine-tune the pre-trained summarization models. We computed the ROUGE (Lin, 2004) scores in terms of ROUGE-1, ROUGE-2, and ROUGE-L using the _Evaluate9_ library to compare the performance of different models on the respective test set. A condensed sketch of this setup is given below.
Footnote 8: [https://huggingface.co/](https://huggingface.co/)
Footnote 9: [https://huggingface.co/spaces/evaluate-metric/rouge](https://huggingface.co/spaces/evaluate-metric/rouge)
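As a rough illustration of this setup (assuming the HuggingFace `transformers` and `evaluate` libraries and the `facebook/bart-base` checkpoint; the example strings are made up, and the fine-tuning loop itself is omitted), the query-document concatenation, generation with the length limits quoted above, and ROUGE scoring look as follows:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import evaluate

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# illustrative query-document pair; the query is simply prepended to the document
query, document = "Does watching TV benefit children?", "by watching news child can learn ..."
inputs = tokenizer(query + " " + document, max_length=150,
                   truncation=True, return_tensors="pt")
ids = model.generate(**inputs, min_length=5, max_length=25)
prediction = tokenizer.decode(ids[0], skip_special_tokens=True)

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=[prediction], references=["News."]))
```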
## 6 Results & Discussions
We conduct a series of experiments to evaluate the performance of strong baseline models in our proposed cleaned annotated version of Debatepedia: the CQSumDP dataset. In this section, we present our experimental findings.
### Effectiveness of ChatGPT Generated Queries
To investigate the effectiveness of our CQSumDP dataset that leverages ChatGPT to generate the queries, we compare the performance of the BART, Pegasus, and T5 models on both the CQSumDP and the Original Debatepedia datasets (results are given in Table 3). We use the Base versions of these models from HuggingFace (Wolf et al., 2019), trained and evaluated on the respective datasets.
From Table 3, we find that all three models perform better on the CQSumDP dataset than on the Original Debatepedia dataset. This gives a strong indication that the queries generated by ChatGPT are more helpful in improving model performance. Comparing the different models, we find that BART outperforms the other two models on both datasets in all three ROUGE metrics. More specifically, on the CQSumDP dataset, BART achieves the highest ROUGE-1 (42.26), ROUGE-2 (22.45), and ROUGE-L (38.84) scores. Though BART also outperforms the other models on the Original Debatepedia dataset, with ROUGE-1, 2, and L scores of 39.97, 21.50, and 36.87, respectively, its performance there is much lower than on the CQSumDP dataset.
Our experimental results show the effectiveness of our proposed CQSumDP dataset that helps all these models to obtain better ROUGE scores than their counterparts on the Original Debatepedia dataset. The poor performance of these models on the Original Debatepedia dataset compared to the CQSumDP dataset further demonstrates the limitations in terms of query relevance in the Original Debatepedia.
### Generalization Capability
In the previous section, we found that all the baseline models fine-tuned on our CQSumDP dataset perform better than their counterparts that are fine-tuned on the Original Debatepedia dataset. In this
section, to further study the relevance of the ChatGPT generated queries in our proposed CQSumDP dataset, we evaluate the performance based on domain generalization. In this regard, we use a setting similar to that of Laskar et al. (2022d), who used the QA-NLG version of the MS-MARCO dataset Wang et al. (2018) to fine-tune their query-focused summarization model for abstractive answer generation and then evaluated it on Debatepedia. We also use the MS-MARCO dataset for our analysis, based on the following two scenarios:
* **Training: MS-MARCO, Evaluation: Debatepedia:** In this scenario, we trained the baseline models on the training set of MS-MARCO (153725 samples) and evaluated on the respective versions of the Debatepedia dataset (CQSumDP and Original Debatepedia).
* **Training: Debatepedia, Evaluation: MS-MARCO:** In this scenario, we do the opposite, as we trained the baseline models on the respective versions of Debatepedia and evaluated on the development set of MS-MARCO (12467 samples).
We show our results in Table 4 and observe that in both scenarios, (i) training on the respective versions of Debatepedia and evaluating on MS-MARCO, and (ii) training on MS-MARCO and evaluating on Debatepedia, the performance is better when the CQSumDP version of the Debatepedia dataset is used than when the Original Debatepedia is used. These findings further establish the effectiveness of using ChatGPT generated queries for the query-focused summarization task in the Debatepedia dataset.
### Performance Based on Model Scaling
So far, in our prior experiments, we utilized the Base version of each model to investigate the effectiveness of our proposed CQSumDP dataset. Though smaller models are preferred over larger models in real-world industrial scenarios where computing resources are limited Laskar et al. (2022b, a), in this section, to set a benchmark performance on our proposed CQSumDP dataset, we investigate how much performance gain we can achieve via scaling to a larger model. For this purpose, we select the best performing BART model Lewis et al. (2019) and compare its Base and Large versions on our dataset. From our experimental results given in Table 5, we observe that the ROUGE score is improved by a large margin (an average improvement of 10.37 points across the three ROUGE metrics) when the BART-Large model is used. This indicates that the utilization of the ChatGPT generated queries in the CQSumDP dataset also helps the larger summarization models to understand the query representation better, leading to an improved ROUGE score.
### Ablation Tests
It was recently found that, even without incorporating query relevance, summarization models could achieve performance on the Debatepedia dataset almost similar to what could be achieved via incorporating query relevance Laskar et al. (2022d). While analyzing the Debatepedia dataset, we observe that this happens mostly on instances where the document is quite small. As we already cleaned the Debatepedia dataset by removing such instances (e.g., short documents or summaries), in this section we conduct ablation studies to investigate the importance of query relevance in the cleaned version of the dataset. For this purpose, we remove the query relevance while giving
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| BART-Large | 51.66 | 33.96 | 49.03 |
| _without query incorporation_ | 46.45 | 29.92 | 44.11 |

Table 6: Ablation test results after removing the query relevance in the CQSumDP dataset.
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| BART-Large | 51.66 | 33.96 | 49.03 |
| BART-Base | 42.26 | 22.45 | 38.84 |

Table 5: Performance comparisons based on model size between BART-Large and BART-Base on CQSumDP.
the input to the best performing BART-Large model and investigate the effect of removing the query in our proposed dataset. We show our results in Table 6. We find from the table that the performance drops by a large margin when the query is removed from the input text, demonstrating the importance of the query in our proposed CQSumDP dataset.
### Zero-Shot Learning Performance
In recent times, the zero-shot evaluation of large pre-trained language models on text generation tasks, such as abstractive summarization, has been on the rise Brown et al. (2020); Qin et al. (2023); Guo et al. (2023). To establish a benchmark on our proposed dataset, we also conduct a zero-shot evaluation of the best performing BART-Large model on both the CQSumDP and the Original Debatepedia datasets. To do so, we combine the query with the document and give it as input to the pre-trained BART-Large model. We observe from Table 7 that in terms of zero-shot evaluation, the pre-trained BART-Large model performs better on our dataset than on the Original Debatepedia, further establishing that the ChatGPT generated queries in CQSumDP are more helpful than the original queries in the Debatepedia dataset.
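A minimal sketch of such a zero-shot run, assuming the `transformers` pipeline API with the pre-trained (not fine-tuned) `facebook/bart-large` checkpoint; the input strings are illustrative:

```python
from transformers import pipeline

# the checkpoint is used as released, with no fine-tuning on CQSumDP
summarizer = pipeline("summarization", model="facebook/bart-large")

query = "Does Twitter harm long-distance friendships?"
document = "twitter helps those stay in touch no matter how far they may be ..."
out = summarizer(query + " " + document, min_length=5, max_length=25)
print(out[0]["summary_text"])
```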
### Qualitative Analysis of the Annotated Data
In this section, we present a qualitative comparison between the queries in the Original Debatepedia dataset and the queries generated using ChatGPT in our proposed CQSumDP version of the Debatepedia dataset. For our analysis, we collect a set of 3 samples from this dataset and present them in Table 8. Comparing the queries
| Model | Evaluation Dataset | ROUGE 1 | ROUGE 2 | ROUGE L |
| --- | --- | --- | --- | --- |
| Pre-trained BART-Large | CQSumDP | 26.86 | 9.46 | 21.70 |
| Pre-trained BART-Large | Original Debatepedia | 21.60 | 6.04 | 18.52 |

Table 7: Zero-Shot Learning Performance of different models on the respective evaluation sets of Debatepedia.
[Table 8: columns #, Original Query, ChatGPT Query, Source Document, Gold Summary; the cell contents of the three examples are not reliably recoverable from this extraction.]

Table 8: Comparisons between the original queries and the ChatGPT generated queries in some samples of the Debatepedia dataset. Note that the personally identifiable information in this dataset is anonymized with the # token.
in the first example in the table, we find that the original query is just one word long and very ambiguous, while the ChatGPT generated query is more descriptive and more relevant to both the document and the summary. For the second example, we find that even though the original query is descriptive, it does not have any relevance to the generated summary, whereas the ChatGPT generated query is very relevant to both the document and the summary. For the third example, we find that the original query is related to "entrepreneurs". However, the document is about "product managers", not "entrepreneurs". Meanwhile, the ChatGPT generated query is again very relevant to the document. This analysis further demonstrates the relevance of our ChatGPT generated queries in comparison to the original queries in Debatepedia.
### Cost Efficiency Analysis
Recently, it was shown that using the GPT-3 Brown et al. (2020) model could significantly reduce the labeling cost without sacrificing much of the model's performance, making it possible to train models on larger datasets without the need for extensive manual labeling Wang et al. (2021); Ding et al. (2022). However, using GPT-3 requires its API10, which is not free. On the contrary, ChatGPT is free to use. Meanwhile, generating the queries for the Debatepedia dataset was also quite fast: we could generate the queries for about 4 samples per minute on average while using ChatGPT for data annotation. This is also considerably faster than human annotation, as a human not only needs to read the document and the summary, but also needs some time to think about what the most effective query for the given document-summary pair could be. Thus, in terms of both cost and time, it is more efficient to use ChatGPT for data annotation.
Footnote 10: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models)
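As a rough estimate based on the throughput quoted above, annotating all \(5914\) samples at about \(4\) samples per minute corresponds to \(5914/4\approx 1479\) minutes, i.e., roughly \(24.6\) hours of total annotation time.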
## 7 Conclusions and Future Work
In this paper, we presented a methodology for cleaning the Debatepedia dataset to make it suitable for query-focused abstractive summarization. We removed the noise from the dataset to construct a cleaned version while using ChatGPT's language generation capabilities to address the limitations of the queries in this dataset. Our approach results in a cleaner version of Debatepedia that is found to be very effective for training and evaluating query-focused summarization models, outperforming the original dataset in terms of query relevance and summary generation quality. This indicates that our cleaning approach is effective in improving the dataset's quality for research in summarization.
In the future, we will explore whether chain-of-thought prompting Wei et al. (2022) with ChatGPT leads to better query generation. We will also explore the performance of fine-tuning other pre-trained models on our proposed dataset Sanh et al. (2021); Muennighoff et al. (2022); Chowdhery et al. (2022). In addition, we will investigate the potential of using ChatGPT as the annotator for other tasks in Information Retrieval Lin et al. (2021); Laskar et al. (2020); Xu et al. (2020); Huang and Hu (2009); Huang et al. (2005); Liu et al. (2007) to assess its generalizability. Finally, we will release our annotated version of Debatepedia, the proposed CQSumDP dataset, to encourage further research on the query-focused abstractive summarization task.
## Acknowledgements
We would like to thank OpenAI for making ChatGPT freely available, which allowed us to use it for data annotation. This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the York Research Chairs (YRC) program.
|
2305.00536 | Non-equilibrium steady states of long-range coupled harmonic chains | We perform a numerical study of transport properties of a one-dimensional
chain with couplings decaying as an inverse power $r^{-(1+\sigma)}$ of the
intersite distance $r$ and open boundary conditions, interacting with two heat
reservoirs. Despite its simplicity, the model displays highly nontrivial
features in the strong long-range regime, $-1<\sigma<0$. At weak coupling with
the reservoirs, the energy flux departs from the predictions of perturbative
theory and displays anomalous superdiffusive scaling of the heat current with
the chain size. We trace back this behavior to the transmission spectrum of the
chain, which displays a self-similar structure with a characteristic
sigma-dependent fractal dimension. | Francesco Andreucci, Stefano Lepri, Stefano Ruffo, Andrea Trombettoni | 2023-04-30T17:24:15Z | http://arxiv.org/abs/2305.00536v3 | # Non-equilibrium steady states of long-range coupled harmonic chains
###### Abstract
We perform a numerical study of transport properties of a one-dimensional chain with couplings decaying as an inverse power \(r^{-(1+\sigma)}\) of the intersite distance \(r\) and open boundary conditions, interacting with two heat reservoirs. Despite its simplicity, the model displays highly nontrivial features in the strong long-range regime, \(-1<\sigma<0\). At weak coupling with the reservoirs, the energy flux departs from the predictions of perturbative theory and displays anomalous superdiffusive scaling of the heat current with the chain size. We trace back this behavior to the transmission spectrum of the chain, which displays a self-similar structure with a characteristic sigma-dependent fractal dimension.
## I Introduction
The main task of statistical mechanics is to relate the microscopic interactions of a given system to its macroscopic properties. One typical instance is the context of heat transfer. Suppose we apply a temperature gradient \(\nabla T\) to a system; after a while, the system will reach a stationary state characterized by the presence of a heat flux \(\mathcal{J}\). The thermal conductivity \(\kappa\) is defined in terms of these quantities as:
\[\mathcal{J}=-\kappa\nabla T. \tag{1}\]
In the case of diffusive transport, Fourier's law holds and \(\kappa\) does not depend on the size of the system \(N\) in the thermodynamic limit. This is typically the case for three-dimensional systems with short-range interactions. We remark, however, that there is currently no generic way, given the microscopic properties of a system, to know whether Fourier's law holds or not.
A case in which Fourier's law is systematically violated is that of harmonic interactions. For instance, in the harmonic crystal each phonon propagates freely and the transport is ballistic. This was shown for the first time for a chain with nearest-neighbor interactions in the seminal paper by Rieder, Lebowitz and Lieb [1]. They found that the thermal conductivity \(\kappa\) diverges as \(\kappa\propto N\), \(N\) being the number of particles in the chain. Moreover, the bulk temperature profile is flat, while Fourier's law would lead to a linear one. The non-equilibrium properties of quantum harmonic lattices have also been considered in the last decades [2; 3; 4; 5; 6].
Generally speaking, in harmonic lattices transport features are dictated by the spectral properties of both the thermal reservoirs and the system itself. For instance, in the case of disordered lattices displaying Anderson localization, the conductivity (or energy flux) depends on the localization lengths, but also on the boundary conditions [7], the spectral density of the baths at low frequencies [8], as well as on the distribution and correlations of the random disorder [9; 10]. For more general, non-homogeneous harmonic networks, the spectral properties can be accounted for by random matrix theory, which can also describe current fluctuations [11]. This is even more striking for active (non-equilibrium) baths that can lead to non-trivial transport regimes even for the ordered harmonic chain [12].
It progressively became clear that in one (and two) dimensions there are violations of Fourier's law also for nonlinear systems [13; 14; 15; 16; 17], such as the Fermi-Pasta-Ulam-Tsingou (FPUT) chain. In one dimension, these violations manifest themselves as a power-law divergence of the thermal conductivity with the system's size, \(\kappa\propto N^{\alpha}\). Transport in these cases is called anomalous. It is now clear that superdiffusive transport is a generic feature of nonlinear one- (and two-) dimensional non-integrable systems conserving momentum, energy and stretch. There is both numerical and analytical evidence that the exponent \(\alpha\) can be used to identify different universality classes [16]. For weakly non-integrable models the scenario may be more involved since quasiparticles may have very large mean-free paths [18; 19].
A further element of interest is represented by the presence of forces that are not strictly local. Indeed, much less is known about systems with long-range interactions, that is, systems in which the interparticle interaction scales with the particle distance \(r\) as \(V(r)\propto r^{-d-\sigma}\). Several physical systems are characterized by long-range interactions, both classical (gravity, pure plasmas, \(2d\) hydrodynamics) and quantum (dipolar systems and trapped atoms). As a concrete experimental instance we mention trapped ion chains, where ions are confined in periodic arrays and interact with external reservoirs [20; 21]. On a macroscale, effective long-range forces arise in tailored macroscopic systems like chains of coupled magnets [22], where the effects of fluctuations and nonlinearity may be relevant.
Long-range systems have received considerable attention in the last years; for reviews see, for example, [23] and [24]
for classical and quantum systems, respectively. For what we will be concerned with in this paper, we recall that, at equilibrium, the universality class of a one-dimensional long-range system depends on the value of \(\sigma\). Indeed, for \(-1<\sigma<0\), the critical exponents are the mean-field ones, that is, the ones obtained by putting \(\sigma=-1\). Then, there exists a non-universal value \(\sigma^{*}\) such that for \(\sigma>\sigma^{*}\) we recover the critical exponents of the short-range case \(\sigma=\infty\). Typically \(\sigma^{*}>0\). Furthermore, excitations in long-range systems can propagate at diverging velocity [25; 26] and therefore we can expect some form of superdiffusive transport. There are already several, mainly numerical, studies of heat transport in long-range interacting systems that confirm these expectations. On the classical side, heat transport was analyzed for the long-range XY model [27; 28], the FPUT chain [28; 29; 30; 31; 32], and the lattice \(\varphi^{4}\) theory [33]. In all cases Fourier's law is violated in different ways according to the value of \(\sigma\). Scaling analysis of equilibrium correlations also suggests that hydrodynamics is non-standard [30; 33]. Thus, one may interpret transport as a fractional diffusion process with energy carriers performing Lévy flights, with jump statistics controlled by the exponent \(\sigma\).
A classical harmonic long-range model with a stochastic dynamics was studied analytically in [34; 35] and the heat flux and temperature profile for a mean-field chain were computed in [36; 37]. The same system was studied in the quantum regime in [37] and a hydrodynamic approach to study transport in quantum magnets was proposed in [38]. We refer again to [24] for more references on the study of dynamics and transport in quantum long-range systems. However, in the literature there is not yet a detailed study of the plain harmonic chain with power-law interaction, and this contribution aims at filling this gap. We will show that the results are far from trivial in the strong long-range case and deserve careful analysis.
More precisely, in this paper we study numerically heat transport in a quadratic chain with a power-law interaction by coupling the first and last sites of the system to two heat baths at different temperatures. We focus on computing the heat flux in the stationary state with different approaches. In section II we introduce the model and the main methods that we will use to compute the heat flux. In sections III-V we report an analysis based on the spectral properties of the nonequilibrium Green's function and the transmission spectra, and we discuss them. Finally, we draw our conclusions in section VI.
## II Model and methods
### The long-range coupled harmonic chain
We consider a one-dimensional chain of particles with a power-law interaction:
\[H=\frac{1}{2}\sum_{i}p_{i}^{2}+\frac{1}{2}\sum_{ij}x_{i}\Phi_{ij}x_{j}, \tag{2}\]
where the interaction matrix \(\Phi\) is given by:
\[\Phi_{ij}=\left(2\delta_{ij}-\frac{1}{N_{\sigma}}\frac{1}{|i-j|^{1+\sigma}} \right),\quad N_{\sigma}=\sum_{l=1}^{N}l^{-\sigma}, \tag{3}\]
where \(N_{\sigma}\) is the usual Kac factor introduced to guarantee extensivity of the energy, chosen as site-independent. The matrix correctly reduces to the discrete Laplacian for large \(\sigma\). Note that definition (3) corresponds to open boundary conditions, which are the ones appropriate for our problem due to the presence of the baths. For long-ranged systems we expect that the role of boundary conditions can have very important consequences, even more than for short-ranged systems, and we focus on this natural choice for simplicity.
In the case of open boundary conditions the spectrum of matrix \(\Phi\) is, to the best of our knowledge, not known analytically. The usual standing waves are not eigenvectors and the matrix cannot be diagonalized exactly. Even in the continuum limit, this would correspond to solving the spectral problem for the fractional Laplacian in a finite domain, which is notoriously not straightforward [39].
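Since the open-boundary spectrum must be obtained numerically, here is a minimal sketch (Python/NumPy) of how \(\Phi\) in (3) can be built and diagonalized; the chain size and the value of \(\sigma\) are illustrative:

```python
import numpy as np

def interaction_matrix(N: int, sigma: float) -> np.ndarray:
    """Open-boundary coupling matrix Phi of Eq. (3), with Kac factor N_sigma."""
    N_sigma = np.sum(np.arange(1, N + 1, dtype=float) ** (-sigma))
    i = np.arange(N)
    dist = np.abs(i[:, None] - i[None, :]).astype(float)
    with np.errstate(divide="ignore"):
        off = 1.0 / (N_sigma * dist ** (1.0 + sigma))
    np.fill_diagonal(off, 0.0)        # remove the (divergent) self-coupling entries
    return 2.0 * np.eye(N) - off

Phi = interaction_matrix(64, sigma=-0.5)
print(np.sort(np.linalg.eigvalsh(Phi))[:3])   # lowest squared eigenfrequencies
```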
For comparison, it is useful to recall the solvable case of periodic boundary conditions, where the proper definition of \(\Phi\) is:
\[\Phi_{ij}=\left(2\delta_{ij}-\frac{1}{N_{\sigma}}\frac{1}{\min\left(|i-j|,\,N-|i-j|\right)^{1+\sigma}}\right). \tag{4}\]
Here the spectrum is known, see for example [40]. Due to translational invariance, the eigenvectors are plane waves of wavenumber \(k\). The nature of the eigenfrequency spectra strongly depends on whether \(\sigma\) is positive or negative. In the first case, the system has a proper continuum limit and for low momenta \(k\) the squared frequencies \(\omega^{2}\) of the plane waves behave as:
\[\omega_{k}^{2}\approx\begin{cases}|k|^{\sigma},&0<\sigma<2,\\ k^{2},&\sigma>2.\end{cases} \tag{5}\]
Thus, for \(\sigma>2\) one has the standard acoustic dispersion and a finite group velocity, while in the first case the group velocity diverges as \(|k|^{\frac{\sigma-2}{2}}\). This result can also be derived from the continuum limit, corresponding to a fractional wave equation in the infinite domain [41]. On the other hand, if \(\sigma<0\) the spectrum remains discrete
even in the thermodynamic limit and contains a countably infinite number of frequencies that accumulate at the band edge [40].
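For reference, the diverging group velocity quoted above follows directly from the dispersion (5): for \(0<\sigma<2\), \(\omega_{k}\approx|k|^{\sigma/2}\), so that

\[v_{g}=\frac{d\omega_{k}}{dk}\approx\frac{\sigma}{2}\,|k|^{\frac{\sigma}{2}-1}=\frac{\sigma}{2}\,|k|^{\frac{\sigma-2}{2}},\]

which indeed diverges as \(k\to 0\) for \(\sigma<2\), while for \(\sigma>2\) one has \(\omega_{k}\approx|k|\) and a finite \(v_{g}\).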
To simulate the non-equilibrium steady state, we follow the usual procedure and connect the first and last sites of the system to two Langevin heat baths at temperatures \(T_{L}\) and \(T_{R}\), respectively. The coupling with the baths introduces both noise and dissipation in the dynamics of the system. The resulting equations of motion are:
\[\ddot{x}_{i}=-\sum_{j}\Phi_{ij}x_{j}+\delta_{i1}\left(\xi_{L}-\lambda\dot{x}_{ i}\right)+\delta_{iN}\left(\xi_{R}-\lambda\dot{x}_{i}\right), \tag{6}\]
where the \(\xi\)'s are Gaussian noises that satisfy the fluctuation-dissipation relation:
\[\left\langle\xi_{a}(t)\xi_{a}(t^{\prime})\right\rangle=2T_{a}\lambda\delta(t-t ^{\prime}),\quad a=L,R. \tag{7}\]
After a transient, the system reaches a stationary state: we are interested in the heat flux and the temperature profile of the chain in this state. To compute these quantities, we will employ three different methods.
### RLL approach
The first method was introduced in this context a long time ago in [1]. It consists in solving the many-body Fokker-Planck equation related to (6) (in the following we will refer to this method as the RLL method). In particular, defining the vector \(y=(x_{1},...x_{N},p_{1},...p_{N})\), and denoting by \(P(y,t)\) its probability at time \(t\), the aforementioned equation reads:
\[\frac{\partial P(y,t)}{\partial t}=A_{ij}\frac{\partial}{\partial y_{i}}(y_{j }P)+\frac{1}{2}D_{ij}\frac{\partial^{2}P}{\partial y_{i}\partial y_{j}}, \tag{8}\]
where the drift and diffusion matrices are
\[A=\begin{pmatrix}\mathbb{O}&-\mathbb{I}\\ -\Phi&\lambda\mathcal{R}\end{pmatrix},\quad D=\begin{pmatrix}\mathbb{O}& \mathbb{O}\\ \mathbb{O}&2k_{B}\lambda T(\mathcal{R}+\eta\mathcal{S})\end{pmatrix}, \tag{9}\]
where
\[\begin{cases}T=\frac{T_{L}+T_{R}}{2},\\ \eta=\frac{T_{L}-T_{R}}{T}\end{cases},\quad\mathcal{R}_{ij}=\delta_{ij}( \delta_{i1}+\delta_{iN}), \tag{10}\] \[\mathcal{S}_{ij}=\delta_{ij}(\delta_{i1}-\delta_{iN}). \tag{11}\]
The solution of equation (8) is a multivariate Gaussian whose covariance matrix is given by the matrix of correlations among the canonical coordinates:
\[P(y,t)\propto\exp\left\{-\frac{1}{2}C_{ij}^{-1}y_{i}y_{j}\right\}\!,\quad C= \begin{pmatrix}\left\langle x_{i}x_{j}\right\rangle&\left\langle x_{i}p_{j} \right\rangle\\ \left\langle p_{i}x_{j}\right\rangle&\left\langle p_{i}p_{j}\right\rangle \end{pmatrix}. \tag{12}\]
By plugging (12) in the Fokker-Planck equation (8) we get:
\[\partial_{t}C=D-AC-CA^{T}. \tag{13}\]
Furthermore, in the stationary state \(\partial_{t}C=0\), so we get the so-called (continuous) Lyapunov equation:
\[AC+CA^{T}=D, \tag{14}\]
which has to be solved numerically. Knowing the various correlators, we can then express the heat flux in the stationary state as proportional to the difference between the temperature of the left bath and the temperature of the first site:
\[\mathcal{J}=\lambda\left(T_{L}-T_{1}\right),\quad T_{i}=\frac{1}{2}\left\langle p _{i}^{2}\right\rangle. \tag{15}\]
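The whole procedure is compactly illustrated by the following Python sketch (the function name is ours), which relies on the Bartels-Stewart solver shipped with SciPy, as also noted in the Comments subsection below, to solve Eq. (14) and then applies Eq. (15); we set \(k_{B}=1\).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def rll_flux(Phi, T_L, T_R, lam=1.0):
    """Stationary heat flux from the RLL method, Eqs. (9)-(15), k_B = 1."""
    N = Phi.shape[0]
    R = np.zeros((N, N)); R[0, 0] = R[-1, -1] = 1.0        # Eq. (10)
    S = np.zeros((N, N)); S[0, 0], S[-1, -1] = 1.0, -1.0   # Eq. (11)
    T = 0.5 * (T_L + T_R)
    eta = (T_L - T_R) / T
    Z, I = np.zeros((N, N)), np.eye(N)
    A = np.block([[Z, -I], [Phi, lam * R]])                # drift matrix, Eq. (9)
    D = np.block([[Z, Z], [Z, 2.0 * lam * T * (R + eta * S)]])
    C = solve_continuous_lyapunov(A, D)                    # AC + CA^T = D, Eq. (14)
    T_1 = 0.5 * C[N, N]                                    # T_1 = <p_1^2>/2, Eq. (15)
    return lam * (T_L - T_1)                               # Eq. (15)
```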
### Nonequilibrium Green's function
The second method consists in writing the exact solution to (6) in terms of the Green's function \(G(\omega)\), which is possible due to the linearity of the equations. The details of this method are explained in refs. [42; 3; 14]. Since we are interested in the stationary state, we work directly in frequency space:
\[\tilde{x}_{l}(\omega)=\sum_{n}G_{ln}(\omega)(\tilde{\xi}_{L,n}(\omega)+\tilde{\xi}_{R,n}(\omega)), \tag{16}\] \[G(\omega)=\left(-\omega^{2}\mathbb{I}+\Phi+i\lambda\omega\mathcal{R}\right)^{-1}, \tag{17}\]
where the tilde indicates the Fourier transform and \(\mathcal{R}\) is the matrix defined in Eqs. (9) and (10). As explained in [14], we can express the heat flux in the stationary state as:
\[\mathcal{J}=\frac{2\Delta T\lambda^{2}}{\pi}\int_{0}^{\infty}d\omega\,\omega^{ 2}|G_{1N}(\omega)|^{2}. \tag{18}\]
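A direct, if naive, Python implementation of Eqs. (17)-(18) reads as follows (the function name and the uniform frequency grid are our choices for illustration only; as discussed in the Comments subsection, the narrow resonances near the band edge call for a much finer, e.g. logarithmic, sampling):

```python
import numpy as np

def green_flux(Phi, dT, lam=1.0, w_max=3.0, n_w=20000):
    """Heat flux from Eqs. (17)-(18) by brute-force quadrature (sketch)."""
    N = Phi.shape[0]
    R = np.zeros((N, N)); R[0, 0] = R[-1, -1] = 1.0
    w = np.linspace(1e-6, w_max, n_w)
    integrand = np.empty_like(w)
    for i, wi in enumerate(w):
        G = np.linalg.inv(-wi**2 * np.eye(N) + Phi + 1j * lam * wi * R)
        integrand[i] = wi**2 * np.abs(G[0, -1])**2   # omega^2 |G_1N|^2
    return 2.0 * dT * lam**2 / np.pi * np.trapz(integrand, w)
```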
### Generalized eigenvalue method
There is in the literature another approach related to the Green's function method, called the _generalized eigenvalue method_, which we briefly outline below (for a more detailed explanation see [43; 44; 45]). Let \(G^{L}(s)\) be the Green's function defined in Laplace space:
\[G^{L}(s)=\left(s^{2}\mathbb{I}+\Phi+\lambda s\mathcal{R}\right)^{-1}, \tag{19}\]
and introduce the \(2N\) complex numbers \(\{s_{a}\}_{a=1}^{2N}\) and the \(2N\) vectors \(\{\mathbf{r}_{a}\}_{a=1}^{2N}\) as defined by the following quadratic eigenvalue problem:
\[\left[G^{L}(s_{a})\right]^{-1}\mathbf{r}_{a}=\left(s_{a}^{2}\mathbb{I}+\Phi+\lambda s_{a}\mathcal{R}\right)\mathbf{r}_{a}=0. \tag{20}\]
Then, the Green's function (19) can be written as [45]:
\[G^{L}(s)=\sum_{a=1}^{2N}\frac{s_{a}}{s-s_{a}}\mathbf{r}_{a}\mathbf{r}_{a}^{\dagger}. \tag{21}\]
Note that the \(s_{a}\) come in complex conjugate pairs. We now recall that we can obtain the Green's function in frequency space via the Wick rotation \(s=i\omega\), namely \(G(\omega)=G^{L}(i\omega)\). Then we can compute the integral in (18) by contour integration, finding [43]:
\[\mathcal{J}=2\Delta T\lambda^{2}\sum_{a,b=1}^{2N}\frac{s_{a}^{3}s_{b}}{s_{a}+s_{b }}r_{a,1}r_{a,N}r_{b,N}r_{b,1}. \tag{22}\]
Formula (22) gives yet another way of computing the heat flux and extracting the scaling exponents.
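In practice, the poles can be obtained with standard dense linear algebra. Writing \(\mathbf{v}=s\,\mathbf{r}\), the quadratic problem (20) is equivalent to an ordinary eigenvalue problem for a \(2N\times 2N\) companion matrix, as anticipated above. A minimal Python sketch (the function name is ours; evaluating Eq. (22) additionally requires the normalization of the \(\mathbf{r}_{a}\) prescribed in [45], which we do not reproduce here) is:

```python
import numpy as np

def green_poles(Phi, lam=1.0):
    """Poles s_a of G^L(s) via linearization of Eq. (20).

    (s^2 I + Phi + lam*s*R) r = 0 with v = s r becomes
    s [r; v] = [[0, I], [-Phi, -lam*R]] [r; v].
    """
    N = Phi.shape[0]
    R = np.zeros((N, N)); R[0, 0] = R[-1, -1] = 1.0
    Z, I = np.zeros((N, N)), np.eye(N)
    M = np.block([[Z, I], [-Phi, -lam * R]])
    s, vecs = np.linalg.eig(M)
    return s, vecs[:N, :]   # poles and their r-components

# Peak positions and widths (cf. Fig. 5 and Section V):
# omega_k ~ |Im(s_a)|,  Delta_k ~ |Re(s_a)|.
```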
### Comments
Before proceeding, let us comment on the numerical issues connected with the above approaches. The numerical implementation of the RLL method is rather straightforward, resorting to the numerical routines available to solve the Lyapunov equation based on the Bartels-Stewart algorithm, as implemented for instance in the SciPy library [46]. Indeed, one can easily reach sizes of \(N\sim 10^{3}\). Some convergence issues may arise in the case of strong degeneracies [37]. The numerical implementation of the Green's function method can be more involved than that of the RLL method. Indeed, we need to numerically invert the matrix in the definition of the Green's function (17) in the range of \(\omega\) where the transmission is non-vanishing in order to be able to compute the integral in (18). Furthermore, the sampling over \(\omega\) has to be fine enough to ensure accuracy, especially if the transmission coefficient oscillates rapidly. This difficulty does occur in our model, as will become clear in what follows. In practice, it is difficult to study lattices larger than \(N\sim 10^{2}\) using this method. The generalized eigenvalue method has the advantage of reducing the problem to the calculation of the eigenvalues and eigenvectors of a \(2N\times 2N\) matrix [44], which can be done by standard linear algebra routines; the main limitations are memory storage and the accuracy of very small eigenvalues, while the sampling problem is avoided altogether.
## III Heat flux
In the short-range case, \(\sigma=\infty\), two of the methods outlined above have been used to obtain exact analytical results for the heat flux in the thermodynamic limit [1; 14]. This is possible because the matrix of the interactions \(\Phi\) reduces to the discrete Laplacian, which is a tridiagonal matrix. In our case the matrix \(\Phi\) is dense, and we are unable either to solve the Lyapunov equation analytically or to compute the Green's function exactly. Nonetheless, it is possible to obtain a certain amount of information about the heat flux numerically.
### Small coupling
If the coupling \(\lambda\) with the baths is small, a perturbative calculation of the steady-state current is possible in terms of the eigenvalues and eigenvectors of the isolated harmonic chain. This approach yields the so-called Matsuda-Ishii formula, whereby \(\mathcal{J}\approx\mathcal{J}_{MI}\) to the leading order in the coupling constant [47; 13], with \(\mathcal{J}_{MI}\) given by
\[\mathcal{J}_{MI}=\lambda\Delta T\sum_{k}\frac{\psi_{k,1}^{2}\psi_{k,N}^{2}}{ \psi_{k,1}^{2}+\psi_{k,N}^{2}} \tag{23}\]
where \(\Delta T=T_{L}-T_{R}\) and \(\psi_{k,n}\) denotes the \(n\)-th component of the \(k\)-th eigenvector of the matrix \(\Phi\) defined in (3). For the model we consider here (which is homogeneous and mirror-symmetric) the above expression simplifies to
\[\mathcal{J}_{MI}=\frac{\lambda\Delta T}{2} \tag{24}\]
which expresses the fact that the chain is a ballistic conductor.
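Formula (23) only requires diagonalizing \(\Phi\); a minimal Python sketch (the function name is ours) is:

```python
import numpy as np

def matsuda_ishii_flux(Phi, dT, lam=1.0):
    """Weak-coupling heat flux, Eq. (23)."""
    _, psi = np.linalg.eigh(Phi)     # columns are the eigenvectors of Phi
    a = psi[0, :] ** 2               # psi_{k,1}^2
    b = psi[-1, :] ** 2              # psi_{k,N}^2
    return lam * dT * np.sum(a * b / (a + b))
```

By eigenvector orthonormality, for a mirror-symmetric chain \(a=b\) and the sum equals \(1/2\), recovering Eq. (24).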
Typically, in the short-range case \(\sigma\to\infty\), this result applies for \(\lambda\ll\lambda_{0}\approx\mathcal{O}(1)\). In our long-range case, however, the situation is more complicated. In Fig. 1, we compare formula (24) with the numerical solution of the Lyapunov equation. As we can see, (24) holds for \(\lambda\) smaller than a certain threshold \(\lambda_{0}(\sigma,N)\) that depends both on \(N\) and on \(\sigma\). More specifically, \(\lambda_{0}\) decreases with \(\sigma\) and with \(N\). On the other hand, for \(\sigma>0\) the perturbative approximation holds well in the considered range.
To have some insight into these deviations we may perform some further checks. Usually, the perturbative approach is justified by assuming that the separation of the unperturbed normal mode frequencies is larger than the typical dissipation caused by the coupling with the baths [43]. This assumption can actually be checked by examining the poles \(s_{a}\). In particular, we compare the spacings between the imaginary parts of consecutive poles, \(Im(s_{a+1}-s_{a})\), with the real parts \(Re(s_{a})\). As we can see from Fig. 2, the former is always much larger than the latter; therefore, the assumption is justified. This suggests that the observed deviations from the Matsuda-Ishii formula may have a different origin.

Figure 1: Plots of the ratio between the heat flux \(\mathcal{J}\), computed numerically with the RLL method, and the Matsuda-Ishii heat flux (24) versus the system size \(N\), for several values of \(\sigma\) and \(\lambda\) in the weak coupling regime.
### Strong coupling
We now want to understand how the flux scales with the system size \(N\) for not too weak coupling \(\lambda\). In order to do so, we computed the heat flux using the RLL method for several values of \(N\) and \(\sigma\) at \(\lambda=1\) (and we will set \(\lambda=1\) for the rest of the paper). As shown in Fig. 3, the data can be fitted with a power law \(\mathcal{J}\propto N^{-\gamma}\).
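Schematically, the exponent \(\gamma\) is extracted by a linear fit in log-log scale; the snippet below reuses the sketches above, with the periodic \(\Phi\) of Eq. (4) standing in for the actual matrix of Eq. (3), and with arbitrarily chosen bath temperatures:

```python
import numpy as np

# Sketch of the fit J ~ N^(-gamma), reusing phi_periodic and rll_flux above.
sizes = np.array([50, 100, 200, 400, 800])
flux = np.array([rll_flux(phi_periodic(int(n), sigma=-0.5), T_L=1.25, T_R=0.75)
                 for n in sizes])
gamma = -np.polyfit(np.log(sizes), np.log(flux), 1)[0]
```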
Although the direct computation of the Green's function is numerically cumbersome, we can easily compute its poles, compute the heat flux according to (22) and fit a power law as we did before. In panel \(b)\) of Fig. 4 we report both the exponents fitted with the generalized eigenvalues method and with the RLL method. As we can see, they are qualitatively in agreement.
The results of fits using the two methods are reported in Fig. 4. We can identify three regions: the region close to the mean-field case \(\sigma=-1\) and the one close to the short-range case \(\sigma>1\), where finite-size effects are almost absent, and an intermediate region in which finite-size effects are quite strong. We also note that \(\gamma\) seems to be converging to the short-range value \(\gamma=0\) as \(\sigma\) goes to \(1\). Summarizing, even if we are not able to extract the exact values of the exponents, it is clear that the flux scales with some nontrivial power of the system's size \(N\).
## IV Transmission spectra
To understand the origin of the nontrivial dependence of the flux on the size, let us investigate the transmission spectrum of the chain. We begin by plotting the transmission coefficient, namely the integrand in (18), as a function of the frequency \(\omega\). In Fig. 5 we report its plot for several values of \(\sigma\). We can see that it is characterized by a rather complicated peak structure which consists of \(N-2\) peaks (as can be checked numerically).

Figure 2: Plots of the spacing between the imaginary parts of the poles of the Green's function, \(Im(s_{k+1})-Im(s_{k})\) (circles), and the real parts of the poles, \(Re(s_{k})\) (crosses), for \(\sigma=-0.5\). Different colors correspond to different system sizes: \(N=256,512,1024\) in blue, orange, green, respectively.

Figure 3: Log-log plot of the heat flux \(\mathcal{J}\) versus the system's size \(N\) for \(\lambda=1\) and different values of the long-range exponent \(\sigma\). The flux is computed using the RLL method as described in the text.

Figure 4: Plot of the scaling exponent \(\gamma\) of the flux, defined by \(\mathcal{J}\propto N^{-\gamma}\). Panel (\(a\)): exponents obtained by fitting a power law to the heat flux computed with the RLL method. To check the finite-size effects, each data set corresponds to a fit over a different length range, \(50\leq N\leq 1600\) (circles), \(500\leq N\leq 2000\) (squares), \(1500\leq N\leq 7500\) (triangles). Panel (\(b\)): comparison between the exponents obtained by the RLL method (circles) and the generalized eigenvalue method (triangles).
A main point we want to make and explore is that the structure of such resonances determines the scaling of the current. Notice that a change of sign of \(\omega\) in (17) is equivalent to the complex conjugation of \(G(\omega)\). Since the transmission coefficient depends on the square modulus of \(G(\omega)\), it is an even function of \(\omega\) and we can therefore restrict ourselves to positive frequencies. Let us denote by \(\omega_{k}\), \(k=1,2\ldots\), the locations of the peak frequencies for positive \(\omega\). The peaks accumulate at a band-edge frequency \(\omega_{B}<2\), i.e., \(\omega_{k}\rightarrow\omega_{B}\) for large \(k\). Furthermore, upon approaching \(\omega_{B}\), the width of the peaks decreases. Notice that this is the reason why it is important to finely sample the Green's function in \(\omega\), especially in the proximity of the band edge. Indeed, we used a logarithmic sampling in order to increase the number of sampling points near \(\omega_{B}\). The integrand is thus a much more complicated function of \(\omega\) with respect to the mean-field case \(\sigma=-1\)[36; 37], where only the first peak is present. It can be checked numerically that the first few peaks are Lorentzian with width \(\Delta_{k}\approx N^{-1}\), exactly like the peak in the mean-field case. The subsequent peaks are too narrow to be resolved. For positive values of \(\sigma\) the situation becomes even more complicated, as a curve emerges below the peaks, as we can see in Fig. 5 for \(\sigma=0.5\).
For the reasons outlined above, it seems more convenient to consider the cumulative function \(F(\omega)\), that is, the integral (18) performed up to frequency \(\omega\). In the rightmost panels of Fig. 5 we report the function \(F(\omega)\), rescaled by \(N^{\gamma}\), for several values of \(N\) of order \(10^{2}\) and several values of \(\sigma\); here \(\gamma\) is the exponent obtained with the RLL method for values of \(N\) of order \(10^{2}\) to \(10^{3}\). As we can see, the curves nicely collapse for \(\sigma=-0.7,-0.5\), but for higher values of \(\sigma\), such as \(\sigma=-0.3\), the collapse is not as good due to finite-size effects, as expected. Despite the lack of further quantitative progress in the computation of the exponents, the qualitative information about the peak structure will be crucial in our understanding of the model, as we will see later.
## V Poles of the Green's function
In view of the numerical difficulties encountered above, and for comparison, we also performed a study of the poles of the Green's function. These are computed through the generalized eigenvalue method described above.
The main advantage of this analysis is that we gain a new perspective on the peak structure discussed before. Indeed, the positions \(\omega_{k}\) of the peaks in Fig. 5 are given by the absolute values of the imaginary parts of the \(s_{a}\), while the absolute values of their real parts should be proportional to the widths \(\Delta_{k}\).
In particular, we consider all the peaks as Lorentzians (for simplicity, but also because all the peaks that we were able to resolve are actually very well approximated by Lorentzians), with width given by \(|Re(s_{a})|\). In this approximation, as far as scaling with the size is concerned, the heat flux can be estimated as the sum of the widths of the peaks \(\Delta_{k}(N)\):
\[\mathcal{J}(N)\approx\sum_{k=1}^{N-2}\Delta_{k}(N). \tag{25}\]
The relevant information should thus be contained in the dependence of the \(\Delta_{k}\) on \(k\) and \(N\). Physically, this is the effective damping of plane waves due to the coupling with the thermal reservoirs.
The dependence of \(\Delta_{k}\) on \(N\) is reported in Fig. 6, where we plot (parametrically) the real parts of the poles as a function of the imaginary ones, for negative and positive values of \(\sigma\), respectively. Since the resonances accumulate at the band edges, it is convenient to report the frequencies as a function of their relative distance from \(\omega_{B}\). Let us focus on the case of negative \(\sigma\) to begin with. From the leftmost panels of Fig. 6, it is seen that the poles can be grouped into two sets, each having a different dependence on \(\omega_{k}\) and \(N\). Empirically, this is accounted for by the following scaling:
\[\Delta_{k}(N)\approx\begin{cases}d_{k}/N,&k<k_{o}\\ d_{k}/N^{\delta},&k>k_{o},\end{cases} \tag{26}\]
where \(k_{o}\ll N\) and the \(d_{k}\) do not depend on \(N\). We do not have an a priori theoretical estimate of \(\delta\), but we find that there is a good collapse upon choosing \(\delta\approx 1+|\sigma|\). It is interesting to point out that the exponent \(\delta\) can be interpreted as the fractal dimension of the area below the graphs in Fig. 5. Indeed, if we increase the system's size \(N\), new peaks emerge with progressively shrinking area and, in a putative \(N\rightarrow\infty\) limit, we would have an infinite number of peaks with vanishing area.
In addition, there are a few poles whose widths do not follow this scaling and fall consistently well outside the collapsed curve. It actually turns out that there are two degenerate eigenvalues among the \(s_{a}\) that do not follow the scaling law. However, this is inconsequential, as one can check that the contribution of these eigenvalues to (22) vanishes. Heuristically, this is because, as one can check, the eigenvectors related to these eigenvalues are localized at the endpoints of the chain and therefore do not contribute to transport. This also explains why the peaks in Fig. 5 are \(N-2\) instead of \(N\). We can therefore infer the following scaling law for the heat flux (22):
\[\mathcal{J}\approx\frac{\sum_{k=1}^{k_{o}}d_{k}}{N}+\frac{\sum_{k=k_{o}}^{N} d_{k}}{N^{\delta}}\propto N^{1-\delta}. \tag{27}\]
For positive \(\sigma\), the scaling of \(\Delta_{k}\) is reported in the rightmost panels of Fig. 6: as we can see, in this case \(\Delta_{k}\approx N^{-1}\) over the entire spectrum. Therefore, the estimate of the heat flux yields
\[\mathcal{J}\approx\frac{\sum_{k=1}^{N}d_{k}}{N}\approx\mathcal{O}(1). \tag{28}\]
So the heat flux for positive \(\sigma\) behaves as the heat flux for \(\sigma=\infty\) (the nearest-neighbor case), that is, it does not scale with \(N\).
To summarize, according to approximation (25) and the numerical estimate of \(\delta\) extracted from the data, we find that the heat flux scales as:
\[\mathcal{J}\propto N^{-\tilde{\gamma}},\qquad\tilde{\gamma}\approx\begin{cases}\delta-1,\quad\sigma<0,\\ 0,\quad\sigma>0.\end{cases} \tag{29}\]
As we already mentioned (see Fig. 6), we found a good collapse of the real parts of the poles of the Green's function for \(\delta\approx 1+|\sigma|\). This yields
\[\tilde{\gamma}\approx-\sigma \tag{30}\]
for negative \(\sigma\). Admittedly, this estimate accounts only qualitatively for the behavior of the exponents as given in Fig. 4. The deviations are sizeable and, in addition, the dependence of \(\gamma\) on \(\sigma\) appears to be non-linear. While this could be due to the aforementioned finite-size effects, the discrepancy is present even for values of \(\sigma\) for which the exponent \(\gamma\) has basically converged (for example \(\sigma=-0.7,-0.5\)). Another possibility, which seems more likely, is that, while the widths of the peaks of Fig. 5 are indeed related to the real parts of the \(s_{a}\) on general grounds, they are not exactly equal to them. On the other hand, we point out that, since the \(s_{a}\) are related to the widths of the peaks, the transition in the scaling of the \(\Delta_{k}\) at \(\sigma=0\) suggests that the transition in the scaling of the heat flux between the short-range and the long-range behaviour has to occur at \(\sigma=0\).
## VI Conclusions
Heat transport in short-range linear systems has been widely studied [13]. On the contrary, the behaviour of linear oscillators with long-range power-law couplings is not yet well understood beyond the mean-field (fully-coupled) case [36; 37]. In this paper, we have made a step forward in this direction by applying three different methods [1; 6; 14] that allow one to compute numerically both the heat flux and its scaling with the system's size. All the methods give a clear scaling of the current as a power law in the system's size. This scaling interpolates between the short-range behaviour, where the current is constant in the system's size, and the mean-field behaviour, where the current is inversely proportional to the system's size. However, the fitted scaling exponents show significant finite-size effects for all three methods. The method of Ref. [1], which consists in solving a matrix equation, is straightforwardly applicable to the long-range case. The Green's function approach allows one to express the current as an integral over frequencies, which cannot be computed analytically. However, the integrand has the interesting property of showing a sequence of peaks that accumulate near the band edges of the spectrum. Further properties of these peaks can be inferred using the third method, which allows one to compute the poles of the Green's function. Indeed, the imaginary and the real parts of these poles are related to the positions and the widths of the peaks, respectively. We find a sharp transition in the scaling of the real parts of the poles at the value of the long-range coupling exponent \(\sigma=0\), corresponding to the transition between the long-range and the short-range behaviour of the system. The crucial problem is now the dependence on \(\sigma\) of the scaling exponent of the current. Assuming that all of the peaks of the integrand are well-separated Lorentzians and that their widths are exactly given by the real parts of the poles, we might conclude that the heat current scales as \(\mathcal{J}\propto N^{-|\sigma|}\) for \(-1<\sigma<0\), only in qualitative agreement with the exponent derived directly from the fit of the current, which is anyway affected, at least for small values of \(|\sigma|\), by significant finite-size effects. The disagreement between these two scaling exponents remains to be explored, even though our analysis of the scaling of the real parts of the poles of the Green's function clearly supports the presence of a transition at \(\sigma=0\) from the long-range to the short-range behaviour.

Figure 5: Panels \(a)\), \(b)\), \(c)\), \(d)\): transmission spectrum (the integrand of the heat flux expression (18)) for \(\sigma=-0.7,-0.5,-0.1,0.5\) and for a chain with \(N=100\). Only positive frequencies are reported. Panels \(e)\), \(f)\), \(g)\), \(h)\): rescaled cumulative function \(N^{\gamma}F(\omega)\), for \(N=80,100,120,140\) and \(\sigma=-0.7,-0.5,-0.1,0.5\) in panels \(e)\), \(f)\), \(g)\), \(h)\), respectively. The values of \(\gamma\) are taken from the blue points in Figure 4.
###### Acknowledgements.
We gratefully thank Celia Anteneodo and Lucianno Defaveri for useful discussions. SL and SR acknowledge partial support from project MIUR-PRIN2017 _Coarse-grained description for non-equilibrium systems and transport phenomena (CO-NEST)_ n. 201798CZL.
|
2309.15588 | Few-Shot Multi-Label Aspect Category Detection Utilizing Prototypical
Network with Sentence-Level Weighting and Label Augmentation | Multi-label aspect category detection is intended to detect multiple aspect
categories occurring in a given sentence. Since aspect category detection often
suffers from limited datasets and data sparsity, the prototypical network with
attention mechanisms has been applied for few-shot aspect category detection.
Nevertheless, most of the prototypical networks used so far calculate the
prototypes by taking the mean value of all the instances in the support set.
This seems to ignore the variations between instances in multi-label aspect
category detection. Also, several related works utilize label text information
to enhance the attention mechanism. However, the label text information is
often short and limited, and not specific enough to discern categories. In this
paper, we first introduce support set attention along with the augmented label
information to mitigate the noise at word-level for each support set instance.
Moreover, we use a sentence-level attention mechanism that gives different
weights to each instance in the support set in order to compute prototypes by
weighted averaging. Finally, the calculated prototypes are further used in
conjunction with query instances to compute query attention and thereby
eliminate noises from the query set. Experimental results on the Yelp dataset
show that our proposed method is useful and outperforms all baselines in four
different scenarios. | Zeyu Wang, Mizuho Iwaihara | 2023-09-27T11:44:04Z | http://arxiv.org/abs/2309.15588v1 | Few-Shot Multi-Label Aspect Category Detection Utilizing Prototypical Network with Sentence-Level Weighting and Label Augmentation
###### Abstract
Multi-label aspect category detection is intended to detect multiple aspect categories occurring in a given sentence. Since aspect category detection often suffers from limited datasets and data sparsity, the prototypical network with attention mechanisms has been applied for few-shot aspect category detection. Nevertheless, most of the prototypical networks used so far calculate the prototypes by taking the mean value of all the instances in the support set. This seems to ignore the variations between instances in multi-label aspect category detection. Also, several related works utilize label text information to enhance the attention mechanism. However, the label text information is often short and limited, and not specific enough to discern categories. In this paper, we first introduce support set attention along with the augmented label information to mitigate the noise at word-level for each support set instance. Moreover, we use a sentence-level attention mechanism that gives different weights to each instance in the support set in order to compute prototypes by weighted averaging. Finally, the calculated prototypes are further used in conjunction with query instances to compute query attention and thereby eliminate noises from the query set. Experimental results on the Yelp dataset show that our proposed method is useful and outperforms all baselines in four different scenarios.
Keywords: Aspect category detection, Few-shot learning, Meta-learning, Prototypical network, Label augmentation.
## 1 Introduction
Aspect category detection (ACD) [13][14] is a sub-task of aspect-based sentiment analysis (ABSA) [9]. ACD aims to categorize user reviews on products and services, such as hotels and restaurants, into a pre-defined set of aspect categories. Examples of aspect categories for hotels are location, price, and room, while those
of restaurants are food, service, interior, etc. ACD facilitates access to viewpoint information for users and provides assistance for making decisions. Since, in practical scenarios, user reviews are generally diversified and contain more than one aspect, the task of multi-label aspect category detection becomes essential. This task can also be perceived as a special case of multi-label text classification.
Few-shot learning (FSL) [4][3] enables quick adaptation to novel classes with a limited number of samples after learning from a large amount of data, being an effective solution to the issues of limited data and data sparsity. FSL problems can be dealt with by a meta-learning [7] strategy, also known as "learning to learn." In the meta-training phase, the dataset is divided into separate meta-tasks to learn the generalization capability of the model in the case of category changes. A meta-task adopts the \(N\)-way \(K\)-shot setting, as demonstrated in Table 1, which is an example of a 3-way 2-shot meta-task, meaning that there are altogether three classes (aspect categories) in the support set and each class contains two samples (sentences). The prototypical network [18] utilized in this paper follows exactly the meta-paradigm described above.
A prototypical network aims to extract a prototype for each class, obtained by averaging all the instances of that class, in order to measure distances to the instances of the query set. However, as shown in Table 1, it is evident that the number of noise aspects contained in the sentences of each support set class is different. Given that, simply averaging all instances in a class neglects the differences between sentences and treats samples from the same class equally. Related work on multi-label few-shot learning for ACD [8] merely uses an attention mechanism to denoise sentences at the word level, but denoising over words alone is not enough, since the variance of noise between sentences still exists.
Moreover, in the context of few-shot text classification tasks, several papers incorporate label text information and have obtained promising results. The work exemplified by [25] obtained a considerable boost in word-level attention using label embeddings, but it still has some deficiencies. We point out that there are many semantically similar or poorly expressed labels in the Yelp dataset we are using. For instance, the labels _food_food_meat_ and
\begin{table}
\begin{tabular}{c|l} \hline \hline
\multicolumn{2}{c}{Support Set} \\ \hline
\multirow{2}{*}{(A) experience} & (1) Perhaps we'll try one more time and hope our **experience** is better. \\
 & (2) The **experience** and **service** is very great! \\ \hline
\multirow{2}{*}{(B) drinks} & (1) It was happy hour so the **drinks** were a little \\
 & (2) Just an hour in the afternoon and only **50** cents \\ \hline
\multirow{2}{*}{(C) food} & (1) They also have rotating **dining** \\
 & (2) The **food** was good and **price** \\ \hline
\multicolumn{2}{c}{Query Set} \\ \hline
(A) and (C) & My **experience** as far as **service** and the **food** are the same. \\ \hline
(B) & **Drinks** were tasty and quick, and the \\ \hline
\end{tabular}
\end{table}
Table 1: Example of a 3-way 2-shot meta-task. The bolded parts with gray background represent the target aspects, while the square marked parts indicate the noise aspects.
_food_food_chicken_ are semantically similar in their label texts, which may lead to confusion in the classification. Furthermore, there are labels whose meanings are rather obscure or ambiguous. Take the label _drinks_alcohol_hard_ as an example: it is known that the word "_hard_" is polysemous. The word "_hard_" in this class name is related to hardness, difficulty, etc. It is obvious that "_hard_" modifies "_alcohol_", but the word order is reversed, which may confuse the classifier. In this case, augmenting the label name with a word related to the label, such as "_vodka_", can give more specific meaning to the label name and assist separation from other aspects.
For the purpose of tackling all the issues mentioned above, we propose a novel model named **S**entence-**L**evel **W**eighted prototypical network with **L**abel **A**ugmentation (Proto-SLWLA) that can well address the current multi-label few-shot ACD task. Our model mainly consists of two parts, LA and SLW. Specifically, in the LA part, we concatenate the synonyms obtained from the original label text with the label itself and incorporate them into the existing word-level attention. The augmented label words add auxiliary information to the label, making the label information more adequate. For the SLW part, we propose assigning corresponding weights to the different sentences in a class, inspired by [10], which likewise treats the samples in one class as differentiated individuals. After mitigating noise at the word level, we implement our idea by giving lower weights to sentences with more noise and higher weights to sentences with less noise, by means of a sentence-level attention mechanism. Prototypes are then obtained as weighted averages of the instances. With these two methods, the prototype can be more representative of the current class. Our experiments conducted on the Yelp_review [1] dataset show that our method Proto-SLWLA outperforms the baselines in nearly all conditions, which demonstrates its effectiveness.
The rest of this paper is organized as follows: Section 2 covers related work. Section 3 describes the proposed method of Proto-SLWLA. In Section 4, the performance evaluation of Proto-SLWLA and a comparison with baseline methods are shown. Section 5 presents concluding remarks and future work.
## 2 Related Work
#### 2.0.1 Aspect Category Detection
Previous research on ACD has concentrated on a single aspect, using both unsupervised and supervised methods. Unsupervised methods use semantic association analysis based on pointwise mutual information [19] or co-occurrence frequency [6, 17] to extract aspects. However, these approaches require large corpus resources and their performance is hardly satisfactory. Supervised methods exploit representation learning [26] or topic-attention networks [12] to identify different aspect categories. In practice, these methods have shown to be effective, yet they heavily rely on a massive amount of labeled data for each aspect to train discriminative classifiers. In addition, a review sentence often encompasses multiple aspects due to the diversity and arbitrariness of human expression, which motivates multi-label aspect category detection.
#### 2.0.2 Few-shot Learning
FSL is a paradigm to solve the problem of data scarcity. Meta-learning for solving FSL problems has been widely adopted, notably in model-based approaches, optimization-based approaches, and metric-based approaches. In our paper, we concentrate on the metric-based method, whose representative models are the matching network [22], relation network [20], prototypical network [18], and so forth. An essential element of their idea is to learn a feature mapping function with which support and query samples are projected into an embedding space, and to classify queries by learning some metric in that space.
#### 2.0.3 Multi-Label Few-Shot Learning
Compared to single-label FSL, the potential of multi-label FSL is yet to be fully explored. In the NLP domain, Proto-HATT [5] is proposed for an intent classification task, designing a meta calibrated threshold mechanism with logits adaption and kernel regression. Proto-AWATT [8] focuses on multi-label few-shot aspect category detection and is also the first work to focus on this task. It utilizes attention mechanisms to alleviate noise aspects, achieving remarkable results. However, its prototypical network assigns an equal weight to all samples, even if certain samples contain an abundance of noise and multiple aspects. This is relatively disadvantageous for a multi-label few-shot learning task. In our work, we introduce a sentence-level attention module to give different weights to different instances.
#### 2.0.4 Using Label Information for Text Classification
Label embedding is currently widely used in NLP for text classification tasks [23] to enhance generalization ability, and it is also very common in zero-shot and few-shot settings. In the context of zero-shot learning, prompt-based strategies [15][16] to match text against class names in an implicit way have been developed. For few-shot learning, [11] extracts the semantics of class names and simply appends class names to the input support set and query set sentences to guide the feature representation. A work close to our task is [25], which takes label embeddings as supplementary information in its LAS and LCL parts. In our work, we extend its LAS part by applying a label augmentation method to expand the label information.
## 3 Methodology
### Overview
A meta-task consists of a support set and a query set. We assume that in an \(N\)-way \(K\)-shot meta-task, the support set is denoted as \(S=\{(x_{1}^{n},x_{2}^{n},...,x_{K}^{n}),y^{n}\}_{n=1}^{N}\), where \(x_{k}^{n}\) represents the \(k\)-th sentence in the \(n\)-th class and \(y^{n}\) is the common aspect that all \(x^{n}\) sentences contain. The query set is denoted as \(Q=\{(x_{m},y_{m})\}_{m=1}^{M}\)
where \(x_{m}\) indicates a query instance and \(y_{m}\) is its corresponding \(N\)-bit binary label from the support classes.
Our proposed model mainly consists of three components, namely the support-set attention (word-level attention), sentence-level attention, and query-set attention modules, as illustrated in Fig. 1. Given a sentence \(x=[w_{1},w_{2},...,w_{l}]\) with length \(l\), we utilize the BERT [2] pre-trained model as the encoder and obtain an embedding matrix \(H=[\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{l}]\), where \(H\in\mathbb{R}^{d\times L}\) and \(L\) is the maximum input length of BERT.
### Word-level Attention with Label Augmentation
#### 3.2.1 Support Set Attention
Following the work of [8], we first alleviate the noise in the support set by using support set attention (word-level attention). We extract the common aspect vector \(\mathbf{v}^{n}\in\mathbb{R}^{d}\) out of the \(K\)-shot instances by mean pooling each instance over its words and then averaging over the \(K\) instances.
\[\mathbf{v}^{n}=avg(H_{1}^{n},H_{2}^{n},...,H_{K}^{n}) \tag{1}\]
In order to further remove the noise, we adopt the approach of [24], following [8], to train a dynamic attention matrix by feeding the repeated common aspect vector [21] into a linear layer. This makes it possible to learn to accommodate the common aspect and pick up on its different perspectives.
\[W^{n}=W(\mathbf{v}^{n}\otimes e_{M})+\mathbf{b}, \tag{2}\]
where \((\mathbf{v}^{n}\otimes e_{M})\in\mathbb{R}^{e_{M}\times d}\) denotes repeating \(\mathbf{v}^{n}\) for \(e_{M}\) times and \(W^{n}\in\mathbb{R}^{d\times d}\). The linear layer has parameter matrix \(W\in\mathbb{R}^{d\times e_{M}}\) and bias \(\mathbf{b}\in\mathbb{R}^{d}\). As different classes are trained, the parameters of this linear layer are constantly updated to accommodate the new classes. Then we use the common aspect vector to calculate the attention with each instance and multiply the obtained word-level weights on each sentence.

Figure 1: General architecture of our proposed model.
\[\mathbf{\beta}_{k}^{n}=\text{softmax}(\mathbf{v}^{n}\tanh(W^{n}H_{k}^{n})), \tag{3}\]
where \(n\in[1,N]\) and \(k\in[1,K]\). So far, we have obtained a preliminary word-level attention weight that enhances the focus on the target aspect, reducing the effect of noise aspects to some extent.
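A minimal PyTorch sketch of Eqs. (1)-(3) for one class follows; the function name and tensor shapes are our choices, and padding masks are omitted for brevity.

```python
import torch

def support_word_attention(H, W, b):
    """Word-level attention, Eqs. (1)-(3).

    H: (K, d, L) embeddings of the K support sentences of one class;
    W: (d, e_M), b: (d,) parameters of the linear layer in Eq. (2).
    """
    v = H.mean(dim=(0, 2))                                   # Eq. (1), (d,)
    e_M = W.shape[1]
    Wn = W @ v.unsqueeze(0).repeat(e_M, 1) + b.unsqueeze(1)  # Eq. (2), (d, d)
    scores = torch.einsum('d,kdl->kl', v,
                          torch.tanh(torch.einsum('de,kel->kdl', Wn, H)))
    return torch.softmax(scores, dim=-1)                     # Eq. (3), (K, L)
```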
#### 3.2.2 Label-Guided Attention Enhanced by Label Augmentation
As previously stated, since there are semantically similar and ambiguous labels in the dataset, we attempt to augment label texts with supplementary words to enrich the label information. In particular, in order to mine words that are relevant to the sentence as well as to the label name, we design a template whose format is: "[X]. It is about [Label], and its synonym is [MASK]." In this template, [X] represents a sentence of a given aspect category in the dataset, [Label] stands for its aspect category label, and [MASK] denotes the mask token. We then feed the embedding vector \(\mathbf{d}\in\mathbb{R}^{l}\) into the BERT pre-trained masked language model (MLM) to predict the word that should appear at the [MASK] position, as shown in Fig. 3. The MLM head will output a probability distribution which indicates the likelihood of each word \(w\) appearing at the [MASK] position over the whole vocabulary \(V\).

Figure 2: Specific framework of our proposed model. (a) Structure of the word-level attention module. (b) Structure of the sentence-level attention module.
\[p(w|\textbf{{d}})=\text{softmax}(W_{2}\sigma(W_{1}\textbf{{d}}+\textbf{{b}}^{ \prime})), \tag{4}\]
where \(W_{1}\in\mathbb{R}^{l\times l}\), \(W_{2}\in\mathbb{R}^{|\textbf{{V}}|\times l}\) and \(\textbf{{b}}^{\prime}\in\mathbb{R}^{l}\) are learnable parameters that have been pre-trained with the MLM objective of BERT, and \(\sigma(\cdot)\) is the activation function.
After we have obtained a list of candidate label-related words, we filter the predicted words of each sentence by removing stop words, punctuation, words identical to the class name, etc. The final predicted augmenting words for some class names are shown in Table 2. Then, for each category, we pool the top predicted words of all its sentences and select the \(m\) words with the highest frequency among them.
Once we have obtained these words, we append them to the original label name with an underline to form a new label. For instance, taking \(m=1\) as an example, the original label name _drinks_alcohol_hard_ is transformed into _drinks_alcohol_hard_vodka_, which nicely emphasizes the meaning of "_liquor_", thus eliminating the interference from the other meanings of "_hard_".
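For illustration, the template-based prediction of Eq. (4) can be implemented with the HuggingFace transformers library roughly as follows; the model choice and the function name are our assumptions.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def predict_label_words(sentence, label, top_n=10):
    """Top candidate augmenting words for one (sentence, label) pair."""
    text = f"{sentence}. It is about {label}, and its synonym is {tok.mask_token}."
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = mlm(**enc).logits                  # (1, seq_len, |V|), Eq. (4)
    pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
    ids = logits[0, pos].topk(top_n).indices
    return tok.convert_ids_to_tokens(ids.tolist())
```

The returned candidates are then filtered and ranked by frequency as described above.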
So far, we have accomplished the process of label augmentation. Now we need to integrate the augmented label information into the word-level attention to give more guidance on label information to the support sentences, as in [25]. Specifically, we enter the augmented label information into the BERT model to obtain its embedding, and compute the cosine similarity between the label information embedding and the sentence embedding:
\[\boldsymbol{\alpha}_{k}^{n}=\cos(\textbf{{L}}_{n},H_{k}^{n}), \tag{5}\]
where \(\textbf{{L}}_{n}\in\mathbb{R}^{d}\) is the label information embedding of class \(n\) in the support set, which is calculated by averaging over the length of the label information, and \(H_{k}^{n}\) is the word embedding matrix of the \(k\)-th sentence in the \(n\)-th class. We then combine the calculated cosine similarity \(\mathbf{\alpha}_{k}^{n}\) with the previously obtained word-level attention \(\mathbf{\beta}_{k}^{n}\) in (3). We concatenate the two vectors and enter them into a linear layer to obtain the final attention weight.

\begin{table}
\begin{tabular}{|c|c|} \hline
**Label Name** & **Predicted Words** \\ \hline food\_food & eat, delicious, dining, cooking, meal, eating, foods,... \\ \hline parking & cars, space, traffic, parking, cars, driving, bike,... \\ \hline restaurant\_entertainment\_music & song, pop, jazz, opera, melody, rock, blues, folk,... \\ \hline drinks\_alcohol\_hard & vodka, tequila, rum, gin, bitter, bourbon,... \\ \hline restaurant\_location & place, destination, locality, spot, geography,... \\ \hline entertainment\_casino & gambling, vegas, gaming, poker, casinos, game,... \\ \hline building\_hall & hallway, lobby, corridor, wall, halls, library,... \\ \hline \end{tabular}
\end{table}
Table 2: Part of the finally obtained words relevant to each label name.
\[\mathbf{\theta}_{k}^{n}=W_{g}[\mathbf{\alpha}_{k}^{n};\mathbf{\beta}_{k}^{n}]+\mathbf{b}_{g}, \tag{6}\]
where \(\mathbf{\theta}_{k}^{n}\in\mathbb{R}^{l}\), \(W_{g}\) and \(b_{g}\) are trainable parameters of the linear layer. \([\cdot\ ;\ \cdot]\) denotes the concatenation operation. We then renormalize the attention score by the softmax function to make the weight more reliable.
\[\mathbf{\tilde{\theta}}_{k}^{n}=\text{softmax}(\mathbf{\theta}_{k}^{n}) \tag{7}\]
Eventually, we assign the final word-level attention weight to each sentence in the support set.
\[\mathbf{r}_{k}^{n}=\mathbf{\tilde{\theta}}_{k}^{n}H_{k}^{n}, \tag{8}\]
where \(n\in[1,N]\) and \(k\in[1,K]\). So far, we have constructed a collection \(R^{n}=[\mathbf{r}_{1}^{n},\mathbf{r}_{2}^{n},...,\mathbf{r}_{K}^{n}]\) consisting of the denoised support set representations. The whole process of this word-level attention module is illustrated in Fig. 2(a).
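The label-guided combination of Eqs. (5)-(8) then amounts to the following sketch; the shapes are ours, with \(W_{g}\) taken of shape \((L,2L)\) so that \(\mathbf{\theta}_{k}^{n}\in\mathbb{R}^{L}\).

```python
import torch
import torch.nn.functional as F

def label_guided_representations(L_n, H, beta, W_g, b_g):
    """Denoised support representations, Eqs. (5)-(8).

    L_n: (d,) augmented-label embedding; H: (K, d, L) sentence embeddings;
    beta: (K, L) word-level weights from Eq. (3); W_g: (L, 2L), b_g: (L,).
    """
    alpha = F.cosine_similarity(L_n.view(1, -1, 1).expand_as(H), H, dim=1)  # Eq. (5)
    theta = torch.cat([alpha, beta], dim=-1) @ W_g.T + b_g                  # Eq. (6)
    theta = torch.softmax(theta, dim=-1)                                    # Eq. (7)
    return torch.einsum('kl,kdl->kd', theta, H)                             # Eq. (8)
```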
### Sentence-level Attention
In this section, we describe the calculation of sentence-level weights. The architecture is depicted in Fig. 2(b). As previously mentioned, we would like to adjust the weights of the different sentences in a class depending on the amount of estimated noise contained in each sentence. Hence, we introduce a method to compute the attention by centering on the shortest sentence in the class. As illustrated in Fig. 4, we can observe that as the length of a sentence increases, the number of aspects therein increases as well. That is, the noise in the sentence also grows. Consequently, it is reasonable to surmise that the shorter the sentence, the fewer aspects it contains, and hence the larger the possibility of it having a single aspect. Since each sentence always contains the target aspect, the chance of having noisy aspects becomes smaller than for longer sentences.

Figure 3: Using the BERT masked language model (MLM) to predict words relevant to a label name.
Thus, we introduce a mechanism that emphasizes aspects learned from the shortest sentence and applies this aspect weighting to all the sentences in the support set. We first locate the denoised support representation \(\mathbf{r}_{min}^{n}\) of the shortest sentence and then repeat it \(e_{M}\) times, which aims to learn different perspectives of the shortest sentence. Then we feed it to a linear layer to obtain the attention matrix \(W_{s}^{n}\), where \(W_{s}^{n}\in\mathbb{R}^{d\times d}\), and \(W_{s}\) and \(\mathbf{b}_{s}\) are trainable parameters.
\[W_{s}^{n}=W_{s}(\mathbf{r}_{min}^{n}\otimes e_{M})+\mathbf{b}_{s} \tag{9}\]
Similarly, we follow the preceding method and directly use the shortest sentence embedding \(\mathbf{r}_{min}^{n}\) to compute sentence-level attention over all the denoised sentence representations. In particular, \(R^{n}\) is multiplied by the attention matrix \(W_{s}^{n}\) to exploit the relationships between the shortest sentence and the other sentences from different perspectives. The weight is calculated as follows:
\[\mathbf{\gamma}^{n}=\text{softmax}(\mathbf{r}_{min}^{n}\tanh(W_{s}^{n}R^{n})), \tag{10}\]
where \(\mathbf{\gamma}^{n}\in\mathbb{R}^{K}\), and \(R^{n}\in\mathbb{R}^{d\times K}\) represents the concatenation of all the denoised representations in one class as shown in (8). By doing this, longer sentences which are dissimilar to the shortest sentence, and meanwhile contain more noise, obtain lower weights. Finally, we perform a weighted average of the representations to derive the final prototype \(\mathbf{p}^{n}\in\mathbb{R}^{d}\).
\[\mathbf{p}^{n}=\mathbf{\gamma}^{n}R^{n} \tag{11}\]
In this way, we obtain \(N\) prototypes \([\mathbf{p}^{1},\mathbf{p}^{2},...,\mathbf{p}^{N}]\) after processing all the classes in the support set.
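In code, the sentence-level weighting of Eqs. (9)-(11) for one class reads as the following sketch (names and shapes are ours):

```python
import torch

def sentence_level_prototype(R, idx_min, W_s, b_s, e_M=4):
    """Weighted prototype, Eqs. (9)-(11).

    R: (d, K) denoised representations of one class (from Eq. (8),
    stacked column-wise); idx_min: index of the shortest sentence;
    W_s: (d, e_M), b_s: (d,).
    """
    r_min = R[:, idx_min]
    Wn = W_s @ r_min.unsqueeze(0).repeat(e_M, 1) + b_s.unsqueeze(1)  # Eq. (9)
    gamma = torch.softmax(r_min @ torch.tanh(Wn @ R), dim=-1)        # Eq. (10)
    return R @ gamma                                                 # Eq. (11)
```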
Figure 4: Distribution of sentence length and number of aspects in the Yelp dataset. We randomly selected 8000 samples in the dataset and averaged the number of aspects for all sentences of the same length.
### Query Attention
Following the denoising operation on the support set, we ought to mitigate the noise in the query set as well, since the query set also contains some irrelevant aspects. To achieve this goal, we use the prototype \(\mathbf{p}^{n}\) we just obtained to compute the attention with the embedding of a query sentence \(H_{m}\in\mathbb{R}^{d\times L}\) and acquire the query representation \(\mathbf{r}_{m}\in\mathbb{R}^{d}\). What we want to accomplish is to enable the query representation to be more attentive to the prototype aspect.
\[\mathbf{\alpha}_{m}=\text{softmax}(\mathbf{p}^{n}\tanh(H_{m})),\qquad\mathbf{r}_{m}=H_{m}\mathbf{\alpha}_{m} \tag{12}\]
Up to this point, we have completed introducing all the modules of the model and finished construction of representations of a given support set and query set.
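Analogously to the support side, Eq. (12) reads in code (a sketch; names are ours):

```python
import torch

def query_representation(p, H_m):
    """Query attention, Eq. (12); p: (d,) prototype, H_m: (d, L)."""
    alpha = torch.softmax(p @ torch.tanh(H_m), dim=-1)   # word weights, (L,)
    return H_m @ alpha                                   # query representation, (d,)
```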
### Training Objective
In this paper, we use the Euclidean Distance (ED) to measure the distances between prototypes and query representations. The final prediction for a query instance is obtained from the negative distances, which we normalize with a softmax.
\[\hat{\mathbf{y}}=\text{softmax}(-\text{ED}(\mathbf{p}^{n},\mathbf{r}_{m})), \tag{13}\]
where \(n\in[0,N]\) and \(m\in[0,M]\). Lastly, we use the mean square error (MSE) loss as our final training objective.
\[L=\sum(\hat{\mathbf{y}}-\mathbf{y}_{m})^{2}, \tag{14}\]
where \(\mathbf{y}_{m}\) is the \(N\)-bit golden label for query instance \(x_{m}\). Note that since \(\hat{\mathbf{y}}\) is softmaxed, our golden label \(\mathbf{y}_{m}\) should be normalized as well. During the training process we push the predicted values to be as close as possible to the golden label.
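Putting Eqs. (13)-(14) together for a single query instance (a sketch; names are ours):

```python
import torch

def episode_loss(prototypes, r_m, y_m):
    """Prediction and MSE loss, Eqs. (13)-(14).

    prototypes: (N, d); r_m: (d,) query representation;
    y_m: (N,) normalized golden label.
    """
    dist = torch.cdist(r_m.unsqueeze(0), prototypes).squeeze(0)  # Euclidean
    y_hat = torch.softmax(-dist, dim=-1)                         # Eq. (13)
    return ((y_hat - y_m) ** 2).sum()                            # Eq. (14)
```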
## 4 Experiments
In this section, we primarily introduce the dataset used in our work, together with the baselines, evaluation metrics, and implementation details. Thereafter, we present and analyze the experimental results on our dataset in four different settings.
### Dataset
Since the Yelp dataset with review aspects used in [8] is not publicly available, we construct our dataset by combining Yelp_dataset_round8 and Yelp_review_aspect [1], which are datasets consisting of extensive user reviews. After processing the raw data into sentences and their corresponding aspects, we collected the sentences for each aspect and selected 100 of the 135 aspects, removing the other 35. The selected aspects are split without intersection into 64 aspects for training, 16 aspects for validation, and 20 aspects for testing. We randomly sample 800 meta-tasks from the 64 gathered training aspects, 600 meta-tasks from the 16 validation aspects, and 600 meta-tasks from the 20 testing aspects, following [8]. The statistics of our dataset are shown in Table 3. Note that at each epoch of training, the 800 meta-tasks are resampled.
### Baseline Models
Our method is compared with the following methods: Matching Network [22], Relation Network [20], Prototypical Network [18], Proto-AWATT w/o DT [8], and Proto-SLW. Note that we use BERT as the encoder for all baseline models for the sake of fairness.
#### 4.2.1 Matching Network [22]
It first learns an embedding mapping function, combines the support set and query set samples and enters them into a Bi-LSTM, and finally adopts the cosine similarity as the distance measure to obtain the classification result.
#### 4.2.2 Relation Network [20]
Instead of a fixed distance metric, it uses a deep neural network with multiple layers of convolution to compute the relationship between query samples and support samples.
#### 4.2.3 Prototypical Network [18]
By averaging the corresponding support samples, it computes a prototype for each class and uses the negative Euclidean distance between the query samples and the prototype for the few-shot classification task.
#### 4.2.4 Proto-AWATT [8]
It is the first approach for multi-label aspect category detection tasks. It mitigates the adverse effects caused by noisy aspects using support set and query set attention mechanisms.
#### 4.2.5 Proto-SLW
As an ablation setting, this model removes the LA part from our proposed model and only utilizes the sentence-level attention to assign different weights to the different sentences of each class in the support set.
\begin{table}
\begin{tabular}{|l|r r r|} \hline
**Dataset** & \#**cls.** & \#**inst./**cls.** & \#**inst.** \\ \hline FewAsp & 100 & 630 & 63000 \\ \hline \end{tabular}
\end{table}
Table 3: Statistics of the Yelp dataset. **#cls.** indicates the number of classes. **#inst./cls.** indicates the number of instances per class. **#inst.** indicates the total number of instances.
#### 4.2.6 Proto-SLW\(+\)LAS
We add the LAS part from [25] to our SLW model to take the label name itself as complementary information for the attention weights in the support set. Note that this case is actually equivalent to the case of \(m=0\) in Proto-SLWLA.
### Evaluation Metrics
Traditional single-label FSL tasks typically use accuracy to measure the performance of a model. In our multi-label task, we follow Proto-AWATT and choose the AUC (Area Under Curve) score, which is also used for model selection, and the macro-F1 score as evaluation metrics.
### Experimental Settings
For parameter settings, we set \(m=1\), \(d=768\), \(L=50\), \(e_{M}=4\), \(Q=5\times N\). We train our model on a GeForce RTX 3090 GPU and set the learning rate to 1e-5 and the batch size to 4 when \(N=5\) and to 2 when \(N=10\). When performing label augmentation, we randomly select 2000 sentences of each class for the subsequent operations.
Regarding the threshold value, we set \(\tau=0.3\) in all conditions. We adopt an early stopping strategy: training stops when the AUC score has not increased for 3 epochs. We then select the epoch with the best validation AUC score for testing.
### Experimental Results and Discussions
The experimental results are shown in Table 4. They demonstrate that both our word-level attention with label augmentation module and our sentence-level attention module are effective. It is worth mentioning that in the 10-way 10-shot scenario, Proto-SLWLA (\(m=1\)) improves the F1 score by 1.36% over Proto-SLW, which is a significant improvement in our experimental results.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Models} & \multicolumn{2}{c|}{5-way 5-shot} & \multicolumn{2}{c|}{5-way 10-shot} & \multicolumn{2}{c|}{10-way 5-shot} & \multicolumn{2}{c|}{10-way 10-shot} \\ \cline{2-9} & AUC & F1 & AUC & F1 & AUC & F1 & AUC & F1 \\ \hline Matching Network & 0.9025 & 66.59 & 0.9230 & 70.97 & 0.8834 & 51.54 & 0.9085 & 53.84 \\ \hline Prototypical Network & 0.9017 & 65.71 & 0.9318 & 71.55 & 0.8991 & 53.82 & 0.9063 & 55.44 \\ \hline Relation Network & 0.8463 & 54.72 & 0.8473 & 55.54 & 0.8428 & 42.92 & 0.8325 & 45.86 \\ \hline Proto-AWATT w/o DT & 0.9061 & 66.32 & 0.9319 & 71.67 & 0.8999 & 53.86 & 0.9125 & 57.75 \\ \hline Proto-SLW & 0.9116 & 67.27 & 0.9387 & 72.83 & **0.9062** & 54.74 & 0.9156 & 57.33 \\ \hline Proto-SLW+LAS (\(m=0\)) & 0.9123 & 67.94 & 0.9374 & 72.90 & 0.9037 & 54.98 & 0.9119 & 57.09 \\ \hline Proto-SLWLA (\(m=1\)) & 0.9156 & 68.11 & **0.9391\({}^{\ddagger}\)** & **73.20\({}^{\ddagger}\)** & 0.9024 & 54.79 & **0.9179\({}^{\ddagger}\)** & **58.69\({}^{\ddagger}\)** \\ \hline Proto-SLWLA (\(m=2\)) & **0.9157\({}^{\ddagger}\)** & **68.30\({}^{\ddagger}\)** & 0.9377 & 72.87 & 0.9026 & 55.18 & 0.9156 & 58.20 \\ \hline Proto-SLWLA (\(m=3\)) & 0.9117 & 67.57 & 0.9380 & 72.97 & 0.9038 & **55.63\({}^{\ddagger}\)** & 0.9154 & 57.94 \\ \hline \end{tabular}
\end{table}
Table 4: Experimental results of our model with AUC and macro-F1 (%) evaluated on FewAsp. \(m\) represents the number of words augmented for each label. The symbol \(\dagger\) indicates \(p\)-value<0.05 of the T-test comparing with Proto-AWATT, while the symbol \(\ddagger\) indicates \(p\)-value<0.05 of the T-test comparing with Proto-SLW.
When Proto-SLW is compared with Proto-AWATT, the results of Proto-SLW are better in all four scenarios, which fully illustrates the effectiveness of our sentence-level attention module. Furthermore, the improvement in the 10-shot settings is slightly larger than in the 5-shot ones. This suggests that with more shots (sentences), a short sentence with little noise is more likely to be included. However, the 10-way boost is a little less pronounced than the 5-way one, since with 10 classes short sentences have a higher chance of containing noise aspects than with 5 classes.
Compared with Proto-SLW+LAS, Proto-SLWLA performs better in all four scenarios. This indicates that our LA part is effective, and suggests that it is reliable to use label-related words to enhance the label itself. In addition, for Proto-SLWLA we evaluated three different values of \(m\), the number of words augmented for each label. We can observe that our model achieves the best results when \(m=1\) or \(2\), whereas for \(m=3\) the performance of the model decreases and is sometimes even lower than that of Proto-SLW+LAS. This demonstrates that the first and second augmenting words are highly relevant to the label, but as the number of augmenting words increases, lower-ranked words seem to cause drifts in label semantics.
## 5 Conclusion
In this paper, we proposed Proto-SLWLA, a prototypical network with sentence-level weighting and label augmentation, to tackle the multi-label few-shot aspect category detection task. Existing methods in this domain often utilize prototypical networks, but they perform denoising merely at the word level and do not focus on the variations between instances. Since the concentration of noise around the target aspects varies between instances, we introduced sentence-level attention to assign specific weights to the instances after applying word-level attention. Also, another existing approach incorporates label text information into the word-level attention module to improve performance, but the label name texts alone are not sufficient, because label names are often semantically similar or mutually ambiguous, making separation in the representation space hard. To improve separation by label names, we introduced label name augmentation, in which a template is designed and masked language model prediction is utilized to generate words related to each label name; these words are appended to the label information, which is then used to guide the word-level weights. Our experimental evaluations with the AUC and macro-F1 scores demonstrate that our design is feasible and effective, outperforming nearly all the baseline models.
2309.11922 | Cluster-based pruning techniques for audio data | Deep learning models have become widely adopted in various domains, but their
performance heavily relies on a vast amount of data. Datasets often contain a
large number of irrelevant or redundant samples, which can lead to
computational inefficiencies during the training. In this work, we introduce,
for the first time in the context of the audio domain, the k-means clustering
as a method for efficient data pruning. K-means clustering provides a way to
group similar samples together, allowing the reduction of the size of the
dataset while preserving its representative characteristics. As an example, we
perform clustering analysis on the keyword spotting (KWS) dataset. We discuss
how k-means clustering can significantly reduce the size of audio datasets
while maintaining the classification performance across neural networks (NNs)
with different architectures. We further comment on the role of scaling
analysis in identifying the optimal pruning strategies for a large number of
samples. Our studies serve as a proof-of-principle, demonstrating the potential
of data selection with distance-based clustering algorithms for the audio
domain and highlighting promising research avenues. | Boris Bergsma, Marta Brzezinska, Oleg V. Yazyev, Milos Cernak | 2023-09-21T09:33:41Z | http://arxiv.org/abs/2309.11922v1 | # Cluster-Based Pruning Techniques for Audio Data
###### Abstract
Deep learning models have become widely adopted in various domains, but their performance heavily relies on a vast amount of data. Datasets often contain a large number of irrelevant or redundant samples, which can lead to computational inefficiencies during the training. In this work, we introduce, for the first time in the context of the audio domain, the \(k\)-means clustering as a method for efficient data pruning. \(K\)-means clustering provides a way to group similar samples together, allowing the reduction of the size of the dataset while preserving its representative characteristics. As an example, we perform clustering analysis on the keyword spotting (KWS) dataset. We discuss how \(k\)-means clustering can significantly reduce the size of audio datasets while maintaining the classification performance across neural networks (NNs) with different architectures. We further comment on the role of scaling analysis in identifying the optimal pruning strategies for a large number of samples. Our studies serve as a proof-of-principle, demonstrating the potential of data selection with distance-based clustering algorithms for the audio domain and highlighting promising research avenues.
Boris Bergsma\({}^{1,*}\), Marta Brzezinska\({}^{1,*}\), Oleg V. Yazyev\({}^{1}\), Milos Cernak\({}^{2}\)
\({}^{1}\)Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL),
CH-1015 Lausanne, Switzerland
\({}^{2}\)Logitech Europe S.A., Lausanne, Switzerland
\({}^{*}\)equal contribution
**Index Terms**: \(k\)-means clustering, data pruning, keyword spotting
## 1 Introduction
With the advent of deep learning and the exponential growth in the amount of data available, the demand for efficient storage and processing has become crucial. For instance, a recently released large language model from OpenAI - GPT-4 - is reportedly about six times larger than its predecessor, with one trillion parameters. Such a vast number of trainable parameters not only slows down inference and increases the computational cost of the model, but also requires an enormous number of samples to train and generalize. Identifying the redundancies in datasets or models is therefore essential to reduce the computational demands. In particular, discarding - or pruning - the irrelevant information _before_ the training is the most general, model-independent approach to optimize NNs. Formally, pruning can be viewed as a discrete optimization problem, with the objective of recognizing the largest subset of irrelevant data from the training dataset while minimizing the change in network parameters resulting from removing the selected subset [1]. Data pruning techniques have been used so far to improve predictions for movie ratings [2] or image classification [3], but not for audio data. Within the audio domain, significant efforts have rather been devoted to model pruning using, for example, sparse architecture search and weight pruning [4, 5, 6, 7, 8]. Relative to image datasets, audio datasets are generally smaller and can benefit from optimal data selection techniques for small datasets [9, 10] or various cross-validation schemes implemented in deep learning frameworks such as Keras [11].
Here, we explore a model-agnostic pruning technique based on unsupervised clustering analysis for audio data. This approach allows us to reduce the size of training sets by exploiting similarities in the multidimensional feature space of a pre-trained large audio model. Reduced datasets can then be effectively used with significantly smaller neural networks without compromising accuracy. The goal of this paper is therefore twofold: (i) to discuss audio data pruning with \(k\)-means algorithm on a KWS dataset (Google Speech Commands Dataset V3 containing one-second samples with 36 classes [12]), and (ii) to provide guidance on extending this approach to more complex datasets and different tasks.
The paper is structured as follows: Section 2 introduces related work on data and model pruning, Section 3 outlines the proposed audio data pruning method, and Section 4 describes the experimental setup with obtained results. Finally, Section 5 concludes the paper and outlines the future work.
## 2 Related Work
### Parameter pruning in (deep) neural networks
Reducing the time and computational complexity of a neural network can be achieved by removing unnecessary weights, layers, or channels. Neural network pruning can be performed at initialization, for example, with single-shot network pruning [13], dynamically during the training [14], or iteratively
to find the optimal subnetwork, such as with the lottery ticket hypothesis [15]. Another approach towards optimal data selection is to construct simpler proxy models from the target network and investigate their performance [16].
### Score-based data pruning methods
**EL2N score** The L2 norm of the prediction error (EL2N) can be used early in the training process to determine a subset of data points that carry the most information [17]. Removing easy examples (characterized by the smallest EL2N scores) reduces the computational cost while maintaining test accuracy. This data selection approach can be further improved by combining it with Selective-Backprop [18] or distillation [19].
**Forgetting score** The forgetting score tracks the number of times a neural network misclassifies an example that it had previously classified correctly [20, 21]. Samples that are rarely forgotten have a smaller impact on training accuracy and, therefore, can be pruned. Calculating the score is done at later stages of the training process, as it requires collecting statistics during training.
**Memorization and influence estimations** Memorization is often viewed as a negative trait since it implies that a neural network cannot effectively generalize [22, 23]. The memorization score quantifies the increase in the probability of correctly predicting the label when an example is present in the training set compared to when it is absent. Examples with high memorization scores are atypical and cannot be learned from the remaining data. In addition, an influence score measures the effect of adding or removing an example from the training dataset on the test set predictions of a learned neural network. As a result, the samples with high memorization and high influence are considered relevant.
These scoring metrics have been primarily used in image classification tasks that involve large datasets and are rather model-specific. They necessitate iterative training to calculate the score, prune the data accordingly, and subsequently train the model from the beginning. For example, a Q-score metric determines if a given sample will likely be misclassified based on the data's self-supervised feature representation and subsets with strong cross-correlation [24].
## 3 Methodology
Standard validation techniques (such as \(k\)-fold cross-validation or bootstrap) often used for small datasets rely on repeated random splits of the dataset. Inspired by a computer vision method [3], we propose to perform more informative audio data subsampling, as described in the following sections.
### High-dimensional audio embeddings and clustering
In image classification, the unsupervised \(k\)-means clustering algorithm was employed in the embedding space [3]. In our work, we adopt a similar clustering technique, but given the different structure of audio data, the results may not generalize in the same way.
The proposed analysis starts with a high-dimensional audio characterization - for example, using wav2vec2 embeddings [25] - with the training data as input. The wav2vec2 model was trained to optimize a combination of two losses, namely contrastive loss and diversity loss. It is expected that the samples with similar characteristics (like the same word spoken by different persons or the same high tone of different musical instruments) will be close in feature space representation. The similarities between the samples can then be quantified using the distance-based clustering algorithm.
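A minimal sketch of this embedding step is shown below, assuming the `facebook/wav2vec2-base` checkpoint from the `transformers` library; the paper does not specify a pooling scheme, so mean-pooling the frame-level features over time is an assumption of this sketch.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Checkpoint name and mean-pooling over time are assumptions of this sketch.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def embed(waveform_16k: np.ndarray) -> np.ndarray:
    """Map one 16 kHz waveform to a single vector by averaging frame features."""
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state     # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()       # (768,)

print(embed(np.zeros(16000, dtype=np.float32)).shape)  # (768,)
```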
In the \(k\)-means clustering step, each example is represented as a point in high-dimensional space, and its proximity to a centroid is determined by the Euclidean distance metric. Typical (hereafter denoted as _simple_) samples are close to the cluster centers, whereas distinct (hereafter denoted as _hard_) examples are located away from the centroids. Therefore, we can perform simple (hard) pruning by removing the closest (farthest away) points, depending on the distance and the specified fraction of data to be kept. To illustrate the proposed data selection approach, in Fig. 1, we demonstrate the \(k\)-means clusters obtained by projecting the data onto a 2D plane with principal component analysis (PCA).
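The pruning rule can be sketched as follows with `scikit-learn`. Distances are ranked globally over all clusters here, which is one possible reading of the rule; ranking within each cluster would be an equally valid variant.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prune(X, k, frac, mode="simple", seed=0):
    """Indices kept after removing a fraction of simple (closest-to-centroid)
    or hard (farthest-from-centroid) examples."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    # Distance of every sample to the center of its own cluster.
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    order = np.argsort(dist)                     # closest (simple) first
    n_remove = int(frac * len(X))
    removed = order[:n_remove] if mode == "simple" else order[-n_remove:]
    return np.setdiff1d(np.arange(len(X)), removed)

X = np.random.default_rng(0).normal(size=(2000, 32))  # stand-in embeddings
keep = cluster_prune(X, k=15, frac=0.5, mode="hard")
print(len(keep))   # 1000 samples retained
```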
The original formulation of \(k\)-means relies on the pairwise Euclidean distance. Recently, \(k\)-means has been adapted to work in hyperbolic space, which is better suited for hierarchical data [26, 27]. Other clustering algorithms, such as \(k\)-medians (based on Manhattan distance) or \(k\)-medoids (which minimizes arbitrary distance functions), may be more appropriate for audio data. Therefore, a systematic analysis of clustering-based pruning with various distance metrics is needed.
Figure 1: Cluster-based data pruning with \(k\) = 15 on a 2D projected dataset (with PCA), where we remove 50% of (a) hard and (b) simple examples. We emphasize that our subsequent analysis solely focuses on the high-dimensional representation of the embeddings.
### Class selection and dataset imbalance
The number of classes in the \(k\)-means algorithm is a hyperparameter that is not known a priori. To estimate the optimal \(k\), we can use PCA and compute the proportion of variance explained. In our example, we found that a substantial number (over 150) of principal components are needed to account for around 80% of the variance. Importantly, performing PCA before clustering reduces the dimensionality of the data, thus accelerating the clustering analysis and data selection. As the average complexity of the \(k\)-means algorithm is \(\mathcal{O}(kNT)\), where \(N\) stands for the number of samples and \(T\) is the number of iterations, we suggest using state-of-the-art libraries (such as Faiss [28]) for large datasets. The obtained optimal number of \(k\)-means classes might differ from the number of label classes, suggesting that each class might contain additional information beyond the specific keyword it represents. We speculate that these latent features may be related to factors such as the recording equipment used, background noise, or general characteristics of the speaker (for example, gender, pitch, or speech rate). To confirm the intuition behind these results, we propose investigating the cross-correlations within the clusters with Spearman's rank correlation.
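A minimal sketch of this dimensionality analysis, using synthetic stand-in embeddings, is given below; the 80% variance threshold follows the text, while the data shapes are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for (N, D) wav2vec2 embeddings; shapes are illustrative only.
X = np.random.default_rng(0).normal(size=(1000, 768))

pca = PCA().fit(X)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cum_var, 0.80)) + 1
print(f"{n_components} components explain 80% of the variance")

# Reducing dimensionality first speeds up the O(kNT) k-means step that follows.
X_reduced = PCA(n_components=n_components).fit_transform(X)
```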
Clustering algorithms often reinforce existing class imbalances, which can lead to a degradation in NN performance. Based on the Shannon entropy, we define \(balance=-\sum_{i}^{c}p_{i}\log(p_{i})/\log(c)\), where \(c\) is the number of classes and \(p_{i}\) is the ratio of the number of elements in class \(i\) to the total number of samples in the dataset. The effect of imbalance can be countered by data augmentation. However, augmentation creates adversarial examples, which can interfere with cluster-based data pruning. To understand the interplay between augmentation and pruning, it would be necessary to examine the relationship between hard and simple examples. This can be done, for instance, by quantifying the information content of the samples (see [29, 30]).
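The balance metric translates directly into code; the sketch below implements the formula above and evaluates it on two illustrative label distributions.

```python
import numpy as np

def balance(labels) -> float:
    """Shannon-entropy balance from the text; 1.0 for perfectly uniform classes."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

print(balance(["yes"] * 50 + ["no"] * 50))   # 1.0
print(balance(["yes"] * 90 + ["no"] * 10))   # ~0.47
```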
## 4 Experiments and Results
The purpose of this section is to demonstrate the practical applications of cluster-based data selection described in the previous section.
### Data preparation and training
We prepare four training sets by gradually removing simple or hard examples from the original training set (from 10% to 40% of data points excluded). We then use these subsets to train three KWS systems of different sizes and measure their performance as we manipulate the number of training samples \(N\). This allows us to investigate the asymptotic, large \(N\) limit, as well as the small \(N\) regime, which is relevant for devices with limited memory. The audio classification is performed by extracting Mel Frequency Cepstral Coefficients (MFCCs) and passing them through a simple Convolutional Neural Network (CNN) classifier. We constructed _tiny_, _small_, and _large_ NNs, with 3.5k, 29k and 270k parameters respectively, based on the original LeNet architecture [31].
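The sketch below shows a LeNet-style classifier of this kind in PyTorch. It is only illustrative: the channel counts are not tuned to reproduce the 3.5k-parameter _tiny_ model, and the 40-coefficient MFCC front end is an assumed configuration.

```python
import torch
import torch.nn as nn
import torchaudio

class TinyKWS(nn.Module):
    """LeNet-style MFCC classifier in the spirit of the 'tiny' model; the
    channel counts here are illustrative, not the paper's exact settings."""
    def __init__(self, n_classes: int = 36):
        super().__init__()
        # 40 MFCCs at 16 kHz is an assumed front-end configuration.
        self.mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=40)
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(4, 8, 5), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(8 * 4 * 4, n_classes)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        x = self.mfcc(wav).unsqueeze(1)          # (B, 1, n_mfcc, frames)
        return self.classifier(self.features(x).flatten(1))

logits = TinyKWS()(torch.randn(2, 16000))        # two one-second clips
print(logits.shape)                              # torch.Size([2, 36])
```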
We obtained the best performance of the trained models for \(k\) = 155; we emphasize that the optimal value of \(k\) is dataset-specific. Table 1 presents the dataset imbalance for the simple and hard pruning methods at exemplary pruning fractions. Notably, there is a class within our dataset that contains only the background noise, with no keywords present. As the samples in this class have a clearly distinct structure from those with other labels, over half of them are removed by simple pruning.
### Scaling analysis
It has been observed across various domains [32, 33] that the loss should scale as a power law with dataset size, _loss_\(\sim 1/N^{\nu}\), with \(\nu<1\). While accuracy is an intuitive metric for classification tasks, it does not follow any scaling law. To quantify the asymptotic performance of an NN, we extract the scaling exponent \(\nu\) from the dependence of the loss on the number of training samples. \(\nu\) can be straightforwardly obtained from a linear regression of this dependence in a log-log scale.
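The exponent extraction amounts to a one-line fit, as the following sketch with synthetic loss values demonstrates.

```python
import numpy as np

def scaling_exponent(train_sizes, losses) -> float:
    """Fit loss ~ 1/N**nu by linear regression in log-log scale; returns nu."""
    slope, _ = np.polyfit(np.log(train_sizes), np.log(losses), deg=1)
    return -slope  # loss proportional to N**(-nu)  =>  slope = -nu

# Synthetic check: loss = 3 / N**0.4 recovers nu = 0.4.
N = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
print(scaling_exponent(N, 3.0 / N**0.4))   # ~0.4
```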
### Results
Fig. 2 shows the test loss and test accuracy against training set size for the most effective pruning strategies across the _tiny_ and _large_ architectures. The exponents \(\nu\) from all experiments are collected in Table 2. When the subset of training samples \(N\) is smaller than the size of the pruned dataset and the pruning fraction is small, it is necessary to reduce the pruned dataset further; in such cases, we randomly remove examples from the reduced dataset. We compare the results for various pruning strategies with random, unstructured pruning and with the Q-score informed-pruning baseline [24]. As our datasets are relatively small, obtaining reliable performance estimates requires repeated cross-validation. For each value of \(N\), we randomly drew a training set from the pruned datasets 100 times and averaged the scores. This additional step is needed to demonstrate the effect of pruning clearly and does not significantly increase the computational overhead.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} pruning & None & 20 \% hard & 40\% hard & 20\% simple & 40\% simple \\ \hline balance & 0.982 & 0.975 & 0.966 & 0.984 & 0.984 \\ \end{tabular}
\end{table}
Table 1: Dataset class imbalance for different pruning methods. A dataset with equally distributed samples is characterized by \(balance=1\). Simple pruning slightly improves the balance of the original dataset, while hard pruning leads to a more significant imbalance.
Our analysis reveals that the scaling behavior of the _large_ and _small_ models is similar, while the _tiny_ network behaves distinctly. Specifically, we observe that as \(N\) increases, there may be either a breakdown of the power-law scaling or a crossover between two regimes with different scaling exponents \(\nu\); we excluded these points from the exponent computations. In fact, the scaling with respect to the dataset size or the number of model parameters may be a more complicated, not necessarily monotonic, function.
## 5 Discussion
While the differences in the exponents \(\nu\) are not pronounced (see Table 2), our data selection method has a clear effect on the scaling behavior:
**Simple pruning**, where we discard typical samples (close to the centroids), results in a drop in accuracy compared to random pruning when \(N\) is small. As \(N\) increases, the accuracy becomes comparable between these two pruning strategies. At around \(N=5\cdot 10^{4}\), the accuracy for the pruned datasets surpasses that of the randomly pruned case. Additionally, we have observed that increasing the fraction of removed samples has a more noticeable impact on accuracy for the _small_ and _large_ models. For the _tiny_ model, the 40% pruning gives rise to the largest exponent. The results of the proposed simple audio data pruning differ from those of the original image data pruning method [3], where the authors claim that retaining easy examples (or, equivalently, removing hard ones) is more important for limited datasets. However, they used massive image databases such as ImageNet to confirm their analytical findings in a knowledge distillation setting.
**Hard pruning**, where we remove the most informative samples (located far from the centroids), leads to a slight overall decrease in accuracy. At the same time, the extracted scaling exponents \(\nu\) are smaller. For the _tiny_ model, the values of \(\nu\) are characterized by significant uncertainty; we verified that averaging over more training runs does not decrease the variance. Only the _small_ model exhibits a monotonic decrease in performance as we remove more samples.
Generally, we found that large fractions of simple/hard pruning do not offer advantages over random pruning when \(N\) is small. This is not unexpected, as the networks may have limited exposure to diverse training data and, therefore, not generalize well. Above \(N=5\cdot 10^{4}\), we achieve an accuracy of over 80% for almost all pruning techniques, except for hard pruning at 40% samples removed. Remarkably, some classes (such as the ones corresponding to the keywords 'on' and 'no') are strongly impacted by simple pruning while remaining almost unaffected by hard pruning. This finding suggests that a notion of similarity captured by \(k\)-means clustering may be close to human auditory perception. As the \(k\)-means appears to identify hidden patterns, clustering may also help in improving existing data labeling. The question of to what extent the results from the scaling analysis are transferable to other datasets remains open.
Our work is a first step towards informed cluster-based audio data pruning, and it would be of great interest to apply the proposed workflow to a larger KWS dataset such as SiDi [34], different audio processing tasks, and eventually with various audio embeddings. The implementation of the proposed method and experiments is open-sourced1.
Footnote 1: [https://github.com/Soris-Bergsma/Audio_pruning](https://github.com/Soris-Bergsma/Audio_pruning)
\begin{table}
\begin{tabular}{l c c c} \hline \hline Pruning method & \(\nu_{tiny}\) & \(\nu_{small}\) & \(\nu_{large}\) \\ \hline Random pruning & 0.244 & 0.396 & 0.391 \\ \hline
10 \% hard & 0.249 & 0.390 & 0.386 \\
20 \% hard & 0.240 & 0.386 & 0.389 \\
30 \% hard & 0.239 & 0.386 & 0.388 \\
40 \% hard & 0.248 & 0.377 & 0.387 \\ \hline
10 \% simple & 0.242 & 0.397 & 0.393 \\
20 \% simple & 0.250 & 0.406 & 0.401 \\
30 \% simple & 0.243 & 0.408 & 0.408 \\
**40 \% simple** & **0.260** & **0.413** & **0.421** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Scaling exponents \(\nu\) (\(\uparrow\)) obtained from \(loss=f(N)\) dependencies. Larger \(\nu\) indicates faster loss decay as \(N\) increases. The estimated exponent for the _tiny_ model has an error of \(0.01\), while for the _small_ and _large_ models, the error becomes smaller and is \(0.005\). We find that _small_ and _large_ NNs are affected similarly by pruning; _tiny_ model is characterized by a smaller \(\nu\). Simple pruning generally leads to more favorable scaling, while hard pruning may only benefit specific, fine-tuned cases.
Figure 2: Performance comparison of _tiny_ and _large_ models. For both architectures, we observe that simple pruning outperforms random pruning. This statement is valid for both loss and accuracy. Since the KWS dataset is rather limited, we add some extrapolation to show the potential of our method for larger datasets. The baseline Q-score pruning method performs as the random pruning and thus is not shown here. |
2306.00072 | Detection of collective modes in unconventional superconductors using
tunneling spectroscopy | We propose using tunneling spectroscopy with a superconducting electrode to
probe the collective modes of unconventional superconductors. The modes are
predicted to appear as peaks in dI/dV at voltages given by eV = {\omega}i/2
where {\omega}i denotes the mode frequencies. This may prove to be a powerful
tool to investigate the pairing symmetry of unconventional superconductors. The
peaks associated with the collective modes appear at fourth order in the single
particle tunneling matrix element. At the same fourth order, multiple Andreev
reflection (MAR) leads to peaks at voltage equal to the energy gaps, which, in
BCS superconductors, coincides with the expected position of the amplitude
(Higgs) mode. The peaks stemming from the collective modes of unconventional
superconductors do not suffer from this coincidence. For scanning tunneling
microscopes (STM), we estimate that the magnitude of the collective mode
contribution is smaller than the MAR contribution by the ratio of the energy
gap to the Fermi energy. Moreover, there is no access to the mode dispersion.
Conversely, for planar tunnel junctions the collective mode peak is expected to
dominate over the MAR peak, and the mode dispersion can be measured. We discuss
systems where the search for such collective modes is promising. | Patrick A. Lee, Jacob F. Steiner | 2023-05-31T18:00:11Z | http://arxiv.org/abs/2306.00072v3 | # On the detection of collective modes in unconventional superconductors using tunneling spectroscopy
###### Abstract
We propose using tunneling spectroscopy with a superconducting electrode to probe the collective modes of unconventional superconductors. The modes are predicted to appear as peaks in \(dI/dV\) at voltages given by \(eV=\omega_{i}/2\) where \(\omega_{i}\) denotes the mode frequencies. This may prove to be a powerful tool to investigate the pairing symmetry of unconventional superconductors. The peaks associated with the collective modes appear at fourth order in the single particle tunneling matrix element. At the same fourth order, multiple Andreev reflection (MAR) leads to peaks at voltage equal to the energy gaps, which, in BCS superconductors, coincides with the expected position of the amplitude (Higgs) mode. The peaks stemming from the collective modes of unconventional superconductors do not suffer from this coincidence. For scanning tunneling microscopes (STM), we estimate that the magnitude of the collective mode contribution is smaller than the MAR contribution by the ratio of the energy gap to the Fermi energy. Moreover, there is no access to the mode dispersion. Conversely, for planar tunnel junctions the collective mode peak is expected to dominate over the MAR peak, and the mode dispersion can be measured. We discuss systems where the search for such collective modes is promising.
In the past three decades, many examples of unconventional superconductors (SC) have been discovered. Many of these have multiple order parameters, either due to pairing in several disconnected Fermi surfaces, or due to pairing that is intrinsically multi-component. In the latter case, the order parameters may be members of a particular irreducible representation, prime examples being MgB\({}_{2}\)[1] and the iron based superconductors [2]. Alternatively, they are of mixed symmetry due to the breaking of lattice or time reversal symmetry. While there are numerous examples of mixed symmetry pairing, very often it is difficult to identify the precise order parameter symmetry in these materials. In an interesting recent paper, Poniatowski et al. [3] pointed out that, since these systems exhibit collective modes beyond the familiar phase and amplitude (Higgs) modes, the detection of these modes may serve as a signature of the order parameter symmetry. They investigated several examples and showed that commonly these modes lie below the quasi-particle gap \(2\Delta\) and hence form well defined excitations. Some of these collective modes are analogs of the Leggett mode [4], or of the "clapping" mode, familiar from the He\({}^{3}\) literature [5; 6]. While progress in the detection of such modes has been made using nonlinear optical spectroscopy [7], they are often charge neutral and thus evade detection using conventional tools. Motivated by this, we study the question of whether the collective modes of unconventional superconductors may be detected using tunneling spectroscopy. We investigate point contact tunneling such as scanning tunneling microscopy (STM) as well as planar tunneling and compare their respective advantages and disadvantages. We will also discuss examples where such experiments may be feasible.
The idea of using tunneling to detect pair fluctuations goes back to the seminal papers by Ferrell [10] and Scalapino [11]. They were interested in pair fluctuations above the critical temperature \(T_{c}\), and pointed out that the pair fluctuations appear in linear response to an external pairing order parameter, just like magnetization fluctuations appear as the linear susceptibility to an external magnetic field. More specifically, they considered a tunnel junction with voltage bias \(V\) between
Figure 1: Schematic drawing of the STM tunneling conductance \(dI/dV\) with a SC tip (with energy gap \(\Delta_{L}\)) showing the expected subgap features up to fourth order in the tunneling amplitude. The standard quasi-particle peak starting at \(\Delta_{R}+\Delta_{L}\) has been reduced by \(g=(h/e^{2})/R_{N}\), the dimensionless normal state conductance. Below this energy we find the multiple Andreev reflection (MAR) peaks at \(\Delta_{R}\) and \(\Delta_{L}\) which overlap the respective amplitude (Higgs) modes. Shown in red is the contribution from a collective mode for an unconventional SC on the R side at frequency \(\hbar\omega_{i}\). It consists of a peak at \(eV=\omega_{i}/2\) and a tail towards higher voltage. Its height has been multiplied by \(E_{F}/\delta\). Shown in blue is the Josephson current that has been broadened by dissipation. The lineshape is given by Eq. (4) for \(kT>E_{J}\) which is the typical situation [8] and is much narrower in the opposite limit [9]. The collective mode is the new feature discussed in this paper.
two SCs \(L\) (left) and \(R\) (right) with different \(T_{c,j}\) and energy gaps \(\Delta_{j}\), \(j\in\{L,R\}\), where the left SC is assumed to have a higher \(T_{c}\). The pair tunneling Hamiltonian is obtained by expanding the Josephson energy \(E_{J}\) of a junction with area \(A\) to linear order in \(\Delta_{R}\), which is then replaced by the pair destruction operator \(\hat{\Delta}_{R}(x)=\left|g_{0}\right|\psi_{R,\downarrow}(x)\psi_{R,\uparrow} (x)\), \(g_{0}\) being the BCS coupling. This gives
\[H_{\rm pair}=\int d^{2}x\ Ce^{-i\omega t}\ \hat{\Delta}_{R}(x)+{\rm h.c.}, \tag{1}\]
where we defined the Josephson frequency \(\omega=2eV\) (\(\hbar=1\)), the coupling strength \(C=\partial(E_{J}/A)/\partial\Delta_{R}\), and the Josephson energy \(E_{J}=(g/4\pi)\Delta_{R}K(\sqrt{1-\Delta_{R}^{2}/\Delta_{L}^{2}})\) in terms of the elliptic function \(K\). Moreover, \(g=(h/e^{2})/R_{N}\) is the dimensionless conductance, and \(R_{N}\) is the junction resistance in the normal state [12]. In the limit \(\Delta_{L}\gg\Delta_{R}\), \(E_{J}=(g/2\pi)\Delta_{R}\ln(4|\Delta_{L}/\Delta_{R}|)\) and we recover the expressions given in Ref. [11]. Standard linear response theory gives the current as
\[I_{\rm pair}(V,H)=(4e/\hbar)C^{2}A\ {\rm Im}\,\chi_{R}(\omega=2eV,q=q_{H}), \tag{2}\]
where \(\chi_{R}(\omega,q)\) is the Fourier transform of the pair susceptibility
\[\chi_{R}(x,t)=-i\langle[\hat{\Delta}_{R}(x,t),\,\hat{\Delta}_{R}(0)^{\dagger} ]\rangle\theta(t), \tag{3}\]
and \(q_{H}\) is the pair momentum induced by a magnetic field \(H\) parallel to the junction [11]. (We have assumed that \(R\) is a two dimensional SC. Otherwise, the current will depend on the thickness of \(R\) provided it is less than the coherence length, c.f. Ref. [11].)
The pair fluctuations were successfully measured very close to the transition temperature \(T_{c,R}\)[13]. In principle, there is no reason why the same arguments cannot be applied to low temperatures. In that case, the collective modes are poles in \(\chi_{R}(\omega,q)\) and should show up as peaks in the tunneling current. In fact, this has been proposed as a way to measure the Higgs mode [14]. In practice, peaks corresponding to putative collective modes have never been seen in tunneling experiments. A purpose of this paper is to explain this absence, and to point out the conditions under which such observations may become successful in the future.
We begin by noting that the current \(I_{\rm pair}\) is proportional to \(g^{2}\) and is fourth order in the tunneling matrix element. We first consider the STM case and examine other terms of the same order in the tunneling current. STM spectra exhibit sub-gap structures stemming from processes commonly known as multiple Andreev reflections (MAR) [8; 15]. They may be calculated in an expansion in powers of the tunneling matrix element [16]. At fourth order, the first set of MAR peaks appears at \(eV=\Delta_{L}\) and \(\Delta_{R}\) in \(dI/dV\). They correspond to processes where a pair tunnels across the junction and gains an energy \(2eV\). For \(2eV>2\Delta_{R}\) this energy can go into exciting a pair of quasi-particles on the \(R\) side. This gives rise to a step threshold in the current \(I(V)\) and consequently a peak in \(dI/dV\) at \(eV=\Delta_{R}\). A similar argument produces a step at \(\Delta_{L}\). MAR peaks are commonly seen in STM when the tip is brought close to the surface, increasing \(g\)[8; 15]. The ratio of the lowest order MAR peak in \(dI/dV\) to the conductance above the coherence peak threshold is simply of order \(g\) (see Fig. 1). We note that \(g\) of order 0.01 or even unity can be achieved [15; 17]. Nevertheless, collective modes such as the Higgs mode have not been reported in STM experiments. One reason lies in the fact that, in conventional SC, the phase mode is pushed up to the plasma frequency and the only remaining collective mode is the Higgs mode which has energy \(2\Delta_{R}\). This gives rise to a peak at \(eV=\Delta_{R}\) which happens to coincide with the lowest order MAR peak. Furthermore, as shown below, in STM the magnitude of the collective mode contribution is reduced from the MAR magnitude by a factor \(\Delta_{R}/E_{F}\) where \(E_{F}\) is the Fermi energy of the \(R\) SC. This reduction stems from point tunneling: in this case, the current involves a convolution over the momentum of the mode. The latter disperses rapidly on the scale of the inverse coherence length \(\xi^{-1}\), giving rise to this suppression. We conclude that the collective mode may be visible in STM only for strongly correlated materials where the factor \(\Delta_{R}/E_{F}\) is not too small.
Another contribution to the same order in \(g^{2}\) commonly seen in STM is the Josephson current broadened by thermal noise. Thermal fluctuations dephase the junction and convert the Josephson current from a delta function to a peak structure at low but finite bias. The theory has been given by Ivanchenko and Zilberman [18]. The result depends on the relative size of the Josephson energy \(E_{J}\) of the junction to the noise temperature \(k_{B}T_{0}\). (Note that \(T_{0}\) is in general different from and larger than the sample temperature \(T\).) In the limit \(E_{J}\ll k_{B}T_{0}\) the current is given by
\[I_{J}(V)=\frac{e}{\hbar}\frac{E_{J}^{2}}{k_{B}T_{0}}\frac{2eV\Gamma_{0}}{(2eV)^ {2}+\Gamma_{0}^{2}}, \tag{4}\]
where the width is given by \(\Gamma_{0}=k_{B}T_{0}R_{0}(2e)^{2}/\hbar\) with the dissipation parametrized by an effective resistance \(R_{0}\). More specifically, the external circuit is modelled by a series resistance \(R_{0}\) (not to be confused with the normal state junction resistance \(R_{N}\)) which gives rise to voltage fluctuations across the junction characterized by \(\langle\delta V(t)\delta V(t^{\prime})\rangle=2k_{B}T_{0}R_{0}\delta(t-t^{ \prime})\). Note that in Eq. (4) the maximum current is proportion to \(E_{J}^{2}/T_{0}\) which is proportional to \(g^{2}\). In fact, STM data are usually in this limit: the lineshape predicted by Eq. (4) is often seen as a peak in \(dI/dV\) whose height is comparable to and scales in the same way as the MAR peak at \(eV=\Delta_{R}\) with changing tip height [8]. On the other hand, planar junctions are in the opposite limit \(E_{J}\gg k_{B}T_{0}\) because \(E_{J}\) scales with the area. In this case, the peak is very narrow and steep [9]. A useful physical picture is that of an overdamped particle moving in a tilted "washboard potential". In the limit \(E_{J}\ll k_{B}T_{0}\) thermal fluctuations lead to rapid jumps over the washboard barrier and give rise to phase slips, resulting in Eq. (4).
In the case of the Josephson current, the voltage bias is small and the washboard is relatively flat. For the collective mode we are in a large voltage regime where the phase is running rapidly down the washboard and subject to weak modulation due to \(E_{J}\). In this case the phase across the junction is given to a good approximation by \(\theta(t)\approx\frac{2e}{\hbar}[V_{ext}t+\int_{0}^{t}dt^{\prime}\delta V(t^{ \prime})]\). Hence, the fluctuating part of the phase correlation is given by \(\langle(\delta\theta(t)-\delta\theta(0))^{2}\rangle=(\frac{2e}{\hbar})^{2}2k_{ B}T_{0}R_{0}t\). Inserting this into Eq. (3), we find that the effect of thermal noise is to introduce an additional Lorentzian convolution to the response function, with a width given by \(\Gamma_{0}\). Similar arguments show that the width of the MAR peak is also given by \(\Gamma_{0}\) (see Appendix B). Thus, the minimal width of all the sub-gap structures shown in Fig. 1 is set by the width of the Josephson peak in Eq. (4) which can be readily measured. It follows that a condition for the visibility of the collective mode is simply that its width given by \(\Gamma_{0}\) is not so large that it will overlap other features such as the Josephson peak or the MAR peak. Note that the width of the collective mode and the MAR peak are the same whether \(k_{B}T_{0}\) is large or small compared with \(E_{J}\). Only the Josephson peak is affected by this condition.
Next, we turn to a microscopic treatment of the problem, including the multi-component SCs mentioned in the introduction. We will derive an extension of the pair tunneling Hamiltonian Eq. (1) by calculating the in-gap current to fourth order in \(t_{k,p}\), following earlier work by Takayama [19]. The voltage drop across the junction can be absorbed into a time dependent tunneling matrix element \(t_{k,p}e^{ieVt}\), rendering the SC leads at equilibrium. This allows a treatment within the conventional Matsubara formalism, and the more elaborate Keldysh treatment [16] is not necessary. Details of the calculation are given in the Appendix. Here we summarize the main results.
For the STM case, tunneling occurs at a single point and does not resolve the momentum of the pair response function as in Eq. (2). Instead, the current involves an integral over the momentum \(\mathbf{q}\):
\[I_{\text{STM}}(V)=4e\sum_{\alpha,\beta}\int d\mathbf{q}\ M_{\alpha}^ {*}(\mathbf{q},V)M_{\beta}(\mathbf{q},V)\\ \times\text{Im}\,\chi_{\alpha,\beta}(\mathbf{q},\omega=2eV). \tag{5}\]
Here, we have introduced multiple pairing order parameters \(\Delta_{\mathbf{k}}^{(\alpha)}\) for the \(R\) SC. We will drop the \(R\) label from now on. The label \(\alpha\) may refer to pairing in different bands, or to members of a irreducible representation, or to superposition of different pairing symmetries when time reversal or crystalline symmetry is spontaneously broken. Following Ref. [3], we assume a separable form for the attractive interaction \(U_{\mathbf{k},\mathbf{k}^{\prime}}=-\sum_{\alpha}g_{\alpha}\zeta_{\mathbf{k}}^{(\alpha)} \zeta_{\mathbf{k}^{\prime}}^{(\alpha)}\), where the \(\zeta_{\mathbf{k}}^{(\alpha)}\) are orthonormal form factors and the \(g_{\alpha}\) are the coupling constants in the corresponding channel. We neglect the dependence on \(\mathbf{q}\), the center of mass momentum of the Cooper pair. This vertex is shown in Fig. 2(e). The pair destruction operator is generalized to \(\hat{\Delta}_{\mathbf{k}}^{(\alpha)}(\mathbf{q})=\sum_{\mathbf{k}}g_{\alpha}\zeta_{\mathbf{k }}^{(\alpha)}c_{-\mathbf{k},\downarrow}c_{\mathbf{k}+\mathbf{q},\uparrow}\). For simplicity of notation, we assume singlet pairing, but the calculation can be straightforwardly extended to general pairing symmetry. The pair susceptibility can be generalized from Eq. (3) in a natural way as
\[\chi_{\alpha,\beta}(\mathbf{q},t)=-i\langle[\hat{\Delta}_{\mathbf{k}}^{(\alpha)}(\mathbf{ q},t),\ \hat{\Delta}_{\mathbf{k}}^{(\beta)}(\mathbf{q},\!0)^{\dagger}]\rangle\theta(t). \tag{6}\]
The matrix element \(M_{\alpha}(\mathbf{q},V)\) is given by the sum of the two triangles shown in Fig. 2(a) and (b). It is
\[M_{\alpha}(\mathbf{q},V)=T\sum_{\omega_{m}}\int d\mathbf{k}\,d\mathbf{p}\,| \tilde{t}|^{2}\zeta_{\mathbf{p}}^{(\alpha)}F_{L}(\mathbf{k},\omega_{m})[G_{R}(\mathbf{p}+ \mathbf{q},\omega_{m}+ieV)G_{R}(-\mathbf{p},-\omega_{m}+ieV)\\ -F_{R}(\mathbf{p}+\mathbf{q},\omega_{m}+ieV)F_{R}(-\mathbf{p},-\omega_{m}+ieV )]. \tag{7}\]
Figure 2: Diagrams that contribute to the STM tunneling current to fourth order in the tunneling matrix element, represented by the solid dot. The diagram that couples to the collective mode is shown in (c), where the double line represents the pair propagator. (a) and (b) show the two diagrams \(M_{1}\) and \(M_{2}\) that contribute to the triangle on the right side of (c). Two similar diagrams contribute to the left triangle. The anomalous Green function of the SC on the \(L\) side is shown in red. In (a) the \(R\) SC Green function is the regular \(G\), while in (b) it is the anomalous \(F\) function. These are shown in (d), where it is noted that the frequency and momentum change sign on opposite ends of the \(F\) function. (e) shows the BCS coupling in separable form for each channel \(\alpha\).
Here, we have neglected the momentum dependence of the tunneling matrix element \(t_{k,p}\), replacing it by \(\tilde{t}\). We have further used the fact that the anomalous electron Green function satisfies \(F_{\downarrow\uparrow}(\mathbf{k},\omega_{m})=-F_{\uparrow\downarrow}(\mathbf{k}, \omega_{m})\) which accounts for the negative sign in the second term. Note that one factor of \(\zeta^{(\alpha)}_{\mathbf{k}}\) enters the matrix element in Eq. (7) and one factor enters the pair susceptibility in Eq. (5). It is easy to see that the left triangle in Fig. 2(c) is the complex conjugate of the right triangle. The product \(\zeta^{(\alpha)}_{\mathbf{p}}F_{L}(\mathbf{k},\omega_{m})\) in Eq. (7) determines which components of the pair fluctuation can be probed. For example, if \(L\) is a conventional s-wave SC, only collective modes with a component \(\alpha\) corresponding to s-wave will couple, as we shall illustrate by an example below.
Based on the form of Eq. (5), we suggest a nonlocal generalization of the pair tunneling Hamiltonian Eq. (1),
\[\tilde{H}_{\rm pair}=\tilde{C}(\omega,\mathbf{r}-\mathbf{r}^{\prime})e^{-i\omega t}\ \hat{\Delta}_{R}(\mathbf{r}^{\prime})+{\rm h.c.}, \tag{8}\]
for STM tunneling at position \(\mathbf{r}\). It is clear that linear response based on Eq. (8) leads to Eq. (5) if we identify the Fourier transform of \(\tilde{C}(\mathbf{r})\) with \(M(\mathbf{q})\). Similarly, an integration over \(\mathbf{r}\) in Eq. (8) gives the generalization of Eq. (1) for planar junctions.
As shown in Appendix A, \(M\) has a smooth \(V\) dependence which can usually be ignored. More importantly, Eq. (5) involves a convolution in momentum space between the pair susceptibility and the product of the matrix elements. We find that \(M_{\alpha}(\mathbf{q})\) goes to a constant for small \(q\) and falls off with \(q\) on a scale given by the inverse of the coherence length \(\xi\) when \(\Delta_{R}\sim\Delta_{L}\) (see Appendix for details). The physical origin of the nonlocality in Eq. (8) and of the convolution over \(\mathbf{q}\) in Eq. (5) is that the Cooper pair is injected from the \(L\) SC one electron at a time by the single particle tunneling matrix element. Consequently, quasi-particles exist virtually over a distance of order \(\xi\) before recombining to form a Cooper pair on the \(R\) SC.
Next, we estimate the magnitude of the collective mode contribution to the current. For simplicity we discuss the single order parameter case. The pair susceptibility \(\chi_{\alpha,\beta}(\mathbf{q},\omega)\) in Eq. (6) is given by the inverse of \(1-g_{o}\Pi(q,\omega)\) where \(\Pi(q,\omega)\) is the polarization function and \(g_{0}\) is the BCS coupling. The polarization function takes the form \(\Pi(q,\omega_{n})=N(0)(1+(\omega_{n}^{2}+\beta_{i}v_{F}^{2}q^{2})/\alpha_{i} \Delta^{2})\) where \(\alpha_{i}\) and \(\beta_{i}\) are numbers of order unity. The zeroes of \(\Pi(q,\omega)\) give the collective mode dispersion \(\omega_{i}(q)\)[3]. The pair susceptibility thus takes the form
\[{\rm Im}\ \chi(q,\omega)=\frac{1}{N(0)}\frac{\pi\alpha_{i}\Delta^{2}}{2\omega_{i }(q)}\delta(\omega-\omega_{i}(q)), \tag{9}\]
where \(\omega_{i}(q)^{2}=\omega_{i0}^{2}+\beta_{i}v_{F}^{2}q^{2}\) and \(\omega_{i0}=\sqrt{\alpha_{i}}\Delta\). Note that the dispersion is very steep: \(\omega_{i}(q)\) roughly doubles in value when \(q\) is of order the inverse of the coherence length \(\xi=v_{F}/\pi\Delta\). In planar junctions this form leads to delta functions in the current \(I(V)\) at \(2eV=\omega_{i}(\mathbf{q})\). Thus, planar junction tunneling allows access to the dispersion of the mode. This is not the case for STM spectroscopy which requires an additional integration over \(\mathbf{q}\). Instead of a delta function, the \(I(V)\) now features a step at \(2eV=\omega_{i0}\) followed by a smooth drop off on a scale set by \(M(q)\). The step function gives rise to a delta function in \(dI/dV\),
\[\frac{dI_{\rm STM}(V)}{dV}=e\frac{2\pi\Delta^{2}}{\beta_{i}N(0)v _{F}^{2}}\sum_{\alpha,\beta,i}M_{\alpha}^{*}(0,V)M_{\beta}(0,V)\\ \times\ \delta(2eV-\omega_{i0}) \tag{10}\]
which is followed by a negative tail towards larger voltages as sketched in Fig. 1.
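The steepness of the dispersion can be made quantitative with a short numerical sketch; taking the \(O(1)\) constants \(\alpha_{i}=\beta_{i}=1\) as an illustrative choice, the mode frequency doubles already at \(q\xi\approx 0.55\), consistent with the statement above.

```python
import numpy as np

# Dispersion from Eq. (9): omega_i(q)^2 = omega_i0^2 + beta_i vF^2 q^2, with
# omega_i0 = sqrt(alpha_i) Delta and xi = vF / (pi Delta). The O(1) constants
# alpha_i = beta_i = 1 are an illustrative choice.
alpha_i, beta_i, Delta, vF = 1.0, 1.0, 1.0, 1.0
xi = vF / (np.pi * Delta)
omega_i0 = np.sqrt(alpha_i) * Delta

def omega(q):
    return np.sqrt(omega_i0**2 + beta_i * (vF * q) ** 2)

# Momentum at which the mode frequency has doubled:
q_double = np.sqrt(3.0 * alpha_i / beta_i) * Delta / vF
print(q_double * xi, omega(q_double) / omega_i0)   # ~0.55, 2.0
```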
Now, we compare the step in the collective mode contribution to the step in the MAR current which is given by \(eg^{2}\Delta\)[16]. (See also Appendix B.) To estimate the matrix element \(M_{\alpha}\) in Eq. (10), we show in the Appendix A that for a single s-wave order parameter, \(M_{\alpha}(q=0,V=0)=dE_{J}/d\Delta_{\alpha}\), consistent with Eqs. (1) and (8). We therefore set \(M_{\alpha}=\partial(E_{J})/\partial\Delta\approx\Delta g/8\) for a symmetric junction. Using \(N(0)=m/2\pi\) and \(E_{F}=mv_{F}^{2}/2\) we find the relative magnitude of the collective mode and MAR steps to be approximately \(\Delta/E_{F}\) as stated earlier. For conventional SC this ratio is very small which makes the detection of the collective mode infeasible. Nonetheless, there are now examples of strongly correlated SC's where this ratio is not very small. It is worthwhile to look for the collective mode contribution in STM in such systems.
The situation is more promising for planar junctions. It is useful to consider the ratio of the collective mode current \(I_{\rm planar}\) to the current in the normal state at \(eV=2\Delta_{R}\) which is given by \(I_{N}=2\Delta/(eR_{N})\). For simplicity, we consider the case \(\Delta_{R}=\Delta_{L}\) for which \(E_{J}=g\Delta/8\). [12]. We find
\[\frac{I_{\rm planar}}{I_{N}}=\frac{\pi^{3}}{32}\frac{\alpha_{i}g}{Ak_{F}^{2}} \frac{\Delta E_{F}}{\omega_{i0}}\frac{\Gamma_{i}}{(2eV-\omega_{i0})^{2}+\Gamma _{i}^{2}} \tag{11}\]
where we have replaced the delta function in Eq. (9) by a Lorentzian with width \(\Gamma_{i}\) which is given by \(\Gamma_{0}\) plus other sources of broadening such as inhomogeneity. A similar ratio for the planar MAR current is given in Appendix B, where it is found to have parametrically the same prefactor up to numerical constants, but the Lorentzian in Eq. (11) is replaced by \(\frac{eV}{\Delta\sqrt{(eV)^{2}-\Delta^{2}}}\). The latter should also be broadened by \(\Gamma_{0}\). Since this form is less singular than the Lorentzian, the collective mode contribution should dominate over the MAR peak in planar junctions, opposite to the situation in STM. More precisely, with the same broadening \(\Gamma_{i}\), the peak currents due to the collective mode and the MAR have a ratio of \(\sqrt{\Delta/\Gamma_{i}}\) which is greater than one.
We now address the size of the signal from the collective mode contribution given by Eq. (11). We interpret \(Ak_{F}^{2}\) as the number of tunneling channels in a planar junction and use the Landauer formula to define the ratio \({\cal T}_{\rm eff}=g/(Ak_{F}^{2})\) as the effective tunneling probability per channel. \({\cal T}_{\rm eff}\) gives the intrinsic transparency of a
tunnel junction and is generally a very small number. Conversely, the peak value of the Lorentzian is \(1/\Gamma_{i}\), and the ratio \(E_{F}/\Gamma_{i}\) is a very large number. For a typical planar junction, \({\cal T}_{\rm eff}\) is so small that the product is still too small to be observable for reasonable \(\Gamma_{i}\). This may be the reason why neither MAR nor collective modes have been observed in planar junctions. However, as we shall see, the numbers are not too far off, and there may be reasons for optimism. To see this we estimate that for the typical oxide tunnel barrier used in Ref. [13], the transparency is \({\cal T}_{\rm eff}\approx 10^{-8}\) (assuming \(A\sim 100^{2}\)nm\({}^{2}\), \(k_{F}\sim 1\)A\({}^{-1}\), and \(R_{N}\sim 2\Omega\)). In this experiment a fluctuating pair tunneling peak with width of about \(1\mu\)eV was readily observed. We conclude that a collective mode with width of order \(1\mu\)eV should be observable in a conventional oxide planar junction. Obviously, junctions with larger transparency will enable broader collective modes to be observed because the signal in Eq. (11) is proportional to the ratio \({\cal T}_{\rm eff}/\Gamma_{i}\). The minimal contribution to the width comes from voltage fluctuations and the corresponding \(\Gamma_{0}\) can be made very small [9]. In practice, in many of the strongly correlated SC of interest, such as cuprates or iron based SC, local inhomogeneity may lead to significant broadening of the collective mode in planar junctions. Their detection may require higher tunneling transparency \({\cal T}_{\rm eff}\) and possibly smaller area junctions. The latter requirement will reduce the current, making the experiment more challenging.
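An order-of-magnitude evaluation of the peak of Eq. (11) with these numbers is sketched below; the values of \(\Delta\), \(E_{F}\), \(\alpha_{i}\), and \(\omega_{i0}\) are illustrative assumptions, while \({\cal T}_{\rm eff}\) and \(\Gamma_{i}\) follow the estimates in the text.

```python
import numpy as np

# Peak of Eq. (11) relative to the normal-state current, with the transparency
# T_eff ~ 1e-8 quoted for the oxide junction of Ref. [13] and a mode width
# Gamma_i ~ 1 micro-eV; Delta ~ 1 meV, E_F ~ 1 eV, alpha_i ~ 1, omega_i0 ~ Delta
# are illustrative assumptions.
T_eff, Gamma_i = 1e-8, 1e-6           # dimensionless, eV
Delta, E_F, alpha_i = 1e-3, 1.0, 1.0  # eV, eV, O(1)
omega_i0 = Delta

peak_ratio = (np.pi**3 / 32) * alpha_i * T_eff * Delta * E_F / omega_i0 / Gamma_i
print(peak_ratio)   # ~1e-2: a percent-level peak, hence observable
```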
Apart from the larger signal compared with STM, we note that the current itself is predicted to show a narrow peak, so that the \(dI/dV\) signal is the derivative of a Lorentzian which has a distinctive lineshape with a large negative part. Furthermore, the planar junction has the advantage that the dispersion of the collective mode may be probed by applying an in-plane magnetic field. These distinctive features will provide strong evidence that a collective mode is being observed.
Now, we discuss several examples where the collective modes may be detected in \(dI/dV\) as sharp peaks. For simplicity, we assume that the SC being probed is inversion symmetric. If the \(L\) SC is conventional, only s-wave Cooper pairs can be injected from the \(L\) electrode and only the \(\alpha\)-components corresponding to s-wave pairing in Eq. (10) survive. Consider first multi-band SCs, where s-wave pairing occurs in two different Fermi surfaces \(\alpha=1,2\). This is the case in MgB\({}_{2}\) and in iron based superconductors. In the latter case the s-wave pairing is out of phase between the two bands. This state is called \(s_{\pm}\). We expect a collective mode (the Leggett mode) corresponding to the out of phase oscillation of the two order parameters \(\Delta_{1}\) and \(\Delta_{2}\). This mode will manifest as a pole in \(\chi_{1,2}\). The Leggett mode has been observed by Raman scattering in MgB\({}_{2}\) at a relatively high energy of 9.2 meV which lies between the two doubled energy gaps [20]. Hence, this mode is damped, but it is nonetheless interesting to search for this peak with tunneling spectroscopy. We note that MgB\({}_{2}\) planar tunneling junctions have been successfully fabricated [21]. For the Fe based SC's, the situation is not so clear. We note that recently an observation of the Leggett mode was reported in single layer NbSe\({}_{2}\) in an experiment using a normal STM tip [22]. Here, the Leggett mode is interpreted as giving an excitation above the gap at energy \(\Delta+\omega_{i}\). Interestingly, the experiment found that \(\omega_{i}/2\approx 0.7\Delta\) which places the peak well inside the gap and well below the MAR structure. Note that NbSe\({}_{2}\) features a small ratio of \(\Delta/E_{F}\) so that the STM signal of the Leggett mode is expected to be very small, but perhaps a planar junction experiment can be attempted.
As a second application, we consider the case of a time reversal breaking SC, more specifically of the type \(s+id\), i.e., an admixture of s- and d-wave pairing. This case is treated in detail in Ref. [3] and here we only summarize the salient features. The important point is that the presence of the s-wave component allows us to couple to novel collective modes such as clapping modes. Let us define the s- and d-wave order parameter components as \((\Delta^{(0)},\Delta^{(2)})=(\eta_{0},-i\eta_{2})\Delta_{0}e^{i\theta}\), where \(\theta\) is the overall pair phase, and \(\eta_{0}\) and \(\eta_{2}\) are real numbers satisfying \(\eta_{0}^{2}+\eta_{2}^{2}=1\). It is convenient to introduce \(\Delta^{\pm}=\Delta^{(0)}/\eta_{0}\pm\Delta^{(2)}/(i\eta_{2})\). The saddle point solution occurs at \(\Delta^{+}=\Delta_{0}\) and \(\Delta^{-}=0\). Expanding around the saddle point, we find the coordinates of the collective modes as
\[\Delta^{+}(x,t)=e^{i\theta}(\Delta_{0}+h(x,t)), \tag{12}\]
\[\Delta^{-}(x,t)=e^{i\theta}(a(x,t)+ib(x,t)). \tag{13}\]
Here, \(h\) denotes the amplitude or Higgs mode, while \(a\) and \(b\) denote two new modes which are generalizations of the clapping mode in \(p+ip\) SC's. Poniatowski et al. [3] show that these modes lie at approximately \(\sqrt{2}\Delta_{0}\). We will now show that they appear in the s-wave pair fluctuation channel in Eq. (10). To this end, we expand
\[\Delta^{(0)}(x,t)=e^{i\theta}(\eta_{0}\Delta_{0}+\tilde{\Delta}^{(0)}(x,t)). \tag{14}\]
It is easy to see that the fluctuating part is given by
\[\tilde{\Delta}^{(0)}(x,t)=\eta_{0}(h(x,t)+a(x,t)+ib(x,t)). \tag{15}\]
Hence, all three modes will appear in the \(\alpha=0\) pair fluctuation component in Eq. (10). In particular, the generalized clapping modes will show up as peaks in the vicinity of \(eV=\Delta_{0}/\sqrt{2}\), well separated from the MAR peak at \(\Delta_{0}\). This is shown schematically in Fig. 1. In the iron based SC Ba\({}_{1-x}\)K\({}_{x}\)Fe\({}_{2}\)As\({}_{2}\), a time reversal breaking SC state appears for \(x\) between 0.7 and 0.8 and is suspected to be an \(s+id\) SC [23]. This is an excellent candidate to search for these collective modes.
We consider a third example of triplet pairing where the time reversal breaking state may be of the \(p+ip\) or \(p+if\) type. UTe\({}_{2}\) may be an example [24]. This rather complicated structure preserves inversion in the bulk, but it is possible, indeed likely, that the top layer breaks inversion due to some local structural relaxation. In this case the s-wave pair injected by the \(L\) SC is admixed with
the p- and f-wave order parameter in the first layer of the \(R\) SC, and the matrix element \(M_{\alpha}\) in Eq. (5) is nonzero for non s-wave components. In this way, collective modes may couple to the current. Indeed, a recent STM experiment using a Nb tip found a subgap peak near the expected energy gap for UTe\({}_{2}\) and a peak in \(dI/dV\) near zero voltage suggestive of a broadened Josephson current [24]. The latter observation points to an admixture of s-wave pairing in the top layer, as we need. Unfortunately, the ratio \(\Delta/E_{F}\) may be too small for STM to observe the collective mode in this system.
We end by discussing the feasibility of probing collective modes of unconventional SC using either planar or STM tunnel junctions. It is generally considered difficult to make planar tunnel junctions in these systems, but with modern fabrication techniques it may be possible to create nanoscale tunnel junctions with high transparency that are free of pin-holes. Another approach may involve stacks of van der Waals materials such as transition metal dichalcogenides (TMD). A variety of insulating TMD may be used to create monolayer barriers between SC layers. In cuprates, a single layer of insulating parent state may be used as tunneling barrier [25]. On the STM front, it has been demonstrated that it is possible to pick up a piece of layered SC with an STM tip, which then serves as the SC electrode [26]. This holds promise for the present application: e.g., a cuprate SC tip would allow the collective modes of systems with \(d\)-wave [27] or \(d+id\) pairing symmetry to be probed. As noted above, the detection of collective modes in STM relies on a relatively large ratio of \(\Delta/E_{F}\). Recently, a range of strongly correlated SC have been discovered, which represent promising candidates for the present proposal. Notable examples are twisted bilayer and trilayer graphene, where the ratio is so large that the BEC limit may be reached [28]. Another example are the iron based topological SC, which have very small Fermi energy [17]. An example that is not well understood is the superconductivity observed in YPtBi which involves doping of a quadratic touching band with a very small Fermi energy and a short coherence length [29]. We conclude that, while they are challenging, tunneling experiments probing the signatures of the collective modes in unconventional SC are within reach.
## Acknowledgements
We thank Shuqiu Wang and Seamus Davis for sharing their data on UTe\({}_{2}\) which stimulated this investigation. We thank Nicholas Poniatowski, Leonid Glazman and Iliya Esin for helpful discussions. PL acknowledges support by DOE (USA) office of Basic Sciences Grant No. DE-FG02-03ER46076. JFS acknowledges support by the Air Force Office of Scientific Research under award number FA9550-22-1-0339.
|
2307.16539 | Groups of Invertible Binary Operations of a Topological Space | In this paper, continuous binary operations of a topological space are
studied and a criterion of their invertibility is proved. The classification
problem of groups of invertible continuous binary operations of locally compact
and locally connected spaces is solved. A theorem on the binary distributive
representation of a topological group is also proved. | Pavel S. Gevorgyan | 2023-07-31T10:05:44Z | http://arxiv.org/abs/2307.16539v1 | # Groups of invertible binary operations of a topological space
###### Abstract.
In this paper, continuous binary operations of a topological space are studied and a criterion of their invertibility is proved. The classification problem of groups of invertible continuous binary operations of locally compact and locally connected spaces is solved. A theorem on the binary distributive representation of a topological group is also proved.
Key words and phrases: Binary operation; topological group; groups of homeomorphisms. 2020 Mathematics Subject Classification: 54H15, 22A25
## 1. Notation and auxiliary results
Throughout this paper, by a space we mean a topological space. All spaces are assumed to be Hausdorff.
By \(C(X,Y)\) we denote the space of all continuous maps of the space \(X\) to space \(Y\), endowed with the compact-open topology, that is, the topology generated by the subbase consisting of all sets of the form \(W(K,U)=\{f:X\to Y;\ f(K)\subset U\}\), where \(K\) is a compact subset of \(X\) and \(U\) is an open subset of \(Y\). All spaces of maps are considered in the compact-open topology.
If \(G\) is a topological group, then there is a natural group operation on \(C(X,G)\): given any continuous maps \(f,g\in C(X,G)\), their product \(fg\in C(X,G)\) is defined by formula \((fg)(x)=f(x)g(x)\) for all \(x\in X\).
**Theorem 1** ([1]).: _If \(G\) is a topological group, then so is \(C(X,G)\)._
The group of all homeomorphisms of \(X\) is denoted by \(H(X)\). Generally, this group is not a topological group. However, the following theorem holds.
**Theorem 2** ([2]).: _If \(X\) is a locally compact and locally connected space, then \(H(X)\) is a topological group._
The symmetric group on a set \(X\) is denoted by \(S(X)\). In the case where \(X\) is a finite set, this group is denoted by \(S_{n}(X)\) or \(S_{n}\), where \(n\) is the number of elements in \(X\). The order of the group \(S_{n}(X)\) is equal to \(n!\): \(|S_{n}(X)|=n!\).
A detailed exposition on the above used notions and results, as well as on other definitions, notions and results, used in this paper without reference, can be found in [3]-[6].
## 2. Continuous binary operations of topological spaces.
Let \(X\) be a topological space. A continuous map \(f:X^{2}\to X\) is called a continuous binary operation on the space \(X\). The set of all continuous binary operations on \(X\) is denoted by \(C_{2}(X)\). The composition of two binary operations \(f,\varphi\in C_{2}(X)\) is defined by the formula:
\[(f\circ\varphi)(t,x)=f(t,\varphi(t,x)), \tag{1}\]
where \(t,x\in X\).
If \(f:X^{2}\to X\) is a continuous binary operation, then for every \(t\in X\) we define a continuous map \(f_{t}:X\to X\) by the formula:
\[f_{t}(x)=f(t,x). \tag{2}\]
Observe that a continuous binary operation \(f:X^{2}\to X\) can be considered as a family of continuous maps \(\{f_{t}\}\): \(f=\{f_{t}\}\), which continuously depends on the index \(t\in X\). In this notation, the composition of two binary operations \(f=\{f_{t}\}\) and \(\varphi=\{\varphi_{t}\}\), defined in (1), becomes
\[f\circ\varphi=\{f_{t}\circ\varphi_{t}\},\]
explaining the meaning of formula (1).
**Proposition 1**.: _The space \(C_{2}(X)\) is a semigroup with identity element \(e(t,x)=x\), that is, a monoid with respect to composition of binary operations._
The proof follows by checking the semigroup axioms, and so is omitted.
**Definition 1**.: A continuous binary operation \(f\in C_{2}(X)\) is said to be _invertible_ if there exists a continuous binary operation \(f^{-1}\in C_{2}(X)\) such that
\[f\circ f^{-1}=f^{-1}\circ f=e.\]
In this case, \(f\) and \(f^{-1}\) are said to be _mutually inverse binary operations_.
We denote the subset of all invertible elements of \(C_{2}(X)\) by \(H_{2}(X)\); as the set of invertible elements of a monoid, \(H_{2}(X)\) is a group.
_Example 1_.: Let \(X=\{a,b\}\) be a two-point discrete space. The symmetric group \(S_{2}(X)\) of permutations of this space is the cyclic group \(\mathbb{Z}_{2}\), and the group of all invertible binary operations on \(X=\{a,b\}\) is the group of order \(4\) with two generators \(\varphi_{1}\) and \(\varphi_{2}\), which are specified as follows:
\[\varphi_{1}\colon\ \begin{array}{c|cc} & a & b\\ \hline a & a & b\\ b & b & a\end{array}\qquad\varphi_{2}\colon\ \begin{array}{c|cc} & a & b\\ \hline a & b & a\\ b & a & b\end{array}\]

where the entry in row \(t\) and column \(x\) is \(\varphi_{i}(t,x)\); thus \((\varphi_{1})_{a}=\operatorname{id}\) and \((\varphi_{1})_{b}\) is the transposition, while for \(\varphi_{2}\) the roles are reversed.
As is well known, this is the Klein four-group.
In the case of a three-point set \(X\), the order of the group \(H_{2}(X)\) is equal to \((3!)^{3}=216\) (see Corollary 2 below).
We have the following result, the proof of which is not difficult, and so is omitted.
**Theorem 3**.: _If a continuous binary operation \(f=\{f_{t}\}\in C_{2}(X)\) is invertible, then the continuous map \(f_{t}:X\to X\) defined by (2) is a homeomorphism for any \(t\in X\), and \(f^{-1}=\{f_{t}^{-1}\}\)._
The converse of Theorem 3 is true for locally compact and locally connected spaces.
**Theorem 4**.: _Let \(X\) be a locally compact and locally connected space, and let \(f=\{f_{t}\}:X^{2}\to X\) be a continuous binary operation. If the map \(f_{t}:X\to X\) is a homeomorphism for every \(t\in X\), then the binary operation \(f=\{f_{t}\}\) is invertible, and \(f^{-1}=\{f_{t}^{-1}\}\)._
Proof.: Consider the binary operation \(f^{-1}\) given by \(f^{-1}(t,x)=f_{t}^{-1}(x)\), and show that it is a continuous inverse to \(f:X^{2}\to X\).
We first establish the continuity of the map \(f^{-1}:X^{2}\to X\). Let \((t_{0},x_{0})\in X^{2}\) be an arbitrary point, and let \(f^{-1}(t_{0},x_{0})=f_{t_{0}}^{-1}(x_{0})=y_{0}\). Let \(W\subset X\) be an arbitrary open neighborhood of the point \(y_{0}\) such that the closure \(\overline{W}\) is compact. Then there exists a compact connected neighborhood \(K\) of the point \(x_{0}\) for which
\[f_{t_{0}}^{-1}(K)\subset W. \tag{3}\]
Denote by \(K^{\circ}\) the interior of the set \(K\), and observe that
\[f_{t_{0}}(y_{0})=x_{0}\in K^{\circ}. \tag{4}\]
It follows from (3) that
\[f_{t_{0}}(W^{C}\cap\overline{W})\subset K^{C}, \tag{5}\]
where \(W^{C}\) and \(K^{C}\) are the complements of the sets \(W\) and \(K\), respectively.
Next, since \(f:X^{2}\to X\) is a continuous binary operation, \(y_{0}\) and \(W^{C}\cap\overline{W}\) are compact, and \(K^{\circ}\) and \(K^{C}\) are open subsets of the space \(X\), it follows from (4) and (5) that there exists an open neighborhood \(U\) of the point \(t_{0}\), such that for every \(t\in U\)
\[f_{t}(y_{0})\in K^{\circ} \tag{6}\]
and
\[f_{t}(W^{C}\cap\overline{W})\subset K^{C}.\]
Hence
\[K\subset f_{t}(W\cup\overline{W}^{C})\]
for any \(t\in U\). Therefore
\[f_{t}^{-1}(K)\subset W\cup\overline{W}^{C}.\]
Since \(f_{t}^{-1}(K)\) is a connected set, and \(W\) and \(\overline{W}^{C}\) are disjoint open sets, it follows from the last inclusion that \(f_{t}^{-1}(K)\) is contained in one of the sets \(W\) and \(\overline{W}^{C}\). However, it is clear that in view of (6) we have \(f_{t}^{-1}(K)\subset W\). Therefore, for all \(t\in U\)
\[f_{t}^{-1}(K^{\circ})\subset W. \tag{7}\]
Thus, for an arbitrary open neighborhood \(W\) of the point \(y_{0}=f_{t_{0}}^{-1}(x_{0})\), we have found open neighborhoods \(U\) of the point \(t_{0}\) and \(K^{\circ}\) of the point \(x_{0}\) for which (7) is satisfied. This proves the continuity of the binary operation \(f^{-1}=\{f_{t}^{-1}\}\).
To complete the proof of the theorem it remains to observe that the continuous binary operation \(f^{-1}:X^{2}\to X\) is inverse to \(f:X^{2}\to X\), which can be verified easily. Theorem 4 is proved.
Theorems 3 and 4 imply the following invertibility criterion of continuous binary operations on locally compact and locally connected spaces.
**Theorem 5**.: _Let \(X\) be a locally compact and locally connected space. A continuous binary operation \(f=\{f_{t}\}:X^{2}\to X\) is invertible if and only if the continuous map \(f_{t}:X\to X\) is a homeomorphism for any \(t\in X\)._
## 3. Classification Of Groups Of Invertible Binary Operations
The next proposition shows that the groups of invertible continuous binary operations are natural extensions of the group of homeomorphisms.
**Proposition 2**.: _The group \(H(X)\) of all homeomorphisms of a topological space \(X\) is isomorphic (algebraically and topologically) to a subgroup of the group \(H_{2}(X)\) of invertible binary operations._
Proof.: To each \(f\in H(X)\) we associate a continuous map \(\tilde{f}:X^{2}\to X\), defined by \(\tilde{f}(t,x)=f(x),\,t,x\in X\). It is clear that \(\widetilde{f^{-1}}=\tilde{f}^{-1}\). Hence \(\tilde{f}\) is a continuous invertible binary operation, that is, \(\tilde{f}\in H_{2}(X)\). The correspondence \(f\to\tilde{f}\) is the desired isomorphism between the group \(H(X)\) and a subgroup of \(H_{2}(X)\). Proposition 2 is proved.
The next theorem contains a solution of the problem of classification of groups of invertible continuous binary operations of locally compact and locally connected spaces by means of groups of homeomorphisms.
**Theorem 6**.: _Let \(X\) be a locally compact and locally connected space. Then the group \(H_{2}(X)\) is isomorphic (algebraically and topologically) to the group \(C(X,H(X))\)._
Proof.: Consider the map \(p:C(X,H(X))\to H_{2}(X)\) defined by
\[p(f)(t,x)=f(t)(x),\]
for \(f\in C(X,H(X))\) and \(t,x\in X\). Since for every \(t\in X\) the map \(f(t):X\to X\) is a homeomorphism, by Theorem 5 the binary operation \(p(f):X\times X\to X\) is invertible, that is, it belongs to the group \(H_{2}(X)\).
Now we show that \(p\) is a monomorphism. To this end, we take \(f,g\in C(X,H(X))\) such that \(f\neq g\), and observe that there exists a point \(t_{0}\in X\) such that \(f(t_{0})\neq g(t_{0})\). Since \(f(t_{0}),g(t_{0})\in H(X)\), it follows that \(f(t_{0})(x_{0})\neq g(t_{0})(x_{0})\) for some point \(x_{0}\in X\). Thus, \(p(f)(t_{0},x_{0})\neq p(g)(t_{0},x_{0})\), implying that \(p(f)\neq p(g)\).
Next, observe that the map \(p\) is also an epimorphism. Indeed, let \(\varphi\in H_{2}(X)\) be any continuous binary operation. Then by Theorem 5, the map \(\varphi_{t}:X\to X\) defined by \(\varphi_{t}(x)=\varphi(t,x)\), \(t,x\in X\), is a homeomorphism. It is easy to see that the element \(f\in C(X,H(X))\), determined by the equality \(f(t)=\varphi_{t}\), is the preimage of the binary operation \(\varphi\): \(p(f)(t,x)=f(t)(x)=\varphi_{t}(x)=\varphi(t,x)\).
Thus, the map \(p^{-1}:H_{2}(X)\to C(X,H(X))\) defined by
\[p^{-1}(\varphi)(t)(x)=\varphi(t,x),\]
for \(\varphi\in H_{2}(X)\) and \(t,x\in X\), is inverse to \(p:C(X,H(X))\to H_{2}(X)\).
The map \(p\) is a homomorphism, that is, \(p(f\circ g)=p(f)\circ p(g)\). Indeed, for any \(t,x\in X\) we have
\[p(f\circ g)(t,x)=(f\circ g)(t)(x)=(f(t)\circ g(t))(x)=f(t)(g(t)( x))=\\ =f(t)(p(g)(t,x))=p(f)(t,p(g)(t,x))=(p(f)\circ p(g))(t,x).\]
Now we prove the continuity of \(p\). Let \(W(K\times K^{\prime},U)\) be any element of the subbase of the compact-open topology on \(H_{2}(X)\), where \(U\subset X\) is open and \(K,K^{\prime}\subset X\) are compact subsets of the space \(X\). We show that the preimage of the set \(W(K\times K^{\prime},U)\) is the set \(W(K,W(K^{\prime},U))\), which is an element of the subbase of the compact-open topology on \(C(X,H(X))\). Indeed, for any \(\varphi\in W(K\times K^{\prime},U)\) and \(f=p^{-1}(\varphi)\in C(X,H(X))\) we have
\[\varphi\in W(K\times K^{\prime},U)\iff\varphi(t,x)\in U \iff p(f)(t,x)\in U\iff\\ \iff f(t)(x)\in U\iff f\in W(K,W(K^{\prime},U)),\]
where \(t\in K\) and \(x\in K^{\prime}\) are arbitrary elements, and the continuity of \(p\) follows.
The continuity of the inverse map \(p^{-1}:H_{2}(X)\to C(X,H(X))\) can be shown similarly. Theorem 6 is proved.
The group of invertible continuous binary operations \(H_{2}(X)\) generally is not a topological group. However, the following results hold.
**Corollary 1**.: _If \(X\) is a locally compact and locally connected space, then \(H_{2}(X)\) is a topological group._
Proof.: By Theorem 2, \(H(X)\) is a topological group. Therefore, by Theorem 1, \(C(X,H(X))\) is also a topological group. Now we can use Theorem 6 to conclude that \(H_{2}(X)\) is a topological group.
**Corollary 2**.: _Let \(|X|=n<\infty\). Then \(|H_{2}(X)|=(n!)^{n}\)._
Proof.: For a finite set \(X\), we have \(H(X)=S_{n}(X)\), where \(S_{n}(X)\) is the symmetric group of permutations of the set \(X\). Taking into account that \(|S_{n}(X)|=n!\), the result immediately follows from Theorem 6.
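These finite statements are easy to verify by brute force. The following Python sketch (ours, not from the paper) enumerates \(H_{2}(X)\) for a finite set as families of permutations, in line with Theorem 5, and checks both Corollary 2 and the Klein four-group structure of Example 1:

```
from itertools import permutations, product
from math import factorial

def invertible_binary_ops(n):
    # by Theorem 5, an invertible binary operation on X = {0,...,n-1}
    # is exactly a choice of a permutation f_t of X for every t in X
    perms = list(permutations(range(n)))
    return [dict(enumerate(fam)) for fam in product(perms, repeat=n)]

def compose(f, g, n):
    # (f o g)(t, x) = f(t, g(t, x)), i.e. (f o g)_t = f_t o g_t, Eq. (1)
    return {t: tuple(f[t][g[t][x]] for x in range(n)) for t in range(n)}

for n in (2, 3):
    ops = invertible_binary_ops(n)
    assert len(ops) == factorial(n) ** n   # Corollary 2
    print(n, len(ops))                     # 2 -> 4, 3 -> 216

# for n = 2, every element squares to the identity: the Klein four-group
e = {t: (0, 1) for t in range(2)}
assert all(compose(f, f, 2) == e for f in invertible_binary_ops(2))
```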
## 4. Binary Distributive Representations Of Topological Groups
**Definition 2**.: A subgroup \(D\subset H_{2}(X)\) is said to be distributive if for all \(x,x^{\prime},x^{\prime\prime}\in X\) and for all \(g,h\in D\) the following condition is fulfilled
\[g(h(x,x^{\prime}),h(x,x^{\prime\prime}))=h(x,g(x^{\prime},x^{\prime\prime})). \tag{8}\]
**Theorem 7**.: _A subgroup \(D\subset H_{2}(X)\) is distributive if and only if for any \(g=\{g_{t}\},h=\{h_{t^{\prime}}\}\in D\), \(t,t^{\prime}\in X\), the following equality holds:_
\[g_{t}\circ h_{t^{\prime}}=h_{g_{t}(t^{\prime})}\circ g_{t}. \tag{9}\]
Proof.: Let \(D\subset H_{2}(X)\) be a distributive subgroup. Then, in view of (8), we obtain
\[(g_{t}\circ h_{t^{\prime}})(x)=g_{t}(h_{t^{\prime}}(x))=g_{t}(h( t^{\prime},x))=g(t,h(t^{\prime},x))=h(g(t,t^{\prime}),g(t,x))=\\ =h_{g(t,t^{\prime})}(g(t,x))=h_{g_{t}(t^{\prime})}(g_{t}(x))=(h_{ g_{t}(t^{\prime})}\circ g_{t})(x)\]
for any \(x\in X\), and hence the equality (9) is satisfied.
Now assume that (9) is satisfied. Then for any \(g,h\in G\) and \(t,t^{\prime},x\in X\) we can write
\[h(g(t,t^{\prime}),g(t,x))=h_{g(t,t^{\prime})}(g(t,x))=h_{g_{t}(t ^{\prime})}(g_{t}(x))=(h_{g_{t}(t^{\prime})}\circ g_{t})(x)=\\ =(g_{t}\circ h_{t^{\prime}})(x)=g_{t}(h_{t^{\prime}}(x))=g_{t}(h( t^{\prime},x))=g(t,h(t^{\prime},x)),\]
implying that \(D\) is a distributive subgroup. Theorem 7 is proved.
The groups of invertible continuous binary operations are sufficiently rich in distributive subgroups. Moreover, every topological group can be considered as a distributive subgroup of a suitably chosen group of invertible continuous binary operations.
**Theorem 8** (on binary distributive representation of a topological group).: _Every topological group is a distributive subgroup of some group of invertible binary operations._
Proof.: Let \(G\) be a topological group. Consider the group of invertible binary operations \(H_{2}(G)\) of \(G\), and define the map \(i:G\to H_{2}(G)\), which to each element \(g\in G\) associates the binary operation \(i_{g}\in H_{2}(G)\), defined by
\[i_{g}(h_{1},h_{2})=h_{1}gh_{1}^{-1}h_{2},\]
where \(g,h_{1},h_{2}\in G\).
Since \(i_{g}(e,e)=g\) for any \(g\in G\), where \(e\) is the identity element of the group \(G\), the map \(i\) is injective.
The map \(i\) is also a homomorphism. Indeed, we have
\[i_{gk}(h_{1},h_{2})=h_{1}gkh_{1}^{-1}h_{2}=h_{1}gh_{1}^{-1}h_{1} kh_{1}^{-1}h_{2}=i_{g}(h_{1},h_{1}kh_{1}^{-1}h_{2})=\\ =i_{g}(h_{1},i_{k}(h_{1},h_{2}))=[i_{g}\circ i_{k}](h_{1},h_{2}),\]
where \(g,k,h_{1},h_{2}\in G\).
The continuity of the map \(i\) follows from the continuity of the operations \((g,h)\to gh\) and \(g\to g^{-1}\) for all \(g,h\in G\). Thus, \(i\) is an isomorphism of the group \(G\) onto its image.
Observe that \(i(G)\) is a distributive subgroup of the group \(H_{2}(G)\). Indeed, for any \(g,h,k,k_{1},k_{2}\in G\) we have the following chain of equalities:
\[i_{g}(i_{h}(k,k_{1}),i_{h}(k,k_{2}))=i_{g}(khk^{-1}k_{1},khk^{-1} k_{2})=khk^{-1}k_{1}gk_{1}^{-1}kh^{-1}k^{-1}khk^{-1}k_{2}=\\ =khk^{-1}k_{1}gk_{1}^{-1}k_{2}=i_{h}(k,k_{1}gk_{1}^{-1}k_{2})=i_{ h}(k,i_{g}(k_{1},k_{2})),\]
and the result follows. Theorem 8 is proved.
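The construction is easy to test on a small finite group. A Python sketch (ours) with \(G=S_{3}\) encoded as permutation tuples checks injectivity and the distributivity condition (8) for \(i_{g}(h_{1},h_{2})=h_{1}gh_{1}^{-1}h_{2}\):

```
from itertools import permutations, product

G = list(permutations(range(3)))                      # S3 as tuples
mul = lambda p, q: tuple(p[q[i]] for i in range(3))   # composition p o q
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def i_op(g):
    # the invertible binary operation i_g(h1, h2) = h1 g h1^{-1} h2
    return lambda h1, h2: mul(mul(mul(h1, g), inv(h1)), h2)

# i_g(e, e) = g, so the representation is injective
e = (0, 1, 2)
assert all(i_op(g)(e, e) == g for g in G)

# distributivity (8): g(h(k, k1), h(k, k2)) = h(k, g(k1, k2))
for g, h, k, k1, k2 in product(G, repeat=5):
    assert i_op(g)(i_op(h)(k, k1), i_op(h)(k, k2)) == i_op(h)(k, i_op(g)(k1, k2))
```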
Note that Theorem 8 is a binary topological version of Cayley's classical theorem on the representation of an arbitrary finite group by unary operations (permutations). |
2305.00522 | How to enumerate trees from a context-free grammar | I present a simple algorithm for enumerating the trees generated by a Context
Free Grammar (CFG). The algorithm uses a pairing function to form a bijection
between CFG derivations and natural numbers, so that trees can be uniquely
decoded from counting. This provides a general way to number expressions in
natural logical languages, and potentially can be extended to other
combinatorial problems. I also show how this algorithm may be generalized to
more general forms of derivation, including analogs of Lempel-Ziv coding on
trees. | Steven T. Piantadosi | 2023-04-30T16:40:54Z | http://arxiv.org/abs/2305.00522v1 | # How to enumerate trees
###### Abstract
I present a simple algorithm for enumerating the trees generated by a Context Free Grammar (CFG). The algorithm uses a pairing function to form a bijection between CFG derivations and natural numbers, so that trees can be uniquely decoded from counting. This provides a general way to number expressions in natural logical languages, and potentially can be extended to other combinatorial problems. I also show how this algorithm may be generalized to more general forms of derivation, including analogs of Lempel-Ziv coding on trees.
## 1 Introduction
While context-free grammars (CFGs) are important in computational linguistics and theoretical computer science, there is no simple, memoryless algorithm for enumerating the trees generated by an arbitrary CFG. One approach is to maintain a priority queue of partially expanded trees according to probability, and expand them through (e.g.) the leftmost unexpanded nonterminal in the tree. This, however, requires storing multiple trees in memory, which can become slow when enumerating many trees. Incremental polynomial time algorithms are also known [1] and related questions have been studied for lexicographic enumeration [2, 3, 4]. These algorithms are not particularly well-known, and the tools required to state and analyze them are complex. In contrast, simple techniques exist for enumerating binary trees with a fixed grammar (e.g. \(S\to SS\mid x\)). A variety of techniques and history is reviewed in Section 7.2.1.6 of [5], including permutation-based methods and gray codes [6, 7, 8, 9]. These algorithms, however, do not obviously generalize to arbitrary CFGs.
The goal of the present paper is to present a variant of integer-based enumeration schemes that works for arbitrary CFGs. The algorithm is itself very basic--just a few lines--but relies on an abstraction here called an IntegerizedStack that may be useful in other combinatorial problems. The proposed algorithm does not naturally enumerate in lexicographic order (though variants may exist) but it is efficient: its time complexity is linear in the number of nodes present in the next enumerated tree, and it does _not_ require additional data structures or pre-computation of anything from the grammar. Because the algorithm constructs a simple bijection between the natural numbers \(\mathbb{N}\) and trees, it also provides a convenient scheme for Godel-numbering [10, 11], when the CFG is used to describe formulas. We then extend this algorithm to tree-based algorithms analogous to LZ compression.
## 2 Pairing functions
To construct a bijection between trees and integers, we use a construction that has its roots in Cantor [12]'s proof that the rationals can be put into one-to-one correspondence with the integers. Cantor used a _pairing function_[13] to match up \(\mathbb{N}\times\mathbb{N}\) with \(\mathbb{N}\) itself:
\[C(x,y)=\frac{(x+y)\cdot(x+y+1)}{2}+y \tag{1}\]
This function essentially traces the position of an integer pair \(\left\langle x,y\right\rangle\) in the line shown in Figure 1. This pairing function is (uniquely) invertible via
\[\left\langle x,y\right\rangle=C^{-1}(z)=\left\langle\frac{w\cdot(w+3)}{2}-z,\;z-\frac{w\cdot(w+1)}{2}\right\rangle, \tag{2}\]
for \(w=\lfloor\frac{1}{2}(-1+\sqrt{1+8z})\rfloor\). This function has, interestingly, been the subject of additional formal work. It is, for example, the only quadratic bijection between \(\mathbb{N}\times\mathbb{N}\) and \(\mathbb{N}\)[14, 15]; an analysis of the computational complexity of different pairing functions can be found in [16].
Other pairing functions are more convenient for some applications. A popular alternative, illustrated in Figure 1 is the Rosenberg-Strong pairing function [17],
\[R(x,y)=max(x,y)^{2}+max(x,y)+x-y \tag{3}\]
with inverse,
\[R^{-1}(z)=\begin{cases}\left\langle z-m^{2},m\right\rangle&\text{if }z-m^{2}<m\\ \left\langle m,m^{2}+2m-z\right\rangle&\text{otherwise},\end{cases} \tag{4}\]
where \(m=\lfloor\sqrt{z}\rfloor\). Pairing functions are reviewed in [13], who also shows how they may be used to enumerate binary trees. The key idea is that we can imagine that any integer \(n\) is a pairing of its two subtrees (e.g. \(n=R(x,y)+1\) for subtrees \(x\) and \(y\)). If we iterate over integers, we may then "translate" each integer into a binary tree by breaking it down into two integers and then recursively doing the same on \(x\) and \(y\) until we reach \(0\). Specifically, assume that
\[\phi(R(x,y)+1)=\left\langle\phi(x),\phi(y)\right\rangle. \tag{5}\]
Figure 1: Enumeration order of Cantor’s pairing function (left), the Rosenberg-Strong pairing function (center) and the \(M_{4}(x,y)\) (right).
Then, for example, \(n=147\) can be broken down as,
\[\begin{gathered}147\\ \left\langle 2,12\right\rangle\\ \left\langle\left\langle 0,1\right\rangle,\left\langle 2,3\right\rangle\right\rangle\\ \left\langle\left\langle 0,\left\langle 0,0\right\rangle\right\rangle,\left\langle 2,3\right\rangle\right\rangle\\ \left\langle\left\langle 0,\left\langle 0,0\right\rangle\right\rangle,\left\langle\left\langle 0,1\right\rangle,\left\langle 1,1\right\rangle\right\rangle\right\rangle\\ \left\langle\left\langle 0,\left\langle 0,0\right\rangle\right\rangle,\left\langle\left\langle 0,\left\langle 0,0\right\rangle\right\rangle,\left\langle\left\langle 0,0\right\rangle,\left\langle 0,0\right\rangle\right\rangle\right\rangle\right\rangle\\ \left\langle\left\langle\bullet,\left\langle\bullet,\bullet\right\rangle\right\rangle,\left\langle\left\langle\bullet,\left\langle\bullet,\bullet\right\rangle\right\rangle,\left\langle\left\langle\bullet,\bullet\right\rangle,\left\langle\bullet,\bullet\right\rangle\right\rangle\right\rangle\right\rangle\end{gathered} \tag{6}\]
So long as \(R\) is any max-dominating\({}^{1}\) pairing function, \(\phi\) is an enumeration of trees [13].
Footnote 1: A function \(f\) such that \(f(x,y)>max(x,y)\).
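To make this concrete, the following is a minimal Python sketch (ours; the names rs_encode, rs_decode, and phi are not from the text) of Eqs. (3)-(4) and the tree decoding in (5); running phi(147) reproduces the derivation in (6).

```
from math import isqrt

def rs_encode(x, y):
    # Rosenberg-Strong pairing, Eq. (3)
    m = max(x, y)
    return m * m + m + x - y

def rs_decode(z):
    # its inverse, Eq. (4)
    m = isqrt(z)
    if z - m * m < m:
        return (z - m * m, m)
    return (m, m * m + 2 * m - z)

def phi(n):
    # phi(0) is a leaf; phi(R(x, y) + 1) = <phi(x), phi(y)>, Eq. (5)
    if n == 0:
        return "*"
    x, y = rs_decode(n - 1)
    return (phi(x), phi(y))

# sanity-check the bijection on a small grid, then decode 147
assert all(rs_decode(rs_encode(x, y)) == (x, y)
           for x in range(30) for y in range(30))
print(phi(147))
# (('*', ('*', '*')), (('*', ('*', '*')), (('*', '*'), ('*', '*'))))
```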
It may also not be obvious how to use this approach to generate from an arbitrary CFG, where the productions allowed at each step vary depending on the CFG's nonterminal types. In particular, there may be multiple ways of expanding each nonterminal, which differ depending on which non-terminal is used. A simple scheme such as giving each CFG rule an integer code and then using a pairing function like \(R\) to recursively pair them together will not, in general, produce a bijection because there may be integer codes that do not map onto full trees (for instance, pairings of two terminal rules in the CFG).
The issue is that in generating from a CFG, we have to encode a choice of which rule to expand next, of which there are only finitely many options. In fact, the number of choices will in general depend on the nonterminal. Our approach to address this is to use two different pairing functions: a modular "pairing function" to encode which nonterminal to use and the Rosenberg-Strong pairing function to encode integers for the child of any node. Thus, if a given nonterminal has \(k\) expansions, define a pairing function that pairs \(\{0,1,2,\ldots,k-1\}\times\mathbb{N}\) with \(\mathbb{N}\). A simple mod operation, shown in Figure 1, will work:
\[M_{k}(x,y)=x+k\cdot y \tag{7}\]
with inverse
\[M_{k}^{-1}(z)=\left\langle z\mod k,\frac{z-(z\mod k)}{k}\right\rangle. \tag{8}\]
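For concreteness, the decode and mod_decode helpers assumed by the listing below can be realized directly from \(R^{-1}\) and \(M_{k}^{-1}\). This is a minimal sketch (ours); the (remaining, popped) return convention is an assumption chosen to match how pop and modpop use the results.

```
def decode(z):
    # pop one integer via the Rosenberg-Strong inverse rs_decode
    # (Eq. 4, sketched above); returns (remaining value, popped value)
    return rs_decode(z)

def mod_decode(z, k):
    # Eq. (8): z = (z % k) + k * (z // k); returns (remaining, popped)
    return (z // k, z % k)
```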
## 3 Enumerating trees
It is convenient to combine mod pairing and Rosenberg-Strong pairing into a simple abstraction, which we here call an IntegerizedStack. This term is intentionally different from "integer stack" (which is a stack of integers): an IntegerizedStack is a stack of integers that is itself _stored in an integer_. This class allows us to pack and unpack a finite list of integers from a single integer, using the push and pop operations of a standard stack. For use later, we can push or pop a raw integer, or do so modulo some number. Here, we have assumed that decode is the inverse of a pairing function, like \(R^{-1}\) above, and mod_decode is \(M_{k}^{-1}\). Note that the stored value of an IntegerizedStack is always only an integer, but the abstraction allows us to treat it as though it currently contains a stack of other integers, either through the pairing function or through a modulo pairing function. This stack has the special property that popping a stack with value \(0\) always returns \(0\) and leaves the stack with value \(0\). IntegerizedStack also includes one special helper function, split, which partitions the integer into \(k\) different components by successive pops.
```
class IntegerizedStack:
    def __init__(self, v=0):
        self.value = v

    def pop(self):
        # remove an integer from self.value and return it
        self.value, ret = decode(self.value)
        return ret

    def modpop(self, modulus):
        # pop from self.value mod modulus
        self.value, ret = mod_decode(self.value, modulus)
        return ret

    def split(self, n):
        # assume value codes exactly n integers; zero afterwards
        out = [self.pop() for _ in range(n - 1)]
        out.append(self.value)
        self.value = 0
        return out
```
Note that this operation exhaustively uses up the remainder of the stack and leaves it empty. This could alternatively be achieved using a pairing function on \(\mathbb{N}^{d}\) (see [13]).
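As a quick illustration (using the decode helper sketched above), popping the stack value \(146=147-1\) recovers the first step of the example in (6):

```
i = IntegerizedStack(146)
print(i.pop())      # 12 -- the right component of <2, 12>
print(i.value)      # 2  -- the left component remains on the stack

j = IntegerizedStack(146)
print(j.split(2))   # [12, 2]; afterwards j.value == 0
```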
To see how an IntegerizedStack works in enumeration, let us assume we are working with a context-free grammar \(\mathcal{G}=(V,\Sigma,R,S)\), where \(V\) is a set of nonterminal symbols, \(\Sigma\) is a set of terminal symbols, \(R\) is a relation \(V\to(V\cup\Sigma)^{*}\), and \(S\in V\) is a start symbol. We require notation to distinguish the _terminal_ from _non-terminal_ rules in \(R\). Let \(T_{v}\subseteq R\) denote the set of _terminal_ rules, meaning those that expand \(v\) with _no_ non-terminals on the right hand side (i.e. such that \(v\to\Sigma^{*}\)). Let \(N_{v}\subset R\) be those that expand \(v\) to some nonterminal. Following typical implementations, we will talk about \(T_{v}\) and \(N_{v}\) as an ordered list of rules.
Without loss of generality, we make two further assumptions about \(\mathcal{G}\):
1. For each \(v\in V\), the set of trees that \(v\) can expand to is infinite. Note that any context-free grammar \(\mathcal{G}\) generating an infinite language can be converted into this format by, for instance, taking any \(v\in V\) which only expands to finitely many trees, giving each tree a unique terminal symbol, and then removing \(v\) from the grammar. This will create a new grammar \(\mathcal{G}^{\prime}\) whose productions can be translated back and forth to those of \(\mathcal{G}\) with an appropriate transformation of the new terminal symbols.
2. The rule ordering in \(G\) must be such that choosing the first (zeroth) rule for each nonterminal will eventually yield a terminal. This ensures that the first (0'th) item in any enumeration is a finite tree.
In practice, it will often be useful to put the terminals and then high-probability expansions _first_ in each \(N_{v}\).
The generating algorithm, denoted **Algorithm A**, is then very simple. To expand an integer \(n\) for nonterminal type \(v\in V\), we first check if \(n<|T_{v}|\) and if so, we return the \(n\)'th terminal rule. Otherwise, we treat \(n-|T_{v}|\) as an IntegerizedStack. We pop modulo \(|N_{v}|\) to determine which rule expansion to follow, and then use split to specify the integers for the children, which are then recursively expanded. Algorithm A is shown in the function from_int which takes a nonterminal type and an integer n, and constructs a tree:
```
# Given a nonterminal (string), an integer n, and cfg (a hash
# from strings to lists of right-hand-side expansions), return
# the n'th tree. Here, Node is a simple class that stores a
# nonterminal Node.nt and a list of children (Nodes or strings),
# Node.children.
def from_int(nt, n, cfg):

    # count the terminal rules
    nterminals = sum([is_terminal_rule(rhs, cfg) for rhs in cfg[nt]])

    if n < nterminals:
        # n is coding a terminal rule
        return Node(nt, cfg[nt][n])
    else:
        # treat n - nterminals as a stack of integers
        i = IntegerizedStack(n - nterminals)

        # how many nonterminal rules
        nnonterminals = len(cfg[nt]) - nterminals

        # i first encodes which *non*-terminal rule
        rhs = cfg[nt][nterminals + i.modpop(nnonterminals)]

        # split the remaining integer among the nonterminals on the
        # right side of the rule
        t = i.split(sum(is_nonterminal(r, cfg) for r in rhs))

        # now we can expand all of the children
        children = []
        for r in rhs:
            if is_nonterminal(r, cfg):
                children.append(from_int(r, t.pop(0), cfg))
            else:
                children.append(Node(r))

        # return the new Node
        return Node(nt, children)
```
In this listing, we have assumed that cfg is a dictionary from nonterminal string symbols to lists of rules obeying (i) and (ii). In this algorithm, assumption (i) guarantees that any value of \(n\) can be converted into a tree. Assumption (ii) ensures that when \(n\) is zero, the algorithm will halt. Note that in both the mod and Rosenberg-Strong pairing functions, a value of \(0\) will be unpaired into two zeros. This means that generally, at some point in the algorithm, the call to from_int will take zero as an argument, and so (ii) is required to ensure that, in this case, it returns a finite tree rather than running forever.
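The listing leaves Node, is_nonterminal, and is_terminal_rule unspecified; one possible minimal realization (ours, an assumption rather than the author's code) is:

```
class Node:
    def __init__(self, nt, children=None):
        self.nt = nt
        self.children = children if children is not None else []

    def __str__(self):
        # print a terminal as its symbol, otherwise a bracketed subtree
        if not self.children:
            return str(self.nt)
        return "(" + " ".join(str(c) for c in self.children) + ")"

def is_nonterminal(symbol, cfg):
    # a symbol is a nonterminal iff the grammar has rules expanding it
    return symbol in cfg

def is_terminal_rule(rhs, cfg):
    # a rule is terminal iff nothing on its right-hand side is a nonterminal
    return not any(is_nonterminal(r, cfg) for r in rhs)
```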
It may be counterintuitive in Algorithm A that we subtract \(|T_{v}|\) from \(n\). This is required for from_int to be a bijection, but the argument is clearer in the inverse algorithm (converting trees to integers). If a tree with nonterminal \(v\in V\) consists only of a terminal rule, we simply specify which rule. Otherwise, we use an IntegerizedStack to encode the nonterminal rule (modulo \(|N_{v}|\)) and all of the children. However, we do not want to give this IntegerizedStack a number which overlaps with \(0,1,\ldots,|T_{v}|-1\) since that would be confusable for a terminal rule. To avoid this, we start indexing the child trees at \(|T_{v}|\). It should be clear, then, that this pairing is a bijection between trees and integers, for grammars satisfying (i) and (ii).
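A sketch of this inverse may help (ours, not from the paper). It assumes each Node additionally records rule_idx, the index in cfg[node.nt] of the rule used to expand it (a hypothetical field), and it uses rs_encode from the earlier sketch:

```
def to_int(node, cfg):
    rules = cfg[node.nt]
    nterminals = sum(is_terminal_rule(rhs, cfg) for rhs in rules)

    # terminal rules are simply numbered 0 .. nterminals - 1
    if node.rule_idx < nterminals:
        return node.rule_idx

    # recursively encode the nonterminal children only
    codes = [to_int(c, cfg) for c in node.children
             if isinstance(c, Node) and is_nonterminal(c.nt, cfg)]

    # undo split(): the last child is the stack remainder; earlier
    # children are re-paired on from the right
    v = codes[-1]
    for c in reversed(codes[:-1]):
        v = rs_encode(v, c)

    # undo modpop(): re-insert which nonterminal rule was used
    nnonterminals = len(rules) - nterminals
    v = (node.rule_idx - nterminals) + nnonterminals * v

    # shift past the integers reserved for terminal rules
    return v + nterminals
```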
An implementation of this algorithm is provided in the author's library on GitHub\({}^{2}\), which is distributed under GPL. As an example, Figure 2 shows expansions from a simple CFG that one
might find in a natural language processing textbook:
\[\begin{split} S&\to NP\;VP\\ NP&\to n\mid d\;n\mid d\;AP\;n\mid NP\;PP\\ AP&\to a\mid a\;AP\\ PP&\to p\;NP\\ VP&\to v\mid v\;NP\mid v\;S\mid VP\;PP\\ \end{split} \tag{9}\]
Note that this encoding is a bijection between trees and integers, though not necessarily between terminal strings (yields) and integers, due to ambiguity in the grammar. Any number specifies a unique derivation, and vice-versa, giving rise to the bijection between trees and integers. The key assumption of this algorithm is context-freeness, since that allows the pairings for each child of the tree to be independently expanded. However, a similar approach may be amenable to other combinatorial problems.
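Putting the pieces together, grammar (9) can be written down for the sketches above as follows (the dictionary layout is our assumption); terminal rules are listed first within each nonterminal, satisfying assumptions (i) and (ii):

```
cfg = {
    "S":  [["NP", "VP"]],
    "NP": [["n"], ["d", "n"], ["d", "AP", "n"], ["NP", "PP"]],
    "AP": [["a"], ["a", "AP"]],
    "PP": [["p", "NP"]],
    "VP": [["v"], ["v", "NP"], ["v", "S"], ["VP", "PP"]],
}

# enumerate the first few trees rooted at S
for n in range(8):
    print(n, from_int("S", n, cfg))
```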
## 4 LZ-trees
An interesting family of variants to Algorithm A can be created by noting that an integer can encode information other than rule expansions in the grammar. For example, an integer might reference complete subtrees that have been generated previously. This idea is inspired by work formulating probabilistic models that expand CFGs to favor re-use of existing subtrees [18, 19]. We call this approach LZ-trees because we draw on an idea from the LZ77/LZ78 algorithm [20], which compresses strings by permitting pointers back to previously emitted strings. Here, we permit our enumeration to potentially point back to previously generated complete trees. For
Figure 2: Enumeration of the grammar in (9) using Algorithm A.
instance, suppose we are currently decoding an integer at the point \(x\) in the tree.
\[(\text{tree diagram omitted: complete subtrees rooted at }B\text{ and }D\text{ have been generated before the current point }x) \tag{10}\]
Then at \(x\), we should be allowed to draw on prior complete trees (rooted at \(B\) and \(D\)) assuming they are of the correct non-terminal type for \(x\). Since there are two previously-generated trees when expanding \(x\), we can let IntegerizedStack values of 0 and 1 reference these trees, and otherwise, we encode the node below \(x\) according to Algorithm A. This has the effect of preferentially re-using subtrees that have previously been generated early in the enumeration, although it should be noted that the mapping is no longer a bijection. Note that unlike LZ77, this algorithm does not require us to store an integer for the length of the string/tree that is pointed to, because we assume it is a complete subtree. Also, the integer pointing to a previous tree is simply an index into an enumeration of the complete subtrees generated so far (lz_targets in the listing below). A listing for this algorithm, Algorithm B, is shown below:
```
from copy import deepcopy

# return a list of possible subtrees of T that LZ could reference;
# usually we will want these to be complete subtrees involving more
# than a couple of nodes
def possible_lz_targets(nt, T):
    out = []
    if T is not None:
        for t in T:
            if (t not in out) and (len(t) >= 3) and t.complete and t.nt == nt:
                out.append(t)
    return out

# provide the n'th expansion of nonterminal nt
def from_int(nt, n, cfg, root=None):

    # count up the number of terminals
    nterminals = sum([is_terminal_rule(rhs, cfg) for rhs in cfg[nt]])

    # How many trees could LZ reference?
    lz_targets = possible_lz_targets(nt, root)

    if n < len(lz_targets):
        # we are coding an LZ target
        return deepcopy(lz_targets[n])  # must deepcopy
    elif n - len(lz_targets) < nterminals:
        # check if n is a terminal (remember to subtract len(lz_targets))
        return Node(nt, cfg[nt][n - len(lz_targets)])
    else:
        # n is what's left over after trying to code lz_targets and terminals
        n = n - len(lz_targets) - nterminals

        # the remainder should be treated as an IntegerizedStack
        i = IntegerizedStack(n)

        # how many nonterminal rules
        nnonterminals = len(cfg[nt]) - nterminals

        # i first encodes which *non*-terminal rule
        which = i.modpop(nnonterminals)
        rhs = cfg[nt][nterminals + which]

        # count up how many on the rhs are nonterminals
        # and divide into that many integers
        t = i.split(sum(is_nonterminal(r, cfg) for r in rhs))

        # A little subtlety: we have to store whether the node
        # is "complete" so we know not to use it in recursive
        # calls until all of its expansions are done
        out = Node(nt)  # must build in children here
        out.complete = False
        for r in rhs:
            if is_nonterminal(r, cfg):
                out.children.append(from_int(r, t.pop(0), cfg,
                                             root if root is not None else out))
            else:
                # else it's just a string -- copy
                out.children.append(r)

        # now the node is complete
        out.complete = True

        return out
```
Results from enumerating the grammar in (9) are shown in Figure 4. Note here that the main differences are places where Algorithm B re-uses a component generated earlier in the enumeration. However, the algorithms do agree in many places, likely because of the requirement that only complete subtrees of the same type can be referenced (of which there are often not any). Similar approaches might allow us to write potentially any kind of encoder and enumerate trees relative to that encoding scheme. For instance, we might permit a pointer to a previous _subtree_, we might use an integer coding which codes prior tree components relative to their frequency, etc.
## 5 Conclusion
This work describes a simple algorithm that enumerates the trees generated by a CFG by forming a bijection between these trees and integers. The key abstraction, an IntegerizedStack, allowed us to encode arbitrary information into a single integer through the use of pairing functions.
|
2309.06286 | Transferability analysis of data-driven additive manufacturing
knowledge: a case study between powder bed fusion and directed energy
deposition | Data-driven research in Additive Manufacturing (AM) has gained significant
success in recent years. This has led to a plethora of scientific literature to
emerge. The knowledge in these works consists of AM and Artificial Intelligence
(AI) contexts that have not been mined and formalized in an integrated way.
Moreover, no tools or guidelines exist to support data-driven knowledge
transfer from one context to another. As a result, data-driven solutions using
specific AI techniques are being developed and validated only for specific AM
process technologies. There is a potential to exploit the inherent similarities
across various AM technologies and adapt the existing solutions from one
process or problem to another using AI, such as Transfer Learning. We propose a
three-step knowledge transferability analysis framework in AM to support
data-driven AM knowledge transfer. As a prerequisite to transferability
analysis, AM knowledge is featurized into identified knowledge components. The
framework consists of pre-transfer, transfer, and post-transfer steps to
accomplish knowledge transfer. A case study is conducted between flagship metal
AM processes. Laser Powder Bed Fusion (LPBF) is the source of knowledge
motivated by its relative matureness in applying AI over Directed Energy
Deposition (DED), which drives the need for knowledge transfer as the less
explored target process. We show successful transfer at different levels of the
data-driven solution, including data representation, model architecture, and
model parameters. The pipeline of AM knowledge transfer can be automated in the
future to allow efficient cross-context or cross-process knowledge exchange. | Mutahar Safdar, Jiarui Xie, Hyunwoong Ko, Yan Lu, Guy Lamouche, Yaoyao Fiona Zhao | 2023-09-12T14:46:56Z | http://arxiv.org/abs/2309.06286v1 | Transferability Analysis of Data-Driven Additive Manufacturing Knowledge: A Case Study Between Powder Bed Fusion and Directed Energy Deposition
###### Abstract
Data-driven research in Additive Manufacturing (AM) has gained significant success in recent years. This has led to the emergence of a plethora of scientific literature. The knowledge in these works consists of AM and Artificial Intelligence (AI) contexts that have not been mined and formalized in an integrated way. Moreover, no tools or guidelines exist to support data-driven knowledge transfer from one context to another. As a result, data-driven solutions using specific AI techniques are being developed and validated only for specific AM process technologies. There is a potential to exploit the inherent similarities across various AM technologies and adapt the existing solutions from one process or problem to another using AI, such as Transfer Learning. We propose a three-step knowledge transferability analysis framework in AM to support data-driven AM knowledge transfer. As a prerequisite to transferability analysis, AM knowledge is featurized into identified knowledge components. The framework consists of pre-transfer, transfer, and post-transfer steps to accomplish knowledge transfer. A case study is conducted between flagship metal AM processes. Laser Powder Bed Fusion (LPBF) is the source of knowledge, motivated by its relative matureness in applying AI over Directed Energy Deposition (DED), which drives the need for knowledge transfer as the less explored target process. We show successful transfer at different levels of the data-driven solution, including data representation, model architecture, and model parameters. The pipeline of AM knowledge transfer can be automated in the future to allow efficient cross-context or cross-process knowledge exchange.
Keywords: Data-driven Additive Manufacturing Knowledge, Knowledge Transferability Analysis, Knowledge Transfer, Machine Learning, Transfer Learning
## 1 Introduction
Additive Manufacturing (AM) or three-dimensional (3D) printing is used to fabricate parts layer-wise as opposed to the subtractive approach of material removal. The American Society for Testing and Materials (ASTM) defines seven standardized categories of AM processes [1]. These technologies can support various applications such as tool elimination, material savings, design freedom, cost reduction, part consolidation, prototyping ease, mass customization, and production efficiency. The advantages of AM have led to increased attention from academia and industry to advance AM technologies for even broader applications and ultimately mature them to rival conventional manufacturing methods at the industrial scale.
Metal AM (MAM) can manufacture fully dense metallic parts directly from 3D digital designs. Though these processes provide much more design freedom and lead to material saving, they encounter distinct challenges of process control and part reproducibility due to their unique nature of lengthy material joining and part fabrication. Laser Powder Bed Fusion (LPBF) and Directed Energy Deposition (DED) are two representative MAM technologies, each providing unique benefits. LPBF uses
an energy source to fuse pre-laid powder layers leading to a 3D part upon successive repetition of the process. DED also uses a focused energy source as the material is supplied on the fly. Both processes have advantages and disadvantages, suiting diverse applications, but share similar part quality control challenges.
To enhance process control and part reproducibility in MAM and other AM processes, various analytical, numerical, and empirical solutions are being developed independently, and usually toward a specific process, a combination of materials, layering, and machine technology. The resulting knowledge is only validated for the specific process. For instance, the recent wave of Machine Learning (ML) aided research in AM has introduced data-driven solutions to solve different problems of each process [2]. Some of these frameworks and solutions are expected to expedite AM development significantly. For example, co-authors of this paper developed ML models on melt-pool data to enhance in-situ monitoring and control of LPBF [3, 4]. In [3, 5], the co-authors also presented ML-based methods to extract new process-structure-property causal knowledge from AM data to enhance control activities and part reproducibility. [2, 6, 7] present more ML approaches in AM in their literature reviews.
There is a significant research problem identified from these literature reviews: the ML solutions of many existing AM studies are only developed and verified for specific AM processes. While research on different aspects of AM knowledge and its management exists [3, 5, 8, 9], no framework supports cross-process knowledge sharing. To address this challenge, this paper proposes a novel Transfer-Learning (TL)-based framework for knowledge transferability analysis in AM. This paper also presents a case study demonstrating the framework using LPBF and DED data.
The remainder of the paper is organized as follows. Section 2 introduces the framework. Section 3 demonstrates the case study. Section 4 presents the results and discussion. We close this article with concluding remarks and future work in Section 5.
## 2 Framework for Transferability Analysis
We present a three-step framework to support the transferability analysis of AM knowledge between two flagship process categories: LPBF and DED. Figure 1 introduces the framework, indicating the pre-transfer, transfer, and post-transfer steps involved in the process of knowledge transfer. The knowledge components are identified as a prerequisite to transferability analysis. The proposed framework is independent of the nature of a source solution and provides a generic approach to conduct knowledge transfer across ML solutions.
### Knowledge Components
While the representation and management of ML-aided AM knowledge is considered out of the scope, a criterion to represent AM knowledge is needed within the scope of transferability analysis. We featurize ML-aided AM knowledge into AM and ML knowledge components in support of transferability analysis. This step serves as the pre-requisite to the proposed framework. The components are chosen to cover varying levels of domain knowledge (both AM and ML) and are arranged according to their significance in each domain.
The ordinal levels of AM knowledge are defined below:
_AM Process (AM_P):_ The AM process types are identified as the first level of AM knowledge. This is where different categories of AM significantly vary from each other.
_AM Material (AM_MT):_ The AM material types fall at the second level of AM knowledge. Since different material systems represent different knowledge developed for the same process, it
Figure 1: Three-step knowledge transferability analysis framework. The orange shade represents existing source and target knowledge, whereas the grey shade highlights the steps of the framework
is important to distinguish each AM activity in terms of the specific material type used. Some materials may be more developed than others (e.g., Ti4Al6V), leading to high chances of knowledge transfer.
_AM System (AM_S)_: The AM system setups for a specific AM process type are identified as the third level of AM knowledge. A system refers to both base printer setup and added hardware (e.g., sensors) supporting the ML solutions.
_AM Model (AM_M)_: An AM model refers to the developed knowledge specific to a given process, material, and system. Over the past decade, researchers have developed and proposed numerous AM models (e.g., analytical, numerical).
_AM Activity (AM_A)_: AM activity is the focus of each data-driven solution and lies at the fifth level of AM knowledge. A full or partial overlap in the preceding levels can open possibilities of activity-based knowledge transfer. This level organizes characteristics according to the lifecycle of AM, namely design, process, and product-based activities.
_AM Concern (AM_C)_: AM concern is identified as the sixth and final level of AM knowledge. Design concerns can have types such as manufacturability or deviation prediction. Processes can be divided into different states, such as normal and abnormal. Products can be judged against various quality metrics, such as macro and micro mechanical defects. A match at the activity level doesn't imply a full match until a specific type of AM concern is found to be the focus of said AM activity.
The key levels of ML knowledge are likewise identified in order:
_ML Task (ML_T)_: An ML task is identified as the first level of ML knowledge. ML tasks are divided into broad categories of regression, classification, and clustering. ML applications in AM can be first arranged into specific tasks indicating maximum variation of contained knowledge.
_ML Model (ML_M)_: An ML model represents the second level of ML knowledge. For a given task, a multitude of ML models exists to either learn input-output relations or discover underlying patterns. It is important to identify the model type used in each application for knowledge transfer. Shallow and deep models are representative of two major categories of ML models. However, specific types of models (e.g., Linear Regression, Support Vector Machines, Convolutional Neural Network or CNN) are used in practice.
_ML Input (ML_I)_: ML input stands at the third level of ML knowledge in AM applications. Empirical models can unearth correlations for a characteristic of interest using different types of inputs. Based on trends in AM, input types are divided into Graphic, 3D, Tabular, and Sequence [10].
_ML Preprocessing (ML_P)_: ML preprocessing is added as the fourth level of ML knowledge. Data handling techniques refer to any action that leads to improvement in data quality for a given learning task.
_ML Output (ML_O)_: ML output lies at the end of the ML knowledge context. Identifying the nature and availability of outputs is important in knowledge transferability scenarios and helps to compare label spaces during the task comparison of the transfer step.
### Pre-Transfer
Once an instance (or several instances) of a target context is selected, the pre-transfer, which begins the transferability analysis steps, is followed to conduct knowledge transfer. The main objective of the pre-transfer step is to comprehensively compare source and target knowledge components, identify the similarity between the components, and select the potential sources that can be used to transfer the knowledge to a target's context. This step is also aimed at the applicability analysis of a given source knowledge to a target context.
The similarity is defined as the one-to-one similarity of AM and ML knowledge in terms of their components. Each one-to-one comparison leads to a similarity/applicability (1) or dissimilarity/inapplicability (0) score. All scores are added and normalized to get a final similarity index in the range of 0 to 1.
The pre-transfer step also performs a maturity check on the source knowledge only. The maturity is defined in terms of the newness and performance of the knowledge being transferred. The metric for performance depends on the ML task, while the newness is indicated by the robustness of the approach and its history of successive development. The performance metric gets a value of 1 for the highest reported performance and is successively reduced by 0.1 for sources with lower performance. The newness metric is assigned 1 for the latest reported work and gets lowered by 0.1 for previously reported sources in the order of publication. The maturity factor is defined as the sum of the performance and newness metrics and is later normalized to the range of 0 to 1.
Before knowledge can be transferred, all components of the knowledge from AM and ML are needed. The availability check results in 1 for available and 0 for unavailable knowledge. Both the maturity and availability results are factored into the similarity index to get the results from the pre-transfer analysis. The availability of knowledge (especially data and models) could be a major bottleneck to knowledge sharing in AM, as datasets and specific solutions are often not openly available for re-use and transfer. Table 1 highlights the sub-analysis steps for pre-transfer, leading to a pre-transfer score indicative of knowledge transfer potential.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Similarity Analysis** & **Maturity Analysis** & **Availability Analysis** \\ \hline
AM Similarity Score \(=\sum S_{AM}\) & Newness Score \(=n\) & Available Knowledge \(=1\) \\
ML Similarity Score \(=\sum S_{ML}\) & Performance Score \(=p\) & Unavailable Knowledge \(=0\) \\
Knowledge Components \(=\sum KC_{t}\) & & \\ \hline
Similarity Index \((S)=(\sum S_{AM}+\sum S_{ML})/\sum KC_{t}\) & Maturity Factor \((M)=(n+p)/2\) & Availability Factor \((A)=0\) or \(1\) \\ \hline
\multicolumn{3}{|c|}{Pre-Transfer Score \(=S\times M\times A\)} \\ \hline
\end{tabular}
\end{table}
Table 1: Pre-transfer analysis
The highest possible score of 1 from the pre-transfer step represents an ideal match with the same process. For cross-process knowledge transfer, the source context with the maximum score can be considered for the transfer step in the case of several potential sources. A score greater than 0.5 represents a significant overlap of a target context with mature and available source knowledge.
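As a small illustration, the scoring in Table 1 is straightforward to compute. The sketch below (ours) uses the component scores that appear later in Table 2 (six AM scores, five ML scores) together with illustrative maturity values; the function name and the maturity numbers are assumptions, not from the paper:

```
def pre_transfer_score(am_scores, ml_scores, newness, performance, available):
    # similarity index S, maturity factor M, availability factor A (Table 1)
    s = (sum(am_scores) + sum(ml_scores)) / (len(am_scores) + len(ml_scores))
    m = (newness + performance) / 2
    a = 1 if available else 0
    return s * m * a

# AM: process, material, system, model, activity, concern;
# ML: task, model, input, preprocessing, output
score = pre_transfer_score([0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 1],
                           newness=1.0, performance=0.9, available=True)
print(round(score, 3))   # 0.691 with these illustrative maturity values
```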
### Transfer
The pre-transfer analysis leads to the identification of potential knowledge sources and helps to quickly filter and arrange them based on the assigned pre-transfer scores. However, there is no guarantee that these can support knowledge transfer, as some scenarios can be without any meaningful overlap (e.g., scenarios with zero ML similarity scores). Nonetheless, some knowledge may still be transferable such as specific experimental design methods or AM hardware used to generate data. As such, the non-ML solutions to knowledge transfer can be straightforward. We focus on ML-based solutions for AM knowledge transfer. The ML-based solutions can be grouped under the umbrella term of TL and are identified systematically in Figure 2.
The pre-transfer analysis can lead to different types of similarity scenarios, such as full AM similarity, full ML similarity, partial AM similarity, partial ML similarity, and mixed AM and ML similarity. To link these with a potential TL model, we propose to translate the identified similarity scenario to a TL scenario using TL knowledge. TL is motivated by the fact that learning done for a previous and similar task can expedite the learning for a related new task in the absence of sufficient high-quality data. In their representative survey on TL, Pan and Yang gave a generic definition of transfer learning as [11]: "Given a source domain \(D_{\text{s}}\) and learning task \(T_{\text{s}}\), a target domain \(D_{\text{t}}\) and learning task \(T_{\text{t}}\), transfer learning aims to help improve the learning of the target predictive function \(f_{T}(\cdot)\) in \(D_{\text{t}}\) using the knowledge in \(D_{\text{s}}\) and \(T_{\text{s}}\), where:"
\[D_{\text{s}}\ \neq D_{\text{t}} \tag{1}\]
or,
\[T_{\text{s}}\neq T_{\text{t}} \tag{2}\]
Figure 2: Domain and task comparison to identify transfer learning scenario, method, and models. ML_I and ML_O refer to ML input and ML output, whereas AM_P, AM_S, AM_MT, AM_A, and AM_C refer to AM process, AM system, AM material, AM activity, and AM concern, respectively.
The AM similarity scenarios are represented in terms of domains and tasks from source and target contexts. They define a domain in terms of a feature space \(X\) and a marginal probability distribution \(P(X)\) as
\[\mathfrak{D}=\{X,P(X)\} \tag{3}\]
where X= \(\{x_{1},x_{2},x_{3},...,x_{n}\}\in X\)
For two domains to be considered similar, their feature spaces and marginal distributions should match. A feature space is representative of all the features in each domain, while a marginal probability distribution is the probability of seeing a specific instance of the feature in that domain. Examples of a feature space include a data representation such as an image format or the language of a text, whereas a marginal distribution would be the probability of specific features (e.g., pixels, words) in each space.
For a given domain, a task is defined by a label space \(Y\) and a conditional probability distribution \(P(Y|X)\) as
\[\mathcal{T}=\{Y,P(Y|X)\} \tag{4}\]
where \(Y=\{y_{1},y_{2},y_{3},...,y_{n}\}\in Y\)

For two tasks to be considered similar, both label spaces and conditional distributions should match. A label space is representative of all labels for a given task, while conditional probability is the probability of seeing a specific label against a specific instance of the input. Examples of a label space include anomaly prediction, binary, or multi-class label spaces. A conditional probability will be one specific label value for a given instance in the feature space.
The domain and task for source and target AM context are defined in terms of the knowledge components identified earlier and as shown in Figure 2. An AM domain for TL can be defined based on AM_P, AM_S, AM_MT, and ML_I types where ML_I represents feature space and remaining parameters determine the marginal distribution of features in the space as
\[\mathfrak{D}_{AM}=\{X_{ML,J},P(X_{ML,J})\} \tag{5}\]
Similarly, an AM task for TL can be defined based on AM_A, AM_C and associated ML_O where ML_O defines the label space and the remaining parameters determine the conditional distribution of labels as
\[\mathcal{T}_{AM}=\{Y_{ML,T},P(Y_{ML,T}|X_{ML,J})\} \tag{6}\]
Once the AM source and target contexts are discretized into domain and task, the overall process of knowledge transfer revolves around answering three key questions as below:
_When to transfer_: This is the first step in Figure 2. Domain and task comparison between source and target scenarios is the main step to answering, "when to transfer?" question. The comparison between source and target can lead to four broad outcomes as: Same domain with same task, same domain with different task, different domain with same task, and different domain with different task. This sets the stage for knowledge transfer and helps answer the remaining questions. The first outcome represents a traditional machine learning problems where nothing is different, and source represents training setting while target represents test setting. The second outcome is referred to as inductive transfer learning. Based on the nature of task difference (label space or conditional probability distribution), an appropriate TL method and model can be selected. The third outcome where task is same for source and target is referred to as transductive transfer learning. Finally, the situation where both domain and task differ is referred to as unsupervised transfer learning.
_What to transfer_: Identification of a TL scenario helps select an appropriate TL method and answers the "what to transfer" question, identified as the second step in Figure 2. Bang et al. surveyed TL methods and evaluated their applicability in manufacturing scenarios [12]. Their approach to selecting a TL method combines the TL scenario with the availability of labels in the target domain. A similar approach is worth considering in AM since it is difficult to obtain sufficient labeled data in industrial settings. A recent text on the topic classified TL methods into four main categories, namely instance-based transfer, feature-based transfer, parameter-based transfer, and relation-based transfer [13].
_How to transfer_: Different models are available for each TL method. TL model development involves iterating over different model types to maximize the transfer of mined knowledge. Figure 2 indicates that TL models can be seen to fall into the categories of instance-based, feature-based, parameter-based, and relation-based.
### Post-Transfer
The post-transfer step is simpler than the previous two. It involves validation of the transferred knowledge: in addition to testing on the target data after knowledge transfer, this step also involves updating the existing AM knowledge depending on the results of the transfer process. The post-transfer step can also involve rigorous testing of the limits of knowledge transfer once the initial TL method yields sufficient results. This can help determine the limits on the target knowledge (e.g., dataset size) that can yield acceptable performance with the available source data and model.
## 3 Case Study
The case study is conducted between previously published LPBF and DED processes. Specifically, the LPBF process and the associated model come from the National Institute of Standards and Technology (NIST), and the dataset is openly available [14]. The DED dataset is from Mississippi State University (MSU), without any associated model or knowledge [15]. The rationale is that the source model and knowledge can be re-used, following the presented framework, for the task at hand. The task considered for the target context is anomaly detection based on process data.
The detailed experimental settings used to produce the LPBF data can be found in the description of the dataset. The experiment was carried out on the Additive Manufacturing Metrology Testbed (AMMT) at NIST, an open LPBF system, and resulted in a 5 mm x 9 mm x 5 mm geometry on a wrought nickel alloy 625 (IN625) plate. The process parameters (power and speed) for the pre-contour and infill hatching were (100 W, 900 mm/s) and (195 W, 800 mm/s), respectively. A total of 250 layers, each 20 \(\upmu\)m thick, were printed with a 90\({}^{\circ}\) rotation between layers. An optically
\begin{table}
\begin{tabular}{|p{56.9pt}|p{85.0pt}|p{85.0pt}|p{99.0pt}|} \hline
**Knowledge Component** & **Source Context** & **Target Context** & **Pre-Transfer Observation (Score)** \\ \hline
\multicolumn{4}{|c|}{_AM Knowledge Components_} \\ \hline
_AM Process_ & LPBF: fuses pre-laid layers of powder using an energy source & DED: uses an energy source to fuse materials as they are being deposited & The AM processes differ, changing the marginal distribution (0) \\ \hline
_AM Material_ & Substrate: wrought nickel alloy 625 (IN625) & Substrate: Ti4Al6V; Feedstock: Ti4Al6V powder & The AM materials differ, changing the marginal distribution (0) \\ \hline
_AM System_ & Base system: Additive Manufacturing Metrology Testbed (AMMT) at NIST. Camera: visible light camera with CMOS detector & Base system: OPTOMEC LENS(tm) 750 with a 1 kW Nd:YAG laser (IPG) at MSU. Camera: dual wavelength pyrometer with CMOS detector & The systems and camera specifications differ, but the melt-pool monitoring data type (pixel topology) is the same, giving feature-space similarity for the ML input (0) \\ \hline
_AM Model_ & A spatiotemporal model for process data representation & None & The source AM model can be adapted to the target context to represent and arrange data (1) \\ \hline
_AM Activity_ & Process & Process & Both contexts focus on the same level of AM characteristic (1) \\ \hline
_AM Concern_ & Process anomaly detection & Process anomaly detection & The type of process characteristic in source and target is the same (1) \\ \hline
\multicolumn{4}{|c|}{_ML Knowledge Components_} \\ \hline
_ML Task_ & Classification & Classification & Both contexts address the same ML task (1) \\ \hline
_ML Model_ & Convolutional LSTM autoencoder & None & No spatiotemporal model exists for the target; the source ML model can be adapted once the AM model has been adapted (1) \\ \hline
_ML Input_ & Used: graphic data of process melt pools in each layer of the printed specimen & Available: graphic data of process melt pools in each track of the printed sample & From the ML perspective, the process inputs (melt-pool images) available to detect anomalies are similar (1) \\ \hline
_ML Preprocessing_ & Applied: cropping, noise reduction, scaling, and rotations & Applicable: all graphic transformations & The ML preprocessing used for the source model can be applied to the target data (1) \\ \hline
_ML Output_ & Binary (presence or absence of anomaly) & Binary (presence or absence of anomaly) & Output similarity leads to label-space similarity for both tasks (1) \\ \hline
\end{tabular}
\end{table} TABLE II: Pre-transfer analysis between source and target knowledge components
aligned coaxial camera was used to monitor the melt pool. The camera specifications are detailed in Table 2.
In the case of the DED process, a thin wall sample of Ti4Al6V powder was printed using an OPTOMEC LENS(tm) 750 system equipped with a 1 kW Nd:YAG laser (IPG) at MSU. The set dimensions (L x H) for the thin wall sample were 50.8 mm x 30.48 mm; the actual measured dimensions varied slightly. The Ti4Al6V substrate had dimensions of 153 x 153 x 3.3 mm\({}^{3}\). The processing parameters used for the fabrication were 290 W laser power and a 12.7 mm/s scan rate, with powder fed at 0.32 g/s. A total of 60 tracks were printed with an upward increment of 0.508 mm. The overall process lasted seven minutes and thirty-seven seconds. The melt pool was recorded with a pyrometer camera whose technical specifications are detailed in Table 2.
Based on the framework, the pre-transfer analysis is carried out first. Table 2 comprehensively compares the identified knowledge components of the source, which is LPBF in this case study. The target is DED, due to the lack of existing ML-based knowledge. Based on the similarity index (S=0.73), maturity factor (M=1), and availability factor (A=1), a pre-transfer score of 0.73 is obtained, highlighting significant potential for knowledge transfer. In an ideal scenario, a comprehensive analysis of the existing literature may be conducted to identify all potential sources of knowledge.
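A minimal sketch of how the pre-transfer score can be reproduced from Table 2 follows; the aggregation formula (S as the mean component score, multiplied by M and A) is our assumption based on the reported numbers:

```python
def pre_transfer_score(component_scores, maturity=1.0, availability=1.0):
    """Aggregate binary component similarity scores (Table 2) into a pre-transfer score.

    Assumes S = mean(component_scores) and final score = S * M * A.
    """
    s = sum(component_scores) / len(component_scores)
    return s * maturity * availability

# 8 of the 11 components in Table 2 score 1 -> S = 8/11 = 0.73
print(round(pre_transfer_score([0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]), 2))  # 0.73
```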
The process to identify the TL model is carried out following the steps of Figure 2. Similarities are observed in both the AM domain and the AM task. In the AM domain, the ML input (e.g., images) has the same representation, allowing similar ML models to be developed in both cases. However, the process (LPBF vs DED), material (Inconel vs Ti4Al6V), and system (customized AMMT vs OPTOMEC LENS(tm) 750) differ, leading to differences in the marginal distribution. In the AM task, an overlap is observed for both the label space (e.g., anomalies) and the conditional distribution (process activity and anomaly concern).
The pre-transfer step highlights significant transfer potential from the source (pre-transfer score = 0.73), whereas the domain and task comparisons indicate domain dissimilarity and task similarity (i.e., transductive learning). The applicable method for this scenario in the TL literature is referred to as model- or parameter-based transfer learning. This implies that the source and target scenarios share some common knowledge at the ML model level, and the goal then becomes to exploit those common elements of source knowledge in developing a target data-driven model.
The datasets from the source and target represent melt pool images captured in a time sequence. Figure 3 shows source (A) and target (B) melt pools in a sequence taken randomly from the datasets. The source images represent the original LPBF melt pools in a 120 by 120 pixel window in greyscale. The target images represent processed DED melt pools in a much smaller window than the original size of 752 by 480; the processing consists of conversion from RGB to greyscale and subsequent cropping. This brings both raw datasets into a similar representation, where the ML model can learn solely from the intensity gradients of pixels. Details on the source and target data can be found in [14] and [15], respectively. Both datasets are then pre-processed to match the input representation of the source model as closely as possible.
As a pre-requisite to model- or parameter-based transfer, the datasets from the source and target contexts are represented in the spatiotemporal structure proposed by Ko et al. [4]. Specifically, the build process can be decomposed into inner-layer and layer-wise transitions to jointly represent the resulting build. Let \(x_{i}\) represent the state of a layer in the build; the build can then be represented as the concatenation of a series of \(L\) neighboring layer states as
\[x=\ \prod_{i=1}^{L}x_{i}\,,1\leq L<\infty \tag{7}\]
The inner-layer transitions are those driven by control and time advance. The layer state \(x_{i}\) can thus be decomposed into the \(M\) control steps \(x_{i,j}\) that contribute to the completion of that layer, represented as
\[x_{i}=\ \prod_{j=1}^{M}x_{i,j}\,,1\leq M<\infty \tag{8}\]
Each control step can be further decomposed into the transitions \(x_{i,j,k}\) resulting from time advance. For a given control step \(x_{i,j}\) with a total of \(N\) time advances involved, this is represented as:
\[x_{i,j}=\ \prod_{k=1}^{N}x_{i,j,k}\,,1\leq N<\infty \tag{9}\]
Finally, to include both layer-wise and inner-layer transitions, the build dataset is represented as Equation (10):
\[\prod_{i=1}^{L}\prod_{j=1}^{M}\prod_{k=1}^{N}x_{i,j,k}\,,1\leq L,M,N<\infty \tag{10}\]
The LPBF and DED datasets are structured following the framework described above. Specifically, the spatial representations of individual melt pools are concatenated in temporal order, leading to spatiotemporal concatenations as ML model inputs. We use a time window of 4 (i.e., four melt pools)
Figure 3: Normal melt pool examples from LPBF (A) and DED (B), in time order from left to right
and apply a sliding window to generate these concatenations from both the source and target datasets. In the case of LPBF, only the frames belonging to one layer (the 210\({}^{\text{th}}\)) are used for training, while the frames belonging to another layer (the 150\({}^{\text{th}}\)) are used for testing. Each source layer has a large number (\(\sim\)20,000 for the 210\({}^{\text{th}}\) and \(\sim\)16,000 for the 150\({}^{\text{th}}\)) of melt pool images owing to the high frame capture rate of the camera. We used a subset of 5,000 frames from each layer for training and testing purposes. Since the frame capture rate in DED is much lower than in LPBF, significantly fewer melt pool images are captured. As a result, almost the entire dataset representing the spatiotemporal depositions in the thin-wall sample is used. After removing the frames corresponding to laser-off mode, we are left with approximately 1,578 melt pool frames, representing roughly a 1:3 ratio of DED to LPBF images. Out of these frames, 1,047 normal frames are used for training, representing almost a 1:5 ratio of DED to LPBF training images. The total number of anomalous images available for testing is 148. This scenario represents a practical situation for applying TL to exchange knowledge from a data-rich source to a data-scarce target.
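A minimal sketch of the sliding-window construction described above (array shapes are assumptions; frames are taken to be pre-cropped greyscale images):

```python
import numpy as np

def make_concatenations(frames, window=4):
    """Stack consecutive melt-pool frames into spatiotemporal concatenations.

    frames: array of shape (n_frames, H, W).
    Returns an array of shape (n_frames - window + 1, window, H, W, 1).
    """
    clips = np.stack([frames[i:i + window] for i in range(len(frames) - window + 1)])
    return clips[..., np.newaxis]  # trailing channel axis for the ConvLSTM model
```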
The anomaly detection model requires a criterion for separating the anomalous concatenations from the normal ones to support the training process. The knowledge transfer process is expected to be independent of having the same anomaly criteria between source and target, which allows domain- and application-specific criteria to be selected. In the case of LPBF, anomalies are defined as the presence of noise, plume, and spatter; Figure 4 (A) shows some examples of LPBF melt pool anomalies. In the case of DED, anomalies are defined based on irregular melt pool shapes, such as the ones shown in part B of Figure 4. The exact criteria to process and filter the anomalous concatenations in both cases are refined through trial and error involving visual validation and manual selection.
Once the data is processed and structured, we first reproduce the implementation of the source ML model as per the details in [4]. The ML model being reimplemented, and later considered for TL, is a Convolutional Long Short-Term Memory (LSTM) Autoencoder, normally deployed for video anomaly detection tasks. Table 3 presents the architectural details of the model. The unique aspect of this kind of anomaly detection model is the bottleneck layer that preserves both spatial and temporal dependencies in a lower dimension. This is accomplished by the convolutional LSTM layer, which learns both spatial and temporal features [16]. Equations (11) to (16) define a convolutional LSTM layer.
\[i_{t}=\sigma(W_{xi}*X_{t}+W_{hi}*H_{t-1}+W_{ci}\odot C_{t-1}+b_{i}) \tag{11}\]
\[f_{t}=\sigma(W_{xf}*X_{t}+W_{hf}*H_{t-1}+W_{cf}\odot C_{t-1}+b_{f}) \tag{12}\]
\[g_{t}=\tanh(W_{xg}*X_{t}+W_{hg}*H_{t-1}+b_{g}) \tag{13}\]
\[C_{t}=f_{t}\odot C_{t-1}+i_{t}\odot g_{t} \tag{14}\]
\[o_{t}=\sigma(W_{xo}*X_{t}+W_{ho}*H_{t-1}+W_{co}\odot C_{t}+b_{o}) \tag{15}\]
\[H_{t}=o_{t}\odot\tanh(C_{t}) \tag{16}\]
As per the original article on the Convolutional LSTM (ConvLSTM), the network preserves the convolutional structure in both the input-to-state and state-to-state transitions [16]. The future states are predicted based on the previous states and the input. In the key equations above, \(X\) represents the data, \(W\) the kernels, \(b\) the biases, \(C\) the cell outputs, \(H\) the hidden states, and \(i\), \(f\), and \(o\) the input, forget, and output gates, respectively. For the operations, \(*\), \(\odot\), \(\sigma\), and tanh represent convolution, the Hadamard product, the sigmoid, and the hyperbolic tangent functions.
We reproduce the model with Keras and associated libraries. While the chosen source melt pool images and the subsequent preprocessing steps vary in the reimplementation, the same hyperparameters (e.g., epochs, loss function, optimizer) are used to train the model, since the input distribution and domain remain the same. These differences also lead to a difference in the performance of the reimplemented source model.
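A sketch of the reimplemented architecture in Keras follows, mirroring Table 3; the input frame size (120 by 120, per the source dataset), optimizer, and loss are assumptions, since the exact training hyperparameters are only described qualitatively above:

```python
from tensorflow.keras import layers, models

def build_convlstm_autoencoder(time_steps=4, height=120, width=120):
    inp = layers.Input(shape=(time_steps, height, width, 1))
    # Encoder: time-distributed 2D convolutions (blue rows of Table 3)
    x = layers.TimeDistributed(layers.Conv2D(128, 5, strides=2, padding="same", activation="relu"))(inp)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)
    x = layers.TimeDistributed(layers.Conv2D(64, 5, strides=2, padding="same", activation="relu"))(x)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)
    # Spatiotemporal bottleneck: ConvLSTM layers (green rows)
    x = layers.ConvLSTM2D(64, 3, padding="same", activation="relu", return_sequences=True)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ConvLSTM2D(32, 3, padding="same", activation="relu", return_sequences=True)(x)
    x = layers.ConvLSTM2D(64, 3, padding="same", activation="relu", return_sequences=True)(x)
    x = layers.BatchNormalization()(x)
    # Decoder: transposed convolutions back to the frame size (orange rows)
    x = layers.TimeDistributed(layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu"))(x)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)
    x = layers.TimeDistributed(layers.Conv2DTranspose(128, 5, strides=2, padding="same", activation="relu"))(x)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)
    out = layers.TimeDistributed(layers.Conv2DTranspose(1, 2, strides=1, padding="same", activation="sigmoid"))(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```

The autoencoder is trained to reconstruct normal concatenations only, so anomalous inputs yield large reconstruction errors.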
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Layers** & **Channel (in/out)** & **Kernel** & **Stride** & **Activation** \\ \hline
Conv\_2D & 1/128 & 5 by 5 & 2 & ReLU \\ \hline
BN & 128/128 & & & \\ \hline
Conv\_2D & 128/64 & 5 by 5 & 2 & ReLU \\ \hline
BN & 64/64 & & & \\ \hline
ConvLSTM & 64/64 & 3 by 3 & 1 & ReLU \\ \hline
BN & 64/64 & & & \\ \hline
ConvLSTM & 64/32 & 3 by 3 & 1 & ReLU \\ \hline
ConvLSTM & 32/64 & 3 by 3 & 1 & ReLU \\ \hline
BN & 64/64 & & & \\ \hline
Conv\_2D\_T & 64/64 & 5 by 5 & 2 & ReLU \\ \hline
BN & 64/64 & & & \\ \hline
Conv\_2D\_T & 64/128 & 5 by 5 & 2 & ReLU \\ \hline
BN & 128/128 & & & \\ \hline
Conv\_2D\_T & 128/1 & 2 by 2 & 1 & Sigmoid \\ \hline
\end{tabular}
\end{table} TABLE 3: Source model architecture. All learnable layers are time distributed and perform frame-wise 2D operations. The encoder (blue), decoder (orange), and spatiotemporal bottleneck (green) are highlighted. “BN” represents batch normalization and “T” indicates a transpose operation.
Figure 4: Anomalous melt pool examples from LPBF (A) and DED (B), picked randomly
TL is conducted after the source model carrying the LPBF melt pool anomaly detection knowledge is obtained. The transferability analysis justifies knowledge transfer from source to target, given the high similarities in task and data. It also highlights the key geometric and temporal differences that guide TL. Geometrically, the sizes of the melt pools from the two datasets differ, dictated by different process scales and measurement devices; temporally, different frame rates and solidification speeds yield different time-series patterns. Thus, both the CNN and ConvLSTM layers must be retrained to adapt to the target task. Such a deep and complex source model risks 'catastrophic forgetting' if all layers are re-trained simultaneously [17]: overwhelmed by the re-training process, the model might fail to adapt to the target task and even lose the source knowledge. Therefore, three re-training strategies are applied to investigate and avoid negative transfer. As a result, three levels of source knowledge (data representation, data structuring/processing, and model parameters) eventually get transferred to the target's context. Figure 5 highlights the different levels of transferred source knowledge.
## 4 Results and Discussion
To test performance on the video anomaly detection task, we use a regularity score to detect concatenations with irregular or anomalous frames. A threshold is then chosen to separate the normal concatenations, which have higher regularity scores, from those with lower scores. The accuracy of the autoencoder is defined as the percentage of detected anomalies among all anomalies in the test data. The regularity score first computes pixel-wise intensity differences between the original and reconstructed frames at each timestamp of a given concatenation, using the L2 norm. The reconstruction error for a given frame is computed by summing all pixel-wise differences. The frame-wise errors are then summed across a concatenation to obtain its reconstruction cost. The reconstruction costs of all concatenations in the training dataset are normalized between 0 and 1, defining the abnormality score. Finally, the regularity scores are obtained by subtracting the abnormality scores from 1. The steps to calculate regularity scores are shown in Equations (17) to (21).
\[d(x,y)=\left\|O(x,y)-f_{AE}\big{(}O(x,y)\big{)}\right\|_{2} \tag{17}\]
\[d_{t}=\sum\nolimits_{(x,y)}d(x,y) \tag{18}\]
\[r_{t}=\sum\nolimits_{\tau=t}^{t+3}d_{\tau} \tag{19}\]
\[sa_{t}=(r_{t}-r_{min})/r_{max} \tag{20}\]
\[sr_{t}=1-sa_{t} \tag{21}\]
where \(O(x,y)\) is the pixel intensity of the original frame, \(f_{AE}\) denotes the autoencoder reconstruction, and the \(d(x,y)\) matrix represents pixel-wise differences within an image. \(d_{t}\) represents the sum of errors over the entire image, and \(r_{t}\) the sum of errors over the entire concatenation. \(sa_{t}\) is the normalized abnormality score and \(sr_{t}\) is the regularity score.
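A minimal sketch of the score computation (Eqs. 17 to 21), assuming the originals and reconstructions are stored as 4D arrays:

```python
import numpy as np

def regularity_scores(originals, reconstructed):
    """Regularity scores for a batch of concatenations, shape (n_concat, T, H, W)."""
    d = np.abs(originals - reconstructed)    # Eq. 17: pixel-wise differences
    d_t = d.sum(axis=(2, 3))                 # Eq. 18: per-frame reconstruction error
    r_t = d_t.sum(axis=1)                    # Eq. 19: per-concatenation reconstruction cost
    sa_t = (r_t - r_t.min()) / r_t.max()     # Eq. 20: normalized abnormality score
    return 1.0 - sa_t                        # Eq. 21: regularity score
```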
The reimplemented model obtained an accuracy of 90% on the NIST source dataset when tested on unseen data from a different layer. To have a fair comparison with the source performance, we used the reconstruction error instead of the final regularity scores and chose the maximum value on the training set as the threshold (the source work uses the 99\({}^{\text{th}}\) percentile of anomaly scores on the training set). The difference from the reported performance (98%) can be explained by several factors, such as preprocessing, anomaly metrics, and dataset selection. First, we only normalized the raw dataset, as opposed to the augmentations (e.g., rotation and scaling) performed in the source context to improve performance and avoid overfitting. Secondly, the choice of dataset varies in two aspects, as the exact images and the anomaly criteria chosen for re-implementation differ. The labeling of the source dataset was done manually to filter different types of abnormalities, which requires significant effort. While preparing the data to train with normal concatenations, we only filtered images with spatter, plume, or obvious noise, leaving the irregular melt pool shapes in the set. This reimplemented model was considered good enough to test transfer learning scenarios from source to target on the anomaly detection task.
The source architecture is first trained on the target DED data from scratch to see how well it performs on the test data before transfer learning is carried out. A total of 1,047 concatenations are used for the training set, while the 148 available anomalous concatenations constitute the target test set. Since the regularity score differs from the reconstruction error, the minimum regularity score on the training set can be selected (as opposed to the maximum reconstruction error); this ensures that the threshold accounts for all regular images. We choose the 3\({}^{\text{rd}}\) minimum
Figure 6: Regularity scores for 200 DED concatenations with detected anomalies
regularity score on the training set to avoid any noisy outliers. Without TL, the trained and optimized source architecture achieves 84% accuracy on the target test set when trained from scratch. Figure 6 shows the regularity scores for a subset of the target data, with three anomalous images and their low regularity scores.
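A one-line sketch of this thresholding step (`train_scores` and `test_scores` are hypothetical arrays holding the regularity scores computed above):

```python
import numpy as np

threshold = np.sort(train_scores)[2]   # 3rd-lowest training regularity score, skipping noisy outliers
detected = test_scores < threshold     # concatenations flagged as anomalous
```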
A total of three TL strategies are used to fine-tune the reimplemented source model (trained on LPBF data) on the target DED data. In the first strategy, all layers are re-trained simultaneously for 200 epochs. The second strategy first freezes the CNN layers while re-training the ConvLSTM layers for 100 epochs, and thereafter freezes the ConvLSTM layers while re-training the CNN layers for the next 100 epochs. The third strategy reverses the sequence of the second by first re-training the CNN layers and then the ConvLSTM layers. Figure 7 shows a significant reduction in the loss value during the training process, which is not achievable when the model is trained solely on the target data. The three TL strategies appear to converge to the same loss value.
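A sketch of the staged freezing used by the second and third strategies is shown below; it assumes the Keras model sketched earlier, and `x_target` is a hypothetical array of target training concatenations:

```python
from tensorflow.keras import layers

def set_trainable(model, train_cnn, train_convlstm):
    """Freeze/unfreeze the CNN and ConvLSTM layer groups."""
    for layer in model.layers:
        inner = layer.layer if isinstance(layer, layers.TimeDistributed) else layer
        if isinstance(inner, (layers.Conv2D, layers.Conv2DTranspose)):
            layer.trainable = train_cnn
        elif isinstance(inner, layers.ConvLSTM2D):
            layer.trainable = train_convlstm

# Strategy 2: freeze CNN, retrain ConvLSTM for 100 epochs, then swap groups
set_trainable(model, train_cnn=False, train_convlstm=True)
model.compile(optimizer="adam", loss="mse")   # recompile after changing trainability
model.fit(x_target, x_target, epochs=100)
set_trainable(model, train_cnn=True, train_convlstm=False)
model.compile(optimizer="adam", loss="mse")
model.fit(x_target, x_target, epochs=100)
```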
potential and applicable sources of knowledge. We link the nature of similarity to the type of transfer possible and validate post-transfer to update the existing AM knowledge. A case study is conducted between a knowledge-rich source and a knowledge-scarce target AM context. We successfully adapt the source data representation, ML model architecture, and ML model parameters to the target context. In the future, the pre-transfer process can be automated to allow efficient and comprehensive comparison with the existing literature, and the identified knowledge components can be made more robust to enable better transferability analysis.
## Acknowledgements
McGill Engineering Doctoral Award (MEDA) fellowship for Mutahar Safdar is acknowledged with gratitude. Mutahar Safdar also received financial support from National Research Council of Canada (Grant# NRC INT-015-1). McGill Graduate Excellence Award (Grant# 00157), Mitacs Accelerate Program (Grant# IT13369), and MEDA fellowship for Jiarui Xie are acknowledged with gratitude. The authors are grateful to Digital Research Alliance of Canada (RRG# 4294) for providing computational resources to support this research. We acknowledge the open availability of data from NIST and Mississippi State University that made this research possible.
2309.16095 | First Principles Investigation of Polymorphism in Halide Perovskites | Halide perovskites have been extensively studied as materials of interest for optoelectronic applications. There is a major emphasis on ways to tailor the stability, defect behavior, electronic band structure, and optical absorption in halide perovskites, by changing the composition or structure. In this work, we present our contribution to this field in the form of a comprehensive computational investigation of properties as a function of the perovskite phase, different degrees of lattice strains and octahedral distortion and rotation, and the ordering of cations in perovskite alloys. We performed first principles-based density functional theory computations using multiple semi-local and non-local hybrid functionals to calculate optimized lattice parameters, energies of decomposition, electronic band gaps, and theoretical photovoltaic efficiencies. Trends and critical observations from the high-throughput dataset are discussed, especially in terms of the range of optoelectronic properties achievable while keeping the material in a (meta)stable phase or distorted, strained, or differently ordered polymorph. All data is made openly available to the community and is currently being utilized to train state-of-the-art machine learning models for accelerated prediction and discovery, as well as to guide rational experimental discovery. | Jiaqi Yang, Arun Mannodi-Kanakkithodi | 2023-09-28T01:38:23Z | http://arxiv.org/abs/2309.16095v1 |

# First Principles Investigation of Polymorphism in Halide Perovskites
###### Abstract
Halide perovskites have been extensively studied as materials of interest for optoelectronic applications. There is a major emphasis on ways to tailor the stability, defect behavior, electronic band structure, and optical absorption in halide perovskites, by changing the composition or structure. In this work, we present our contribution to this field in the form of a comprehensive computational investigation of properties as a function of the perovskite phase, different degrees of lattice strains and octahedral distortion and rotation, and the ordering of cations in perovskite alloys. We performed first principles-based density functional theory computations using multiple semi-local and non-local hybrid functionals to calculate optimized lattice parameters, energies of decomposition, electronic band gaps, and theoretical photovoltaic efficiencies. Trends and critical observations from the high-throughput dataset are discussed, especially in terms of the range of optoelectronic properties achievable while keeping the material in a (meta)stable phase or distorted, strained, or differently ordered polymorph. All data is made openly available to the community and is currently being utilized to train state-of-the-art machine learning models for accelerated prediction and discovery, as well as to guide rational experimental discovery.
## Introduction
Halide perovskites (HaPs) are very attractive materials for a variety of electronic and optical applications, primarily owing to their massive chemical space and engineerability in terms of composition, structure, alloying, and doping [1, 2, 3, 4, 5]. The canonical ABX\({}_{3}\) perovskite family of materials are of great interest as absorbers in solar cells [6, 7], with record efficiencies achieved to date of 25.7% in single-junction solar cells [8] and very recently, 32.5% in Si-perovskite tandem solar cell [9, 10]. While these efficiencies firmly place HaPs in the fastest growing market for solar absorption, they are also well below theoretical maximum values [11, 12, 13]. The sheer size of the possible HaP chemical space, including 3D and layered materials, purely inorganic and hybrid organic-inorganic perovskites (HOIPs), and complex alloys, makes it difficult to screen promising materials in a brute-force manner, but also provides massive opportunities for discovery and understanding via high-throughput computation.
The published literature contains several glittering examples of data-driven and experimental efforts to optimize perovskite compositions for optoelectronic performance [14, 15, 16, 17, 18]. However, comprehensive understanding of the effect of polymorphism on the properties of HaPs is still an active area of research. The cubic phase is the standard prototype structure any perovskite is simulated in, and there are established numerical metrics such as the Goldschmidt tolerance and octahedral factors that determine the stability of any ABX\({}_{3}\) compound in the cubic phase [19]. As shown in our recent work, such factors are important but not sufficient conditions for perovskite stability, as a thermodynamic evaluation based on first principles reveals many materials that may decompose to alternative phases despite suitable ionic radii [20]. Perovskites could further adopt a series of other prototype phases such as tetragonal, orthorhombic, or hexagonal, as well as other corner-, edge-, or face-shared phases such as distorted orthorhombic and needle-like [21, 22, 23, 5]. Polymorphism may also manifest within the same phase, in terms of energetically favorable (or metastable) distortions or rotations in corner-shared BX\({}_{6}\) octahedra or via uni-axial or multi-axial lattice strains [24, 25, 26, 27], as well as
in terms of re-optimization of the compound in larger supercells with symmetry-breaking via small distortions [28]. The result is typically the existence of multiple competing phases that may all contribute as an ensemble to experimentally measured band gaps and optical absorption, instead of a sole ground state structure determining the properties.
Doping and alloying are two of the most common ways to engineer the properties of perovskites; the former typically involves a heterovalent ion of a suitable size substituting an A or B cation [29, 30, 31], whereas the latter involves multiple homovalent cations or anions mixed together at A/B or X sites [32, 33, 34, 35], respectively. The number of ways to dope HaPs or create mixed compositions are practically infinite, introducing several degrees of freedom in the HaP structure-composition-properties space [5]. In multiple recent studies, we comprehensively explored both B-site dopants [36, 37] and A/B/X-site alloying [38, 20] in ABX\({}_{3}\) compounds using density functional theory (DFT) computations, and examined how the stability and optoelectronic properties depend on the nature of ionic mixing. We used the special quasi-random structures (SQS) approach [39] to simulate alloys in large supercells, as is the norm in the literature, but for any given mixed composition, the properties may depend heavily on ionic ordering as well, creating more polymorphs that must be considered.
Additionally, the difficulties of brute-force experimentation within a massive structure-composition space means that high-throughput DFT (HT-DFT) computations are essential for systematically assessing trends and correlations that may guide multi-objective optimization and rational experimental synthesis and testing. DFT is extensively applied for determining lattice parameters, heat of formation or decomposition (\(\Delta\)H), band gap (E\({}_{g}\)), optical absorption-derived spectroscopic limited maximum efficiency (SLME) [40, 41], and defect formation energy (DFE) in perovskites, with mixed accuracy compared to experiments [5, 42]. While semi-local GGA-PBE and variants such as PBEsol (improved PBE for solids) [43] and PBE-D3 (for weak dispersion interactions) [44] reproduce bulk stability and structure well, they
generally under-predict E\({}_{g}\) compared to non-local hybrid HSE06 functional or beyond-DFT GW approximation [42, 45]. For many Pb/Sn HOIPs, PBE E\({}_{g}\) without including spin-orbit coupling (SOC) is often as accurate as HSE E\({}_{g}\) with SOC included [5, 20, 38], an effect that holds true for DFEs and corresponding defect charge transition levels (CTLs) [37, 46]. HSE+SOC is more expensive but generally more accurate for electronic properties. Oftentimes, the Hartree Fock to semi-local exchange parameter \(\alpha\) needs to be tuned in HSE [47], which is highly sensitive to material composition and not easy to perform over massive chemical spaces. Thus, the DFT functional itself is an added factor that determines perovskite properties: while general property-polymorph relationship trends may be reliable from semi-local functionals, more advanced theories would be necessary for quantitative estimates that could be compared with experiments.
Based on the above ideas, and building upon our past work, we present here a systematic investigation of the following types of polymorphism in a selected chemical space of ABX
Figure 1: (a) 4 prototype ABX\({}_{3}\) HaP phases, namely cubic, tetragonal, orthorhombic, and hexagonal. (b) 4\(\times\)4\(\times\)4 cubic supercells showing a MA(Pb-Sn-Ba-Sr-Ca)I\({}_{3}\) quinary alloy with different ionic ordering. (c) Octahedral distortion in the MAPbBr\({}_{3}\) cubic lattice.
\({}_{3}\) HaPs, employing different types of PBE and HSE06 functionals within DFT:
1. The effect of perovskite phase, as it changes from cubic to tetragonal to orthorhombic to hexagonal. The four phases are pictured in **Fig. 1(a)**.
2. The effect of ionic ordering in B-site mixed compounds simulated in large supercells; two example structures for a quinary alloys are pictured in **Fig. 1(b)**.
3. The effect of lattice strain and octahedral distortion/rotation within the cubic perovskite lattice, as shown in **Fig. 1(c)**.
This work provides an understanding of how the above factors may positively or adversely affect the HaP stability and properties of interest for optoelectronic applications, especially single-junction solar absorption. Within a chemical space defined by A = FA, MA, or Cs (where FA and MA are organic molecules, formamidinium and methylammonium, respectively), B = Pb, Sn, Ge, Ba, Sr, and Ca, and X = I, Br, and Cl, we define easily-attainable and generalizable descriptors that encode the composition, phase, elemental properties, and ionic ordering, leading to important design rules such as the favorability of Ba-Ba clustering in increasing the bulk stability and the extent to which strain and distortions may keep the lattice stable while tuning the band gap. We also obtain important insights on the accuracy of different functionals and the inter-relationships between them. We believe the datasets and understanding obtained from this work will be crucial for guiding subsequent studies, both computational and experimental, as well as training machine learning (ML) models for accelerated prediction and screening over hundreds of thousands of possible structures and compositions. In the following sections, we describe computational details and present a series of plots and discussions unraveling the effects of different factors on the properties of interest. All data is made openly available for the benefit of the community.
## 3 Computational Methodology
### DFT Details
All DFT computations were performed using VASP version 6.2 [48, 49, 50], employing Projector Augmented Wave (PAW) pseudopotentials [51, 52]. The Perdew, Burke, and Ernzerhof (PBE) functional within the generalized gradient approximation (GGA) [53] as well as the Heyd-Scuseria-Ernzerhof (HSE) functional (\(\alpha\)=0.25 and \(\omega\)=0.2) are used for the exchange-correlation energy. The energy cutoff for the plane-wave basis is set to 500 eV. A Monkhorst-Pack k-point mesh of 6\(\times\)6\(\times\)6 is used for cubic unit cells and a mesh of 4\(\times\)4\(\times\)3 is used for prototypical tetragonal, orthorhombic, and hexagonal unit cells. For cubic supercell calculations, the k-point meshes are reduced to 3\(\times\)3\(\times\)3 and gamma-point only for 2\(\times\)2\(\times\)2 and 4\(\times\)4\(\times\)4 supercells, respectively. The k-point meshes are accordingly scaled down for the tetragonal, orthorhombic, and hexagonal supercells as well. Starting from the PBE-optimized structures, full geometry optimization is additionally performed using PBEsol, PBE-D3, and PBEsol-D3, by adding the relevant input tags. The force convergence threshold is set to 0.05 eV/Å for all geometry optimization runs. Spin-orbit coupling (SOC) is incorporated in HSE06 calculations using the LSORBIT tag and the non-collinear magnetic version of VASP 6.2 [54].
The PBE-optimized structure is used as input for calculating the optical absorption spectrum using the LOPTICS tag and setting the number of energy bands to 1000, and the approach developed by Yu et al. [55] is then applied to determine the spectroscopic limited maximum efficiency (SLME) as a function of sample thickness. The SLME value at 5 \(\mu\)m thickness is taken as the theoretical photovoltaic (PV) efficiency. The band gap is computed from the PBE optimization runs and from static HSE calculations based on the PBE-optimized structure, where k-point meshes of 2\(\times\)2\(\times\)2 and 2\(\times\)2\(\times\)1 are respectively used for cubic and tetragonal/orthorhombic/hexagonal supercells. SLME values at the PBEsol, PBE-D3, PBEsol-D3, and HSE levels are determined by shifting the PBE-computed optical
spectra by the difference between the PBE band gap and that computed from the corresponding functional, and recalculating the SLME based on the approach used in past work [20]. We consider a series of pure and mixed-composition ABX\({}_{3}\) compounds in this work; all alloys are simulated using SQS except for the 4\(\times\)4\(\times\)4 supercell structures, where we explicitly examine the effect of ionic ordering by considering 20 to 25 randomly ordered quaternary or quinary B-site mixed compounds.
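A minimal sketch of the rigid spectrum shift used here (the grid and array names are illustrative; the shifted spectrum is then fed into the usual SLME integration):

```python
import numpy as np

def shift_spectrum(energies, alpha_pbe, eg_pbe, eg_target):
    """Rigidly shift a PBE absorption spectrum by the band-gap difference.

    energies: photon energy grid (eV); alpha_pbe: absorption coefficient on that grid.
    Returns the shifted spectrum on the same grid, with zero absorption below the new onset.
    """
    shift = eg_target - eg_pbe
    return np.interp(energies, energies + shift, alpha_pbe, left=0.0)
```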
Ultimately, multiple types of PBE (PBE-optimized, PBEsol-optimized, PBE-D3-optimized, and PBEsol-D3-optimized) and HSE (static HSE+SOC on PBE-optimized structure) functionals are applied for subsets of all compounds being studied, and the effect of each is examined for one or all of the following properties: the effective lattice parameter (a\({}_{eff}\)), decomposition energy (\(\Delta\)H), band gap (E\({}_{g}\)), and SLME. While the a, b, c lattice constants are computed for each cubic and non-cubic structure from every level of theory, for efficient comparison, we define the "effective lattice parameter" for any given pure or mixed composition material as a\({}_{eff}\) = (V\({}_{sc}\)/pfu)\({}^{1/3}\), where V\({}_{sc}\) is the supercell volume and pfu is the number of perovskite formula units in the supercell. A negative \(\Delta\)H implies an inherent resistance of any ABX\({}_{3}\) compound to decompose to AX and BX\({}_{2}\) phases. E\({}_{g}\) should typically be between 1 eV and 2 eV for suitable single-junction solar absorption, whereas the SLME should be as high as possible. Equations for calculating SLME can be found in the original publications [55] and in our past work [20]. \(\Delta\)H is calculated using equation (1), where E\({}_{opt}\)(S) is the total DFT energy per formula unit of any system S, k\({}_{B}\) is the Boltzmann constant, T is the temperature fixed to be 300K here, and x\({}_{i}\) is the mixing fraction of any species at A/B/X sites.
\[\Delta H=E_{opt}(ABX_{3})-\sum_{i}x_{i}E_{opt}(AX)-\sum_{i}x_{i}E_{opt}(BX_{2} )+k_{B}T(\sum_{i}x_{i}ln(x_{i})) \tag{1}\]
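As a sketch, Eq. (1) specialized to B-site mixing (pure A and X sites, so a single AX term) can be evaluated as follows; energies per formula unit are assumed to come from the geometry optimization runs described above:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def decomposition_energy(e_abx3, e_ax, e_bx2, x_b, T=300.0):
    """Delta-H per formula unit (Eq. 1) for a B-site mixed ABX3 compound.

    e_abx3: energy of the mixed perovskite; e_ax: energy of the AX phase;
    e_bx2: dict of BX2 energies keyed by B species; x_b: dict of B-site fractions.
    """
    entropy = K_B * T * sum(x * math.log(x) for x in x_b.values() if x > 0)
    return e_abx3 - e_ax - sum(x * e_bx2[s] for s, x in x_b.items()) + entropy
```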
### Simulating Polymorphs using Perovskite Supercells
We first study 27 pure ABX\({}_{3}\) compounds, defined by A = FA, MA, or Cs, B = Pb, Sn, or Ge, and X = I, Br, or Cl, in 4 different phases, namely cubic, tetragonal, orthorhombic, and hexagonal, as pictured in **Fig. 1(a)**. All 27*4 = 108 structures are optimized using PBE, PBEsol, PBE-D3, and PBEsol-D3. We then perform HSE+SOC computations on all 108 structures using the PBE-optimized structures as input. Next, we consider 19 random alloyed compositions each in two (most likely) phases each and perform PBE as well as HSE+SOC computations on them, which leads to a total dataset of 146 points. We thus have the ability to compare the a\({}_{eff}\), \(\Delta\)H, E\({}_{g}\), and SLME calculated from multiple functionals for 146 compounds with any known experimental values, as well as to visualize entire datasets of different properties plotted against each other.
After studying the effect of DFT functional and perovskite phase, we turn our attention to ionic ordering in mixed compounds: this is accomplished by simulating cubic MA-(Pb/Sn/Ba/Sr/Ca)-I\({}_{3}\) 4\(\times\)4\(\times\)4 supercells in quaternary (4 species mixed at B) and quinary (5 species mixed at B) compositions, with 20 possible structures each considered for the 5 quaternaries and 25 structures for the quinary. This leads to a dataset of 125 compounds, and the PBE-computed \(\Delta\)H and E\({}_{g}\) are visualized in terms of the clustering of different B-site cations. Finally, we consider 6 compounds in the cubic phase, namely CsPbI\({}_{3}\), CsPbBr\({}_{3}\), CsPbCl\({}_{3}\), MAPbI\({}_{3}\), MAPbBr\({}_{3}\), and MAPbCl\({}_{3}\), and induce a series of lattice strains and octahedral distortions/rotations starting from the PBE-optimized ground state structures. First, we apply systematic compression and elongation of the lattice by changing the lattice constants and running volume-fixed geometry optimization; the changes in \(\Delta\)H and E\({}_{g}\) for the newly optimized structures are visualized against the amount of strain. Further, we distort the corner-shared octahedra by changing the positions of the bridging X atoms, perform geometry optimization on the new structures containing different amounts of distortion and rotation, and finally visualize the computed \(\Delta\)H and E\({}_{g}\).
### Compiled Datasets
**Table 1** presents a description of all the DFT datasets generated as part of this work, as well as the experimental data collected for comparison. Ultimately, we compute lattice constants, stability, band gap, and PV efficiency for 146 (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) compounds (both pure and mixed composition), from PBE, PBEsol, PBE-D3, PBEsol-D3, and HSE-PBE+SOC (henceforth referred to as HSE). 125 quinaries and quaternaries are simulated in 4\(\times\)4\(\times\)4 cubic supercells of MA-(Pb/Sn/Ba/Sr/Ca)-I\({}_{3}\) which yields lattice constants, stability, and band gaps from PBE. Lattice strain and octahedral distortion/rotation applied on (MA-Cs)(Pb)(I-Br-Cl)\({}_{3}\) results in datasets of 65 and 677 points respectively, with lattice constants, stability, and band gaps computed from PBE. Experimental data is collected from across several publications [56, 57, 58, 59, 60], leading to lattice constants of 32 compounds, band gaps of 31 compounds, and power conversion efficiency (PCE) values of 19 compounds, which are compared against corresponding values from the multi-phase HaP datasets of 146 compounds.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Dataset** & **Chemical Space** & **Functional** & **Data Points** & **Properties** \\ \hline
Multi-phase HaPs & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & PBE & 146 & a, b, c, \(\Delta\)H, E\({}_{g}\), SLME \\
Multi-phase HaPs & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & PBEsol & 146 & a, b, c, \(\Delta\)H, E\({}_{g}\), SLME \\
Multi-phase HaPs & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & PBE-D3 & 146 & a, b, c, \(\Delta\)H, E\({}_{g}\), SLME \\
Multi-phase HaPs & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & PBEsol-D3 & 146 & a, b, c, \(\Delta\)H, E\({}_{g}\), SLME \\ \hline
Quaternaries \& Quinaries & MA(Pb-Sn-Ba-Sr-Ca)I\({}_{3}\) & PBE & 125 & a, b, c, \(\Delta\)H, E\({}_{g}\) \\ \hline
Lattice strain & (MA-Cs)(Pb)(I-Br-Cl)\({}_{3}\) & PBE & 65 & a, b, c, \(\Delta\)H, E\({}_{g}\) \\
Octahedral distortion & (MA-Cs)(Pb)(I-Br-Cl)\({}_{3}\) & PBE & 677 & a, b, c, \(\Delta\)H, E\({}_{g}\) \\ \hline
Experimental Lattice Constants & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & - & 32 & a, b, c \\
Experimental Band Gaps & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & - & 31 & E\({}_{g}\) \\
Experimental PV Efficiencies & (MA-FA-Cs)(Pb-Sn-Ge)(I-Br-Cl)\({}_{3}\) & - & 19 & PCE \\ \hline
\end{tabular}
\end{table}
Table 1: Description of all the datasets used in this work, in terms of the chemical space, DFT functionals used, number of data points, and computed properties. a, b, and c refer to the optimized or experimental lattice constants. Experimental data is collected from across several publications [56, 57, 58, 59, 60]. PCE stands for power conversion efficiency.
## Results and Discussion
### Tracking Computed Properties Against Phase and Functional
**Fig. 2** shows \(\Delta\)H vs E\({}_{g}\) and E\({}_{g}\) vs SLME plots for the multi-phase HaP dataset of 146 points, from PBE and HSE-PBE+SOC (henceforth referred to as HSE). Corresponding plots for PBEsol, PBE-D3, and PBEsol-D3 are presented in **Fig. S1**. The general property distributions are very similar from all 5 functionals. We note here that while the PBE-based functionals are clearly not intended to estimate band gaps and PV efficiencies, our purpose
Figure 2: Visualization of the DFT dataset of multi-phase HaPs: (a) PBE \(\Delta\)H vs E\({}_{g}\), (b) PBE E\({}_{g}\) vs SLME (PV efficiency), (c) HSE-PBE+SOC \(\Delta\)H vs E\({}_{g}\), and (d) HSE-PBE+SOC E\({}_{g}\) vs SLME. Different shapes of the scatter points represent hybrid organic-inorganic HaPs and purely inorganic HaPs, and the different colors represent different perovskite phases.
is to establish a baseline of computation accuracy from these cheaper semi-local functionals. Across the plots in **Fig. 2** and **Fig. S1**, the scatter points are distinguished in terms of the perovskite phase and whether a compound is a hybrid organic-inorganic perovskite (MA- or FA-based) or a purely inorganic perovskite (Cs-based). As seen from **Fig. 2(a)** and **(c)**, a majority of the compounds have \(\Delta\)H < 0 eV p.f.u. from both PBE and HSE, indicating a strong resistance to decomposition for nearly all chemistries and phases, except for a few inorganic compounds where \(\Delta\)H \(\sim\) 1.5 eV p.f.u. The unstable compounds are primarily Cs-based chlorides whereas the most stable compounds (with very negative \(\Delta\)H) are usually FA-based iodides. There is no clear phase-specific stability preference, with all phases seemingly distributed across the range of \(\Delta\)H values from \(\sim\) -1.5 eV p.f.u. to 1.5 eV p.f.u.
E\({}_{g}\) ranges from a low of \(\sim\) 0.5 eV (0.2 eV) from PBE (HSE) for cubic CsSnI\({}_{3}\) to a high of \(\sim\) 3.5 eV (4 eV) from PBE (HSE) for hexagonal MAGeCl\({}_{3}\). SLME (labeled as PV efficiency in % in the plots) values range from 0 for the highest band gap compounds to a high of 16% from both PBE and HSE for compounds with E\({}_{g}\) in the vicinity of 1 eV. The SLME vs E\({}_{g}\) plot shows the characteristic shape that has been explored in past works.[20, 61, 62] Interestingly, many of the highest SLME values are shown by cubic, tetragonal, or orthorhombic phases of Cs-based compounds, which also show some of the lowest E\({}_{g}\) values--with the caveat that some of these compounds also have very large \(\Delta\)H and are thus unstable. The outer boundary of the SLME vs E\({}_{g}\) plots is dominated by hexagonal phase hybrid perovskites because of their tendency to show larger E\({}_{g}\). Very similar ranges of values for all three properties are observed in the PBEsol, PBE-D3, and PBEsol-D3 datasets as well, as shown in **Fig. S1**, with PBEsol-D3 showing a tendency for broadening both the \(\Delta\)H and E\({}_{g}\) ranges. **Fig. S2** shows the PBE-computed \(\Delta\)H, E\({}_{g}\), and PV efficiency plotted against the corresponding values from the remaining four functionals, namely PBEsol, PBE-D3, PBEsol-D3, and HSE. We find that the \(\Delta\)H values track very linearly except for some compounds which are predicted to be more stable from HSE than from PBE. E\({}_{g}\) and PV efficiency also track linearly
except when D3 corrections are used; many of the inorganic compounds show strange behavior, seemingly arising from the fact that vdW corrections are unnecessary for Cs-based compounds although they should be incorporated for FA- and MA-based compounds.
### Benchmarking Computed Optoelectronic Properties
To further examine the phase-dependent properties of different hybrid and inorganic HaPs, we plotted the PBE and HSE properties of 16 MA-based compounds in **Fig. 3**, using individual plots with compound labels on the x-axis. Corresponding PBE and HSE data for 12
Figure 3: Multi-phase property visualization for 16 MA-based hybrid HaPs: (a) PBE \(\Delta\)H, (b) PBE E\({}_{g}\), (c) PBE SLME, (d) HSE \(\Delta\)H, (e) HSE E\({}_{g}\), and (f) HSE SLME.
FA-based compounds and 17 Cs-based compounds are presented in **Figs. S3** and **S4**. Plots for the PBEsol, PBE-D3, and PBEsol-D3 data are presented across **Figs. S5**, S6, and **S7**. Known experimental values of E\({}_{g}\) and PCE are also shown in all such plots. We find that the tetragonal phase is largely the most stable one for MA compounds with the hexagonal phase also showing low energies in many cases, from both PBE and HSE. Hexagonal phase compounds show the largest E\({}_{g}\) and the lowest SLME, whereas the orthorhombic phase displays the opposite behavior. There is also an impressive match between the PBE-computed E\({}_{g}\) of tetragonal phase compounds and the corresponding measured E\({}_{g}\) values, which comes from the aforementioned cancelation of errors and accidental accuracy of GGA-PBE when SOC is not included [20]. The HSE E\({}_{g}\) for the same compounds end up being under-estimated because of SOC reducing the band gap to a larger extent than desired, and likely because of a need for tuning the mixing parameter \(\alpha\) in the HSE calculation. The measured PCE values on average do not match well with PBE or HSE computed SLME values, although some qualitative trends are captured.
Similarly, **Fig. S3** shows that the hexagonal or cubic phases are the most preferred for FA-based compounds; there is a smaller range of E\({}_{g}\) values across the four phases, both PBE and HSE match very well with experiments, and the PV efficiency values from both functionals also match reasonably well with experiments. The best match with experiments for E\({}_{g}\) and SLME of FA compounds comes from the hexagonal phase. **Fig. S4** shows that the orthorhombic or cubic phases are the most preferred for Cs compounds from both PBE and HSE, and there is little match between computed E\({}_{g}\) and SLME and corresponding measured values across the 17 compounds. PBE-computed E\({}_{g}\) and SLME show pretty good accuracy for CsPbX\({}_{3}\) compounds (where X is some combination of I, Br, and Cl) but falter for most of the Sn-containing compounds. Finally, the DFT-computed E\({}_{g}\) from different functionals are plotted against experimental values for 31 known compounds in **Fig. S8(a)**. Surprisingly, PBE shows the lowest root mean square error (RMSE, DFT vs experiment) of 0.57 eV, with
Figure 4: Accuracy of different DFT functionals compared to experiments: PBE and HSE-PBE+SOC E\({}_{g}\) for (a) 31 MA-, FA-, and Cs-based compounds, (b) 17 hybrid HaPs (MA- and FA-based), and (c) 14 inorganic Cs-based HaPs, and (d) pseudo-cubic lattice constants of 32 MA-, FA-, and Cs-based compounds from PBE, PBEsol, PBE-D3, and PBEsol-D3.
the remaining functionals, including HSE, showing RMSE values between 0.81 eV and 1.06 eV. These errors are highly dependent on perovskite type and can certainly be improved using advanced functionals, including tuning the HSE mixing parameter.
To improve the DFT-experiment correspondence, we applied some corrections to the PBE and HSE E\({}_{g}\) values and observed a marked reduction in RMSE. **Fig. 4(a)** shows PBE E\({}_{g}\) shifted up by 0.4 eV and HSE E\({}_{g}\) shifted up by 0.8 eV plotted against measured E\({}_{g}\) values for 31 compounds, showing RMSE values of 0.47 eV and 0.35 eV respectively for PBE and HSE. This data is plotted for 17 hybrid HaPs in **Fig. 4(b)** and for 14 inorganic HaPs in **Fig. 4(c)**. As observed earlier, PBE E\({}_{g}\) without any correction shows a very low RMSE of 0.23 eV for hybrid HaPs, while HSE E\({}_{g}\) shifted up by 0.5 eV also shows an RMSE of 0.23 eV. PBE E\({}_{g}\) shifted up by 0.7 eV improves the inorganic RMSE to 0.44 eV and the HSE E\({}_{g}\) shifted up by 1 eV for inorganic compounds shows an RMSE of 0.30 eV. Thus, based on the observations so far, we posit here that (a) PBE E\({}_{g}\) can be used with reasonable accuracy for screening across MA- and FA-based HaPs, (b) HSE+SOC E\({}_{g}\) (using a default mixing parameter of \(\alpha\)=0.25) shifted up by 1 eV can serve as an accurate estimate for Cs-based HaPs, and (c) it would very likely be possible to accurately learn the experiment-level E\({}_{g}\) of all important HaP compositions by combining PBE, HSE, and experimental data and performing multi-fidelity learning,[63, 64] as we plan to do in future work. **Table 2** lists the RMSE values for E\({}_{g}\) from all DFT functionals compared against experiments, with and without corrections, for different datasets.
### Benchmarking Computed Lattice Constants
Finally, we examine the accuracy and inter-dependence of lattice constant values from the four PBE-based functionals. **Fig. S9** shows the PBE-computed effective lattice parameter (a\({}_{eff}\)) plotted against the corresponding values from PBEsol, PBE-D3, and PBEsol-D3, for
the entire multi-phase HaP dataset of 146 compounds. In general, a\({}_{eff}\) values decrease from PBE to PBE-D3 to PBEsol to PBEsol-D3, showing how the lattice becomes more closely packed upon the inclusion of vdW interactions and the use of PBEsol. There is a clear linear correlation between a\({}_{eff}\) values from different functionals, and equations listed in the plots in **Fig. S9** could be trivially applied to calculate lattice parameters for any functional using the PBE values, with 99% accuracy. It is noted here that during geometry optimization using any functional, the prototype cubic/tetragonal/orthorhombic/hexagonal structure is used as the starting point, with necessary SQS-based ionic mixing, but the cell size and shape are allowed to change a small amount for lowering the energy, meaning a lot of the compounds have slight deviations from ideal prototype structures. Lattice constants will further change if full geometry optimization were to be performed with HSE06, but as shown in our past work [20], HSE06-relaxation is often unnecessary for HaPs.
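A sketch of how such a linear map can be fit (the array names are hypothetical; the resulting fit mirrors the ~99% accurate equations reported in Fig. S9):

```python
import numpy as np

# a_pbe, a_pbesol: arrays of effective lattice parameters from the two functionals
slope, intercept = np.polyfit(a_pbe, a_pbesol, 1)
a_pbesol_pred = slope * np.asarray(a_pbe) + intercept
r2 = np.corrcoef(a_pbe, a_pbesol)[0, 1] ** 2   # ~0.99 per Fig. S9
```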
**Fig. 4(d)** shows the computed lattice constants for 32 selected compounds plotted against their corresponding experimentally measured values. For better visibility, the longer a/b/c lattice parameters from tetra/ortho/hex phases are moved to a separate plot in **Fig.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Functional** & **Dataset** & **Property** & **RMSE** \\ \hline PBE & 32 HaPs & Lattice Constant (Å) & 0.16 \\ PBEsol & 32 HaPs & Lattice Constant (Å) & 0.06 \\ PBE-D3 & 32 HaPs & Lattice Constant (Å) & 0.05 \\ PBEsol-D3 & 32 HaPs & Lattice Constant (Å) & 0.14 \\ \hline PBE & 45 HaPs & Band Gap (eV) & 0.57 \\ PBEsol & 45 HaPs & Band Gap (eV) & 0.86 \\ PBE-D3 & 45 HaPs & Band Gap (eV) & 0.81 \\ PBEsol-D3 & 45 HaPs & Band Gap (eV) & 1.06 \\ HSE-PBE+SOC & 45 HaPs & Band Gap (eV) & 0.83 \\ \hline PBE-corrected & 45 HaPs & Band Gap (eV) & 0.47 \\ HSE-corrected & 45 HaPs & Band Gap (eV) & 0.35 \\ \hline PBE & 28 Hybrid HaPs & Band Gap (eV) & 0.23 \\ HSE-corrected & 28 Hybrid HaPs & Band Gap (eV) & 0.23 \\ \hline PBE-corrected & 17 Inorganic HaPs & Band Gap (eV) & 0.44 \\ HSE-corrected & 17 Inorganic HaPs & Band Gap (eV) & 0.30 \\ \hline \end{tabular}
\end{table}
Table 2: Root mean square errors between DFT-computed lattice constants and band gaps and corresponding experimental values collected from the literature.
**S8(b)**. We find that all 4 functionals match reasonably well with experiments, but PBEsol and PBE-D3 show the lowest RMSE values of 0.06 A and 0.05 A respectively whereas PBE and PBEsol-D3 show higher RMSE values of 0.16 A and 0.14 A respectively. It tracks that the PBEsol corrections are desired for inorganic compounds and the D3 corrections are suitable for hybrid compounds, but both together may be unnecessary and lead to under-predicted lattice constants. PBE alone clearly over-predicts the lattice constants which motivates the inclusion of the necessary functional modifications. **Table 2** further lists the lattice constant RMSE values for all PBE-related functionals compared against experiments.
Figure 5: Pearson correlation coefficients between DFT-computed properties (\(\Delta\)H, E\({}_{g}\), and SLME) and 49-dimensional input descriptors, for (a) PBE, and (b) HSE-PBE+SOC.
### Correlation Between Properties and Material Descriptors
Next, the DFT dataset is mined to obtain some qualitative insights into the physical and chemical factors that contribute to the HaP properties of interest, namely \(\Delta\)H, E\({}_{g}\), and SLME, computed at all levels of theory. For this purpose, we utilize a strategy applied by us in multiple previous studies, of converting every ABX\({}_{3}\) compound into unique composition-based vectorial representations [20, 38]. The "descriptor" for any compound is a 49-dimensional vector, where the first 9 dimensions encode the fraction of any species (Cs, MA, FA, Ge, Sn, Pb, Cl, Br, I) in the compound using a value between 0 and 1, the next 4 dimensions provide a 1 or 0 score based on the phase of the compound, and the next 36 dimensions represent weight-averaged elemental properties of species at the A, B, and X sites, using well-known properties such as ionic radii, electronegativity, and electron affinity. **Fig. 5(a)** and **(b)** show heatmaps capturing the Pearson coefficients of linear correlation [65] between all 49 descriptor dimensions and the three properties from PBE and HSE, respectively; corresponding plots for PBEsol, PBE-D3, and PBEsol-D3 are pictured in **Fig. S10**.
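A sketch of the descriptor construction follows; the split of the 36 elemental-property dimensions into 12 properties per site is an assumption for illustration:

```python
import numpy as np

SPECIES = ["Cs", "MA", "FA", "Ge", "Sn", "Pb", "Cl", "Br", "I"]
PHASES = ["cubic", "tetragonal", "orthorhombic", "hexagonal"]

def descriptor(fractions, phase, elem_props):
    """49-dim vector: 9 site fractions + 4 one-hot phase + 36 weighted elemental properties.

    fractions: dict mapping species to its mixing fraction (0 to 1).
    elem_props: dict mapping species to a 12-dim vector of elemental properties
    (e.g., ionic radius, electronegativity, electron affinity).
    """
    comp = np.array([fractions.get(s, 0.0) for s in SPECIES])
    onehot = np.array([float(phase == p) for p in PHASES])
    site_vecs = []
    for site in (SPECIES[:3], SPECIES[3:6], SPECIES[6:]):   # A, B, X sites
        w = sum(fractions.get(s, 0.0) * np.asarray(elem_props[s]) for s in site)
        site_vecs.append(np.asarray(w, dtype=float))
    return np.concatenate([comp, onehot, *site_vecs])
```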
It is seen that from both PBE and HSE, the Cs-content, A-site boiling point, and A-site electron affinity have a strong positive correlation with \(\Delta\)H, while the FA-content, A-site ionic radius, and A-site atomic number (approximated for MA and FA using an artificial continuation of Group I, as described in past work [38]) are strongly negatively correlated with \(\Delta\)H. This is consistent with the earlier observation that many Cs compounds have large \(\Delta\)H and are thus unstable, while the most stable compounds with low \(\Delta\)H are FA-based, with MA lying somewhere in the middle. Further, the X-site properties such as ionic radius, melting point, electron affinity, and electronegativity have the strongest correlation with both E\({}_{g}\) and SLME. Increasing the size, the boiling/melting point, and the heat of fusion/vaporization of the X-site species helps decrease the gap and increase the PV efficiency, while increasing the electron affinity, ionization energy, and electronegativity has the opposite effect. This is consistent with the iodides or iodide-bromide compounds having the most desirable optoelectronic properties and the chlorides lying on the other end of the spectrum. The Cl-content (I-content) also shows strong negative (positive) correlation with the SLME, as shown in **Fig. 5(a)** and **(b)**. Although many B-site and A-site properties also show notable correlation with E\({}_{g}\) and SLME, the contributions are dominated by X-site species. Most of the qualitative correlations remain the same from the PBEsol, PBE-D3, and PBEsol-D3 datasets as well, with the inclusion of D3 corrections leading to some interesting changes in the stability trends.
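The correlation analysis itself is straightforward to reproduce; the sketch below uses placeholder random data in place of the actual descriptor matrix and property values.

```python
# Sketch of the Fig. 5-style analysis: Pearson coefficients between each
# descriptor dimension and one DFT-computed property (placeholder data).
import numpy as np
from scipy.stats import pearsonr

X = np.random.rand(146, 49)   # descriptor matrix: 146 compounds x 49 dims
y = np.random.rand(146)       # property values, e.g. PBE E_g

corr = np.array([pearsonr(X[:, j], y)[0] for j in range(X.shape[1])])
top = np.argsort(-np.abs(corr))[:5]
print("most correlated descriptor dimensions:", top, corr[top])
```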
### Effect of Ionic Ordering on HaP Alloy Properties
Every alloy composition investigated so far was simulated using the SQS approach in a 2\(\times\)2\(\times\)2 or 2\(\times\)2\(\times\)1 cubic or non-cubic supercell. While this is a fine representation of what the alloy would look like on average, larger supercell sizes provide the opportunity to explore different types of ordered and disordered arrangements.
Figure 6: Effect of ionic ordering and clustering on alloy properties: PBE-computed \(\Delta\)H plotted against E\({}_{g}\) for 20 structures each of five quaternary compositions and 25 structures of a quinary composition.
For this purpose, we performed additional PBE computations on a series of "high-entropy" HaP alloys belonging to the chemical space MA(Pb-Sn-Ba-Sr-Ca)I\({}_{3}\), as shown in **Table 1** and discussed in the Methodology section. Equimolar quinary (5 ions mixed in 20% fractions each at the B-site) or quaternary (4 ions mixed in 25% fractions each at the B-site) compositions would exhibit the largest mixing entropy contributions to the decomposition energy (k\({}_{B}\)T(\(\sum_{i}\)x\({}_{i}\)ln(x\({}_{i}\))) in **Eqn. 1**), and are thus referred to as high-entropy perovskite alloys here. These compositions are chosen because of the general interest in MA-based iodides and in partially or completely replacing Pb in such compounds via alloying at the B-site. We consider 4\(\times\)4\(\times\)4 cubic supercells, starting from the optimized MAPbI\({}_{3}\) geometry, and perform random mixing of ions to obtain 20 structures each for the five possible quaternaries and 25 structures for the quinary.
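As a quick worked example of this entropy term (our illustration, assuming room temperature, \(T=300\) K, so \(k_{B}T\approx 25.9\) meV), the equimolar quinary gives

\[k_{B}T\sum_{i}x_{i}\ln(x_{i})=5\,k_{B}T\Big(\frac{1}{5}\ln\frac{1}{5}\Big)=-k_{B}T\ln 5\approx-41.7\text{ meV p.f.u.},\]

while an equimolar quaternary gives \(-k_{B}T\ln 4\approx-35.9\) meV p.f.u., confirming that the quinary enjoys the largest entropic stabilization of the decomposition energy.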
**Fig. 6** shows the PBE E\({}_{g}\) plotted against the PBE \(\Delta\)H for all 125 systems, with different colors and shapes representing different compositions. It can be seen that while the Sn-free composition MAPb\({}_{0.25}\)Ba\({}_{0.25}\)Sr\({}_{0.25}\)Ca\({}_{0.25}\)I\({}_{3}\) shows the largest E\({}_{g}\) between 2.6 eV and 2.8 eV, all other compositions show E\({}_{g}\) spread between \(\sim\) 1.75 eV and \(\sim\) 2.5 eV.
Figure 7: Pearson correlation coefficients between PBE-computed properties (\(\Delta\)H and E\({}_{g}\)) and nearest neighbor pairs of B-site cations across 125 quinary and quaternary alloy structures.
The E\({}_{g}\) values in MAPb\({}_{0.25}\)Sn\({}_{0.25}\)Sr\({}_{0.25}\)Ca\({}_{0.25}\)I\({}_{3}\), for instance, range from 1.75 eV to 2.2 eV, with \(\Delta\)H between -30 meV p.f.u. and -18 meV p.f.u., meaning that the band gap could be changed by nearly 0.5 eV while keeping the material robustly stable against decomposition, simply by altering the ionic ordering in the system. It stands to reason that the properties displayed by the polymorphs of this material would emerge from some kind of an ensemble average over all these configurations. MAPb\({}_{0.25}\)Sn\({}_{0.25}\)Sr\({}_{0.25}\)Ba\({}_{0.25}\)I\({}_{3}\) also shows negative \(\Delta\)H across its 20 structures as E\({}_{g}\) ranges from \(\sim\) 1.8 eV to \(\sim\) 2.1 eV, implying that Pb-Sn-Sr combinations mixed with either Ba or Ca are good for achieving stability and lower gaps.
The MAPb\({}_{0.2}\)Sn\({}_{0.2}\)Ba\({}_{0.2}\)Sr\({}_{0.2}\)Ca\({}_{0.2}\)I\({}_{3}\) quinary also shows a wide range of E\({}_{g}\) between \(\sim\) 1.8 eV and \(\sim\) 2.3 eV, but \(\Delta\)H values become slightly positive. The same is true for the MAPb\({}_{0.25}\)Ba\({}_{0.25}\)Sn\({}_{0.25}\)Ca\({}_{0.25}\)I\({}_{3}\) and MASn\({}_{0.25}\)Ba\({}_{0.25}\)Sr\({}_{0.25}\)Ca\({}_{0.25}\)I\({}_{3}\) quaternaries, implying that Sn-Ba-Ca combinations are less desirable when it comes to the HaP stability. To further understand the effect of specific types of ionic clustering on the computed properties, we generated descriptors for all 125 structures based on the number of Pb-Pb, Pb-Sn, Sn-Sn, etc. pairs that occur in them; any B1-B2 pair is defined based on the existence of B1-I-B2 combinations with B1 and B2 connected via a bridging I anion; example structures are pictured in **Fig. 1(b)**. The matrix in **Table 3** covers all possible B1-B2 pairs, resulting in 15 possible combinations. **Fig. 7** shows a heatmap of Pearson correlation coefficients between the 15 types of B-cation pairs and the PBE computed \(\Delta\)H and E\({}_{g}\), across the dataset of 125 quinaries and quaternaries.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Species** & **Pb** & **Sn** & **Ba** & **Sr** & **Ca** \\ \hline
**Pb** & Pb-Pb & Pb-Sn & Pb-Ba & Pb-Sr & Pb-Ca \\ \hline
**Sn** & & Sn-Sn & Sn-Ba & Sn-Sr & Sn-Ca \\ \hline
**Ba** & & & Ba-Ba & Ba-Sr & Ba-Ca \\ \hline
**Sr** & & & & Sr-Sr & Sr-Ca \\ \hline
**Ca** & & & & & Ca-Ca \\ \hline \end{tabular}
\end{table}
Table 3: A matrix of possible B1-B2 pairs in all MA(Pb-Sn-Ba-Sr-Ca)I\({}_{3}\) alloys simulated in 4\(\times\)4\(\times\)4 cubic supercells.
Interestingly, we find that Pb-Pb, Pb-Sr, and Sr-Sr pairs are least desirable for improving the perovskite stability, whereas Ba-Ba, Sn-Ba, and Ba-Sr pairs are helpful for making \(\Delta\)H more negative. Furthermore, Sn-Sn and Pb-Sn pairs are most responsible for reducing E\({}_{g}\), which is consistent with most Sn-based compounds showing lower gaps than Sn-free compounds, and pairs such as Ba-Ba, Ca-Ca, Ba-Sr, and Sr-Ca help increase the E\({}_{g}\). Overall, this analysis reveals that (a) by changing the ionic ordering in a given HaP composition, E\({}_{g}\) could be drastically reduced or increased while keeping the material stable, and (b) certain B-site cations clustering together may be helpful or harmful to the stability and desired E\({}_{g}\).
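The pair-counting descriptor can be sketched as follows; the 3.6 Å bridging cutoff and the flat (non-periodic) site list are illustrative assumptions, and a periodic supercell would additionally require minimum-image distances.

```python
# Sketch: count B1-I-B2 pairs, where two B-site cations form a pair if both
# lie within a cutoff of a common bridging I anion (cutoff is an assumption).
import numpy as np
from itertools import combinations, combinations_with_replacement

B_CATIONS = ["Pb", "Sn", "Ba", "Sr", "Ca"]

def count_b_pairs(sites, cutoff=3.6):
    """sites: list of (element, xyz) tuples; returns counts for the 15 pair types."""
    b_sites = [(e, np.asarray(p)) for e, p in sites if e in B_CATIONS]
    i_sites = [np.asarray(p) for e, p in sites if e == "I"]
    counts = {tuple(sorted(k)): 0
              for k in combinations_with_replacement(B_CATIONS, 2)}
    for (e1, p1), (e2, p2) in combinations(b_sites, 2):
        bridged = any(np.linalg.norm(p1 - pi) < cutoff and
                      np.linalg.norm(p2 - pi) < cutoff for pi in i_sites)
        if bridged:
            counts[tuple(sorted((e1, e2)))] += 1
    return counts
```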
Figure 8: Effect of lattice strain and octahedral distortion/rotation for 6 selected compounds: (a) PBE \(\Delta\)H and (b) E\({}_{g}\) plotted against average lattice strain, and (c) PBE \(\Delta\)H and (d) E\({}_{g}\) plotted against average octahedral distortion. CsPbCl\({}_{3}\) is not pictured in some plots due to very large \(\Delta\)H values.
### Effect of Lattice Strain and Octahedral Distortion and Rotation on HaP Properties
Finally, we investigate how intentional distortions applied upon the perovskite lattice and/or on the corner-shared BX\({}_{6}\) octahedra can change the stability and band gap. For this analysis, 6 compounds given by the chemical space (MA-Cs)(Pb)(I-Br-Cl)\({}_{3}\) are considered, as listed in **Table 1** and described in the Methodology section. **Fig. 8(a)** and **(b)** show the PBE \(\Delta\)H and PBE E\({}_{g}\) respectively plotted against the average lattice strain in Å (over x, y, and z dimensions), for 5 compounds. CsPbCl\({}_{3}\) structures are missing in these plots because of their much higher \(\Delta\)H. It can be seen from **Fig. 8(a)** that \(\Delta\)H shows the expected trend, becoming more positive for both negative (compression) and positive (elongation) lattice strain, with the lowest energy shown by the unstrained structures. Interestingly, E\({}_{g}\) generally shows a monotonically increasing behavior all the way from large negative to large positive lattice strain. Once again, it should be noted that compounds such as CsPbBr\({}_{3}\) and MAPbBr\({}_{3}\) can be kept metastable with small amounts of strain, as shown by the negative \(\Delta\)H values, while changing the E\({}_{g}\) by nearly 0.3 eV.
**Fig. 8(c)** and **(d)** show the PBE \(\Delta\)H and PBE E\({}_{g}\) respectively plotted against the average octahedral distortion in Å. It can be seen that there are many configurations of CsPbBr\({}_{3}\), MAPbBr\({}_{3}\), and MAPbCl\({}_{3}\) with small amounts of octahedral distortion that maintain \(\Delta\)H below 0 eV p.f.u. and change E\({}_{g}\) by as much as 0.5 eV in some cases. Distorted CsPbCl\({}_{3}\) has a high E\({}_{g}\)\(>\) 2.5 eV which does not change by a great amount as the average distortion increases from \(\sim\) 0.15 Å to 0.3 Å, and also shows large positive \(\Delta\)H. Comparing the plots in **Fig. 8(c)** and **(d)** to the plots in **Fig. 8(a)** and **(b)**, it is clear that the correlation between octahedral distortion and the properties is less straightforward than that between lattice strain and the properties. Ideally, one would express the octahedral distortion in terms of how every atom
in a BX\({}_{6}\) octahedral unit is displaced relative to all other BX\({}_{6}\) units it is connected to, and correlations would be sought between an "octahedral distortion vector" and the properties of interest. With the current representation, a range of property values is often observed for the same average distortion, which arises from different types of distortions resulting in lowering or raising \(\Delta\)H or E\({}_{g}\). To wrap up this discussion, we plotted the entire dataset of lattice/octahedra strained/distorted structures for all compounds in **Fig. 9**, in terms of the PBE E\({}_{g}\) against the PBE \(\Delta\)H. By drawing a cut-off at a low enough \(\Delta\)H value, dozens of possible structures could be obtained for the same composition, showing a range of E\({}_{g}\) values that may be suitable for different applications.
Figure 9: PBE-computed E\({}_{g}\) plotted against \(\Delta\)H for the dataset of strained and octahedrally manipulated HaPs.
## Perspective and Future Work
Our systematic first principles investigation reveals that the lattice parameters, energetic stability, and optoelectronic properties of ABX\({}_{3}\) halide perovskites depend heavily on the identity and mixing of atoms at the A, B, or X sites, the perovskite phase, the level of theory being used, the type of ionic ordering in an alloy, and on possible lattice strains and octahedral distortion or rotation. Ideally, a framework for predicting the properties of HaPs, and designing novel HaP compositions/structures with multiple targeted properties, must take each of these factors into account, potentially in addition to important experimental conditions and past experimental measurements. Generating large DFT datasets as in the present work is invaluable for extracting meaningful correlations and inter-dependencies between different properties and different levels of theory, as well as for establishing a reliable benchmark for computations against experiments. An important question going forward concerns the usefulness of the different PBE and HSE functionals when applying them to other related HaP compositions and structures. Other than applying simple linear corrections as pictured in **Fig. 4**, the best way to improve DFT-level predictions is to continue applying higher levels of theory and tuning the necessary parameters on a compound-by-compound basis, which is clearly not conducive to a high-throughput treatment.
As an example, we performed new HSE+SOC computations for 5 compounds by tuning the mixing parameter \(\alpha\) all the way from 0.20 to 0.50; these results are shown in **Fig. 10**, along with known experimental values. We find that \(\alpha\)=0.50 reproduces the experimental E\({}_{g}\) perfectly for cubic FAPbI\({}_{3}\), and \(\alpha\)=0.48 works best for orthorhombic CsPbI\({}_{3}\) and CsPbBr\({}_{3}\). The same values of \(\alpha\) might work for cubic CsPbI\({}_{3}\) and CsPbBr\({}_{3}\) as well. Such an analysis may be enormously useful for these particular compounds, as HSE+SOC computations could now be performed using ideal \(\alpha\) values for optical absorption or defect calculations. Of course, it must be noted that the input structure used for these static HSE+SOC computations is itself a huge factor; for **Fig. 10**, we utilized the FAPbI\({}_{3}\) and CsPb(I/Br)\({}_{3}\) structures that
matched best with experiments, from across the different PBE-based functionals discussed earlier. Once again, it should be noted that HSE+SOC with default \(\alpha\) provides a means for uniform evaluation of properties across a large chemical space, whereas tuning would need to be performed independently for every compound. Machine learning (ML) models could potentially be trained in the future that combine data from different PBE and HSE functionals to yield the ideal \(\alpha\) values for any given HaP composition/structure.
Currently, all the DFT data presented in this work is being utilized for training a myriad of ML predictive models, using both the composition-based descriptors [38] explained earlier in the article and used extensively in the past, and state-of-the-art crystal graph-based neural network (GNN) models. While the former are elegant and simple models, they cannot typically be applied for multiple polymorphs of the same composition, whereas GNN models can appropriately represent entire crystal structures as graphs and use them as input to train predictive NN models that yield multiple properties as output [66, 67, 68].
Figure 10: Band gaps computed from HSE-PBE+SOC for 5 compounds, plotted as a function of the HSE06 mixing parameter (\(\alpha\)). The dotted horizontal lines represent known experimental values.
A crystal graph representation automatically takes into account the identity, mixing, and bond-lengths between atoms, any lattice or octahedral distortion, ionic ordering, perovskite phase, etc. Furthermore, in addition to material \(\rightarrow\) property forward predictive models, inverse design models could also be trained using techniques such as genetic algorithms [69], Bayesian optimization [70], generative neural networks [71], and variational autoencoders [72], to design novel HaP atom-composition-structure combinations that show the desired mix of negative \(\Delta\)H, PV-suitable E\({}_{g}\), and the highest possible PV efficiency.
## Conclusion
In conclusion, we performed a series of first principles-based DFT computations to investigate polymorphism in halide perovskites, in terms of changing the perovskite composition and phase, ionic ordering in alloys, and strain/distortions applied to the lattice and interconnected octahedral units. Our work shows that the stability, band gap, and PV efficiency can vary considerably for the same composition when the phase or ionic ordering is changed, or when distortions are applied. Different semi-local and non-local DFT functionals are explored, revealing that PBEsol or PBE-D3 may be necessary for estimating correct lattice parameters, whereas PBE and/or HSE+SOC computed band gaps can match very well with experiments if some corrections are applied. We further find that different types of ordering or distortion could be used to keep materials stable while drastically altering their band gaps. Our linear correlation analyses further show the positive or negative effect of different cations/anions, their elemental properties, and their clustering, on the bulk properties of interest. All computational data is made available to the community and is currently being utilized for training multiple machine learning models and for guiding collaborative experimental synthesis and characterization.
## Conflicts of Interest
There are no conflicts to declare.
## Data Availability
All tabulated data and scripts used to analyze the DFT computed properties can be accessed from [https://github.com/mannodiarun/perovs_mfml_ga/tree/polymorph_data](https://github.com/mannodiarun/perovs_mfml_ga/tree/polymorph_data).
Crystal structure files for all datasets discussed in this article are attached with the Supplementary Information.
## Acknowledgements

This work was performed at Purdue University, under startup account F.10023800.05.002 from the Materials Engineering department. This research used resources of the National Energy Research Scientific Computing Center (NERSC), the Laboratory Computing Resource Center (LCRC) at Argonne National Laboratory, and the Rosen Center for Advanced Computing (RCAC) clusters at Purdue.
|
2302.14296 | Discrete-time Optimal Covariance Steering via Semidefinite Programming | This paper addresses the optimal covariance steering problem for stochastic
discrete-time linear systems subject to probabilistic state and control
constraints. A method is presented for efficiently attaining the exact solution
of the problem based on a lossless convex relaxation of the original non-linear
program using semidefinite programming. Both the constrained and the
unconstrained versions of the problem with either equality or inequality
terminal covariance boundary conditions are addressed. We first prove that the
proposed relaxation is lossless for all of the above cases. A numerical example
is then provided to illustrate the method. Finally, a comparative study is
performed in systems of various sizes and steering horizons to illustrate the
advantages of the proposed method in terms of computational resources compared
to the state of the art. | George Rapakoulias, Panagiotis Tsiotras | 2023-02-28T04:04:36Z | http://arxiv.org/abs/2302.14296v3 | # Discrete-time Optimal Covariance Steering via
###### Abstract
This paper addresses the optimal covariance steering problem for stochastic discrete-time linear systems subject to probabilistic state and control constraints. A method is presented for efficiently attaining the exact solution of the problem based on a lossless convex relaxation of the original non-linear program using semidefinite programming. Both the constrained and the unconstrained versions of the problem with either equality or inequality terminal covariance boundary conditions are addressed. We first prove that the proposed relaxation is lossless for all of the above cases. A numerical example is then provided to illustrate the method. Finally, a comparative study is performed in systems of various sizes and steering horizons to illustrate the advantages of the proposed method in terms of computational resources compared to the state of the art.
## I Introduction
The Covariance Control (CC) problem for linear systems was initially posed by A. Hotz and R. Skelton in [1]. It was studied in an infinite horizon setting for both continuous and discrete-time systems and the authors provided a parametrization for all linear state feedback controllers that achieve a specified system covariance. Later, the authors in [2] provided analytical solutions for a minimum effort controller that achieves a specified steady-state system covariance in the same setting.
Its finite horizon counterpart, the Covariance Steering (CS) problem, gained attention only recently. Although similar ideas can be traced back in the Stochastic Model Predictive Control literature [3, 4], in the sense that these methods also try to address constraints in the system covariance, they achieve this objective by using conservative approximations or by solving computationally demanding non-linear programs. Covariance Steering theory, on the other hand, offers a more direct approach, often providing tractable algorithms for the solution in real time.
The first formal treatment of the CS problem was made by the authors of [5, 6] for continuous-time systems, by studying the minimum-effort finite horizon covariance steering problem in continuous time. Later, in [7, 8] the author provided a numerical approach for solving the discrete version of the problem with a relaxed terminal covariance boundary condition using semidefinite programming. Later, [9] introduced a constrained version of the original problem where the state and control vectors are required to stay within specified bounds in a probabilistic sense and finally its connections to Stochastic Model Predictive control were cemented in [10].
Ever since then, the newly developed covariance steering theory has been applied to a variety of problems ranging from path planning for linear systems under uncertainty [11], control of linear systems with multiplicative noise [12], distributed robot control [13], as well as for control of non-linear [14, 15, 16] and non-Gaussian [17, 18] systems. In our previous work [19], we presented a new method of solving the optimal covariance steering problem in discrete time based on an exact convex relaxation of the original non-linear programming formulation of the problem. This method did not use the lifted form of the dynamic equations as most previous works on the subject, but rather involved the system covariance matrices of each time step as optimization variables while adding the covariance dynamics as a non-linear constraint. An exact convex relaxation was proposed to efficiently solve this problem using linear semidefinite programming (SDP). At the same time, but independently, the authors of [20] used the same relaxation for solving the optimal covariance steering problem with an inequality terminal boundary condition for a system with multiplicative noise.
The contributions of this paper are two-fold. First, we extend our previous results and prove that the proposed lossless convex relaxation presented in [19] also holds under state and control chance constraints, as well as for the case of inequality terminal boundary covariance constraint. The motivation for this extension is straightforward; many practical applications of the covariance steering theory require probabilistic constraints to characterize the feasible part of state space or limit the control effort applied to the system. Furthermore, the inequality terminal covariance boundary condition might better reflect the desire to limit the uncertainty of the state, rather than driving it to an exact value. In this paper, we establish that the proposed method can handle all variants of the optimal covariance steering problem for linear systems encountered in the literature. Finally, we show that it outperforms other approaches for solving the CS problem, such as [8] and [10], by over an order of magnitude in terms of run-times while also having much better scaling characteristics with respect to the steering horizon and model size.
## II Problem Statement
Let a stochastic, discrete, time-varying system be described by the state space model
\[x_{k+1}=A_{k}x_{k}+B_{k}u_{k}+D_{k}w_{k}, \tag{1}\]
where \(k=0,1,\ldots,N-1\) denotes the time step, \(A_{k}\in\mathbb{R}^{n\times n}\) is the system matrix, \(B_{k}\in\mathbb{R}^{n\times p}\) is the input matrix and \(D_{k}\in\mathbb{R}^{n\times q}\) is the disturbance matrix. The system's state, input, and stochastic disturbance are denoted by \(x_{k},\ u_{k}\) and \(w_{k}\), respectively. The first two statistical moments of the state vector are denoted by \(\mu_{k}=\mathbb{E}[x_{k}]\in\mathbb{R}^{n}\) and \(\Sigma_{k}=\mathbb{E}[(x_{k}-\mu_{k})(x_{k}-\mu_{k})^{\intercal}]\in\mathbb{R}^{n\times n}\). We assume that the process noise \(w_{k}\) has zero mean and unit covariance. The discrete-time finite horizon optimal covariance steering problem can be expressed as the following optimization problem:
\[\min_{x_{k},u_{k}}J=\mathbb{E}\Big{[}\sum_{k=0}^{N-1}x_{k}^{\intercal}Q_{k}x_{k}+u_{k}^{\intercal}R_{k}u_{k}\Big{]}, \tag{2a}\] such that, for all \(k=0,1,\ldots,N-1\), \[x_{k+1}=A_{k}x_{k}+B_{k}u_{k}+D_{k}w_{k}, \tag{2b}\] \[x_{0}\sim\mathcal{N}(\mu_{i},\Sigma_{i}), \tag{2c}\] \[x_{N}\sim\mathcal{N}(\mu_{f},\Sigma_{f}), \tag{2d}\] \[\mathbb{P}(x_{k}\in\mathcal{X})\geq 1-\epsilon_{1}, \tag{2e}\] \[\mathbb{P}(u_{k}\in\mathcal{U})\geq 1-\epsilon_{2}. \tag{2f}\]
For the rest of this paper, we will assume that \(R_{k}\succ 0,\ Q_{k}\succeq 0\) and that \(A_{k}\) is invertible for all \(k=0,1,\ldots,N-1\). The last condition is met in most practical problems where the system dynamics are derived through the discretization of a continuous-time state-space model.
The decision variables for problem (2) are stochastic random variables, rendering it hard to solve using numerical optimization methods. As shown in [19], in the absence of (2e), (2f) this problem is solved optimally with a linear state feedback law of the form
\[u_{k}=K_{k}(x_{k}-\mu_{k})+v_{k}, \tag{3}\]
where \(K_{k}\in\mathbb{R}^{p\times n}\) is a feedback gain that controls the covariance dynamics and \(v_{k}\in\mathbb{R}^{p}\) a feedforward term controlling the system mean. The cost function can be written, alternatively, in terms of the first and second moments of the state as follows
\[\mathbb{E}\Big{[}\sum_{k=0}^{N-1}x_{k}^{\intercal}Qx_{k}+u_{k}^{\intercal}Ru_{k}\Big{]}=\sum_{k=0}^{N-1}\text{tr}(Q\Sigma_{k})+\text{tr}(RK_{k}\Sigma_{k}K_{k}^{\intercal})+\mu_{k}^{\intercal}Q\mu_{k}+v_{k}^{\intercal}R_{k}v_{k}.\]
If the initial distribution of the state is Gaussian and a linear feedback law as in (3) is used, the state distribution remains Gaussian. This allows us to write the constraints (2c), (2d) as
\[\mu_{0}=\mu_{i},\quad\Sigma_{0}=\Sigma_{i},\quad\mu_{N}=\mu_{f},\quad\Sigma_{ N}=\Sigma_{f}.\]
In contrast to previous works such as [8] and [11], we choose to keep the intermediate states in the steering horizon as decision variables, handling them in terms of their first and second moments. To this end, we replace (2b) with the mean and covariance propagation equations
\[\mu_{k+1}=A_{k}\mu_{k}+B_{k}v_{k}, \tag{4a}\] \[\Sigma_{k+1}=(A_{k}+B_{k}K_{k})\Sigma_{k}(A_{k}+B_{k}K_{k})^{ \intercal}+D_{k}D_{k}^{\intercal}. \tag{4b}\]
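A one-step implementation of this moment propagation is immediate; the sketch below is our illustration of (4a)-(4b), not code from the paper.

```python
# Minimal sketch of the mean/covariance propagation (4a)-(4b).
import numpy as np

def propagate_moments(A, B, D, K, v, mu, Sigma):
    """One step of the moment dynamics under u = K (x - mu) + v."""
    mu_next = A @ mu + B @ v
    Acl = A + B @ K                      # closed-loop state matrix
    Sigma_next = Acl @ Sigma @ Acl.T + D @ D.T
    return mu_next, Sigma_next
```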
Omitting the chance constraints (2f) and (2e) for the moment, the problem is recast as a standard non-linear program
\[\min_{\Sigma_{k},K_{k},\mu_{k},v_{k}}J=\sum_{k=0}^{N-1}\text{tr}(Q\Sigma_{k})+\text{tr}(RK_{k}\Sigma_{k}K_{k}^{\intercal})+\mu_{k}^{\intercal}Q\mu_{k}+v_{k}^{\intercal}R_{k}v_{k}, \tag{5a}\] such that, for all \(k=0,1,\ldots,N-1\), \[\Sigma_{k+1}=A_{k}\Sigma_{k}A_{k}^{\intercal}+B_{k}K_{k}\Sigma_{k}A_{k}^{\intercal}+A_{k}\Sigma_{k}K_{k}^{\intercal}B_{k}^{\intercal}+B_{k}K_{k}\Sigma_{k}K_{k}^{\intercal}B_{k}^{\intercal}+D_{k}D_{k}^{\intercal}, \tag{5b}\] \[\Sigma_{0}=\Sigma_{i}, \tag{5c}\] \[\Sigma_{N}=\Sigma_{f}, \tag{5d}\] \[\mu_{k+1}=A_{k}\mu_{k}+B_{k}v_{k}, \tag{5e}\] \[\mu_{0}=\mu_{i}, \tag{5f}\] \[\mu_{N}=\mu_{f}. \tag{5g}\]
In the following sections, we will convert this problem to an equivalent convex one.
## III Unconstrained Covariance Steering
It is well established in the covariance steering control literature [9] that under no coupled mean-covariance constraints, problem (5) can be decoupled into the mean steering problem and the covariance steering problem as
\[\min_{\Sigma_{k},K_{k}}J_{\Sigma}=\sum_{k=0}^{N-1}\text{tr}\big{(}Q_{k}\Sigma_{k}\big{)}+\text{tr}\big{(}R_{k}K_{k}\Sigma_{k}K_{k}^{\intercal}\big{)}, \tag{6}\] subject to (5b)-(5d), and \[\min_{\mu_{k},v_{k}}J_{\mu}=\sum_{k=0}^{N-1}\mu_{k}^{\intercal}Q_{k}\mu_{k}+v_{k}^{\intercal}R_{k}v_{k}, \tag{7}\] subject to (5e)-(5g).
Problem (7) is trivial and can even be solved analytically [9]. We, therefore, focus solely on problem (6). Using the change of variables \(U_{k}=K_{k}\Sigma_{k}\) and the convex relaxation proposed in [19] one can transform Problem (6) into a linear semidefinite program
\[\min_{\Sigma_{k},U_{k},Y_{k}}J_{\Sigma}=\sum_{k=0}^{N-1}\text{tr}\big{(}Q_{k}\Sigma_{k}\big{)}+\text{tr}\big{(}R_{k}Y_{k}\big{)} \tag{8a}\] such that, for all \(k=0,1,\ldots,N-1\), \[C_{k}\triangleq U_{k}\Sigma_{k}^{-1}U_{k}^{\intercal}-Y_{k}\preceq 0, \tag{8b}\] \[G_{k}\triangleq A_{k}\Sigma_{k}A_{k}^{\intercal}+B_{k}U_{k}A_{k}^{\intercal}+A_{k}U_{k}^{\intercal}B_{k}^{\intercal}+B_{k}Y_{k}B_{k}^{\intercal}+D_{k}D_{k}^{\intercal}-\Sigma_{k+1}=0, \tag{8c}\] \[\Sigma_{N}-\Sigma_{f}=0, \tag{8d}\]
where the constraint (8b) can be expressed as an LMI using the Schur complement as
\[\begin{bmatrix}\Sigma_{k}&U_{k}^{\intercal}\\ U_{k}&Y_{k}\end{bmatrix}\succeq 0.\]
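For concreteness, the relaxed problem (8) can be prototyped directly in an off-the-shelf SDP modeling tool. The sketch below is our illustration in CVXPY (not the authors' code), on toy double-integrator data, assuming a CVXPY installation with an SDP-capable solver; it uses the inequality terminal condition of problem (9) for easier feasibility.

```python
# Hedged sketch of the relaxed covariance steering SDP (8)/(9) in CVXPY.
import cvxpy as cp
import numpy as np

n, p, N, dt = 2, 1, 10, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])      # toy double integrator
B = np.array([[0.0], [dt]])
D = 0.05 * np.eye(n)
Q, R = np.eye(n), np.eye(p)
Sigma_i, Sigma_f = np.eye(n), 0.3 * np.eye(n)

Sigma = [cp.Variable((n, n), symmetric=True) for _ in range(N + 1)]
U = [cp.Variable((p, n)) for _ in range(N)]
Y = [cp.Variable((p, p), symmetric=True) for _ in range(N)]

cost, cons = 0, [Sigma[0] == Sigma_i, Sigma[N] << Sigma_f]
for k in range(N):
    cost += cp.trace(Q @ Sigma[k]) + cp.trace(R @ Y[k])
    # relaxed constraint (8b), written as the Schur-complement LMI
    cons += [cp.bmat([[Sigma[k], U[k].T], [U[k], Y[k]]]) >> 0]
    # covariance dynamics (8c)
    cons += [Sigma[k + 1] == A @ Sigma[k] @ A.T + B @ U[k] @ A.T
             + A @ U[k].T @ B.T + B @ Y[k] @ B.T + D @ D.T]

cp.Problem(cp.Minimize(cost), cons).solve()
# by Theorems 1-2, C_k = 0 at the optimum, so the gains are recovered as
K = [U[k].value @ np.linalg.inv(Sigma[k].value) for k in range(N)]
```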
**Theorem 1**.: _The optimal solution to the relaxed problem (8) satisfies \(C_{k}=0\) for all \(k=0,1,\ldots,N-1\) and therefore also optimally solves (6)._
Proof.: See [19].
**Remark**.: _A different approach that results in the same formulation is that of a randomized feedback control policy presented in [21]. Therein, the injected randomness on the control policy can be interpreted as a slack variable converting (8b) to equality. In [21] it is shown that for the soft-constrained version of the problem, the value of this slack variable is zero. In our work, we tackle the hard-constrained version, instead, with equality or inequality terminal covariance constraints as well as chance constraints. In this case, strong duality is not apparent and the technique of the proof of [21] is not directly applicable._
Next, consider Problem (6) but with an inequality terminal covariance boundary condition instead, and its corresponding relaxed version, namely,
\[\min_{\Sigma_{k},U_{k},Y_{k}}\quad J_{\Sigma}=\sum_{k=0}^{N-1}\text{tr}\big{(}Q_{k}\Sigma_{k}\big{)}+\text{tr}\big{(}R_{k}Y_{k}\big{)}, \tag{9a}\] such that, for all \(k=0,1,\ldots,N-1\), \[C_{k}\triangleq U_{k}\Sigma_{k}^{-1}U_{k}^{\intercal}-Y_{k}\preceq 0, \tag{9b}\] \[\Sigma_{N}-\Sigma_{f}\preceq 0, \tag{9c}\] \[G_{k}\triangleq A_{k}\Sigma_{k}A_{k}^{\intercal}+B_{k}U_{k}A_{k}^{\intercal}+A_{k}U_{k}^{\intercal}B_{k}^{\intercal}+B_{k}Y_{k}B_{k}^{\intercal}+D_{k}D_{k}^{\intercal}-\Sigma_{k+1}=0. \tag{9d}\]
**Theorem 2**.: _Assuming that the exact covariance steering problem (6) is feasible, problem (9) satisfies \(C_{k}=0\) for all \(k=0,1,\ldots,N-1\) and therefore also optimally solves (6) with an inequality terminal covariance boundary condition, instead of (5d)._
Proof.: Using matrix Lagrange multipliers \(M_{k}^{(1)},\,M^{(2)},\,\Lambda_{k}\) for the constraints (9b), (9c), (9d), respectively, we define the Lagrangian function
\[\mathcal{L}(\Sigma_{k},U_{k},Y_{k},M_{k}^{(1)},M^{(2)},\Lambda_{k })=J_{\Sigma}\] \[+\text{tr}\big{(}M^{(2)}(\Sigma_{N}-\Sigma_{f})\big{)}+\sum_{k=0} ^{N-1}\text{tr}\big{(}M_{k}^{(1)}C_{k}\big{)}+\text{tr}\big{(}\Lambda_{k}G_{k} \big{)} \tag{10}\]
The relevant first-order optimality conditions are [22]:
\[\frac{\partial\mathcal{L}}{\partial U_{k}}=2M_{k}^{(1)}U_{k} \Sigma_{k}^{-1}+2B_{k}^{\intercal}\Lambda_{k}A_{k}=0 \tag{11a}\] \[\frac{\partial\mathcal{L}}{\partial Y_{k}}=R_{k}-M_{k}^{(1)}+B_ {k}^{\intercal}\Lambda_{k}B_{k}=0\] (11b) \[\text{tr}\big{(}M_{k}^{(1)}C_{k}\big{)}=0, \tag{11c}\]
where \(k=0,1,\ldots N-1\). Note that we can choose \(\Lambda_{k}\) to be symmetric because of the symmetry of the constraint (9d), while \(M_{k}^{(1)}\) and \(M^{(2)}\) are symmetric by definition. We will prove that the optimal solution to problem (9) satisfies \(C_{k}=0\) for all \(k=0,1,\ldots,N-1\). To this end, assume that \(C_{k}\) has at least one nonzero eigenvalue for some \(k\). Equation (11c) then yields that \(M_{k}^{(1)}\) has to be singular [19]. The optimality condition (11a) can then be rewritten as \(B_{k}^{\intercal}\Lambda_{k}=-M_{k}^{(1)}U_{k}\Sigma_{k}^{-1}A_{k}^{-1}\). Substituting to (11b) yields
\[R_{k}=M_{k}^{(1)}\big{(}I_{p}+U_{k}\Sigma_{k}^{-1}A_{k}^{-1}B_{k}\big{)}. \tag{12}\]
Calculating the determinants of both sides of (12), we obtain
\[\text{det}(R_{k})=\text{det}(M_{k}^{(1)})\,\text{det}\big{(}I_{p}+U_{k} \Sigma_{k}^{-1}A_{k}^{-1}B_{k}\big{)}=0.\]
This clearly contradicts the fact that \(R_{k}\succ 0\). Therefore, at the optimal solution, the matrix \(C_{k}\) has all its eigenvalues equal to zero. This, along with the fact that \(C_{k}\) is symmetric, yields that \(C_{k}=0\) for all \(k=0,1,\ldots,N-1\). The final step to conclude this proof is to show that the KKT conditions (11) for the relaxed problem (9) are sufficient for the optimal solution, or in other words, the duality gap for the relaxed problem is zero. We have already proved that strong duality holds for the exact covariance steering problem in [19]. Since the relaxed terminal boundary condition problem (9) has a domain at least as big as the exact problem (8) and strong duality holds for the exact problem, from Slater's condition strong duality holds for the relaxed problem as well.
## IV Constrained Covariance Steering
Many real-world applications require additional constraints of the form (2f), (2e) to be imposed on the problem to reflect the physical limitations of the system or some other desired behavior. These may include constraints on the total control effort \(u_{k}\) on each time step or physical limits on the state vector \(x_{k}\). In this work, we assume polytopic state and control constraints of the form
\[\mathbb{P}(\alpha_{x}^{\intercal}x_{k} \leq\beta_{x})\geq 1-\epsilon_{x}, \tag{13a}\] \[\mathbb{P}(\alpha_{u}^{\intercal}u_{k} \leq\beta_{u})\geq 1-\epsilon_{u}, \tag{13b}\]
where \(\alpha_{x}\in\mathbb{R}^{n}\), \(\alpha_{u}\in\mathbb{R}^{p}\), \(\beta_{x},\beta_{u}\in\mathbb{R}\), and \(\epsilon_{x},\epsilon_{u}\in[0,0.5]\) reflect the violation probability of each constraint. To convert the probabilistic constraints (13) into deterministic constraints on the decision variables, note that \(\alpha_{x}^{\intercal}x_{k}\) and \(\alpha_{u}^{\intercal}u_{k}\) are univariate random variables with first and second moments given by
\[\mathbb{E}(\alpha_{x}^{\intercal}x_{k})=\alpha_{x}^{\intercal}\mu_ {k}, \tag{14a}\] \[\mathbb{E}(\alpha_{u}^{\intercal}u_{k})=\alpha_{u}^{\intercal}v_{k},\] (14b) \[\mathbb{E}(\alpha_{x}^{\intercal}(x_{k}-\mu_{k})(x_{k}-\mu_{k})^ {\intercal}\alpha_{x})=\alpha_{x}^{\intercal}\Sigma_{k}\alpha_{x},\] (14c) \[\mathbb{E}(\alpha_{u}^{\intercal}K_{k}(x_{k}-\mu)(x_{k}-\mu)^{ \intercal}K_{k}^{\intercal}\alpha_{u})=\alpha_{u}^{\intercal}U_{k}\Sigma_{k}^{-1 }U_{k}^{\intercal}\alpha_{u}. \tag{14d}\]
To this end, according to [9], equations (13) are converted to
\[\Phi^{-1}(1-\epsilon_{x})\sqrt{\alpha_{x}^{\intercal}\Sigma_{k} \alpha_{x}}+\alpha_{x}^{\intercal}\mu_{k}-\beta_{x}\leq 0, \tag{15a}\] \[\Phi^{-1}(1-\epsilon_{u})\sqrt{\alpha_{u}^{\intercal}U_{k}\Sigma_{ k}^{-1}U_{k}^{\intercal}\alpha_{u}}+\alpha_{u}^{\intercal}v_{k}-\beta_{u}\leq 0, \tag{15b}\]
where \(\Phi^{-1}(\cdot)\) is the inverse cumulative distribution function of the normal distribution. If the Gaussian assumption for
the disturbances is dropped, then \(\Phi^{-1}(\cdot)\) can be conservatively replaced using Cantelli's concentration inequality with \(Q(1-\epsilon)=\sqrt{\epsilon/(1-\epsilon)}\)[18].
Using the same relaxation as before to handle the non-linear term \(U_{k}\Sigma_{k}^{-1}U_{k}^{\mathsf{T}}\), equation (15b) is further relaxed to
\[\Phi^{-1}(1-\epsilon_{u})\sqrt{\alpha_{u}^{\mathsf{T}}Y_{k}\alpha_{u}}+\alpha_ {u}^{\mathsf{T}}v_{k}-\beta_{u}\leq 0. \tag{16}\]
Unfortunately, due to the square root on the decision variables \(\Sigma_{k}\) and \(Y_{k}\) neither of (15a), (16) are convex. One conservative option to overcome this issue is to linearize these constraints around some reasonable value of \(\alpha_{x}^{\mathsf{T}}\Sigma_{k}\alpha_{x}\) and \(\alpha_{u}^{\mathsf{T}}Y_{k}\alpha_{u}\), respectively, for a given problem. Because the square root is a strongly concave function, the tangent line can serve as a linear global overestimator [23], yielding
\[\sqrt{x}\leq\frac{1}{2\sqrt{x_{0}}}x+\frac{\sqrt{x_{0}}}{2},\quad\forall x,x_{ 0}>0.\]
This is illustrated in Figure 1. The constraints in (15) can therefore be conservatively approximated as
\[\Phi^{-1}(1-\epsilon_{x})\frac{1}{2\sqrt{\alpha_{x}^{\mathsf{T}}\Sigma_{r}\alpha_{x}}}\,\alpha_{x}^{\mathsf{T}}\Sigma_{k}\alpha_{x}+\alpha_{x}^{\mathsf{T}}\mu_{k}-\left(\beta_{x}-\frac{\Phi^{-1}(1-\epsilon_{x})}{2}\sqrt{\alpha_{x}^{\mathsf{T}}\Sigma_{r}\alpha_{x}}\right)\leq 0, \tag{17a}\] and \[\Phi^{-1}(1-\epsilon_{u})\frac{1}{2\sqrt{\alpha_{u}^{\mathsf{T}}Y_{r}\alpha_{u}}}\,\alpha_{u}^{\mathsf{T}}Y_{k}\alpha_{u}+\alpha_{u}^{\mathsf{T}}v_{k}-\left(\beta_{u}-\frac{\Phi^{-1}(1-\epsilon_{u})}{2}\sqrt{\alpha_{u}^{\mathsf{T}}Y_{r}\alpha_{u}}\right)\leq 0, \tag{17b}\]
where \(\Sigma_{r},\ Y_{r}\) are some reference values. The linearized constraints now form a convex set, as illustrated in Figure 2. For notational simplicity, next, we consider the more general constraint form of
\[\ell^{\mathsf{T}}\Sigma_{k}\ell+\alpha_{x}^{\mathsf{T}}\mu_{k}- \beta_{x}\leq 0, \tag{18a}\] \[e^{\mathsf{T}}Y_{k}e+\alpha_{u}^{\mathsf{T}}v_{k}-\beta_{u}\leq 0. \tag{18b}\]
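Assembling these linearized coefficients is a small computation; the sketch below (our illustration) returns \(\ell\) and the tightened bound for the state constraint, matching (17a)-(18a).

```python
# Sketch: coefficients of the linearized state chance constraint (18a),
# i.e. ell and beta_bar with  ell^T Sigma_k ell + alpha^T mu_k - beta_bar <= 0.
import numpy as np
from scipy.stats import norm

def linearize_state_constraint(alpha, beta, eps, Sigma_r):
    phi = norm.ppf(1.0 - eps)              # inverse normal CDF, Phi^{-1}
    s0 = float(alpha @ Sigma_r @ alpha)    # linearization point alpha^T Sigma_r alpha
    ell = np.sqrt(phi / (2.0 * np.sqrt(s0))) * alpha
    beta_bar = beta - phi * np.sqrt(s0) / 2.0
    return ell, beta_bar
```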
Given the additional constraints in (18) the fundamental question is whether the relaxation proposed in (8) remains lossless. To this end, consider the constrained Covariance Steering problem
\[\min_{\Sigma_{k},U_{k},Y_{k},\mu_{k},v_{k}}J=J_{\Sigma}+J_{\mu},\] (19a) such that, for all \[k=0,1,\ldots,N-1\], \[\mu_{k+1}=A_{k}\mu_{k}+B_{k}v_{k}, \tag{19b}\] \[C_{k}(\Sigma_{k},Y_{k},U_{k})\preceq 0,\] (19c) \[G_{k}(\Sigma_{k+1},\Sigma_{k},Y_{k},U_{k})=0,\] (19d) \[\ell^{\mathsf{T}}\Sigma_{k}\ell+\alpha_{x}^{\mathsf{T}}\mu_{k}- \beta_{x}\leq 0,\] (19e) \[e^{\mathsf{T}}Y_{k}e+\alpha_{u}^{\mathsf{T}}v_{k}-\beta_{u}\leq 0. \tag{19f}\]
Note that an equality terminal covariance condition is implied by excluding \(\Sigma_{N}\) from the optimization variables and treating it as a constant.
**Theorem 3**.: _The optimal solution to the problem (19) satisfies \(C_{k}=0\) for all \(k=0,1,\ldots,N-1\)._
Proof.: Define again the problem Lagrangian as
\[\mathcal{L}_{a}(\cdot)=J+\sum_{k=0}^{N-1}\text{tr}\big{(}M_{k}^{\mathsf{T}}C_{k}\big{)}+\text{tr}\big{(}\Lambda_{k}^{\mathsf{T}}G_{k}\big{)}+\lambda_{1,k}^{\mathsf{T}}(\mu_{k+1}-A_{k}\mu_{k}-B_{k}v_{k})+\lambda_{2,k}\big{(}\ell^{\mathsf{T}}\Sigma_{k}\ell+\alpha_{x}^{\mathsf{T}}\mu_{k}-\beta_{x}\big{)}+\lambda_{3,k}\big{(}e^{\mathsf{T}}Y_{k}e+\alpha_{u}^{\mathsf{T}}v_{k}-\beta_{u}\big{)}.\]
The relevant first-order optimality conditions for this problem are
\[\frac{\partial\mathcal{L}_{a}}{\partial U_{k}} =2M_{k}U_{k}\Sigma_{k}^{-1}+2B_{k}^{\mathsf{T}}\Lambda_{k}A_{k}=0, \tag{20a}\] \[\frac{\partial\mathcal{L}_{a}}{\partial Y_{k}} =R_{k}-M_{k}+B_{k}^{\mathsf{T}}\Lambda_{k}B_{k}+\lambda_{3,k}ee^{ \mathsf{T}}=0,\] (20b) \[\frac{\partial\mathcal{L}_{a}}{\partial\Lambda_{k}} =G_{k}=0,\] (20c) \[\text{tr}\big{(}M_{k}C_{k}\big{)}=0. \tag{20d}\]
Following the same steps as in the proof of Theorem 1, let \(C_{k}\) have at least one nonzero eigenvalue. From (20d), \(M_{k}\) has to be singular. Solving for \(B_{k}^{\mathsf{T}}\Lambda_{k}\) in (20a) and substituting in (20b) we get
\[R_{k}+\lambda_{3,k}ee^{\mathsf{T}}=M_{k}\big{(}I_{p}+U_{k}\Sigma_{k}^{-1}A_{k} ^{-1}B_{k}\big{)}. \tag{21}\]
Since \(\lambda_{3,k}\geq 0\) by definition, and \(ee^{\mathsf{T}}\succeq 0\), it follows that \(R_{k}+\lambda_{3,k}ee^{\mathsf{T}}\succ 0\). Therefore, taking again the determinant of both sides of (21) leads to a contradiction.
Fig. 1: Global overestimator of the square root function
Fig. 2: Example of a convexified domain for a 1-dimensional system
## V Numerical example and run-time analysis
To illustrate our method, we consider the problem of path planning for a quadrotor in a 2D plane. We use a 2D triple integrator model to generate smooth paths with bounded jerk, which can then be translated into low-level motor commands through differential flatness-based controllers [24]. Specifically, consider the triple integrator model
\[A=\begin{bmatrix}I_{2}&\Delta TI_{2}&0_{2}\\ 0_{2}&I_{2}&\Delta TI_{2}\\ 0_{2}&0_{2}&I_{2}\end{bmatrix},\quad B=\begin{bmatrix}0_{2}\\ 0_{2}\\ \Delta TI_{2}\end{bmatrix},\quad D=0.1I_{6},\]
for time step \(\Delta T=0.1\) sec and a horizon of \(N=60\), yielding \(61\) total time steps. In this system, the first two states represent position, the second two velocity, and the last two the acceleration of the quad. The boundary conditions are
\[\Sigma_{i}=I_{6},\quad\Sigma_{f}=0.1I_{6},\] \[\mu_{i}=\begin{bmatrix}20&0_{1\times 5}\end{bmatrix}^{\intercal},\quad\mu_{f}=0_{6\times 1}.\]
The feasible state space is characterized by a bounding box expressed in the form of (17a) with parameters
\[\alpha_{x}=\Big{\{}\begin{bmatrix}\ \pm 1&0&0_{1\times 4}\end{bmatrix}^{\intercal},\begin{bmatrix}\ 0&\pm 1&0_{1\times 4}\end{bmatrix}^{\intercal}\Big{\}},\] \[\beta_{x}=\{22,\ 3,\ 7,\ 7\},\] corresponding to the position bounding box \([-3,22]\times[-7,7]\).
To account for the maximum allowable bank rate of the quadrotor, the control inputs are probabilistically restricted to lie inside the set \(\mathcal{U}=\{u_{k}\in\mathbb{R}^{2}:\|u_{k}\|_{\infty}\leq 25\}\). This constraint can be cast in the form of (17b) using four affine functions with parameters
\[\alpha_{u}=\Big{\{}\begin{bmatrix}\ \pm 1&0\end{bmatrix}^{\intercal},\begin{bmatrix}\ 0&\pm 1\end{bmatrix}^{\intercal}\Big{\}},\quad\beta_{u}=\{25,\ 25,\ 25,\ 25\}.\]
Apart from the initial and terminal boundary conditions, two position waypoints are implemented by constraining the first two components of the state at time steps 20 and 40 of the steering horizon. For all constraints, a violation probability of \(\epsilon_{x}=\epsilon_{u}=0.1\%\) was used for all \(k=0,1,\ldots,N-1\). The vectors \(\alpha_{x},\ \alpha_{u}\) are selected to have unit Euclidean norm for an easier selection of the linearization point, which is performed around \(\Sigma_{r}=1.2I_{6}\) and \(Y_{r}=15I_{2}\). Further tuning of the linearization point parameters can be done using an iterative approach, where the problem is resolved sequentially and the linearization points at each time step are calculated from the last iteration's optimal trajectory. This produces less conservative results, but it was observed to have a small impact overall and was therefore not included in this example. It was observed, however, that overestimating the values of the linearization points is preferable to underestimating them. This can be interpreted by inspection of equation (18) and with the help of Figure 1. Equation (18) shows that constraining a stochastic signal is equivalent to constraining a weighted sum of its mean and uncertainty. Returning to Figure 1, when the true value of the uncertainty is above the linearization point, the weight on the uncertainty of the signal grows without bound, potentially causing a violation of the total constraint budget. On the other hand, when the uncertainty is below the linearization point, this weight is lower bounded by the \(y\)-intercept of the affine approximation to the square root function, preventing potential infeasibilities. All optimization problems are solved in Matlab using MOSEK [25]. The resulting optimal steering is illustrated in Figure 3, while the required control effort is shown in Figure 4. The feasible set in each figure is denoted with green lines and the mean of each signal with a dashed black line. For Figure 3, the 3-sigma confidence level bounds are represented with blue ellipses, while for Figure 4 they are shown by the light-blue area around the mean signal. Initial and terminal values for the state confidence ellipses as well as the waypoints are denoted in red.
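A solved policy of this form is easy to validate empirically; the sketch below (our addition, not the authors' code, with time-invariant matrices for brevity) rolls out the affine law (3) and can be used to estimate the achieved terminal covariance by Monte Carlo.

```python
# Sketch: Monte-Carlo rollout of the affine policy u_k = K_k (x_k - mu_k) + v_k.
import numpy as np

def rollout(A, B, D, K, v, mu, x0, rng):
    """Simulate one noisy trajectory; K, v, mu are lists over the horizon."""
    xs = [x0]
    for k in range(len(K)):
        u = K[k] @ (xs[-1] - mu[k]) + v[k]
        w = rng.standard_normal(D.shape[1])
        xs.append(A @ xs[-1] + B @ u + D @ w)
    return np.array(xs)

# e.g. terminal-covariance estimate over many samples:
# X_N = np.stack([rollout(A, B, D, K, v, mu, x0, rng)[-1] for _ in range(5000)])
# print(np.cov(X_N.T))   # compare against Sigma_f
```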
Next, we present a run-time comparison between different methods for solving the unconstrained covariance steering problem. Although different methods result in different control policies and potentially more or less conservative control sequences, this comparison only studies run-times and the resulting optimization problem sizes in terms of the number of decision variables involved in each program. To evaluate the performance of each algorithm, random state space models of various sizes were generated using Matlab's drss() command. For each system, we use as many noise channels as state variables and half as many input channels as state variables. The analysis was performed for systems
Fig. 4: Resulting control effort
Fig. 3: Covariance Steering
of varying size and a fixed steering horizon of 32 time steps, as well as for varying time horizons for an \(8\times 8\) system. The results are summarized in Tables I and II respectively. The empty cells are due to the program running out of memory. The simulations were carried out in Matlab 2022 running on an 11th Gen. Intel Core i7-11800H and 16 GB of RAM.
It is clear that the proposed approach outperforms the state-of-the-art algorithms significantly, by over an order of magnitude for almost all cases. Also, it is worth noting that problem (8) is a linear semidefinite program, while the formulations of [8] and [10] result in quadratic semidefinite programs, which need to be converted to linear ones using suitable relaxations, further increasing the number of decision variables needed as well as the complexity of the problem. Finally, problem (8) involves \(N-1\) LMIs of dimensions \(p\times p\) as opposed to a single large LMI of dimensions \((N+2)n\times(N+2)n\) for the terminal covariance constraint used in methods [8, 10]. As suggested in [25], multiple smaller LMIs can be solved more efficiently compared to a single larger one due to the resulting sparsity of the constraints. This also explains why Approach 2 of [10], although it results in smaller problem sizes than the proposed approach, still has significantly larger solution times.
## Acknowledgment
The authors would like to sincerely thank Dr. Fengjiao Liu for her constructive comments and discussion on the paper, and Ujjwal Gupta for his help with the quadrotor numerical example. This work was supported by NASA ULI award 80NSSC20M0163 and ONR award N00014-18-1-2828.
|
2309.10382 | Krylov Complexity of Fermionic and Bosonic Gaussian States | The concept of \emph{complexity} has become pivotal in multiple disciplines,
including quantum information, where it serves as an alternative metric for
gauging the chaotic evolution of a quantum state. This paper focuses on
\emph{Krylov complexity}, a specialized form of quantum complexity that offers
an unambiguous and intrinsically meaningful assessment of the spread of a
quantum state over all possible orthogonal bases. Our study is situated in the
context of Gaussian quantum states, which are fundamental to both Bosonic and
Fermionic systems and can be fully described by a covariance matrix. We show
that while the covariance matrix is essential, it is insufficient alone for
calculating Krylov complexity due to its lack of relative phase information.
Our findings suggest that the relative covariance matrix can provide an upper
bound for Krylov complexity for Gaussian quantum states. We also explore the
implications of Krylov complexity for theories proposing complexity as a
candidate for holographic duality by computing Krylov complexity for the
thermofield double States (TFD) and Dirac field. | Kiran Adhikari, Adwait Rijal, Ashok Kumar Aryal, Mausam Ghimire, Rajeev Singh, Christian Deppe | 2023-09-19T07:32:04Z | http://arxiv.org/abs/2309.10382v3 | # Krylov Complexity of Fermionic and Bosonic Gaussian States
###### Abstract
The concept of _complexity_ has become pivotal in multiple disciplines, including quantum information, where it serves as an alternative metric for gauging the chaotic evolution of a quantum state. This paper focuses on _Krylov complexity_, a specialized form of quantum complexity that offers an unambiguous and intrinsically meaningful assessment of the spread of a quantum state over all possible orthogonal bases. Our study is situated in the context of Gaussian quantum states, which are fundamental to both Bosonic and Fermionic systems and can be fully described by a covariance matrix. We show that while the covariance matrix is essential, it is insufficient alone for calculating Krylov complexity due to its lack of relative phase information. Our findings suggest that the relative covariance matrix can provide an upper bound for Krylov complexity for Gaussian quantum states. We also explore the implications of Krylov complexity for theories proposing complexity as a candidate for holographic duality by computing Krylov complexity for the thermofield double (TFD) states and the Dirac field.
## 1 Introduction
Over the years, multiple disciplines have undertaken efforts to articulate the complexity of various entities. These disciplines range from computer science and chaotic systems to emergent phenomena in many-body systems and black holes [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Notably, the concept of _complexity_ is also of significance in the domain of Quantum Information. In this context, complexity serves as an alternative metric for measuring the chaotic behavior resulting from the time evolution of a quantum state. A particular quantum state complexity metric, motivated by the operator growth hypothesis [19], has been introduced in [17]. Here, the complexity of a final quantum state is gauged by the degree to which the initial state disperses across all potential orthogonal bases over time. Interestingly, the Krylov basis
is the unique basis where this minimum dispersion is achieved [17]. Hence, this form of complexity, known as _spread complexity_, is also termed Krylov complexity [14; 17; 20].
Krylov complexity provides an unambiguous definition of quantum complexity, allowing for a genuinely intrinsic assessment of operator and complexity growth. This clarity significantly distinguishes Krylov complexity from other quantum complexity definitions [21; 22; 23; 17; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Regarding Krylov complexity, the only degree of freedom lies in choosing the inner product within the operator space. However, this freedom is usually constrained by the specific physical context in which it is applied. Once the inner product is determined, the construction of the Krylov space proceeds using the Lanczos algorithm, also known as the recursion method. Recent studies have explored Krylov complexity across a range of intriguing domains, and a review of its applications in these areas can be found in [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35].
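As a concrete illustration of the recursion method mentioned above, the following is a minimal dense-matrix sketch (our addition); `H` is any Hermitian matrix and `psi0` an initial state vector, both placeholders, and full re-orthogonalization is included for numerical stability.

```python
# Sketch of the Lanczos recursion generating the Krylov basis and the
# coefficients a_n, b_n from a Hamiltonian H and an initial state psi0.
import numpy as np

def lanczos(H, psi0, m):
    psi0 = psi0 / np.linalg.norm(psi0)
    basis, a, b = [psi0], [], []
    for n in range(m):
        w = H @ basis[-1]
        a.append(np.real(np.vdot(basis[-1], w)))
        w = w - a[-1] * basis[-1]
        if n > 0:
            w = w - b[-1] * basis[-2]
        for q in basis:                      # full re-orthogonalization
            w = w - np.vdot(q, w) * q
        nb = np.linalg.norm(w)
        if nb < 1e-12:                       # Krylov space exhausted
            break
        b.append(nb)
        basis.append(w / nb)
    return np.array(a), np.array(b), np.array(basis)
```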
The exploration of quantum mechanics frequently commences with an examination of harmonic oscillators. Gaussian states are intimately linked to these systems, as the ground state or the state with the lowest energy level described by their Hamiltonian is a Gaussian state. An alternative perspective considers any Gaussian state as the ground state of a Hamiltonian representing an ensemble of harmonic oscillators. In our investigations of both Bosonic and Fermionic systems, the solutions to the Hamiltonians characterizing them can invariably be expressed within a complete basis set of Gaussian eigenstates. Additionally, Gaussian states serve as simplified models extensively employed across various branches of physics [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46].
Various representations can depict the quantum state of a system, one of which is the Wigner function [36]. For Gaussian states, this Wigner function simplifies to a multivariate Gaussian function. A Gaussian state can be fully specified by a vector of expectation values and a particular matrix termed the covariance matrix, denoted as \(\sigma\). In this representation, the covariance matrix is a real, symmetric, and positive-definite entity (elaborated further in sec. 2). Throughout this paper, we have extensively utilized the covariance matrix to represent Gaussian states.
The investigation of Krylov complexity in the context of Fermionic and Bosonic Gaussian states is inspired by its ability to quantify the growth of quantum states during their evolution, irrespective of the gate choices. Moreover, recent theories have proposed complexity as a potential candidate for holographic duality, encapsulated by phrases like _Complexity = Action_[6; 47] and _Complexity = Volume_[48], among others. It is anticipated that insights into such areas could be further illuminated through the study of Krylov complexity [49; 50; 51; 52; 53].
We chose to focus on Gaussian states because they often serve as the foundational basis for studying more complex systems and quantum states. One intriguing aspect of Gaussian states is that the action on the covariance matrix can characterize the transition from one Gaussian state to another [54; 55; 56]. For Bosonic Gaussian states, the non-trivial components of this matrix are symmetric, while they are anti-symmetric for Fermionic states. However, we found that the covariance matrix alone is insufficient for calculating Krylov complexity, as it lacks information on the relative phase. Nevertheless, we demonstrated that the relative covariance matrix can serve as an upper bound for Krylov complexity, a point elaborated further in the paper.
Preliminaries
We begin this section with a short review of Gaussian states for Bosons and Fermions and Krylov complexity. A more comprehensive overview of Bosonic and Fermionic Gaussian states can be found in [37; 56].
### Gaussian states: Bosonic and Fermionic
In this paper, we focus on studying Krylov complexity for both Bosonic and Fermionic Gaussian states, as well as the unitary transformations that map one Gaussian state to another. Gaussian states derive their form from the ubiquitous Gaussian function \(e^{-x^{2}}\), commonly encountered in probability theory, statistics, and other scientific fields. Bosonic Gaussian states earn their name because their Wigner functions are multivariate Gaussian functions. As for Fermionic Gaussian states, their relevance arises from the ability to calculate higher-order correlation functions based on the two-point function, commonly referred to as the covariance matrix. Gaussian states offer a natural starting point for studying an array of physical systems, given that each Gaussian state can be interpreted as the ground state of a Hamiltonian representing a set of harmonic oscillators.
We will explore both Bosonic and Fermionic Gaussian systems through the lens of their respective covariance matrices. For Bosonic systems, the two-point function manifests as symmetric, and transformations mapping one Gaussian state to another can be wholly articulated by the corresponding action on the covariance matrix. In the case of Fermionic systems, the meaningful components of the covariance matrix are characterized by an antisymmetric part.
#### 2.1.1 Bosons
We introduce Hermitian position (\(q_{k}\)) and momentum (\(p_{k}\)) operators for each mode, respectively as
\[q_{k}=\frac{1}{\sqrt{2}}(a_{k}+a_{k}^{\dagger})\,,\qquad p_{k}= \frac{1}{i\sqrt{2}}(a_{k}-a_{k}^{\dagger})\,, \tag{1}\]
with the canonical commutation relation between position and momentum as \([q_{k},p_{l}]=i\,\delta_{kl}\). To encapsulate this relationship, we introduce a vector of operators, \(R^{a}\equiv(q_{1},p_{1},....,q_{N},p_{N})^{T}\), allowing us to rewrite the canonical commutation relation as
\[\big{[}R,R^{T}\big{]}=i\,\Omega=i\,\bigoplus_{k=1}^{N}\begin{bmatrix}0&1\\ -1&0\end{bmatrix}\,, \tag{2}\]
where \(\Omega\) is the \(2N\times 2N\) symplectic form. By a different grouping of the operators, \(S^{a}\equiv(q_{1},\ldots,q_{N},\,p_{1},\ldots,p_{N})\), we can express the canonical commutation relation as
\[\big{[}S,S^{T}\big{]}=i\,J=i\,\begin{bmatrix}\mathbb{0}_{\mathbb{ N}}&\mathbb{1}_{\mathbb{N}}\\ -\mathbb{1}_{\mathbb{N}}&\mathbb{0}_{\mathbb{N}}\end{bmatrix}\,, \tag{3}\]
where \(\mathbb{1}_{\mathbb{N}}\) and \(\mathbb{0}_{\mathbb{N}}\) are the \(N\times N\) identity and null matrices, respectively, and \(J\) is the symplectic form in the re-ordered basis. These two vectors of operators \(R\) and \(S\) are related by a simple \(2N\times 2N\) permutation matrix, \(S=PR\). Then, for a quantum state of \(N\) Bosons, the covariance matrices can be written as
\[\begin{split}\sigma^{kl}&\equiv[\sigma]^{kl}=\frac{1}{2}\langle\{R^{k},R^{l}\}\rangle-\langle R^{k}\rangle\langle R^{l}\rangle\,,\\ V^{kl}&\equiv[V]^{kl}=\frac{1}{2}\langle\{S^{k},S^{l}\}\rangle-\langle S^{k}\rangle\langle S^{l}\rangle\,,\end{split} \tag{2.4}\]
where \(\{\cdot,\cdot\}\) denotes the anti-commutator and \(\langle\hat{A}\rangle=\text{Tr}[\hat{A}\rho]\), with \(\rho\) being the density matrix of the system. The covariance matrix is a real, symmetric, and positive-definite entity, which adheres to the inequalities \(\sigma+\frac{i}{2}\Omega\geq 0\) and \(V-\frac{i}{2}J\geq 0\). Gaussian states are fully characterized by the first and second moments of the quadrature operators \((q,p)\). Specifically, they can be described by the vector of expectation values \(\bar{R}=\langle R\rangle\) and the covariance matrix \(\sigma\). We denote \(\bar{R}\) as the displacement vector. A unitary transformation maps any Bosonic Gaussian state to another Bosonic Gaussian state if, and only if, it is generated by a second-order Hamiltonian \(\hat{H}\), generally consisting of both linear and quadratic terms in the canonical operators. We can thus delineate the set of Gaussian states as encompassing all ground and thermal states of a second-order Hamiltonian with a positive-definite Hamiltonian matrix \(H>0\). The condition of positivity ensures that the Hamiltonian operator is bounded from below. Thus, any Gaussian state \(\rho_{G}\) is expressed as
\[\rho_{G}=\frac{e^{-\beta\hat{H}}}{\text{Tr}\left[e^{-\beta\hat{H}}\right]}\,, \tag{2.5}\]
with \(\beta\in\mathbb{R}^{+}\) denoting the inverse temperature. By construction, all states of this form are mixed; however, in the limit \(\beta\rightarrow\infty\), \(\rho_{G}\) becomes a pure Gaussian state. In the scope of this study, our emphasis will be solely on pure states, with the intention of addressing mixed states in future investigations. A unitary Gaussian operation corresponds precisely to a symplectic transformation of both the displacement vector and the covariance matrix, detailed as follows
\[\bar{R}\to M\bar{R}+d\,,\qquad\sigma\to M\sigma M^{T}\,. \tag{2.6}\]
In this formulation, \(d\) represents a \(2N\)-dimensional real vector of displacements, and the matrix \(M\) satisfies the condition \(M\Omega M^{T}=\Omega\). Consequently, pure Gaussian states are recovered in the limit \(\beta\rightarrow\infty\), and all of them can be obtained by applying unitary operations generated by a second-order Hamiltonian to a pure reference state. Before delving into the topic of Fermions, it is important to highlight that both Bosonic and Fermionic Gaussian states can be parameterized in terms of their respective covariance matrices, as follows
\[\langle\psi|\,S^{a}S^{b}\,|\psi\rangle=\frac{1}{2}\left(V^{ab}+i\,\Omega^{ab}\right)\,, \tag{2.7}\]
where \(S^{a}\equiv(q_{1},....,q_{N},\,p_{1},....,p_{N})\) describes \(N\) degrees of freedom, which can be Bosonic or Fermionic. \(V^{ab}\) is the symmetric part while \(\Omega^{ab}\) denotes the anti-symmetric part.
#### 2.1.2 Fermions
We now turn our attention to Fermionic Gaussian states. Initially, we define the Hermitian Fermionic operators, commonly referred to as Majorana modes
\[q_{i}=\frac{1}{\sqrt{2}}\left(a_{i}^{\dagger}+a_{i}\right)\,,\qquad\text{and}\qquad p_{i}=\frac{i}{\sqrt{2}}\left(a_{i}^{\dagger}-a_{i}\right)\,. \tag{2.8}\]
These operators obey the anti-commutation relations: \(\{q_{i},q_{j}\}=\delta_{ij}=\{p_{i},p_{j}\}\) and \(\{q_{i},p_{j}\}=0\). For a Fermionic system, we find that the symmetric term
\[V^{ab}=\left\langle\psi\right|\{S^{a},S^{b}\}\left|\psi\right\rangle=\delta^{ab}\,, \tag{2.9}\]
is fixed by the canonical anti-commutation relations, which are preserved by Bogoliubov transformations. The non-trivial component is instead the anti-symmetric term
\[\Omega^{ab}=-i\left\langle\psi\right|\left[S^{a},S^{b}\right]\left|\psi\right\rangle\,, \tag{2.10}\]
that completely characterizes the corresponding Fermionic Gaussian state \(\left|\psi\right\rangle\). Here, we need to note that for a Bosonic system, \(\Omega^{ab}\) is trivial, and \(V^{ab}\) characterizes the corresponding Gaussian state. The matrix \(\Omega\) is evaluated using the Hermitian Fermionic operators for the state \(\left|\psi\right\rangle\) annihilated by \(a_{i}\) as
\[\Omega\equiv\begin{pmatrix}\mathbb{0}&\mathbb{1}\\ -\mathbb{1}&\mathbb{0}\end{pmatrix}\,, \tag{2.11}\]
where \(\mathbb{0}\) and \(\mathbb{1}\) are \(N\times N\) zero and identity matrices, respectively. Here, we note that \(\Omega\) is similar in form to the symplectic form for the Bosons.
We aim to explore the group of transformations that map Fermionic Gaussian states onto one another, scrutinizing the Bogoliubov transformations acting on the Fermionic creation and annihilation operators. We employ the Majorana basis, denoted by \(\bar{S}^{a}\equiv(\bar{q}_{i},\bar{p}_{i})\) and \(S^{a}\equiv(q_{i},p_{i})\). In this setting, Bogoliubov transformations operate as linear transformations, \(\bar{S}^{a}=M_{b}^{a}S^{b}\), where \(M\) represents the transformation matrix. The condition that maintains the anti-commutation relations, with \(G^{ab}=\delta^{ab}\), can then be written as follows
\[\left(MGM^{T}\right)^{ab}=M_{c}^{a}G^{cd}\left(M^{T}\right)_{d}^{b}=G^{ab}\,. \tag{2.12}\]
Additionally, it is established that \(V^{ab}\equiv\delta^{ab}\) within the Majorana basis, indicating the \(O(2N)\) group structure. The transformation of the states is thus captured by the transformation of the anti-symmetric two-point correlator,
\[\bar{\Omega}^{ab}=\left(M\Omega M^{T}\right)^{ab}=M_{c}^{a}\Omega^{cd}\left(M^{T}\right)_{d}^{b}\,. \tag{2.13}\]
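As a quick numerical illustration (a minimal sketch of our own, not part of the original derivation), one can verify Eqs. (2.12)–(2.13) for a randomly generated orthogonal Bogoliubov matrix \(M\in O(2N)\): the anti-commutator matrix \(G^{ab}=\delta^{ab}\) is preserved, while \(\Omega\) transforms by congruence and remains anti-symmetric.

```python
import numpy as np

# Minimal sketch checking Eqs. (2.12)-(2.13) for N Fermionic modes.
N = 2
rng = np.random.default_rng(0)

# A random orthogonal Bogoliubov matrix M in O(2N) via QR decomposition.
M, _ = np.linalg.qr(rng.normal(size=(2 * N, 2 * N)))

G = np.eye(2 * N)  # symmetric two-point function V^{ab} = delta^{ab}
Omega = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])

assert np.allclose(M @ G @ M.T, G)            # Eq. (2.12): relations preserved
Omega_bar = M @ Omega @ M.T                   # Eq. (2.13): transformed correlator
assert np.allclose(Omega_bar, -Omega_bar.T)   # stays anti-symmetric
```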
### 2.2 Krylov spread complexity
In this section, we introduce Krylov spread complexity, a concept delineated in [17]. It is a natural notion of complexity based on the spread of a quantum
state within the Hilbert space. To define it, consider the time evolution of a quantum state \(\left|\psi(t)\right\rangle\) governed by a time-independent Hamiltonian \(H\)
\[\left|\psi(t)\right\rangle=e^{-iHt}\left|\psi_{0}\right\rangle\,. \tag{2.14}\]
To measure the extent to which \(\left|\psi(t)\right\rangle\) spreads across the Hilbert space, we introduce a cost function with respect to a complete, orthonormal, and ordered basis \(\mathcal{B}=\{\left|B_{n}\right\rangle:n=0,1,2,\ldots\}\) for that space
\[C_{\mathcal{B}}(t)=\sum_{n}c_{n}|\langle\psi(t)|B_{n}\rangle|^{2}=\sum_{n}c_{n}\,p_{B}(n,t)\,. \tag{2.15}\]
In this context, \(c_{n}\) is part of an increasing sequence of positive real numbers. Given that \(\sum p_{B}(n,t)=1\), the values \(p_{B}(n,t)\) can be understood as the probabilities of the quantum state \(\left|\psi(t)\right\rangle\) being projected onto each vector in the basis \(\mathcal{B}\). We define the spread complexity as the minimum of this cost function over all bases \(\mathcal{B}\)
\[C(t)=\min_{\mathcal{B}}C_{\mathcal{B}}(t)\,. \tag{2.16}\]
We find that the complete Krylov basis \(\mathcal{K}\) minimizes the cost function near \(t=0\), and thus the spread complexity is
\[C(t)=C_{\mathcal{K}}(t)=\min_{\mathcal{B}}C_{\mathcal{B}}(t)\,. \tag{2.17}\]
To evaluate the spread complexity, it's essential to ascertain the Krylov basis. We employ the Lanczos algorithm for this purpose. This algorithm relies on the Gram-Schmidt orthogonalization process to recursively construct an orthonormal Krylov basis, symbolized as \(\left|K_{n}\right\rangle\). The algorithm unfolds as follows
\[\left|A_{n+1}\right\rangle=(H-a_{n})\left|K_{n}\right\rangle-b_{n}\left|K_{n-1}\right\rangle\,, \tag{2.18}\]
where \(\left|K_{n}\right\rangle=b_{n}^{-1}\left|A_{n}\right\rangle\), \(b_{0}=0\), and \(\left|K_{0}\right\rangle=\left|\psi(0)\right\rangle\). Here, \(a_{n}\) and \(b_{n}\) are called Lanczos coefficients and are given as
\[a_{n}=\left\langle K_{n}\right|H\left|K_{n}\right\rangle\,,\qquad b_{n}=\langle A_{n}|A_{n}\rangle^{\frac{1}{2}}\,. \tag{2.19}\]
This algorithm implies that
\[H\left|K_{n}\right\rangle=a_{n}\left|K_{n}\right\rangle+b_{n+1}\left|K_{n+1}\right\rangle+b_{n}\left|K_{n-1}\right\rangle\,, \tag{2.20}\]
which reveals that the Hamiltonian \(H\) manifests as a tri-diagonal matrix; in finite-dimensional systems, this is commonly called the Hamiltonian's Hessenberg form. The Lanczos coefficients can be straightforwardly extracted from this representation: the \(a_{n}\) values correspond to the diagonal elements, while the \(b_{n}\) values are the off-diagonal elements.
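For concreteness, the following is a minimal numerical sketch of the recursion (2.18)–(2.19) for a finite-dimensional Hermitian \(H\) (our own illustration; the full re-orthogonalization loop is a standard numerical safeguard and not part of the algorithm as stated).

```python
import numpy as np

def lanczos(H, psi0, steps):
    """Sketch of the Lanczos recursion, Eqs. (2.18)-(2.19): builds the
    Krylov basis K_n and the coefficients a_n, b_n for a Hermitian H."""
    K = [psi0 / np.linalg.norm(psi0)]
    a, b = [], [0.0]                      # b[0] stands for b_0 = 0
    for n in range(steps):
        a.append(np.real(np.vdot(K[n], H @ K[n])))
        A = H @ K[n] - a[n] * K[n]
        if n > 0:
            A = A - b[n] * K[n - 1]
        for v in K:                       # full re-orthogonalization (safeguard)
            A = A - np.vdot(v, A) * v
        bn = np.linalg.norm(A)
        if bn < 1e-12:                    # Krylov space exhausted
            break
        b.append(bn)
        K.append(A / bn)
    return np.array(a), np.array(b[1:]), np.array(K)

# Example: Lanczos coefficients of a random 6x6 Hermitian H.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (X + X.conj().T) / 2
a, b, K = lanczos(H, rng.normal(size=6).astype(complex), steps=6)
print(a, b)
```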
Now we expand the state \(\left|\psi(t)\right\rangle\) in the Krylov basis
\[\left|\psi(t)\right\rangle=\sum_{n}\psi_{n}(t)\left|K_{n}\right\rangle\,, \tag{2.21}\]
and using the Schrödinger equation, we get
\[i\partial_{t}\psi_{n}(t)=a_{n}\psi_{n}(t)+b_{n}\psi_{n-1}(t)+b_{n+1}\psi_{n+1}(t)\,. \tag{2.22}\]
We take \(c_{n}=n\) and define the spread complexity as
\[C(t)=\sum_{n}n|\psi_{n}(t)|^{2}\,. \tag{2.23}\]
Here, we outline a more generalized approach for calculating the Lanczos coefficients, which remains applicable even for systems with infinite dimensions. This method relies on the survival amplitude \(S(t)\), which is defined as follows
\[S(t)=\langle\psi(t)|\psi(0)\rangle=\langle\psi(0)|\,e^{iHt}\,|\psi(0)\rangle\,, \tag{2.24}\]
and the moments are
\[\mu_{n}=\frac{d^{n}}{dt^{n}}S(t)\bigg{|}_{t=0}=\langle\psi(0)|\,(iH)^{n}\,|\psi(0)\rangle\,, \tag{2.25}\]
We show that the moments can be characterized by an unnormalized Markov chain, where the Lanczos coefficients give the transition weights. We can subsequently derive equations for these moments using the Lanczos coefficients. For instance,
\[\mu_{1}=ia_{0},\qquad\mu_{2}=-a_{0}^{2}-b_{1}^{2},\qquad\mu_{3}=-i\left(a_{0}^{3}+2a_{0}b_{1}^{2}+a_{1}b_{1}^{2}\right)\,, \tag{2.26}\]
and so on. We observe that the coefficients \(a_{n}\) are determined from the odd moments, while the coefficients \(b_{n}\) are derived from the even moments. Therefore, with the survival amplitude at hand, we can ascertain the moments of the Hamiltonian in the initial state. We can compute the Lanczos coefficients from these moments, leading us to calculate the amplitudes \(\psi_{n}(t)\). Ultimately, this allows us to evaluate the Krylov spread complexity.
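As a small illustration of this pipeline (our own sketch; the sample survival amplitude is an assumption chosen to match the coherent-state result derived in Section 4), one can differentiate \(S(t)\) symbolically and invert the first two moment relations of Eq. (2.26):

```python
import sympy as sp

# Sketch: moments from the survival amplitude, then Lanczos coefficients
# via Eq. (2.26). Sample S(t) = exp(-alpha^2 t^2 / 2) (an assumption here).
t, alpha = sp.symbols('t alpha', positive=True)
S = sp.exp(-alpha**2 * t**2 / 2)

mu = [sp.diff(S, t, n).subs(t, 0) for n in range(3)]  # mu_0, mu_1, mu_2

a0 = sp.simplify(mu[1] / sp.I)                 # from mu_1 = i a_0
b1 = sp.sqrt(sp.simplify(-mu[2] - a0**2))      # from mu_2 = -a_0^2 - b_1^2
print(a0, b1)                                  # -> 0, alpha
```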
## 3 Upper bound on Krylov complexity for pure Gaussian states
Gaussian states are mathematically appealing due to their unique structure that enables the analytical computation of various results [36; 37]. In this section, we aim to leverage these structures to determine the bounds of the Krylov complexity specific to Gaussian states. Our focus will be solely on pure Gaussian states, reserving the study of mixed states for future investigations.
### 3.1 Growth order of Krylov spread complexity for pure Gaussian states
Utilizing techniques from complex analysis, we will demonstrate that the Krylov complexity for pure Gaussian states exhibits a growth of second order or lower. To accomplish this, we will revisit some essential concepts from complex analysis [57; 58]. Let us consider \(F(z)\) as a holomorphic function on the disc \(K(0;R)\). Let
\[M(r;F)=\max_{|z|=r}\lvert F(z)\rvert\,, \tag{3.1}\]
which is the maximum of \(|F(z)|\) for \(|z|\leq r\), defined for all \(r\) in the range \(0\leq r<R\); we will write \(M(r;F)\) simply as \(M(r)\) for convenience. By its definition, \(M(r)\) is a non-decreasing function. The rate at which \(M(r)\) grows as \(r\) approaches infinity provides valuable insights into the behavior of the function \(F(z)\). To quantify this, we compare \(M(r)\) with
\[M(r)\leq e^{r^{\rho}}\,, \tag{3.2}\]
where \(\rho\) is called the order of the entire function \(F\), expressed as
\[\rho=\limsup_{r\to\infty}\frac{\log\log M(r)}{\log r}\,. \tag{3.3}\]
Several intriguing properties exist concerning the order of a function, as discussed in [58]. One key result we will employ is that if \(f\) and \(g\) are entire functions with finite orders \(\rho_{f}\) and \(\rho_{g}\) respectively, then the product \(fg\) will have an order that is at most \(\max\{\rho_{f},\rho_{g}\}\). Additionally, it's noteworthy that the order for \(F(z)\) and \(|F(z)|\) is the same.
To generalize the argument comprehensively, we consider an arbitrary complete, orthonormal, and ordered basis \(\mathcal{B}=\{|B_{n}\rangle:n=0,1,2,\ldots\}\). The pure Gaussian state \(|\psi(t)\rangle\) can be expanded in terms of this basis as
\[|\psi(t)\rangle=\sum_{n}b_{n}\,|B_{n}\rangle\,. \tag{3.4}\]
Then, complexity is defined as
\[C_{\mathcal{B}}(t)=\sum_{n}c_{n}|\langle\psi(t)|B_{n}\rangle|^{2}\,, \tag{3.5}\]
where \(c_{n}\) is a positive, increasing sequence of real numbers. When limited to pure Gaussian states created by Gaussian unitary transformations, the basis \(|B_{n}\rangle\) consists entirely of Gaussian states. Additionally, the overlap of Gaussian states, \(\langle\psi(t)|B_{n}\rangle\), is also Gaussian in nature. As a consequence, Hadamard's theorem can be applied, indicating that the growth rate of the analytic Gaussian function is constrained to an order of two or less.
Consequently, the growth of \(\langle\psi(t)|B_{n}\rangle\) is confined to an order of two or less. Likewise, the square of the magnitude of this overlap, \(|\langle\psi(t)|B_{n}\rangle|^{2}\), also cannot exceed an order of two. Summing functions with a maximum order of two still retains this upper limit on the order. Hence, \(C_{\mathcal{B}}(t)\) is similarly bounded to a growth of order two or less. This naturally leads to the conclusion that the Krylov complexity for pure Gaussian states, \(C_{\mathcal{K}}(t)\), which is defined over the Krylov basis, is also limited to a growth of order two or less.
Let us consider two unitary operators: \(U_{1}\) transforms the initial state \(|\psi(0)\rangle\) into \(|\psi(t_{1})\rangle\) according to \(|\psi(t_{1})\rangle=U_{1}\,|\psi(0)\rangle\), and \(U_{2}\) takes the state \(|\psi(t_{1})\rangle\) to \(|\psi(t_{2})\rangle\) as \(|\psi(t_{2})\rangle=U_{2}\,|\psi(t_{1})\rangle\). The Krylov complexity, denoted by \(C(U)\), for the composite unitary operation \(U=U_{2}U_{1}\) is then subject to the following condition
\[C(U)\leq C(U_{2})+C(U_{1})\,. \tag{3.6}\]
Given that both \(C(U_{1})\) and \(C(U_{2})\) have a growth order restricted to two or less, it follows that the composite Krylov complexity \(C(U)\), representing the unitary operation \(U=U_{2}U_{1}\), also exhibits a growth order of two or less.
### 3.2 Bound for pure Gaussian states
Even in the case of Gaussian states, the Krylov complexity \(C(t)\) cannot be directly calculated from the covariance matrix alone. This is because the covariance matrix lacks information about the relative phase between the initial and final states, making it impossible to compute the survival amplitude solely from this matrix. Therefore, the explicit algorithm must be employed to calculate \(C(t)\). Nonetheless, one can still derive meaningful bounds for this complexity measure.
For Gaussian states, the number operator \(\hat{n}=a^{\dagger}a\) holds particular importance. The eigenstates of \(\hat{n}\) constitute a basis known as the Fock basis, also referred to as the number basis
\[\mathcal{F}=\left\{\left|n\right\rangle;n=0,1,2,\ldots\right\}\,, \tag{3.7}\]
that satisfies
\[\hat{n}\left|n\right\rangle=n\left|n\right\rangle\,,\quad\left\langle n|n\right\rangle=1\,,\quad a\left|n\right\rangle=\sqrt{n}\left|n-1\right\rangle\,,\quad a^{\dagger}\left|n\right\rangle=\sqrt{n+1}\left|n+1\right\rangle\,. \tag{3.8}\]
The Fock basis forms an orthonormal complete ordered set as
\[\left\langle n|n^{\prime}\right\rangle=\delta_{nn^{\prime}}\,,\qquad\sum_{n=0}^{\infty}\left|n\right\rangle\left\langle n\right|=\mathbb{1}\,. \tag{3.9}\]
For the positive increasing sequence of real numbers \(c_{n}\), we can define Fock spread complexity in the following way
\[C_{\mathcal{F}}(t)=\sum_{n}c_{n}|\langle\psi(t)|n\rangle|^{2}\,, \tag{3.10}\]
and the Krylov complexity \(C(t)\) minimizes over all choices of basis, resulting in a bound for \(C(t)\):
\[0<C(t)\leq C_{\mathcal{F}}(t)\,. \tag{3.11}\]
For \(c_{n}=n\), \(C_{\mathcal{B}}(t)\) quantifies the average dispersion across the basis \(\mathcal{B}\). When applied to the Krylov basis, this is termed Krylov spread complexity. In contrast, \(C_{\mathcal{F}}(t)\) is the expectation value of the total number operator and measures the average number of particles in a system comprising \(n\) Bosons or Fermions. Henceforth, we will exclusively consider \(c_{n}=n\). Subsequently, we will demonstrate that the upper bound for Krylov complexity can be formulated solely in terms of the spectrum of the relative covariance matrix. Assume that a given Gaussian state has an \(n\times n\) relative covariance matrix with eigenvalue spectrum \(\{\lambda_{i}\}\).
For Gaussian states, \(C_{\mathcal{F}}(t)\) can be written explicitly in terms of the relative covariance matrix [37]
\[C(t)\leq C_{\mathcal{F}}(t)=\begin{cases}-\frac{1}{4}\left(\operatorname{Tr}(\mathbf{I}_{n\times n}-\Delta)\right)&\text{ for Bosons }\,,\\ +\frac{1}{4}\left(\operatorname{Tr}(\mathbf{I}_{n\times n}-\Delta)\right)&\text{ for Fermions }\,,\end{cases} \tag{3.12}\]
where \(\Delta\) is the relative covariance matrix. The trace of a matrix is the sum of its eigenvalues. Thus, the bound can entirely be derived from the spectrum of the relative covariance matrix.
\[C(t)\leq C_{\mathcal{F}}(t)=\begin{cases}-\frac{1}{4}\left(n-\sum\lambda_{i}\right)&\text{ for Bosons }\,,\\ +\frac{1}{4}\left(n-\sum\lambda_{i}\right)&\text{ for Fermions }\,.\end{cases} \tag{3.13}\]
Each iteration step in the survival amplitude method for establishing the lower bound contributes to the total sum \(\sum_{n}n|\langle\psi(t)|K_{n}\rangle|^{2}\). However, for large \(n\), obtaining this contribution becomes challenging since the number of terms required for computing the moment via Lanczos coefficients increases following the Catalan numbers \(C_{n}\). For instance, when \(a_{n}=0\), we have \(\mu_{n}=0\) for all odd \(n\). The number of terms contributing to \(\mu_{n}\) for each even \(n\) is determined by the Catalan numbers \(C_{\frac{n}{2}}\), which grow exponentially, \(\mathcal{O}\left(4^{n}\right)\), as \(n\) becomes large
\[\text{Catalan number }C_{k}=\frac{1}{k+1}\binom{2k}{k}\,. \tag{3.14}\]
For example, for \(k=0,1,2,3,\ldots\), the first few Catalan numbers \(C_{k}\) are \(1,1,2,5,14,42,\ldots\). Hence, we can only feasibly compute a limited number of terms in the iteration series using the survival amplitude method. Assume we have calculated up to \(r\) iterations using this algorithm. Here, \(r\) must satisfy \(r\leq|\mathcal{K}|\), where \(|\mathcal{K}|\) is the dimension of the Krylov basis. Then, the contribution to the Krylov complexity can be expressed as
\[C^{r}(t)=\sum_{n=0}^{r}n|\langle\psi(t)|K_{n}\rangle|^{2}\,. \tag{3.15}\]
From this, we get the bound for true Krylov complexity, \(C(t)\),
\[0<C^{r}(t)<C(t)\leq C_{\mathcal{F}}(t)<\infty\,. \tag{3.16}\]
In summary, obtaining a more accurate bound is possible with higher values of \(r\). In the subsequent section dedicated to Bosonic and Fermionic Gaussian states, we will employ \(C_{\mathcal{F}}(t)\) to derive an explicit upper bound. For Gaussian states that are relatively simple, we will also directly calculate \(C(t)\). We can only estimate the lower bound \(C^{r}(t)\) for more intricate Gaussian states. Computing this lower bound becomes increasingly complex as the number of terms required grows exponentially, dictated by the Catalan numbers.
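A minimal helper (our own sketch) evaluating the bound of Eq. (3.13) from the spectrum of a given relative covariance matrix; as a preview, it is applied to the single-mode Bosonic spectrum \(\{e^{2r},e^{-2r}\}\) derived in the next section.

```python
import numpy as np

def fock_bound(delta, bosonic=True):
    """Upper bound C_F of Eq. (3.13): -(1/4)(n - sum of eigenvalues) for
    Bosons, +(1/4)(n - sum of eigenvalues) for Fermions."""
    n = delta.shape[0]
    lam_sum = np.sum(np.linalg.eigvals(delta)).real
    return (-0.25 if bosonic else 0.25) * (n - lam_sum)

# Preview of Section 4: spec(Delta) = (e^{2r}, e^{-2r}) gives sinh^2(r).
r = 0.7
delta = np.diag([np.exp(2 * r), np.exp(-2 * r)])
print(fock_bound(delta), np.sinh(r)**2)   # both equal sinh^2 r
```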
## 4 Single-mode Bosons
We focus on a single Bosonic degree of freedom, for which the transformation group that preserves the canonical commutation relations is \(Sp(2,\mathbb{R})\). The initial state \(|\psi\rangle\) is such that \(a\,|\psi\rangle=0\). Upon transformation, we get a new state \(|\tilde{\psi}\rangle\) characterized by \(\tilde{S}^{a}=(\tilde{q},\tilde{p})\), which satisfies \(\tilde{a}|\tilde{\psi}\rangle=0\). The Bogoliubov transformation relating \(\tilde{S}^{a}\) to \(S^{a}\) is then given by
\[\tilde{a}=\alpha a+\beta a^{\dagger}\,,\qquad\tilde{a}^{\dagger}=\alpha^{*}a^{\dagger}+\beta^{*}a\,. \tag{4.1}\]
Since \(\left[a,a^{\dagger}\right]=\left[\tilde{a},\tilde{a}^{\dagger}\right]=1\), the coefficients \(\alpha\) and \(\beta\) satisfy \(|\alpha|^{2}-|\beta|^{2}=1\), and the most general Bogoliubov transformation is obtained for \(\alpha=e^{i\varphi}\cosh\left(r\right)\) and \(\beta=e^{i\theta}\sinh\left(r\right)\). The symplectic matrix for this transformation is given by
\[M=\begin{bmatrix}\cos\varphi\cosh\left(r\right)+\cos\theta\sinh\left(r\right)&\sin\theta\sinh\left(r\right)-\sin\varphi\cosh\left(r\right)\\ \sin\varphi\cosh\left(r\right)+\sin\theta\sinh\left(r\right)&\cos\varphi\cosh\left(r\right)-\cos\theta\sinh\left(r\right)\end{bmatrix}\,. \tag{4.2}\]
For Krylov complexity, we can always pick an initial state \(\ket{K_{0}}\) whose covariance matrix is \(V=\mathbb{1}\). Once we fix this, the final state \(\ket{\tilde{\psi}}\), according to the transformation (2.6), has covariance matrix
\[\tilde{V}^{ab}=\begin{bmatrix}\cosh 2r+\cos\left(\theta+\varphi\right)\sinh 2r&\sin\left(\theta+\varphi\right)\sinh 2r\\ \sin\left(\theta+\varphi\right)\sinh 2r&\cosh 2r-\cos\left(\theta+\varphi\right)\sinh 2r\end{bmatrix}\,, \tag{4.3}\]
which is also the relative covariance matrix between \(\ket{K_{0}}\) and \(\ket{\tilde{\psi}}\)
\[\Delta_{b}^{a}=\tilde{V}^{ac}g_{cb}=\begin{bmatrix}\cosh 2r+\cos\left(\theta+\varphi\right)\sinh 2r&\sin\left(\theta+\varphi\right)\sinh 2r\\ \sin\left(\theta+\varphi\right)\sinh 2r&\cosh 2r-\cos\left(\theta+\varphi\right)\sinh 2r\end{bmatrix}\,. \tag{4.4}\]
The upper bound for Krylov complexity is calculated as \(-(1/4)\text{Tr}(\mathbb{1}-\Delta)\) with
\[\text{Tr}\left(\mathbb{1}-\Delta\right) =\text{Tr}\begin{bmatrix}1-\cosh 2r-\cos\left(\theta+\varphi\right)\sinh 2r&-\sin\left(\theta+\varphi\right)\sinh 2r\\ -\sin\left(\theta+\varphi\right)\sinh 2r&1-\cosh 2r+\cos\left(\theta+\varphi\right)\sinh 2r\end{bmatrix}\,,\] \[=2-2\cosh 2r=-4\sinh^{2}r\,, \tag{4.5}\]
leading to
\[C(t)\,\leq \,-\frac{1}{4}\text{Tr}(\mathbb{1}-\Delta)=\sinh^{2}r\,. \tag{4.6}\]
To continue, it is important to note that this outcome does not depend on the phase parameters, since \(\tilde{V}^{ab}\) involves them only through the combination \((\theta+\varphi)\), which drops out of the trace. It is also possible to get this bound directly from the spectrum of the relative covariance matrix \(\Delta\), \(\text{spec}(\Delta)=(e^{2r},e^{-2r})\):
\[C(t)\leq C_{\mathcal{F}}(t)=-\frac{1}{4}\left(n-\sum\lambda_{i}\right)=-\frac{1}{4}\left(2-e^{2r}-e^{-2r}\right)=\sinh^{2}r\,. \tag{4.7}\]
In subsequent sections, we will often employ the infinite sum of a geometric progression to arrive at a compact expression for the Krylov complexity
\[\sum_{j=0}^{\infty}z^{j}=\frac{1}{1-z}\,,\quad\text{ for }\quad|z|<1\,. \tag{4.8}\]
Consequently, after one or two differentiations, followed by additional straightforward calculations, we obtain other useful infinite sums
\[\begin{split}\sum_{j=0}^{\infty}jz^{j}&=\frac{z}{(1-z)^{2}}\,,\quad\text{ for }\quad|z|<1\,,\\ \sum_{j=0}^{\infty}j^{2}z^{j}&=\frac{z(z+1)}{(1-z)^{3}}\,,\quad\text{ for }\quad|z|<1\,.\end{split} \tag{4.9}\]
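These identities are elementary; a quick numerical sanity check (our own, for a sample value of \(|z|<1\)) reads:

```python
import numpy as np

# Numerical sanity check of the sums in Eqs. (4.8)-(4.9) for |z| < 1.
z = 0.3
j = np.arange(0, 200)
assert np.isclose((z**j).sum(), 1 / (1 - z))
assert np.isclose((j * z**j).sum(), z / (1 - z)**2)
assert np.isclose((j**2 * z**j).sum(), z * (z + 1) / (1 - z)**3)
```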
### 4.1 Initial state: Vacuum state
We have observed that the Krylov complexity is unique for a fixed initial state, as it is the minimum over all possible bases. In our specific case, we choose the initial state to be the eigenstate with zero eigenvalue of the annihilation operator. We refer to this state as the vacuum state, denoted by \(\ket{0}\), which satisfies \(a\ket{0}=0\). The vacuum state's covariance matrix is the identity, \(V=\mathbb{1}\). Therefore, the complexity of the initial state itself is zero, as the relative covariance matrix is \(\Delta=\mathbb{1}\).
### 4.2 Coherent states
The first non-trivial state we would like to focus on is the coherent state \(\ket{z}\), with \(z=i\alpha t\in\mathbb{C}\). It is defined as a displaced vacuum state, \(\ket{z}=D(z)\ket{0}\), where
\[D(z)=\exp\Bigl{(}za^{\dagger}-z^{*}a\Bigr{)}\,, \tag{4.10}\]
with the Hamiltonian generating the coherent states being
\[H=\alpha\left(a^{\dagger}+a\right)\,. \tag{4.11}\]
Now, we shall compute Krylov complexity using two different techniques: the Lanczos algorithm and survival amplitude.
#### 4.2.1 Krylov complexity via Lanczos algorithm
Starting with the initial state \(\ket{K_{0}}=\ket{0}\) and \(b_{0}=0\), we find \(a_{0}=\bra{0}H\ket{0}=0\). Applying the Hamiltonian operator \(H\) to the initial state \(\ket{K_{0}}\), we have \(\ket{A_{1}}=(H-a_{0})\ket{K_{0}}\). Given that \(H=\alpha(a^{\dagger}+a)\), the action of \(H\) on \(\ket{0}\) results in \(\alpha\ket{1}\). Next, we compute \(b_{1}\), the square root of the inner product \(\langle A_{1}|A_{1}\rangle\). Since \(\ket{A_{1}}=\alpha\ket{1}\), the norm of \(\ket{A_{1}}\) is simply \(\alpha\). Therefore, we find \(b_{1}=\alpha\), and,
\[\ket{K_{1}}=b_{1}^{-1}\ket{A_{1}}=\ket{1}\,. \tag{4.12}\]
For the next iteration, to find \(\ket{K_{2}}\), we start with \(H\ket{K_{1}}=\alpha(a^{\dagger}+a)\ket{1}=\alpha\sqrt{2}\,\ket{2}+\alpha\ket{0}\), which gives \(a_{1}=\bra{K_{1}}H\ket{K_{1}}=0\). Furthermore,
\[\ket{A_{2}}=(H-a_{1})\ket{K_{1}}-b_{1}\ket{K_{0}}=\alpha\sqrt{2}\,\ket{2}\,, \tag{4.13}\]
and using \(b_{2}=\sqrt{\langle A_{2}|A_{2}\rangle}=\alpha\sqrt{2}\), we have
\[\ket{K_{2}}=b_{2}^{-1}\ket{A_{2}}=\ket{2}\,. \tag{4.14}\]
In fact,
\[\ket{A_{n+1}}=H\ket{K_{n}}-b_{n}\ket{K_{n-1}}=\alpha\sqrt{n+1}\ket{n+1}\,, \tag{4.15}\]
resulting in \(b_{n+1}=\sqrt{\langle A_{n+1}|A_{n+1}\rangle}=\alpha\sqrt{n+1}\), i.e., \(b_{n}=\alpha\sqrt{n}\), and \(\ket{K_{n}}=b_{n}^{-1}\ket{A_{n}}=\ket{n}\). Thus, from the Lanczos algorithm, we find that the Krylov basis coincides with the number basis, \(\ket{K_{n}}=\ket{n}\), with Lanczos coefficients \(a_{n}=0\) and \(b_{n}=\alpha\sqrt{n}\). We can now expand the coherent state \(\ket{z}\) in the Krylov basis as
\[\ket{z}=\exp\left[-\frac{1}{2}\lvert z\rvert^{2}\right]\sum_{n=0}^{\infty} \frac{z^{n}}{\sqrt{n!}}\ket{n}\,, \tag{4.16}\]
where
\[\psi_{n}=\langle n|z\rangle=\exp\left[-\frac{1}{2}\lvert z\rvert^{2}\right]\frac{z^{n}}{\sqrt{n!}}\,. \tag{4.17}\]
Then,
\[p_{n}=\lvert\psi_{n}\rvert^{2}=\exp\left[-\lvert z\rvert^{2}\right]\frac{ \lvert z\rvert^{2n}}{n!}\,. \tag{4.18}\]
From this, we arrive at the expression of complexity to be
\[C(t)=\sum_{n}n\,p_{n}=\lvert z\rvert^{2}=\alpha^{2}t^{2}\,. \tag{4.19}\]
where we used Eq. (4.8) to perform the summation and \(z=i\alpha t\).
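Since the occupations (4.18) form a Poisson distribution with mean \(|z|^{2}\), the result can be confirmed numerically with a short sketch (our own; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import gammaln

# Check of Eqs. (4.18)-(4.19): p_n is Poisson with mean |z|^2 = alpha^2 t^2,
# so the occupations normalize and sum_n n p_n = C(t) = alpha^2 t^2.
alpha, t = 1.3, 0.8
x = (alpha * t)**2
n = np.arange(0, 80)
p = np.exp(-x + n * np.log(x) - gammaln(n + 1))   # p_n of Eq. (4.18)
print(p.sum(), (n * p).sum(), x)                  # -> 1, x, x
```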
#### 4.2.2 Krylov complexity via survival amplitude method
We start by expressing the survival amplitude as
\[S(t)=\langle\psi(t)|\psi(0)\rangle\,. \tag{4.20}\]
For the case of coherent states, we have
\[|\psi(0)\rangle=|0\rangle\,,\qquad|\psi(t)\rangle=|z\rangle=\exp\left[-\frac{1}{2}\alpha^{2}t^{2}\right]\sum_{n=0}^{\infty}\frac{(i\alpha t)^{n}}{\sqrt{n!}}\,|n\rangle\,, \tag{4.21}\]
therefore, we arrive at
\[\psi_{0}(t)=S(t)=e^{-\frac{1}{2}\alpha^{2}t^{2}}\,. \tag{4.22}\]
Using the survival amplitude, the moments can be computed as follows
\[\mu_{n}=\frac{d^{n}}{dt^{n}}S(t)\bigg{|}_{t=0}\,, \tag{4.23}\]
which leads to
\[\mu_{n}=\begin{cases}0&\text{for odd }n\,,\\ (i\alpha)^{n}\,(n-1)!!&\text{for even }n\,.\end{cases} \tag{4.24}\]
The Lanczos coefficients can be determined, using the moments, as
\[a_{n}=0\,,\qquad b_{n}=\alpha\sqrt{n}\,. \tag{4.25}\]
Using the formula for the Lanczos coefficients \(b_{n}=\alpha\sqrt{n}\) and the survival amplitude \(\psi_{0}(t)=S(t)=e^{-\frac{1}{2}\alpha^{2}t^{2}}\), we can determine the Krylov amplitude \(\psi_{n}(t)\) through a recursive method
\[\frac{i}{\alpha}\partial_{t}\psi_{n}(t)=\sqrt{n+1}\psi_{n+1}(t)+\sqrt{n}\psi_ {n-1}(t)\,. \tag{4.26}\]
For \(n=0\), \(b_{0}=0\), and we are left with the equation \(\frac{i}{\alpha}\partial_{t}\psi_{0}(t)=\psi_{1}(t)\) to solve. This allows us to find the Krylov amplitude \(\psi_{1}(t)\) as follows
\[\psi_{1}(t)=-i\alpha t\psi_{0}(t)\,, \tag{4.27}\]
however, for \(n=1\), we have the equation
\[\frac{i}{\alpha}\partial_{t}\psi_{1}(t)=\sqrt{2}\,\psi_{2}(t)+\psi_{0}(t)\,, \tag{4.28}\]
which leads to the Krylov amplitude \(\psi_{2}(t)\) as
\[\psi_{2}(t)=-\frac{1}{\sqrt{2}}\alpha^{2}t^{2}\psi_{0}(t)\,. \tag{4.29}\]
Similarly, the \(n=2\) equation gives \(\psi_{3}(t)=\frac{i}{\sqrt{3!}}\,\alpha^{3}t^{3}\,\psi_{0}(t)\), and for \(n=3\), we have
\[\frac{i}{\alpha}\partial_{t}\psi_{3}(t)=\sqrt{4}\psi_{4}(t)+\sqrt{3}\psi_{2}( t)\,, \tag{4.30}\]
resulting in the following expression for the Krylov amplitude \(\psi_{4}(t)\)
\[\psi_{4}(t)=\frac{1}{\sqrt{4!}}\alpha^{4}t^{4}\psi_{0}(t)\,. \tag{4.31}\]
As \(\psi_{0}(t)=e^{-\frac{1}{2}\alpha^{2}t^{2}}\), the expression for general \(|\psi_{n}(t)|^{2}\) is given as
\[|\psi_{n}(t)|^{2}=\frac{(\alpha^{2}t^{2})^{n}}{n!}e^{-\alpha^{2}t^{2}}\,, \tag{4.32}\]
with Krylov complexity \(C(t)\) expressed as
\[C(t)=\sum_{n=0}^{\infty}n|\psi_{n}(t)|^{2}=e^{-\alpha^{2}t^{2}} \sum_{n=0}^{\infty}n\frac{(\alpha^{2}t^{2})^{n}}{n!}=\alpha^{2}t^{2}\,. \tag{4.33}\]
Interestingly, we obtain the same expression for the Krylov complexity computed using the Lanczos algorithm and the survival amplitude method. Furthermore, this saturates the bound (3.11), as \(C(t)=C_{\mathcal{F}}(t)=\alpha^{2}t^{2}\). In Figure 1, we plot the Krylov spread complexity for coherent states as a function of time. The Krylov complexity grows quadratically with time, indicating that the final quantum state becomes more and more different from the initial vacuum state as time increases.
### 4.3 Single-mode squeezing and squeezed states
The single-mode squeezed state \(|r\rangle\) is generated by applying the single-mode squeezing operator to the vacuum state, i.e., \(S(r=\eta t)=\exp\Bigl{[}r(a^{2}-{a^{\dagger}}^{2})/2\Bigr{]}\). Consequently, the Hamiltonian contains a term \({a^{\dagger}}^{2}\), which is responsible for generating photon pairs in quantum optical experiments, as well as \(a^{2}\) to ensure hermiticity. Thus, the Hamiltonian can be expressed as \(H=\eta({a^{2}+{a^{\dagger}}^{2}})/2\). To calculate the Krylov complexity using the Lanczos algorithm, it's worth noting that
\[H\left|n\right\rangle=\frac{\eta}{2}\left({a^{2}+{a^{\dagger}}^{2}}\right)\left|n\right\rangle=\frac{\eta}{2}\left(\sqrt{n\left(n-1\right)}\left|n-2\right\rangle+\sqrt{(n+1)(n+2)}\left|n+2\right\rangle\right)\,, \tag{4.34}\]
which gives the Lanczos coefficients
\[a_{m}=0\,,\qquad b_{m}=\frac{\eta}{2}\sqrt{2m(2m-1)}\,, \tag{4.35}\]
and the Krylov basis
\[\left|K_{m}\right\rangle=\left|2m\right\rangle\,,\quad\text{for}\quad m=0,1,2,\ldots \tag{4.36}\]
which consists of the even-number states only. In the \(\left|K_{m}\right\rangle\) basis, the squeezed state \(\left|r\right\rangle\) is expressed as
\[\left|r\right\rangle=\frac{1}{\sqrt{\cosh\left(r\right)}}\sum_{n =0}^{\infty}\frac{\sqrt{(2n)!}}{2^{n}n!}\tanh^{n}r\,\left|2n\right\rangle\,, \tag{4.37}\]
and Krylov complexity is
\[C(t)=\sum_{n}nP_{n}=\sinh^{2}r=\sinh^{2}\eta t\,. \tag{4.38}\]
where we parametrized \(r\) as \(r=\eta t\) and used Eq. (4.8) to simplify the summation.
We can also compute the Krylov complexity using the survival amplitude technique. The moments \(\mu_{n}=\frac{d^{n}}{dt^{n}}S(t)\bigg{|}_{t=0}\) can be computed using the survival amplitude
\[S(t)=\langle r|0\rangle=\frac{1}{\sqrt{\cosh\eta t}} \tag{4.39}\]
which gives
\[\mu_{1} = 0,\quad\mu_{2}=\frac{(i\eta)^{2}}{2},\qquad\qquad\mu_{3}=0,\quad \mu_{4}=28\left(\frac{i\eta}{2}\right)^{4},\] \[\mu_{5} = 0,\quad\mu_{6}=1112\left(\frac{i\eta}{2}\right)^{6},\quad\mu_{7} =0,\quad\mu_{8}=87568\left(\frac{i\eta}{2}\right)^{8}\,. \tag{4.40}\]
Figure 1: Krylov spread complexity for coherent states, \(\alpha=100\), as a function of time.
From this, we arrive at the Lanczos coefficients
\[\mu_{2} =i^{2}b_{1}^{2} \Rightarrow b_{1}=\frac{\eta}{2}\sqrt{2(2-1)}\] \[\mu_{4} =i^{4}(b_{1}^{4}+b_{1}^{2}b_{2}^{2}) \Rightarrow b_{2}=\frac{\eta}{2}\sqrt{4(4-1)}\] \[\mu_{6} =i^{6}(b_{1}^{6}+2b_{1}^{4}b_{2}^{2}+b_{1}^{2}b_{2}^{4}+b_{1}^{2} b_{2}^{2}b_{3}^{2}) \Rightarrow b_{3}=\frac{\eta}{2}\sqrt{6(6-1)}\,, \tag{4.41}\]
which results in
\[a_{n}=0\,,\qquad b_{n}=\frac{\eta}{2}\sqrt{2n(2n-1)}\,. \tag{4.42}\]
Then, one can compute the Krylov amplitudes \(\psi_{n}(t)\) via the recursion method
\[i\,\partial_{t}\psi_{n}(t)=a_{n}\psi_{n}(t)+b_{n+1}\psi_{n+1}(t)+b_{n}\psi_{n-1}(t)\,, \tag{4.43}\]
which, for \(n=0\), gives the expression for Krylov amplitude \(\psi_{1}(t)\) as
\[\psi_{1}(t)=-\frac{i\sinh{(\eta t)}}{\sqrt{2}\cosh^{\frac{3}{2}}\eta t}\,. \tag{4.44}\]
In a similar fashion, for \(n=1,2\), and \(3\), the Krylov amplitudes \(\psi_{2}(t)\), \(\psi_{3}(t)\), and \(\psi_{4}(t)\) can be derived as shown below
\[\psi_{2}(t) = -\frac{\sqrt{3!!}\,\sinh^{2}\eta t}{\sqrt{4!!}\cosh^{\frac{5}{2}}\eta t}\,, \psi_{3}(t) = i\frac{\sqrt{5!!}\sinh^{3}\eta t}{\sqrt{6!!}\cosh^{\frac{7}{2}}\eta t}\,,\] \[\psi_{4}(t) = \frac{\sqrt{7!!}\sinh^{4}\eta t}{\sqrt{8!!}\cosh^{\frac{9}{2}}\eta t}\,. \tag{4.45}\]
From these calculations, a general formula for \(|\psi_{n}(t)|^{2}\) emerges
\[|\psi_{n}(t)|^{2}=\frac{(2n-1)!!}{(2n)!!}\frac{\sinh^{2n}\eta t}{\cosh^{2n+1}\eta t}=\frac{(2n-1)!!}{(2n)!!}\frac{1}{\cosh\eta t}\tanh^{2n}\eta t\,, \tag{4.46}\]
along with the associated Krylov complexity
\[C(t)=\sum_{n=0}^{\infty}n|\psi_{n}(t)|^{2}=\sum_{n=0}^{\infty}n\frac{(2n-1)!! }{(2n)!!}\frac{1}{\cosh\eta t}\tanh^{2n}\eta t=\sinh^{2}\eta t\,. \tag{4.47}\]
Interestingly, we obtain the same expression for the Krylov complexity using the Lanczos algorithm and the survival amplitude method. Furthermore, this saturates the bound (3.11), as \(C(t)=C_{\mathcal{F}}(t)=\sinh^{2}\eta t\). In Figure 2, we plot the Krylov spread complexity for single-mode squeezed states as a function of time.
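As a numerical cross-check (our own sketch, with arbitrary parameter values), the occupations (4.46) normalize to one, and the mean photon number over the even Fock levels \(\left|2n\right\rangle\), \(\sum_{n}2n\,|\psi_{n}(t)|^{2}\), reproduces \(\sinh^{2}\eta t\), the saturated bound quoted above:

```python
import numpy as np
from scipy.special import gammaln

# Check of Eq. (4.46): normalization and mean photon number of the
# squeezed state, sum_n 2n |psi_n|^2 = sinh^2(eta t).
eta, t = 0.6, 1.0
n = np.arange(0, 500)
# (2n-1)!!/(2n)!! = (2n)! / (4^n (n!)^2), evaluated in log space
log_ratio = gammaln(2 * n + 1) - 2 * n * np.log(2.0) - 2 * gammaln(n + 1)
p = np.exp(log_ratio) * np.tanh(eta * t)**(2 * n) / np.cosh(eta * t)
print(p.sum(), (2 * n * p).sum(), np.sinh(eta * t)**2)
```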
### 4.4 Displaced squeezing states
A generic squeezed state can be generated by applying the displacement operator to a squeezed vacuum state [38]
\[\left|z,r\right\rangle=D(z)S(r)\left|0\right\rangle\,. \tag{4.48}\]
The expectation value for the product of the annihilation and creation operators in the case of a displaced squeezed state is given by the following expression
\[\langle a^{\dagger}a\rangle=|z|^{2}+\sinh^{2}r\,. \tag{4.49}\]
It's important to note that when \(r=0\), we obtain a coherent state, and for \(z=0\), a squeezed vacuum state is produced. Using what we've learned from the squeezed vacuum state, the displaced squeezed state is expanded in terms of the number states
\[|z,r\rangle=\sum_{n=0}^{\infty}C_{n}\,|n\rangle\, \tag{4.50}\]
and an Ansatz is made accordingly
\[C_{0}=\frac{N}{\sqrt{\cosh{(r)}}}\,, \tag{4.51}\]
and we note that
\[C_{0}=\langle 0|z,r\rangle=\langle 0|\,D(z)S(r)\,|0\rangle=\langle-z|r\rangle\, \tag{4.52}\]
Figure 2: Krylov spread complexity for squeezed states, \(\eta=3\), as a function of time.
with
\[\langle-z|r\rangle=\exp\left[-\frac{1}{2}|z|^{2}\right]\sum_{n=0}^{\infty}(z^{*})^{2n}\left[(2n)!\right]^{-1/2}C_{2n}\,, \tag{4.53}\]
where
\[C_{2n}=\frac{(-1)^{n}}{\sqrt{\cosh\left(r\right)}}\frac{\sqrt{(2n)!}}{2^{n}n!}(e^{i\theta}\tanh\left(r\right))^{n}\,, \tag{4.54}\]
is obtained from a squeezed vacuum state. Thus we arrive at
\[N=\langle-z|r\rangle\,\sqrt{\cosh\left(r\right)}=\exp\!\left[-\frac{1}{2}|z|^{2}-\frac{1}{2}z^{*2}e^{i\theta}\tanh\left(r\right)\right]. \tag{4.55}\]
Putting Eq. (4.55) in Eq. (4.51), we get
\[C_{0}=\frac{1}{\sqrt{\cosh\left(r\right)}}\,\exp\!\left[-\frac{1}{2}|z|^{2}-\frac{1}{2}z^{*2}e^{i\theta}\tanh\left(r\right)\right]. \tag{4.56}\]
From this, we can deduce the survival amplitude \(S(t)=C_{0}\). The analysis from this point onwards uses the substitutions \(\theta=0\), \(z=i\alpha t\), and \(r=\eta t\), for which Eq. (4.56) gives the survival amplitude
\[S(t)=\frac{1}{\sqrt{\cosh\left(\eta t\right)}}\exp\!\left[-\frac{1}{2}\alpha^{2}t^{2}+\frac{1}{2}\alpha^{2}t^{2}\tanh\left(\eta t\right)\right]. \tag{4.57}\]
In the limit where \(\eta\) approaches zero, the survival amplitude reduces to that of single-mode coherent states, Eq. (4.22). Conversely, when \(\alpha\) approaches zero, the survival amplitude corresponds to that of squeezed states, Eq. (4.39). The moments \(\mu_{n}=\frac{d^{n}}{dt^{n}}S(t)\big{|}_{t=0}\) for \(n=1,\ldots,6\) are
\[\mu_{1} =0\,,\] \[\mu_{2} =-\alpha^{2}-\frac{\eta^{2}}{2}\,,\] \[\mu_{3} =3\alpha^{2}\eta\,,\] \[\mu_{4} =3\alpha^{4}+3\alpha^{2}\eta^{2}+\frac{7\eta^{4}}{4}\,,\] \[\mu_{5} =-15\alpha^{2}\eta^{3}-\frac{\alpha^{2}\left(960\alpha^{2}\eta+640\eta^{3}\right)}{32}\,,\] \[\mu_{6} =-\frac{45\alpha^{4}\eta^{2}}{2}-\frac{105\alpha^{2}\eta^{4}}{4}+\frac{\alpha^{2}\left(-960\alpha^{4}+5760\alpha^{2}\eta^{2}\right)}{64}-\frac{139\eta^{6}}{8}\,, \tag{4.58}\]
which give the following Lanczos coefficients
\[a_{0} =0\,,\qquad b_{1}=\sqrt{\alpha^{2}+\frac{\eta^{2}}{2}}\,,\qquad a_{1}=\frac{3i\alpha^{2}\eta}{\alpha^{2}+\frac{\eta^{2}}{2}}\,,\] \[b_{2} =\frac{\sqrt{9\alpha^{4}\eta^{2}+\left(-\alpha^{2}-\frac{\eta^{2}}{2}\right)^{3}+\left(\alpha^{2}+\frac{\eta^{2}}{2}\right)\left(3\alpha^{4}+3\alpha^{2}\eta^{2}+\frac{7\eta^{4}}{4}\right)}}{\left(\alpha^{2}+\frac{\eta^{2}}{2}\right)}\,, \tag{4.59}\]
and the expressions for the survival amplitude and its absolute value squared, respectively, are
\[\psi_{0}(t) =S(t)=\frac{1}{\sqrt{\cosh{(\eta t)}}}\exp\biggl{[}-\frac{1}{2} \alpha^{2}t^{2}+\frac{1}{2}\alpha^{2}t^{2}\tanh{(\eta t)}\biggr{]}\,, \tag{4.60}\] \[|\psi_{0}(t)|^{2} =\frac{\exp\left[\alpha^{2}t^{2}\tanh{(\eta t)}-\alpha^{2}t^{2} \right]}{\cosh{(\eta t)}}\,.\]
Similarly, the expressions for the Krylov amplitude \(\psi_{1}(t)\) and its modulus squared \(|\psi_{1}(t)|^{2}\) are
\[\psi_{1}(t) =-\frac{\sqrt{2}i\left(-\frac{\alpha^{2}\eta t^{2}}{2}-\frac{ \alpha^{2}t\sinh{(2\eta t)}}{2}+\frac{\alpha^{2}t\cosh{(2\eta t)}}{2}+\frac{ \alpha^{2}t}{2}+\frac{\eta\sinh{(2\eta t)}}{4}\right)e^{\frac{\alpha^{2}t^{2} \left(\tanh{(\eta t)}-1\right)}{2}}}{\sqrt{2\alpha^{2}+\eta^{2}}\cosh^{\frac{5 }{2}}{(\eta t)}}\,,\] \[|\psi_{1}(t)|^{2} =\frac{\left(\alpha^{2}t\left(\frac{\eta t}{\cosh{(\eta t)}}+2 \sinh{(\eta t)}-2\cosh{(\eta t)}\right)-\eta\sinh{(\eta t)}\right)^{2}e^{ \alpha^{2}t^{2}\left(\tanh{(\eta t)}-1\right)}}{\left(4\alpha^{2}+2\eta^{2} \right)\cosh^{3}{(\eta t)}}\,.\]
The expression for \(\psi_{2}(t)\) is quite lengthy, so we introduce new variables to express it compactly, such as
\[A =\frac{\sqrt{9\alpha^{4}\eta^{2}-\left(\alpha^{2}+\frac{\eta^{2} }{2}\right)^{3}+\left(\alpha^{2}+\frac{\eta^{2}}{2}\right)\left(3\alpha^{4}+3 \alpha^{2}\eta^{2}+\frac{7\eta^{4}}{4}\right)}}{\left(\alpha^{2}+\frac{\eta^{2 }}{2}\right)}\,,\] \[B =\frac{3\alpha^{2}\eta\left(-\frac{\eta\sinh{(\eta t)}}{2\cosh^{ \frac{5}{2}}{(\eta t)}}+\frac{\left(\alpha^{2}\eta t^{2}\,\text{sech}^{2}\,{( \eta t)}+\alpha^{2}t\tanh{(\eta t)}-\alpha^{2}t\right)}{\sqrt{\cosh{(\eta t)}} }\right)}{\left(\alpha^{2}+\frac{\eta^{2}}{2}\right)^{\frac{3}{2}}}\,,\] \[C =\sqrt{\frac{\alpha^{2}+\frac{\eta^{2}}{2}}{\cosh{(\eta t)}}}\,, \quad D1=\frac{3\eta^{2}\sinh^{2}{(\eta t)}}{4\cosh^{\frac{5}{2}}{(\eta t)}} \,,\quad D2=\frac{\eta^{2}}{2\sqrt{\cosh{(\eta t)}}}\,,\] \[D3 =\frac{\eta\left(\frac{\alpha^{2}\eta t^{2}\,\text{sech}^{2}\,{( \eta t)}}{2}+\alpha^{2}t\tanh{(\eta t)}-\alpha^{2}t\right)\sinh{(\eta t)}}{ \cosh^{\frac{3}{2}}{(\eta t)}}\,,\] \[D4 =\frac{\left(\frac{\alpha^{2}\eta t^{2}\,\text{sech}^{2}\,{( \eta t)}}{2}+\alpha^{2}t\tanh{(\eta t)}-\alpha^{2}t\right)^{2}}{\sqrt{\cosh{( \eta t)}}}\,,\] \[D5 =\frac{-\alpha^{2}\eta^{2}t^{2}\,\text{sech}^{2}\,{(\eta t)} \tanh{(\eta t)}+2\alpha^{2}\eta t\,\,\text{sech}^{2}\,{(\eta t)}+\alpha^{2} \tanh{(\eta t)}-\alpha^{2}}{\sqrt{\cosh{(\eta t)}}}\,,\]
which we then combine and obtain the expression for \(\psi_{2}(t)\) as
\[\psi_{2}(t)=\frac{1}{A}\left(B-C-\frac{D1+D2+D3+D4+D5}{\sqrt{\alpha^{2}+\frac{ \eta^{2}}{2}}}\right)\exp\biggl{[}\frac{\alpha^{2}t^{2}\tanh{(\eta t)}}{2}- \frac{\alpha^{2}t^{2}}{2}\biggr{]}\,. \tag{4.62}\]
The formula for \(|\psi_{2}(t)|^{2}\) is highly complex. To simplify, we examine the case of small \(\eta\), \(\eta\ll 1\), leading to the approximations \(\sinh(\eta t)\approx\eta t\), \(\cosh(\eta t)\approx 1\), and \(\tanh(\eta t)\approx\eta t\).
Under these conditions, the expressions for the Krylov amplitude \(\psi_{2}(t)\) and \(|\psi_{2}(t)|^{2}\) become
\[\psi_{2}(t) =\frac{\alpha t\left(-\alpha^{2}t\left(3\eta t-2\right)^{2}+6\eta \left(3\eta t-2\right)-12\eta\right)\exp\!\left[\frac{\alpha^{2}t^{2}\left(\eta t -1\right)}{2}\right]}{4\sqrt{2\alpha^{2}+9\eta^{2}}}\,, \tag{4.63}\] \[|\psi_{2}(t)|^{2} =\frac{\alpha^{2}t^{2}\left(\alpha^{2}t\left(3\eta t-2\right)^{2 }-6\eta\left(3\eta t-2\right)+12\eta\right)^{2}\exp\!\left[\alpha^{2}t^{2} \left(\eta t-1\right)\right]}{16(2\alpha^{2}+9\eta^{2})}\,.\]
We are now in a position to plot \(|\psi_{0}(t)|^{2}\), \(|\psi_{1}(t)|^{2}\), and \(|\psi_{2}(t)|^{2}\), shown in Figure 3. Given these three contributions, we can plot the Krylov complexity truncated to these terms, \(C_{K}^{3}=\sum_{n=0}^{2}n|\psi_{n}(t)|^{2}=|\psi_{1}(t)|^{2}+2|\psi_{2}(t)|^{2}\), in Figure 4. As expected, \(C_{K}^{3}\) is upper bounded by \(C_{\mathcal{F}}(t)=\alpha^{2}t^{2}+\sinh^{2}\eta t\), as shown in Figure 5.
## 5 Multi-mode Bosons
We start this section by introducing the two-mode squeeze operator
\[\hat{S}_{2}(\xi)=\exp\Bigl{(}\xi^{*}\hat{a}\hat{b}-\xi\,\hat{a}^{\dagger}\hat{b}^{\dagger}\Bigr{)}\,, \tag{5.1}\]
Figure 3: Probabilities \(|\psi_{0}(t)|^{2}\), \(|\psi_{1}(t)|^{2}\), and \(|\psi_{2}(t)|^{2}\) of the displaced squeezed state (\(\alpha=100\), \(\eta=3\)) to lie in the respective Krylov basis states.
here \(\xi=rte^{i\theta}\), and \(\hat{a}\) and \(\hat{b}\) are the operators for the two modes, satisfying \([\hat{a},\hat{b}^{\dagger}]=0\). Observe that \(\hat{S}_{2}(\xi)\) cannot be separated into a product of single-mode squeeze operators for the individual modes. We introduce the two-mode squeezed vacuum state via the action of \(\hat{S}_{2}(\xi)\) on the two-mode vacuum state \(\ket{0}_{a}\ket{0}_{b}=\ket{0,0}\)
\[\ket{\xi}_{2}=\hat{S}_{2}(\xi)\ket{0,0}=e^{(\xi^{*}\hat{a}\hat{b}-\xi\hat{a}^{ \dagger}\hat{b}^{\dagger})}\ket{0,0}\,. \tag{5.2}\]
To explore the squeezing attributes of our state, we proceed as follows
\[\hat{S}_{2}^{\dagger}(\xi)\hat{a}\hat{S}_{2}(\xi) =\hat{a}\cosh{(rt)}-e^{i\theta}\hat{b}^{\dagger}\sinh{(rt)}\,, \tag{5.3}\] \[\hat{S}_{2}^{\dagger}(\xi)\hat{b}\hat{S}_{2}(\xi) =\hat{b}\cosh{(rt)}-e^{i\theta}\hat{a}^{\dagger}\sinh{(rt)}\,. \tag{5.4}\]
We aim to express our state \(\ket{\xi}_{2}\) in terms of two-mode number states denoted by \(\ket{n}_{a}\bigotimes\ket{m}_{b}\equiv\ket{n,m}\). We initiate this with the relation
\[\hat{a}\ket{0,0}=0\,. \tag{5.5}\]
Figure 4: \(C_{K}^{3}\), built from the first three Krylov basis states, for the displaced squeezed state with parameters \(\alpha=100\), \(\eta=3\).
The survival amplitude \(S(t)=1/\cosh{(rt)}\) is fixed by the normalization. The moments \(\mu_{n}=\left.\frac{d^{n}}{dt^{n}}S(t)\right|_{t=0}\) up to \(\mu_{6}\) are specified as \(\mu_{1}=0\), \(\mu_{2}=-r^{2}\), \(\mu_{3}=0\), \(\mu_{4}=5r^{4}\), \(\mu_{5}=0\), and \(\mu_{6}=-61r^{6}\). It becomes evident that all odd moments vanish, while only even moments contribute. Consequently, \(a_{n}=0\) for every \(n\), and thus, we arrive at the Lanczos coefficients
\[\mu_{2} = i^{2}b_{1}^{2}=-r^{2}\qquad\qquad\qquad\qquad\qquad\qquad\qquad \Rightarrow b_{1}=r\] \[\mu_{4} = i^{4}(b_{1}^{4}+b_{1}^{2}b_{2}^{2})=5r^{4}\qquad\qquad\qquad \qquad\qquad\Rightarrow b_{2}=2r\] \[\mu_{6} = i^{6}(b_{1}^{6}+2b_{1}^{4}b_{2}^{2}+b_{1}^{2}b_{2}^{4}+b_{1}^{2 }b_{2}^{2}b_{3}^{2})=-61r^{6}\qquad\Rightarrow b_{3}=3r\,, \tag{5.6}\]
which yields the pattern for the Lanczos coefficients \(a_{n}=0\) and \(b_{n}=nr\). We can compute the Krylov amplitudes \(\psi_{n}(t)\) via the recursion method as
\[i\,\partial_{t}\psi_{n}(t)=b_{n+1}\psi_{n+1}(t)+b_{n}\psi_{n-1}(t)\,. \tag{5.7}\]
Starting with \(n=0\) and utilizing the iterative equation (4.43), we obtain the expressions for the Krylov amplitude \(\psi_{1}(t)\) and \(|\psi_{1}(t)|^{2}\) as
\[\psi_{1}(t)=-\frac{i\sinh\left(rt\right)}{\cosh^{2}\left(rt\right)}\,,\qquad|\psi _{1}(t)|^{2}=\frac{\sinh^{2}\left(rt\right)}{\cosh^{4}\left(rt\right)}\,. \tag{5.8}\]
For \(n=1\), the expressions for \(\psi_{2}(t)\) and \(|\psi_{2}(t)|^{2}\) are derived as
\[\psi_{2}(t)=-\frac{\sinh^{2}\left(rt\right)}{\cosh^{3}\left(rt\right)}\,,\qquad |\psi_{2}(t)|^{2}=\frac{\sinh^{4}\left(rt\right)}{\cosh^{6}\left(rt\right)}\,. \tag{5.9}\]
Generally, we find the expression for \(|\psi_{n}(t)|^{2}\) as follows
\[|\psi_{n}(t)|^{2}=\frac{\tanh^{2n}rt}{\cosh^{2}rt}\,. \tag{5.10}\]
The Krylov complexity is then given by
\[C(t)=\sum_{n}n|\psi_{n}(t)|^{2}=\sum_{n}n\frac{\tanh^{2n}rt}{\cosh^{2}rt}= \sinh^{2}rt\,, \tag{5.11}\]
where we employ the sum of an infinite geometric progression as indicated in Eq. (4.9).
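A short numerical confirmation of Eq. (5.11) (our own sketch, with arbitrary parameter values):

```python
import numpy as np

# Check of Eq. (5.11) via the geometric sum (4.9):
# sum_n n tanh^{2n}(rt) / cosh^2(rt) = sinh^2(rt).
r, t = 0.5, 1.2
n = np.arange(0, 400)
w = np.tanh(r * t)**(2 * n) / np.cosh(r * t)**2
print(w.sum(), (n * w).sum(), np.sinh(r * t)**2)   # -> 1, sinh^2, sinh^2
```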
To make a comparative analysis, let's consider the complexity calculated in the context of number states. We express the two-mode squeezed vacuum states in terms of these number states as
\[\left|\xi\right\rangle_{2}=\frac{1}{\cosh\left(rt\right)}\sum_{n=0}^{\infty}(- 1)^{n}e^{in\theta}\left(\tanh\left(rt\right)\right)^{n}\left|n,n\right\rangle\,. \tag{5.12}\]
We then define \(P_{n_{1},n_{2}}\) as the joint probability of observing \(n_{1}\) particles in mode \(a\) and \(n_{2}\) particles in mode \(b\)
\[P_{n_{1},n_{2}}=|\left\langle n_{1},n_{2}|\xi\right\rangle_{2}|^{2}=(\cosh \left(rt\right))^{-2}(\tanh\left(rt\right))^{2n}\delta_{n_{1},n}\delta_{n_{2},n}\,. \tag{5.13}\]
The term \(|\psi_{n}(t)|^{2}\) derived through the Krylov complexity algorithm precisely matches with \(P_{n_{1},n_{2}}\). Hence, the Krylov complexity saturates the bound established in Eq. (3.16). It can be readily shown that the average particle number in each mode is identical
\[\langle\hat{n}_{a}\rangle=\langle\hat{n}_{b}\rangle=\sinh^{2}rt\,, \tag{5.14}\]
which is exactly the expression for the Krylov complexity in (5.11). This shows that, for a multi-mode system, if the Krylov complexity for each mode is computable, then the overall Krylov complexity \(C(t)\) for \(l\) Bosonic modes \(a_{1},\ldots,a_{l}\) is constrained by the maximum of the complexities of the individual modes. Furthermore, the Krylov complexity for each individual mode is bounded by the average particle number in that specific mode. In the context of two-mode squeezing, it is observed that the bound
\[C(t)\leq\max\{\langle\hat{n}_{a}\rangle,\langle\hat{n}_{b}\rangle\}\,, \tag{5.15}\]
is saturated at a specific Krylov complexity
\[C(t)=\langle\hat{n}_{a}\rangle=\langle\hat{n}_{b}\rangle=\sinh^{2}rt\,. \tag{5.16}\]
Unlike entropy [59], where summing over individual modes is logical, doing so for complexity is not apt. Rather, summing the complexities provides an upper bound for the true complexity. As observed, taking the maximum complexity of individual modes offers a more operationally meaningful and superior quantity compared to summing all complexities.
The simplicity in calculating the Krylov complexity arises from the entangled nature of these states, Eq. (5.2), and the strong inter-mode correlations. We can observe that \(\ket{\xi}_{2}\) serves as an eigenstate of the number difference operator \(\hat{n}_{a}-\hat{n}_{b}\), where \(\hat{n}_{a}=\hat{a}^{\dagger}\hat{a}\) and \(\hat{n}_{b}=\hat{b}^{\dagger}\hat{b}\). Its eigenvalue is zero, i.e., \((\hat{n}_{a}-\hat{n}_{b})\ket{\xi}_{2}=0\), indicating the presence of strong correlations and symmetry between the two modes.
Moreover, the two-mode entangled state \(\ket{\xi}_{2}\) from Eq. (5.2) shares structural similarities with a pivotal quantum state in holography known as the thermofield double state (TFD) [60; 61]. In the context of the boundary theory, these states are generated by entangling two copies of a conformal field theory (CFT) in such a way that tracing out one copy results in the thermal density matrix at the inverse temperature \(\beta\) for the other, i.e.,
\[\ket{TFD(t_{L},t_{R})}=\frac{1}{\sqrt{Z_{\beta}}}\sum_{n}e^{-\frac{\beta E_{n}}{2}}e^{-iE_{n}(t_{L}+t_{R})}\ket{E_{n}}_{L}\ket{E_{n}}_{R}\,, \tag{5.17}\]
where \(\ket{E_{n}}_{L,R}\) and \(t_{L,R}\) are the energy eigenstate and times of the left/right CFTs, respectively, and \(Z_{\beta}\) is the canonical partition function at the inverse temperature \(\beta\). The TFD state plays a special role in holography because it is dual to an eternal black hole in AdS [60]. Hence, it provides a particularly well-controlled setup for studying entanglement, black holes, and quantum information e.g., time-evolution of entanglement entropy, scrambling and quantum chaos, firewalls, \(ER=EPR\), and emergent spacetime [61; 62].
The TFD state at \(t_{L}=0=t_{R}\) can be constructed from two copies of the vacuum state by acting with creation operators in the following manner
\[\ket{TFD} = \left(1-e^{-\beta\omega}\right)^{\frac{1}{2}}\sum_{n=0}^{\infty}e^{-\frac{n\beta\omega}{2}}\ket{n}_{L}\ket{n}_{R}\,, \tag{5.18}\] \[= \left(1-e^{-\beta\omega}\right)^{\frac{1}{2}}\sum_{n=0}^{\infty}\frac{e^{-\frac{n\beta\omega}{2}}}{n!}\left(a_{L}^{\dagger}a_{R}^{\dagger}\right)^{n}\ket{0}_{L}\ket{0}_{R}\,,\] \[= \left(1-e^{-\beta\omega}\right)^{\frac{1}{2}}e^{e^{-\frac{\beta\omega}{2}}\left(a_{L}^{\dagger}a_{R}^{\dagger}\right)}\ket{0}_{L}\ket{0}_{R}\,,\] \[= e^{\alpha\left(a_{L}^{\dagger}a_{R}^{\dagger}-a_{L}a_{R}\right)}\ket{0}_{L}\ket{0}_{R}\,,\]
where \(\tanh\alpha=e^{-(\beta\omega)/2}\) and \(E_{n}=\omega\left(n+\frac{1}{2}\right)\). For later purposes, it is convenient to express \(\alpha\) in the form \(\alpha=(1/2)\log\left(\frac{1+e^{-\frac{\beta\omega}{2}}}{1-e^{-\frac{\beta\omega}{2}}}\right)\). To obtain a time-dependent case, we can use a common convention in holography and set \(t_{L}=t_{R}=t/2\). The time-dependent TFD state
is
\[\left|TFD(t)\right\rangle = \left(1-e^{-\beta\omega}\right)^{\frac{1}{2}}\sum_{n=0}^{\infty}e^{-\frac{n\beta\omega}{2}}e^{-i\left(n+\frac{1}{2}\right)\omega t}\left|n\right\rangle_{L}\left|n\right\rangle_{R}\,, \tag{5.19}\] \[= e^{-\frac{i}{2}\omega t}\left(1-e^{-\beta\omega}\right)^{\frac{1}{2}}\sum_{n=0}^{\infty}\frac{e^{-\frac{n\beta\omega}{2}}e^{-in\omega t}}{n!}\left(a_{L}^{\dagger}a_{R}^{\dagger}\right)^{n}\left|0\right\rangle_{L}\left|0\right\rangle_{R}\,,\] \[= e^{-\frac{i}{2}\omega t}\left(1-e^{-\beta\omega}\right)^{\frac{1}{2}}e^{e^{-\frac{\beta\omega}{2}}e^{-i\omega t}\left(a_{L}^{\dagger}a_{R}^{\dagger}\right)}\left|0\right\rangle_{L}\left|0\right\rangle_{R}\,.\]
We can safely ignore the phase \(e^{-i\omega t/2}\), as it has no physical consequence, and express the state as a unitary operator acting on the vacuum. Therefore,
\[\left|TFD(t)\right\rangle=e^{Za_{L}^{\dagger}a_{R}^{\dagger}-Z^{*}a_{L}a_{R}}\left|0\right\rangle_{L}\left|0\right\rangle_{R}\,, \tag{5.20}\]
where \(Z=\alpha e^{-i\omega t}\).
To investigate the complexity of multi-mode Bosons and also quantum field theories, we adopt a similar approach as found in previous works [2; 61], where the theory is regulated on a lattice and assumes the form of coupled harmonic oscillators. The Hamiltonian governing a single particle oscillator can be expressed as follows
\[H=\frac{1}{2M}P^{2}+\frac{1}{2}M\omega^{2}Q^{2}\,, \tag{5.21}\]
where \(M\) signifies the mass of the oscillator, \(\omega\) its frequency, and \(Q\) and \(P\) denote the position and momentum operators satisfying the canonical commutation relation \([Q,P]=i\), with
\[a=\sqrt{\frac{M\omega}{2}}\left(Q+i\frac{P}{M\omega}\right),\quad a^{\dagger}=\sqrt{\frac{M\omega}{2}}\left(Q-i\frac{P}{M\omega}\right)\,. \tag{5.22}\]
Subsequently, the combination appearing in the TFD state, \(a_{L}^{\dagger}a_{R}^{\dagger}-a_{L}a_{R}=-i(Q_{R}P_{L}+Q_{L}P_{R})\), expressed through the diagonal modes
\[Q_{\pm}=\frac{1}{\sqrt{2}}(Q_{L}\pm Q_{R}),\quad P_{\pm}=\frac{1}{\sqrt{2}}(P_{L}\pm P_{R})\,, \tag{5.23}\]
can be reformulated as \(a_{L}^{\dagger}a_{R}^{\dagger}-a_{L}a_{R}=-i(Q_{+}P_{+}-Q_{-}P_{-})\). This allows us to express the generators in terms of scaling operators of the individual diagonal modes, and Eq. (5.18) can be written as
\[\left|TFD\right\rangle=e^{-\frac{i\alpha}{2}(Q_{+}P_{+}+P_{+}Q_{+})}\left|0\right\rangle_{+}\bigotimes e^{\frac{i\alpha}{2}(Q_{-}P_{-}+P_{-}Q_{-})}\left|0\right\rangle_{-}\,, \tag{5.24}\]
where \(\left|0\right\rangle_{\pm}\) denotes the vacuum of the Hamiltonian of each diagonal mode. To accurately compute the Krylov complexity, the wave function for both the reference and target states must be determined. Additionally, having access to the wave function enables us to calculate the covariance matrix, thereby providing a bound for the Krylov complexity. Initially, we started with two independent harmonic oscillators, and the total Hamiltonian can thus be written as
\[H_{total} =\frac{1}{2M}\left(P_{L}^{2}+P_{R}^{2}+M^{2}\omega^{2}\left(Q_{L}^{2}+Q_{R}^{2}\right)\right)\,,\] \[=\frac{1}{2M}\left(P_{+}^{2}+P_{-}^{2}+M^{2}\omega^{2}\left(Q_{+}^{2}+Q_{-}^{2}\right)\right)\,. \tag{5.25}\]
In the second line, we have used the diagonal basis. The ground state wave function for this Hamiltonian takes the form
\[\psi_{0}(Q_{+},Q_{-})=\psi_{0}(Q_{+})\,\psi_{0}(Q_{-})\simeq e^{-\frac{M\omega}{2}(Q_{+}^{2}+Q_{-}^{2})}\,, \tag{5.26}\]
where the reference state is given by
\[\psi_{R}(Q_{+},Q_{-})\simeq e^{-\frac{M\mu}{2}(Q_{+}^{2}+Q_{-}^{2})}\,, \tag{5.27}\]
which can be thought of as the ground state of the Hamiltonian (5.25), but with the frequency fixed to \(\mu\). Then, the wave function of the TFD state (5.24), at time \(t=0\), is
\[\begin{split}\psi_{TFD}&=\exp\biggl{[}\frac{-M\omega}{2}\left(e^{-2\alpha}Q_{+}^{2}+e^{2\alpha}Q_{-}^{2}\right)\biggr{]}\,,\\ &=\exp\biggl{[}\frac{-M\omega}{2}\left(\cosh 2\alpha(Q_{L}^{2}+Q_{R}^{2})-2\sinh 2\alpha Q_{L}Q_{R}\right)\biggr{]}\,.\end{split} \tag{5.28}\]
However, the time-dependent TFD state (5.20) is
\[\left|TFD(t)\right\rangle=e^{-i\alpha\hat{O}_{+}(t)}\left|0\right\rangle_{+}\bigotimes e^{i\alpha\hat{O}_{-}(t)}\left|0\right\rangle_{-}\,, \tag{5.29}\]
where
\[\hat{O}_{\pm}(t)=\frac{1}{2}\cos\omega t\left(Q_{\pm}P_{\pm}+P_{\pm}Q_{\pm}\right)+\frac{1}{2}\sin\omega t\left(M\omega Q_{\pm}^{2}-\frac{1}{M\omega}P_{\pm}^{2}\right)\,, \tag{5.30}\]
which act separately in the '\(+\)' or '\(-\)' Hilbert spaces. The target state in the time-dependent framework, \(\left|TFD(t)\right\rangle\), can technically be delineated as a Gaussian wave function, albeit a complex one. To simplify, we opt to represent it using a covariance matrix, thereby streamlining the computation of complexity.
To handle the dimensionful nature of the operators \(Q\) and \(P\), we introduce dimensionless position and momentum variables
\[q_{a}:=\omega_{g}Q_{a}\,,\qquad p_{a}:=\frac{P_{a}}{\omega_{g}}\,, \tag{5.31}\]
where \(\omega_{g}\) is an arbitrary reference frequency, so that the expression for the Krylov complexity becomes dimensionless.
Initially, we consider the time-independent TFD state. Employing these dimensionless variables, the reference state in (5.27), the ground state in (5.26), and the time-independent TFD state in Eq. (5.28) can, respectively, be written as
\[\begin{split}\psi_{R}(q_{+},q_{-})&=\sqrt{\frac{\lambda_{R}}{\pi}}\exp\biggl{[}-\frac{\lambda_{R}}{2}(q_{+}^{2}+q_{-}^{2})\biggr{]}\,,\\ \psi_{0}(q_{+},q_{-})&=\sqrt{\frac{\lambda}{\pi}}\exp\biggl{[}-\frac{\lambda}{2}(q_{+}^{2}+q_{-}^{2})\biggr{]}\,,\\ \psi_{TFD}(q_{+},q_{-})&=\sqrt{\frac{\lambda}{\pi}}\exp\biggl{[}-\frac{\lambda}{2}\left(e^{-2\alpha}q_{+}^{2}+e^{2\alpha}q_{-}^{2}\right)\biggr{]}\,,\end{split} \tag{5.32}\]
where \(\lambda_{R}:=M\mu/\omega_{g}^{2}\) and \(\lambda:=M\omega/\omega_{g}^{2}\) are dimensionless ratios. These wave functions can equivalently be encoded in covariance matrices. For example, a wave function \(\psi(q)=\langle q|\psi\rangle=(a/\pi)^{1/4}e^{-\frac{1}{2}(a+ib)q^{2}}\) corresponds to the covariance matrix
\[V=\begin{bmatrix}\frac{1}{a}&-\frac{b}{a}\\ -\frac{b}{a}&\frac{a^{2}+b^{2}}{a}\end{bmatrix}\,. \tag{5.33}\]
Proceeding with this methodology, we can extract explicit expressions for the covariance matrices corresponding to the wave functions in Eq. (5.32). For the \((+)\) mode, the covariance matrices can be written as
\[V_{R}=\begin{bmatrix}\frac{1}{\lambda_{R}}&0\\ 0&\lambda_{R}\end{bmatrix},\quad V_{0}=\begin{bmatrix}\frac{1}{\lambda}&0\\ 0&\lambda\end{bmatrix},\quad V_{TFD}^{+}=\begin{bmatrix}\frac{e^{2\alpha}}{\lambda}&0\\ 0&e^{-2\alpha}\lambda\end{bmatrix}\,. \tag{5.34}\]
Likewise, the covariance matrices for the \((-)\) mode can be obtained by replacing \(\alpha\) with \(-\alpha\). The Krylov complexity of the time-independent thermofield double state \(\psi_{TFD}(q_{+},q_{-})\) can then be specified as
\[C(\alpha)=\max\{C^{+}(\alpha),C^{-}(\alpha)\}\,. \tag{5.35}\]
We consider the Krylov complexity as a function of \(\alpha\) rather than time, as this is a time-independent state. To compute the Krylov complexity of the time-independent thermofield double state \(\psi_{TFD}(q_{+},q_{-})\) with respect to the reference state \(\psi_{R}(q_{+},q_{-})\), we need to compute the survival amplitude
\[S^{+}(\alpha)=\langle\psi_{TFD}(q_{+})|\psi_{R}(q_{+})\rangle =\left(\frac{\lambda_{R}\lambda e^{-2\alpha}}{\pi^{2}}\right)^{\frac{1}{4}}\int\,dq_{+}e^{-\frac{\lambda_{R}+\lambda e^{-2\alpha}}{2}q_{+}^{2}}\,,\] \[=\frac{\sqrt{2}(\frac{\lambda}{\lambda_{R}}e^{-2\alpha})^{\frac{1}{4}}}{\sqrt{1+\frac{\lambda}{\lambda_{R}}e^{-2\alpha}}}\,. \tag{5.36}\]
Similarly, the expression for \(S^{-}(\alpha)\) can be obtained by replacing \(\alpha\) with \(-\alpha\), which then allows us to compute the Lanczos coefficients, Krylov amplitudes, and Krylov complexity. This procedure would, however, be rather involved; instead, we obtain the bound on the Krylov complexity directly from the covariance matrix. For this we require the relative covariance matrix between the reference state \(V_{R}\) and the time-independent TFD state \(V_{TFD}^{+}\), which is
\[\Delta(V_{TFD}^{+},V_{R})=V_{TFD}^{+}V_{R}^{-1}=\begin{bmatrix}\frac{\lambda_ {R}}{\lambda}e^{2\alpha}&0\\ 0&\frac{\lambda}{\lambda_{R}}e^{-2\alpha}\end{bmatrix}\,. \tag{112}\]
The Krylov complexities, \(C^{+}(\alpha)\) and \(C^{-}(\alpha)\), are
\[C^{+}(\alpha)\leq -\frac{1}{4}\left(\text{Tr}(\mathbf{I}_{n\times n}-\Delta(V_{ TFD}^{+},V_{R}))\right)=-\frac{1}{4}\left(2-\frac{\lambda_{R}^{2}e^{2\alpha}+ \lambda^{2}e^{-2\alpha}}{\lambda\lambda_{R}}\right)\,, \tag{113}\] \[C^{-}(\alpha)\leq -\frac{1}{4}\left(2-\frac{\lambda_{R}^{2}e^{-2\alpha}+\lambda^{2 }e^{2\alpha}}{\lambda\lambda_{R}}\right)\,,\]
whereas the Krylov complexity for the time-independent thermofield double state, \(C(\alpha)\), is
\[\begin{split} C(\alpha)&=\max\Bigl{\{}C^{+}(\alpha),C^ {-}(\alpha)\Bigr{\}}\,,\\ &\leq\max\Biggl{\{}-\frac{1}{4}\left(2-\frac{\lambda_{R}^{2}e^{2 \alpha}+\lambda^{2}e^{-2\alpha}}{\lambda\lambda_{R}}\right),\;-\frac{1}{4} \left(2-\frac{\lambda_{R}^{2}e^{-2\alpha}+\lambda^{2}e^{2\alpha}}{\lambda \lambda_{R}}\right)\Biggr{\}}\,,\end{split} \tag{102}\]
which, for \(\lambda_{R}=\lambda=1\), simplifies to
\[C(\alpha)\leq\max\{\sinh^{2}(\alpha),\sinh^{2}(\alpha)\}=\sinh^{2}(\alpha)\,. \tag{103}\]
The above inequality matches exactly the result obtained for two-mode squeezing in Eq. (100). For other values of \(\lambda_{R}\) and \(\lambda\), \(C^{+}(\alpha)\neq C^{-}(\alpha)\). In Refs. [2; 3; 61], the approach to calculating complexity involved summing the complexity for each individual mode. We will designate this particular sum of complexities across individual modes as _summed Krylov complexity_, denoted by \(C_{\Sigma}(\alpha)\). For the TFD state, this can be expressed as follows
\[C_{\Sigma}(\alpha)=C^{+}(\alpha)+C^{-}(\alpha)\leq-1+\frac{\cosh(2\alpha)( \lambda^{2}+\lambda_{R}^{2})}{2\lambda\lambda_{R}}\,. \tag{104}\]
Substituting \(\lambda=\lambda_{R}=1\) we obtain
\[C_{\Sigma}(\alpha)\leq 2\sinh^{2}\alpha=2C(\alpha)\,. \tag{105}\]
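The trace bounds above are easy to verify numerically. The following minimal Python sketch (helper names are ours) assembles the covariance matrices of (109), evaluates the Bosonic trace bound for both modes, and confirms that for \(\lambda=\lambda_{R}=1\) it reproduces \(\sinh^{2}\alpha\) and the summed value \(2\sinh^{2}\alpha\):

```python
import numpy as np

def V(lam: float, alpha: float = 0.0) -> np.ndarray:
    """Covariance matrix diag(exp(2 alpha)/lam, exp(-2 alpha) lam), cf. Eq. (109)."""
    return np.diag([np.exp(2 * alpha) / lam, np.exp(-2 * alpha) * lam])

def bosonic_bound(V_target: np.ndarray, V_ref: np.ndarray) -> float:
    """Bosonic trace bound -(1/4) Tr(I - V_target V_ref^{-1}) on the Krylov complexity."""
    Delta = V_target @ np.linalg.inv(V_ref)
    return -0.25 * np.trace(np.eye(2) - Delta)

lam, lam_R, alpha = 1.0, 1.0, 0.8
c_plus = bosonic_bound(V(lam, alpha), V(lam_R))
c_minus = bosonic_bound(V(lam, -alpha), V(lam_R))
assert np.isclose(max(c_plus, c_minus), np.sinh(alpha)**2)  # bound on C(alpha)
assert np.isclose(c_plus + c_minus, 2 * np.sinh(alpha)**2)  # summed bound
```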
We can also compute the Krylov complexity of formation, a quantity that compares the complexity of the entangled TFD state of the two oscillators with that of the disentangled vacuum state, \(\alpha\to 0\). For the \((+)\) mode, the Krylov complexity of formation would be
\[\Delta C^{+}(\alpha)\leq C^{+}(\alpha)-C^{+}(\alpha\to 0)=-\frac{1 }{4\lambda\lambda_{R}}\left(\lambda^{2}(1-e^{-2\alpha})+\lambda_{R}^{2}(1-e^{ 2\alpha})\right)\,, \tag{106}\]
and similarly for the \((-)\) mode by replacing \(\alpha\) with \(-\alpha\). For \(\lambda=\lambda_{R}=1\), we get \(\Delta C^{+}(\alpha)=\Delta C^{-}(\alpha)=\sinh^{2}\alpha\), which is also a measure of entanglement. Similarly, the summed Krylov complexity of formation is
\[\Delta C_{\Sigma}(\alpha)\leq\frac{\sinh^{2}\alpha(\lambda^{2}+\lambda_{R}^{2 })}{\lambda\lambda_{R}}\,, \tag{107}\]
which for \(\lambda=\lambda_{R}=1\) is \(\Delta C_{\Sigma}(\alpha)\leq 2\sinh^{2}\alpha\). It is also interesting to compare the Krylov complexity with the circuit complexity obtained via Nielsen's geometric techniques. For the TFD state, it was computed to be [61]
\[C_{G}^{+}(\alpha)=\alpha+\frac{1}{2}\log\lambda,\quad C_{G}^{-}(\alpha)=- \alpha+\frac{1}{2}\log\lambda\,, \tag{108}\]
where \(\lambda_{R}=1\). Reference [61] defined complexity by adding the complexity for individual modes
\[C_{G}(\alpha)=\left|\alpha+\frac{1}{2}\log\lambda\right|+\left|-\alpha+\frac{1 }{2}\log\lambda\right|\,, \tag{109}\]
and obtained the complexity of formation to be \(2\alpha\). The comparison of geometric and Krylov complexity is given in Figure 6. For small values of \(\alpha\), the Krylov complexity of formation is less than the geometric complexity, but it becomes exponentially larger at large values.
Now, we will use this technique to find the bound of the Krylov complexity for the time-dependent TFD state. The covariance matrix of the time-dependent thermofield double state can be obtained by time evolving the thermofield double state at time \(t=0\) with unitary
\[\hat{U}_{+}(t)=e^{-i\frac{t}{2}H_{+}}, \tag{100}\]
where
\[H_{+}=\frac{1}{2M}P_{+}^{2}+\frac{1}{2}M\omega^{2}Q_{+}^{2}=\frac{p_{+}^{2}}{ 2}\frac{\omega}{\lambda}+\lambda\omega\frac{q_{+}^{2}}{2}\,. \tag{101}\]
Figure 6: Geometric vs. Krylov complexity for the thermofield double state with parameter \(\alpha=3\).

Expressing it in terms of matrix operators, the covariance matrix for the time-dependent TFD state is expressed as
\[V^{+}_{TFD}(t) =U(t)V^{+}_{TFD}U^{T}(t) \tag{112}\] \[=\begin{bmatrix}\frac{1}{\lambda}(\cosh 2\alpha+\sinh 2\alpha\cos \omega t)&-\sinh 2\alpha\sin\omega t\\ -\sinh 2\alpha\sin\omega t&\lambda(\cosh 2\alpha-\sinh 2\alpha\cos\omega t) \end{bmatrix}\,.\]
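This formula can be cross-checked numerically. In the sketch below we take, as an assumption consistent with (100) and (101), the classical phase-space flow of \(H_{+}\) over time \(t/2\) as the explicit matrix representation of \(\hat{U}_{+}(t)\), and verify that conjugating \(V_{TFD}^{+}\) reproduces the matrix above:

```python
import numpy as np

lam, alpha, omega, t = 1.7, 0.6, 1.0, 0.9

# Assumed phase-space representation of U_+(t) = exp(-i t H_+ / 2):
# the Hamiltonian flow of H_+ over time t/2, i.e. a rotation by omega*t/2
# in the rescaled coordinates (sqrt(lam) q_+, p_+/sqrt(lam)).
phi = omega * t / 2
U = np.array([[np.cos(phi), np.sin(phi) / lam],
              [-lam * np.sin(phi), np.cos(phi)]])

V0 = np.diag([np.exp(2 * alpha) / lam, np.exp(-2 * alpha) * lam])
Vt = U @ V0 @ U.T

expected = np.array([
    [(np.cosh(2 * alpha) + np.sinh(2 * alpha) * np.cos(omega * t)) / lam,
     -np.sinh(2 * alpha) * np.sin(omega * t)],
    [-np.sinh(2 * alpha) * np.sin(omega * t),
     lam * (np.cosh(2 * alpha) - np.sinh(2 * alpha) * np.cos(omega * t))]])
assert np.allclose(Vt, expected)
```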
We now compute the Krylov complexity for the time-dependent TFD state with respect to the reference state \(V_{R}\). The relative covariance matrix is
\[\Delta\left(V^{+}_{TFD}(t),V_{R}\right) =V^{+}_{TFD}(t)V^{-1}_{R} \tag{113}\] \[=\begin{bmatrix}\frac{\lambda_{R}}{\lambda}(\cosh 2\alpha+\sinh 2 \alpha\cos\omega t)&\frac{1}{\lambda_{R}}(-\sinh 2\alpha\sin\omega t)\\ -\lambda_{R}\sinh 2\alpha\sin\omega t&\frac{\lambda}{\lambda_{R}}(\cosh 2 \alpha-\sinh 2\alpha\cos\omega t)\end{bmatrix}\,.\]
The Krylov complexities \(C^{+}(t)\) and \(C^{-}(t)\) are given by
\[C^{+}(t) \leq-\frac{1}{4}\left(\operatorname{Tr}\left(\mathbf{I}_{2\times 2 }-\Delta\left(V^{+}_{TFD}(t),V_{R}\right)\right)\right) \tag{114}\] \[=-\frac{1}{4}\left(2-\frac{(\lambda_{R}^{2}+\lambda^{2})\cosh 2 \alpha-(\lambda_{R}^{2}-\lambda^{2})\sinh 2\alpha\cos\omega t}{\lambda\lambda_{R}} \right)\,.\]
Interestingly, for \(\lambda=\lambda_{R}\), the time dependence drops out of the bound on the Krylov complexity. Similarly, \(C^{-}(t)\) is obtained by replacing \(\alpha\) with \(-\alpha\)
\[C^{-}(t)\leq-\frac{1}{4}\left(2-\frac{(\lambda_{R}^{2}+\lambda^{2})\cosh 2 \alpha+(\lambda_{R}^{2}-\lambda^{2})\sinh 2\alpha\cos\omega t}{\lambda\lambda_{R}} \right)\,. \tag{115}\]
The Krylov complexity for the time-dependent TFD state, \(C(t)\), is then \(\max\{C^{+}(t),C^{-}(t)\}\), and the summed Krylov complexity for the time-dependent TFD state, \(C_{\Sigma}(t)\), is
\[C_{\Sigma}(t)=C^{+}(t)+C^{-}(t)\leq-1+\frac{\cosh(2\alpha)(\lambda^{2}+ \lambda_{R}^{2})}{2\lambda\lambda_{R}}\,. \tag{116}\]
This is perfectly analogous to the bound for the time-independent TFD state. As a result, the time dependence of the TFD state becomes irrelevant in the bound for the summed Krylov complexity. This further supports defining \(C(t)\) as the maximum of \(C^{+}(t)\) and \(C^{-}(t)\). The Krylov complexity of formation could also be calculated, but the analysis would mirror that of the time-independent TFD state.
## 6 Fermions
In this section, we will focus on Krylov complexity for Fermionic Gaussian states. We note that for \(N\) Fermionic degrees of freedom, the space of Gaussian states, of dimension \(N(N-1)\), is given as \(\mathcal{M}_{f,N}=O(2N)/U(N)\). For the case where \(N=1\), the manifold is \(\mathcal{M}=O(2)/U(1)\), which essentially boils down to a set of two points (here \(U(1)\) represents the global complex phase). This implies that squeezing a single Fermionic degree of freedom is a trivial task, contrasting starkly with a single Bosonic degree of freedom. When \(N=2\), we encounter the first non-trivial system, involving two Fermionic degrees of freedom. In this scenario, the space of Gaussian states becomes two-dimensional
\[\mathcal{M}_{f,2}=O(4)/U(2)=S^{2}\cup S^{2}\,. \tag{6.1}\]
We now consider two pairs of Fermionic creation and annihilation operators, \((a_{1},a_{1}^{\dagger})\) and \((a_{2},a_{2}^{\dagger})\) and the Fermionic Bogoliubov transformation
\[\tilde{a}_{1}=\alpha a_{1}-\beta a_{2}^{\dagger}\,,\quad\tilde{a}_{2}^{\dagger }\ =\beta^{*}a_{1}+\alpha^{*}a_{2}^{\dagger}\,. \tag{6.2}\]
Although this is not the most general transformation, it can be shown that any Bogoliubov transformation can be brought into this form by mixing \(a_{1}\) with \(a_{2}\), and \(\tilde{a}_{1}\) with \(\tilde{a}_{2}\), through \(U(2)\), which leaves the corresponding Gaussian states, \(|\psi\rangle\) and \(|\tilde{\psi}\rangle\), unchanged. Here we choose \(\alpha\) and \(\beta\) as
\[\alpha=\cos\vartheta,\qquad\beta=e^{i\varphi}\sin\vartheta. \tag{6.3}\]
Now the inverse transformation \(M\) that maps \(\tilde{\xi}^{a}\) into \(\xi^{a}\) is
\[M\equiv\begin{pmatrix}1&0&0&0\\ 0&\cos(\varphi)&0&-\sin(\varphi)\\ 0&0&1&0\\ 0&\sin(\varphi)&0&\cos(\varphi)\end{pmatrix}\begin{pmatrix}\cos(\vartheta)& \sin(\vartheta)&0&0\\ -\sin(\vartheta)&\cos(\vartheta)&0&0\\ 0&0&\cos(\vartheta)&-\sin(\vartheta)\\ 0&0&\sin(\vartheta)&\cos(\vartheta)\end{pmatrix}\begin{pmatrix}1&0&0&0\\ 0&\cos(\varphi)&0&\sin(\varphi)\\ 0&0&1&0\\ 0&-\sin(\varphi)&0&\cos(\varphi)\end{pmatrix} \tag{6.4}\]
\[=\begin{pmatrix}\cos(\vartheta)&\sin(\vartheta)\cos(\varphi)&0&\sin(\vartheta )\sin(\varphi)\\ -\sin(\vartheta)\cos(\varphi)&\cos(\vartheta)&-\sin(\vartheta)\sin(\varphi)& 0\\ 0&\sin(\vartheta)\sin(\varphi)&\cos(\vartheta)&-\sin(\vartheta)\cos(\varphi)\\ -\sin(\vartheta)\sin(\varphi)&0&\sin(\vartheta)\cos(\varphi)&\cos(\vartheta) \end{pmatrix}\,.\]
Here we see that \(M\in O(4)\), in fact \(M\in SO(4)\) since it can be continuously connected to \(\mathbb{1}\), and that it satisfies \(MGM^{T}=G\). Now the anti-symmetric covariance matrix \(\tilde{\Omega}=M\Omega M^{T}\) of the transformed state \(|\tilde{\psi}\rangle\) is
\[\tilde{\Omega}\equiv\begin{pmatrix}0&-\sin(2\vartheta)\sin(\varphi)&\cos(2 \vartheta)&\sin(2\vartheta)\cos(\varphi)\\ \sin(2\vartheta)\sin(\varphi)&0&-\sin(2\vartheta)\cos(\varphi)&\cos(2 \vartheta)\\ -\cos(2\vartheta)&\sin(2\vartheta)\cos(\varphi)&0&\sin(2\vartheta)\sin( \varphi)\\ -\sin(2\vartheta)\cos(\varphi)&-\cos(2\vartheta)&-\sin(2\vartheta)\sin( \varphi)&0\end{pmatrix}\,. \tag{6.5}\]
The state is identical at \(\theta=0\) and \(\theta=\pi\) due to the anti-symmetric nature of the covariance matrix \(\tilde{\Omega}\), which remains the same for these \(\theta\) values. This can also be understood by examining the Bogoliubov transformation, where \(\alpha=\cos(\theta=\pi)=-1\) and \(\beta=0\), leading to \(\tilde{a_{1}}=-a_{1}\) and \(\tilde{a_{2}}=-a_{2}\). As a result, the vacuum state remains unchanged. The state furthest from the initial Gaussian state occurs at \(\theta=\pi/2\), where \((\tilde{a_{1}},\tilde{a_{2}})=(-a_{2}^{\dagger},a_{1}^{\dagger})\). The Krylov complexity is expected to reflect this, reaching its peak at \(\theta=\pi/2\) and decreasing for \(\theta\) values either smaller or larger than \(\pi/2\) within the interval \(\theta\in[0,\pi]\). For \(\theta\) values exceeding \(\pi\), the pattern is anticipated to recur.
We now try to encode the invariant relative information between two Fermionic Gaussian states \(|\Omega\rangle\) and \(|\tilde{\Omega}\rangle\). To do so, let us consider an orthonormal basis \(\xi^{a}\equiv(q_{1},q_{2},p_{1},p_{2})\) of Majorana modes, with \(G\equiv\mathbb{1}\), so that the covariance matrix becomes
\[\Omega\equiv\begin{pmatrix}\mathbb{0}&\mathbb{1}\\ -\mathbb{1}&\mathbb{0}\end{pmatrix}\,. \tag{6.6}\]
For the Bosonic system, the invariant information about the relation between the original state and the transformed state is encoded by the eigenvalues of the relative covariance matrix
\[\Delta_{b}^{a}=\tilde{G}^{ac}g_{cb}\qquad\text{with}\qquad g=G^{-1}\,, \tag{6.7}\]
i.e., \(G^{ac}g_{cb}=\delta_{b}^{a}\). Similarly, for the Fermionic system, we can describe the invariant information about the relation between the original state and the transformed state in terms of the relative covariance matrix as
\[\Delta_{b}^{a}=\tilde{\Omega}^{ac}\omega_{cb}\qquad\text{with} \qquad\omega=\Omega^{-1}\,, \tag{6.8}\]
i.e., \(\Omega^{ac}\omega_{cb}=\delta_{b}^{a}\). The invariant information is encoded in the eigenvalues of this matrix. Here we have \(\text{spec}(\Delta)=(e^{2i\vartheta},e^{2i\vartheta},e^{-2i\vartheta},e^{-2i\vartheta})\) for the Bogoliubov transformation in Eqs. (6.2) and (6.3), and we find that \(\varphi\) does not appear. The bound for the Krylov complexity per fermion is
\[C(t)\leq+\frac{1}{4}\left(\text{Tr}(\mathbf{I}_{n\times n}- \Delta)\right)=\frac{1}{4}(2-2\cos 2\theta)=\sin^{2}\theta\,. \tag{6.9}\]
This is exactly the behavior of the complexity we were expecting. In Figure 7, we have plotted the Krylov complexity for free fermions. It peaks at \(\theta=\pi/2\) and vanishes at \(\theta=0,\pi\). Furthermore, the Krylov complexity is independent of \(\varphi\), since \(\varphi\) is just a global phase and has no physical effect. We get the same complexity for the second fermion. Since we defined the Krylov complexity \(C(t)\) of the system to be the maximum of the Krylov complexities of the individual fermions, we get \(C(t)=\sin^{2}(\theta)\) for the system. For the two Fermions, we get the summed Krylov complexity to be
\[C_{\Sigma}(t)\leq+\frac{1}{4}\left(\text{Tr}(\mathbf{I}_{n\times n }-\Delta)\right)=\frac{1}{4}(4-4\cos 2\theta)=2\sin^{2}\theta\,, \tag{6.10}\]
which equals the sum of the Krylov complexities of the individual fermions. For comparison, we can also quote the complexity computed from the geometric approach [3], which is \(2\theta\) for \(\theta\in[0,\pi]\).
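These statements can be checked directly from the matrices above. The following Python sketch (our own verification code) builds \(M(\vartheta,\varphi)\) from (6.4), forms \(\tilde{\Omega}=M\Omega M^{T}\) and the relative covariance matrix \(\Delta=\tilde{\Omega}\Omega^{-1}\), and confirms that its spectrum, and hence the bound, depends only on \(\vartheta\):

```python
import numpy as np

def M(th: float, ph: float) -> np.ndarray:
    """Fermionic Bogoliubov transformation on the basis (q1, q2, p1, p2), cf. Eq. (6.4)."""
    c, s = np.cos(th), np.sin(th)
    cp, sp = np.cos(ph), np.sin(ph)
    return np.array([[c,       s * cp,  0,       s * sp],
                     [-s * cp, c,       -s * sp, 0],
                     [0,       s * sp,  c,       -s * cp],
                     [-s * sp, 0,       s * cp,  c]])

Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])

th, ph = 0.9, 0.3
Otil = M(th, ph) @ Omega @ M(th, ph).T
Delta = Otil @ np.linalg.inv(Omega)

print(np.sort_complex(np.linalg.eigvals(Delta)))  # exp(+-2i th), each twice
# Two-fermion trace bound (6.10): (1/4) Tr(I - Delta) = 2 sin^2(th), independent of ph.
assert np.isclose(0.25 * np.trace(np.eye(4) - Delta), 2 * np.sin(th)**2)
```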
### Complexity for Dirac field
We now aim to establish structured techniques for calculating the circuit complexity of any given Fermionic Gaussian state \(|\psi_{r}\rangle=U\,|\psi_{R}\rangle\). In this setup, the target state, represented as the ground state of the Dirac field, is \(|\psi_{r}\rangle=|0\rangle\), whereas the reference state is \(|\psi_{R}\rangle=|\vec{0}\rangle\), a state in which local Fermionic degrees of freedom are disentangled. To proceed,
let's introduce a basis composed of four-component spinors in a free Dirac field situated in four-dimensional Minkowski space
\[u^{1}(0)=\begin{pmatrix}1\\ 0\\ 1\\ 0\end{pmatrix},\qquad u^{2}(0)=\begin{pmatrix}0\\ 1\\ 0\\ 1\end{pmatrix},\qquad v^{1}(0)=\begin{pmatrix}1\\ 0\\ -1\\ 0\end{pmatrix},\qquad v^{2}(0)=\begin{pmatrix}0\\ 1\\ 0\\ -1\end{pmatrix}. \tag{6.11}\]
Now the boosted spinors are found by acting with the boost matrix
\[u^{s}(\mathbf{p})=\frac{1}{\sqrt{m}}\begin{pmatrix}\sqrt{p\cdot\sigma}&0\\ 0&\sqrt{p\cdot\overline{\sigma}}\end{pmatrix}u^{s}(0)\,. \tag{6.12}\]
Here \(p\cdot\sigma=E_{\mathbf{p}}\mathbb{1}-\mathbf{p}\cdot\vec{\sigma}\) and \(p\cdot\bar{\sigma}=E_{\mathbf{p}}\mathbb{1}+\mathbf{p}\cdot\vec{\sigma}\), with \(E_{\mathbf{p}}=\sqrt{m^{2}+\mathbf{p}^{2}}\). We note that a similar formula applies for \(v^{s}(\mathbf{p})\). Now, at a fixed time instant, we can write the Dirac spinor field as
\[\psi(\mathbf{x})=\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\sqrt{\frac{m}{2E_{ \mathbf{p}}}}\sum_{s}\bigl{(}a^{s}_{\mathbf{p}}u^{s}(\mathbf{p})e^{i\mathbf{p }\cdot\mathbf{x}}+b^{s\dagger}_{\mathbf{p}}v^{s}(\mathbf{p})e^{-i\mathbf{p} \cdot\mathbf{x}}\bigr{)}\,. \tag{6.13}\]
Figure 7: Krylov complexity for free fermions as a function of \(\theta\).

We note that the number of Fermionic degrees of freedom per momentum \(\mathbf{p}\) is four. We define \(a_{\mathbf{p}}^{s}\left|0\right>=b_{\mathbf{p}}^{s}\left|0\right>=0\), where the Fermionic Gaussian state \(\left|0\right>\) (the ground state) is the target state for which we evaluate the circuit complexity, and we recall the following relation for the creation and annihilation operators
\[\{a_{\mathbf{p}}^{s},a_{\mathbf{q}}^{r\dagger}\}=(2\pi)^{3}\delta^{ rs}\delta(\mathbf{p}-\mathbf{q})=\{b_{\mathbf{p}}^{s},b_{\mathbf{q}}^{r\dagger}\}\,. \tag{6.14}\]
We now introduce the local creation and annihilation operators \((\bar{a}_{\mathbf{x}}^{s},\bar{a}_{\mathbf{x}}^{s\dagger})\) and \((\bar{b}_{\mathbf{x}}^{s},\bar{b}_{\mathbf{x}}^{s\dagger})\) such that
\[\{\bar{a}_{\mathbf{x}}^{s},\bar{a}_{\mathbf{y}}^{r\dagger}\}= \delta(\mathbf{x}-\mathbf{y})\delta^{rs}=\{\bar{b}_{\mathbf{x}}^{s},\bar{b}_{ \mathbf{y}}^{r\dagger}\}\,, \tag{6.15}\]
as our reference state, \(\left|\psi_{R}\right>=\left|\bar{0}\right>\), is a Gaussian state in which, on a given time slice, the local Fermionic degrees of freedom at each spatial point are disentangled.
\[\psi(\mathbf{x})=\frac{1}{\sqrt{2}}\sum_{s}\Bigl{(}\bar{a}_{ \mathbf{x}}^{s}u^{s}(0)+\bar{b}_{\mathbf{x}}^{s\dagger}v^{s}(0)\Bigr{)}\,. \tag{6.16}\]
Figure 8: Krylov complexity per mode of a massive Dirac field ground state as a function of \(|p|\).

In this context, the disentangled reference state is characterized by \(\bar{a}^{s}_{\bf p}\ket{\bar{0}}=\bar{b}^{s}_{\bf p}\ket{\bar{0}}=0\). The unitary transformation that maps the reference state to the target state, \(\ket{\bar{0}}\rightarrow\ket{0}=U\ket{\bar{0}}\), can be understood through the Bogoliubov transformation that connects the creation and annihilation operators defining these states. To explore this, we look at the Fourier-transformed versions of the local operators specified earlier
\[\bar{a}^{s}_{\bf p}=\int d^{3}x\,e^{-i{\bf p}\cdot{\bf x}}\bar{a}^{s}_ {\bf x}\qquad\text{and}\qquad\bar{b}^{s}_{\bf p}=\int d^{3}x\,e^{-i{\bf p}\cdot{ \bf x}}\bar{b}^{s}_{\bf x}\,. \tag{6.17}\]
Then the Dirac field becomes
\[\psi({\bf x})=\int\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{1}{\sqrt{2}}\sum_{s} \Bigl{(}\bar{a}^{s}_{\bf p}u^{s}(0)e^{i{\bf p}\cdot{\bf x}}+\bar{b}^{s \dagger}_{\bf p}v^{s}(0)e^{-i{\bf p}\cdot{\bf x}}\Bigr{)}\,. \tag{6.18}\]
Figure 9: Krylov complexity per mode of a massive Dirac field excited state as a function of \(|p|\).

We note that the Fourier transform implements only a trivial Bogoliubov transformation, so that the Gaussian state defined by the operators \(\bar{a}^{s}_{\bf x}\) and \(\bar{b}^{s}_{\bf x}\) is still the disentangled reference state \(\ket{\bar{0}}\). Now comparing Eqs. (6.13) and (6.18), we find the Bogoliubov transformation for \((\bar{a}^{s}_{\bf p},\bar{a}^{s\dagger}_{\bf p},\bar{b}^{s}_{\bf p},\bar{b}^{s\dagger}_{\bf p})\rightarrow(a^{s}_{\bf p},a^{s\dagger}_{\bf p},b^{s}_{\bf p},b^{s\dagger}_{\bf p})\). Specifically, computing the product with the conjugate basis spinors \(u^{r\dagger}(\mathbf{p})\) and \(v^{r\dagger}(-\mathbf{p})\) from the left (we use the orthogonality relations \(u^{r\dagger}(\mathbf{p})v^{s}(-\mathbf{p})=v^{r\dagger}(-\mathbf{p})u^{s}(\mathbf{p})=0\)), we find
\[a^{r}_{\mathbf{p}} =\frac{\sqrt{m}}{2\sqrt{E_{\mathbf{p}}}}\sum_{s}\Bigl{(}[u^{r \dagger}(\mathbf{p})u^{s}(0)]\bar{a}^{s}_{\mathbf{p}}+[u^{r\dagger}(\mathbf{p} )v^{s}(0)]\bar{b}^{s\dagger}_{-\mathbf{p}}\Bigr{)}\,, \tag{6.19}\] \[b^{r\dagger}_{-\mathbf{p}} =\frac{\sqrt{m}}{2\sqrt{E_{\mathbf{p}}}}\sum_{s}\Bigl{(}[v^{r \dagger}(-\mathbf{p})u^{s}(0)]\bar{a}^{s}_{\mathbf{p}}+[v^{r\dagger}(-\mathbf{ p})v^{s}(0)]\bar{b}^{s\dagger}_{-\mathbf{p}}\Bigr{)}\,. \tag{6.20}\]
Furthermore, the spinor products are
\[u^{\bar{r}\dagger}(\mathbf{p})u^{\bar{s}}(0) =\frac{\delta^{\bar{r}\bar{s}}}{\sqrt{m}}\Biggl{(}\sqrt{E_{ \mathbf{p}}+|\mathbf{p}|}+\sqrt{E_{\mathbf{p}}-|\mathbf{p}|}\Biggr{)}\,, \tag{6.21}\] \[u^{\bar{r}\dagger}(\mathbf{p})v^{\bar{s}}(0) =(-)^{\bar{r}}\frac{\delta^{\bar{r}\bar{s}}}{\sqrt{m}}\Biggl{(} \sqrt{E_{\mathbf{p}}+|\mathbf{p}|}-\sqrt{E_{\mathbf{p}}-|\mathbf{p}|}\Biggr{)}\,,\] \[v^{\bar{r}\dagger}(-\mathbf{p})u^{\bar{s}}(0) =(-)^{\bar{r}^{\prime}}\frac{\delta^{\bar{r}\bar{s}}}{\sqrt{m}} \Biggl{(}\sqrt{E_{\mathbf{p}}+|\mathbf{p}|}-\sqrt{E_{\mathbf{p}}-|\mathbf{p}| }\Biggr{)}\,,\] \[v^{\bar{r}\dagger}(-\mathbf{p})v^{\bar{s}}(0) =\frac{\delta^{\bar{r}\bar{s}}}{\sqrt{m}}\Biggl{(}\sqrt{E_{ \mathbf{p}}+|\mathbf{p}|}+\sqrt{E_{\mathbf{p}}-|\mathbf{p}|}\Biggr{)}\,.\]
where \(\bar{r}^{\prime}\equiv\bar{r}+1\) (mod 2). So now the Bogoliubov transformation for pairs of operators is given as
\[a^{\bar{s}}_{\mathbf{p}}=\alpha^{\bar{s}}_{\mathbf{p}}\bar{a}^{\bar{s}}_{ \mathbf{p}}-\beta^{\bar{s}}_{\mathbf{p}}\bar{b}^{\bar{s}^{\prime}\dagger}_{- \mathbf{p}}\,,\quad b^{\bar{s}^{\prime}\dagger}_{-\mathbf{p}}=\beta^{\bar{s}}_ {\mathbf{p}}\bar{a}^{\bar{s}}_{\mathbf{p}}+\alpha^{\bar{s}}_{\mathbf{p}}\bar{ b}^{\bar{s}^{\prime}\dagger}_{-\mathbf{p}}\,, \tag{6.22}\]
where
\[\alpha^{\bar{s}}_{\mathbf{p}}=\frac{\sqrt{E_{\mathbf{p}}+|\mathbf{p}|}+\sqrt{ E_{\mathbf{p}}-|\mathbf{p}|}}{2\sqrt{E_{\mathbf{p}}}}\,,\quad\beta^{\bar{s}}_{ \mathbf{p}}=(-)^{\bar{s}+1}\frac{\sqrt{E_{\mathbf{p}}+|\mathbf{p}|}-\sqrt{E_{ \mathbf{p}}-|\mathbf{p}|}}{2\sqrt{E_{\mathbf{p}}}}\,. \tag{6.23}\]
It's worth noting that the Bogoliubov transformation is properly Fermionic, as verified by the condition \(|\alpha^{\bar{s}}_{\mathbf{p}}|^{2}+|\beta^{\bar{s}}_{\mathbf{p}}|^{2}=1\). The transformation is explicitly stated in equation (6.22), connecting pairs of creation and annihilation operators corresponding to their specific momentum and spin (\(\bar{s}\in\{1,2\}\)). On comparing with equations (6.2) and (6.3), we find that \(\cos\vartheta=\alpha^{\bar{s}}_{\mathbf{p}}\) and \(\varphi=0\) for \(\bar{s}=1\) or \(\varphi=\pi\) for \(\bar{s}=2\).
Next, we turn our focus to the complexity of transforming the disentangled reference state \(|\bar{0}\rangle\) into the Fermionic vacuum state \(|0\rangle\). Initially, it's important to note that the parametrization of a Fermionic two-mode squeezing operation as provided in equations (6.2) and (6.3) yields the geodesic distance \(2\vartheta\). Additionally, for the two-mode squeezing transformation \(M(\vartheta,\varphi)\) articulated in equation (6.4), the generating function is as follows
\[A(\vartheta,\varphi)=\vartheta\begin{pmatrix}0&\cos(\varphi)&0&\sin(\varphi) \\ -\cos(\varphi)&0&-\sin(\varphi)&0\\ 0&\sin(\varphi)&0&-\cos(\varphi)\\ -\sin(\varphi)&0&\cos(\varphi)&0\end{pmatrix}\,, \tag{6.24}\]
where \(M(\vartheta,\varphi)=e^{A(\vartheta,\varphi)}\). Now there is a generator analogous to that in Eq.(6.24) for each pair of modes whose magnitude is given as
\[Y(m,\mathbf{p},\bar{s})=2\cos^{-1}\left[\alpha_{\mathbf{p}}^{\bar{s}}\right]=2 \tan^{-1}\!\left(\frac{|\mathbf{p}|}{E_{\mathbf{p}}+m}\right)\!\!=\tan^{-1}\! \left(\frac{|\mathbf{p}|}{m}\right), \tag{6.25}\]
where \(\sin\vartheta>0\) and \(Y(m,\mathbf{p},\bar{s})>0\) and the two spins (\(\bar{s}=1,2\)) give two identical contributions for each momentum. Therefore,
\[\theta=\frac{1}{2}\tan^{-1}\left(\frac{|p|}{m}\right)\,, \tag{6.26}\]
and the Krylov complexity for each spin is
\[C\big{(}|\bar{0}\rangle\rightarrow|0\rangle\,\big{)}\!\!=\sin^{2}\left(\frac{ 1}{2}\tan^{-1}\left(\frac{|p|}{m}\right)\right)\,. \tag{6.27}\]
This expression is plotted in Figure 8 as a function of \(|p|\) for various mass parameters. For \(m=0\), as well as for large \(|p|\), we have \(\theta\approx\pi/4\), so the complexity per mode becomes \(C(m,\mathbf{p},\bar{s})\approx\sin^{2}(\pi/4)=\frac{1}{2}\), a fixed constant. The summed complexity is then expressed as
\[\mathcal{C}_{\Sigma}\big{(}|\bar{0}\rangle\rightarrow|0\rangle\,\big{)}\!\!= V\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\sum_{\bar{s}}\sin^{2}\left(\frac{1}{2} \tan^{-1}\left(\frac{|p|}{m}\right)\right)\,. \tag{6.28}\]
Since the Krylov complexity per mode \(\mathcal{C}\big{(}|\bar{0}\rangle\rightarrow|0\rangle\big{)}\) tends to a constant at large \(|\mathbf{p}|\), the summed Krylov complexity is ultraviolet (UV) divergent. A hard cutoff \(\Lambda\) can be chosen for the momentum integral, rendering it finite.
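A short numerical check of the identity in Eq. (6.25) and of the per-mode complexity (6.27) is given below (a minimal sketch; the function names are ours):

```python
import numpy as np

def alpha_p(p: float, m: float) -> float:
    """Bogoliubov coefficient from Eq. (6.23) (its spin-independent modulus)."""
    E = np.sqrt(m**2 + p**2)
    return (np.sqrt(E + p) + np.sqrt(E - p)) / (2 * np.sqrt(E))

p, m = 2.3, 1.1
# Identity in Eq. (6.25): 2 arccos(alpha_p) = arctan(|p|/m)
assert np.isclose(2 * np.arccos(alpha_p(p, m)), np.arctan(p / m))

# Complexity per mode, Eq. (6.27); tends to sin^2(pi/4) = 1/2 for |p| >> m
print(np.sin(0.5 * np.arctan(p / m))**2,
      np.sin(0.5 * np.arctan(1e6 / m))**2)
```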
Next, we turn our attention to assessing the complexity of excited states. Specifically, we examine excited states characterized by the following form
\[|\tilde{\psi}\rangle=a_{\mathbf{q}}^{\bar{r}\dagger}b_{-\mathbf{q}}^{\bar{r}^ {\prime}\dagger}\,|0\rangle\, \tag{6.29}\]
for which the Bogoliubov transformations are given in (6.22). The Eq. (6.25) is still valid for most of the pairs of modes, as the above state is annihilated by \(a_{\mathbf{q}}^{\bar{r}\dagger}\) and \(b_{-\mathbf{q}}^{\bar{r}^{\prime}\dagger}\). But we need to reconsider the contribution for the pair labeled by \(\mathbf{p}=\mathbf{q}\) and \(s=r\). Here we can relabel the annihilation operators as \((\tilde{a},\tilde{b})=\big{(}(-)^{\bar{r}^{\prime}}b_{-\mathbf{q}}^{\bar{r}^{\prime}\dagger},(-)^{\bar{r}}a_{\mathbf{q}}^{\bar{r}\dagger}\big{)}\). Then Eq. (6.22) can be written as
\[\tilde{a}=\tilde{\alpha}\bar{a}_{\mathbf{q}}^{\bar{r}}-\tilde{\beta}\bar{b}_ {-\mathbf{q}}^{\bar{r}^{\prime}\dagger}\,,\quad\tilde{b}^{\dagger}=\tilde{ \beta}\bar{a}_{\mathbf{q}}^{\bar{r}}+\tilde{\alpha}\bar{b}_{-\mathbf{q}}^{ \bar{r}^{\prime}\dagger}\,, \tag{6.30}\]
where
\[\begin{split}\tilde{\alpha}=(-)^{\bar{r}^{\prime}}\beta_{\mathbf{q}}^{\bar{r}}&=\frac{\sqrt{E_{\mathbf{q}}+|\mathbf{q}|}-\sqrt{E_{\mathbf{q}}-|\mathbf{q}|}}{2\sqrt{E_{\mathbf{q}}}}\,,\\ \tilde{\beta}=(-)^{\bar{r}}\alpha_{\mathbf{q}}^{\bar{r}}&=(-)^{\bar{r}}\frac{\sqrt{E_{\mathbf{q}}+|\mathbf{q}|}+\sqrt{E_{\mathbf{q}}-|\mathbf{q}|}}{2\sqrt{E_{\mathbf{q}}}}\,.\end{split} \tag{6.31}\]
Here, the Bogoliubov transformation can be compared with Eqs. (6.2) and (6.3) and we can write \(\cos\tilde{\vartheta}=\tilde{\alpha}\) and \(\varphi=\pi\) for \(\bar{r}=1\) or \(\varphi=0\) for \(\bar{r}=2\). Here the analog of Eq. (6.25) for the above Bogoliubov transformation for these particular modes is
\[\tilde{Y}(m,\mathbf{q},\bar{r})=2\cos^{-1}\left[\tilde{\alpha}\right]=2\tan^{- 1}\!\left(\frac{E_{\mathbf{q}}+m}{|\mathbf{q}|}\right)\!\!=\pi-\tan^{-1}\! \left(\frac{|\mathbf{q}|}{m}\right). \tag{6.32}\]
For this case, we arrive at
\[\theta=\frac{1}{2}\left(\pi-\tan^{-1}\left(\frac{|\mathbf{q}|}{m}\right)\right)\,. \tag{6.33}\]
Thus, the Krylov complexity for each spin in this case becomes
\[C\big{(}|\bar{0}\rangle\to|\tilde{\psi}\rangle\big{)}=\sin^{2}\left(\frac{\pi} {2}-\frac{1}{2}\tan^{-1}\!\left(\frac{|\mathbf{q}|}{m}\right)\right)=\cos^{2} \left(\frac{1}{2}\tan^{-1}\!\left(\frac{|\mathbf{q}|}{m}\right)\right)\,. \tag{6.34}\]
This expression is plotted in Figure 9 as a function of \(|p|\) for various mass parameters. For the case \(m=0\), as well as for large \(|p|\), we have \(\theta\approx\pi/4\), and the complexity per mode becomes \(C(m,\mathbf{p},\bar{s})\approx\cos^{2}(\pi/4)=\frac{1}{2}\), a fixed constant. The summed complexity is then
\[\mathcal{C}_{\Sigma}\big{(}|\bar{0}\rangle\to|\tilde{\psi}\rangle\big{)}=V \int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\sum_{\bar{s}}\cos^{2}\left(\frac{1}{2} \tan^{-1}\left(\frac{|p|}{m}\right)\right)\,. \tag{6.35}\]
As the Krylov complexity per mode \(C\big{(}|\bar{0}\rangle\to|\tilde{\psi}\rangle\big{)}\) tends to a constant, the summed Krylov complexity \(\mathcal{C}_{\Sigma}\big{(}|\bar{0}\rangle\to|\tilde{\psi}\rangle\big{)}\) is UV divergent. We can choose a hard cutoff \(\Lambda\) for the momentum integral, which renders it computable.
## 7 Conclusion
In the present study, we have examined Krylov complexity in the context of both Fermionic and Bosonic Gaussian states. This research was spurred by two recent developments. Firstly, Krylov spread complexity offers a method for charting the expansion of quantum states through time, without relying on a specific choice of gates. Secondly, recent theories have posited complexity as a potential element in holographic dualities, encapsulated in ideas like _Complexity = Action_ and _Complexity = Volume_, among others. It is our aspiration that insights into Krylov complexity can shed light on these areas.
We selected Gaussian states for our study because they often serve as the foundational basis for investigating more complex systems and quantum states. One intriguing aspect of Gaussian states is that the transformation between them can be characterized through the impact on their covariance matrix. For Bosonic Gaussian states, the significant part of the covariance matrix is symmetric, while it is anti-symmetric for Fermionic states. Although we discovered that the covariance matrix alone is insufficient for calculating Krylov complexity due to the absence of relative phase information, we demonstrated that the relative covariance matrix does offer a bounding constraint on Krylov complexity, as outlined below
\[C(t)\leq\begin{cases}-\frac{1}{4}\left(\operatorname{Tr}(\mathbf{I}_{n\times n }-\Delta)\right)&\text{ for Bosons }\,,\\ +\frac{1}{4}\left(\operatorname{Tr}(\mathbf{I}_{n\times n}-\Delta)\right)&\text { for Fermions }\,,\end{cases} \tag{111}\]
where \(\Delta\) is the relative covariance matrix. The bound reaches saturation for Coherent and Squeezed Bosonic Gaussian states, expressed as \(\alpha^{2}\) and \(\sinh^{2}r\), where \(\alpha\) and \(r\) signify the displacement and squeezing parameters, respectively. For generalized coherent squeezed states, the process of calculating Krylov complexity through survival amplitude proved challenging. Nevertheless, we partially computed it and established that the bound
could be characterized as \(\alpha^{2}+\sinh^{2}r\), intriguingly constituting the sum of the Krylov complexities for both coherent and squeezed states. We extended this analysis to multi-mode Bosons, using the thermofield double state as an illustrative example, given its significance in holographic theories. For Fermions, we observed that Krylov complexity follows a \(\sin^{2}(\theta)\) pattern, where \(\theta\), ranging from \(0\) to \(\pi\), quantifies the divergence of a Fermionic Gaussian quantum state from its initial position. The state exhibits maximum dissimilarity when \(\theta=\pi/2\). Previous research [12] has explored special instances involving symmetry groups like \(SL(2,R)\), \(SU(2)\), and the Heisenberg-Weyl group, demonstrating that Krylov complexity can be analytically determined in these scenarios. Their expressions, \(\sinh^{2}(\alpha t)\), \(\sin^{2}(\alpha t)\), and \(\alpha^{2}t^{2}\), are consistent with our findings for single-mode Bosonic and Fermionic Gaussian states.
Several compelling questions remain open for exploration. Firstly, the operational interpretation of Krylov complexity, along with its relationship to traditional circuit complexity or other entropy metrics, remains a vital area for study. In the realm of quantum chaos, Krylov complexity has the potential to serve as a gauge for assessing the complexity inherent in chaotic quantum circuits. Secondly, this work could be extended to explore Krylov complexity in non-Gaussian states through various avenues: beginning with a quadratic Hamiltonian but using non-Gaussian initial states, employing a non-quadratic Hamiltonian with Gaussian initial states, and ultimately investigating non-quadratic Hamiltonians alongside non-Gaussian initial states. Lastly, it would be intriguing to examine the correlation between Krylov complexity and Geometric complexity [21]. Such an investigation could also provide valuable insights into the operational significance of Krylov complexity.
## Acknowledgments
R.S. acknowledges the support of Polish NAWA Bekker program No. BPN/BEK/2021/1/00342 and Polish NCN Grant No. 2018/30/E/ST2/00432, and gratefully acknowledges support from the Simons Center for Geometry and Physics, Stony Brook University at which some of the research for this paper was performed. This research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. We also acknowledge support from the Physics Without Frontiers (PWF) program of the International Centre for Theoretical Physics (ICTP), Italy. C. Deppe additionally acknowledges the financial support by the Federal Ministry of Education and Research (BMBF) in the programs with the identification numbers: 16KISK002, 16KISQ028, 16KISQ038, 16K1S1598K, 16KISQ077, 16KISQ093, 16KISR027K.
|
2305.19715 | Infinite order differential operators associated with superoscillations
in the half-plane barrier | Superoscillations are a phenomenon in physics, where linear combinations of
low-frequency plane waves interfere almost destructively in such a way that the
resulting wave has a higher frequency than any of the individual waves. The
evolution of superoscillatory initial datum under the time dependent
Schr\"odinger equation is stable in free space, but in general it is unclear
whether it can be preserved in the presence of an external potential. In this
paper, we consider the two-dimensional problem of superoscillations interacting
with a half-plane barrier, where homogenous Dirichlet or Neumann boundary
conditions are imposed on the negative $x_2$-semiaxis. We use the Fresnel
integral technique to write the wave function as an absolute convergent Green's
function integral. Moreover, we introduce the propagator of the Schr\"odinger
equation in form of an infinite order differential operator, acting
continuously on the function space of exponentially bounded entire functions.
In particular, this operator allows to prove that the property of
superoscillations is preserved in the form of a similar phenomenon called
supershift, which is stable over time. | Peter Schlosser | 2023-05-31T10:10:37Z | http://arxiv.org/abs/2305.19715v1 | # Infinite order differential operators associated with superoscillations in the half-plane barrier
###### Abstract.
Superoscillations are a phenomenon in physics, where linear combinations of low-frequency plane waves interfere almost destructively in such a way that the resulting wave has a higher frequency than any of the individual waves. The evolution of superoscillatory initial datum under the time dependent Schrodinger equation is stable in free space, but in general it is unclear whether it can be preserved in the presence of an external potential. In this paper, we consider the two-dimensional problem of superoscillations interacting with a half-plane barrier, where homogenous Dirichlet or Neumann boundary conditions are imposed on the negative \(x_{2}\)-semiaxis. We use the Fresnel integral technique to write the wave function as an absolute convergent Green's function integral. Moreover, we introduce the propagator of the Schrodinger equation in form of an infinite order differential operator, acting continuously on the function space of exponentially bounded entire functions. In particular, this operator allows to prove that the property of superoscillations is preserved in the form of a similar phenomenon called supershift, which is stable over time.
Key words and phrases: Superoscillations, Schrodinger equation, Green's function, half-plane barrier.

2020 Mathematics Subject Classification: 35A20, 35A08.

This research was funded by the Austrian Science Fund (FWF) under Grant No. J 4685-N and by the European Union - NextGenerationEU.
## 1. Introduction
The concept of superoscillations was first introduced in the context of antenna theory in the 1950s, see the paper [30]. However, it was in the 1990s that Y. Aharonov and his collaborators discovered the connection between superoscillations and quantum mechanics, specifically weak values, see [1, 15, 18], but also the later publications [5, 7, 14, 16, 17] dealing with several developments of the theory of superoscillations.
A mathematical investigation of a quantum mechanical superoscillating wave or particle always reduces to the time dependent Schrodinger equation with some potential \(V\) and a superoscillating function \(F\) as initial condition:
\[i\frac{\partial}{\partial t}\Psi(t,\mathbf{x}) =\big{(}-\Delta+V(t,\mathbf{x})\big{)}\Psi(t,\mathbf{x}), t>0,\,\mathbf{x}\in\Omega,\] \[\Psi(0,\mathbf{x}) =F(\mathbf{x}), \mathbf{x}\in\Omega.\]
The question of whether the solution \(\Psi(t,\mathbf{x})\) is again superoscillating at times \(t>0\) was first answered affirmatively for free particles in [8, 9], and later also for nonvanishing potentials such as the harmonic oscillator in [19, 20, 22, 23], the electric field in [11, 13, 20, 22], the magnetic field in [13, 24, 26], the centrifugal potential in [13, 25], the step potential in [12], and distributional potentials such as \(\delta\) and \(\delta^{\prime}\) in [2, 3]. A unified approach to these problems was given in [4, 28], where, under certain assumptions on the corresponding Green's function, the time persistence property of superoscillations was established for whole classes of potentials. Another general approach
was given in [27], which provides conditions on the moments of the Green's function in order to obtain similar time persistence results.
A shared characteristic among the aforementioned examples is that they solely focus on potentials within a single spatial dimension. There are very few publications which treat the time persistence problem of the Schrodinger equation in two or more dimensions; some of them are [6, 10, 11, 13, 24]. In this paper we consider the two-dimensional half-plane barrier with Dirichlet (or Neumann) boundary conditions. In particular, we use the setting where the barrier is located on the negative \(x_{2}\)-semiaxis \(\Gamma:=\{(\begin{smallmatrix}0\\ x_{2}\end{smallmatrix})\mid x_{2}\leq 0\}\), i.e., we consider the Schrodinger equation on \(\Omega:=\mathbb{R}^{2}\setminus\Gamma\), namely
\[\begin{split}i\frac{\partial}{\partial t}\Psi(t,\mathbf{x})&=-\Delta\Psi(t,\mathbf{x}),\qquad t>0,\,\mathbf{x}\in\Omega,\\ \Psi(0,\mathbf{x})&=F(\mathbf{x}),\qquad\mathbf{x}\in\Omega,\end{split} \tag{1.1}\]

with homogeneous Dirichlet boundary conditions \(\Psi(t,\cdot)|_{\Gamma}=0\), respectively homogeneous Neumann boundary conditions \(\partial_{x_{1}}\Psi(t,\cdot)|_{\Gamma}=0\), imposed on the barrier \(\Gamma\). As initial conditions \(F\) we consider superoscillating functions, a prototypical example being the finite superpositions of plane waves

\[F_{n}(\mathbf{x})=\sum_{j=0}^{n}C_{j}(n)\,e^{ik_{j}(n)^{p_{1}}x_{1}+ik_{j}(n)^{p_{2}}x_{2}},\quad\mathbf{x}\in\mathbb{R}^{2},\]
where \(p_{1},p_{2}\in\mathbb{N}\). The superoscillatory property of these functions comes from the fact that, although the frequencies \(k_{j}(n)\) are in modulus bounded by \(1\), i.e. \(|k_{j}(n)|\leq 1\), the sequence \((F_{n})_{n}\) of functions converges as
\[\lim_{n\to\infty}F_{n}(\mathbf{x})=e^{ia^{p_{1}}x_{1}+ia^{p_{2}}x_{2}},\quad \mathbf{x}\in\mathbb{R}^{2},\]
to a plane wave with frequency \(a>1\). However, other types of superoscillating functions have also been considered in the past by different physical and mathematical communities. An overview, as well as one general definition, was given in the recent paper [21], which puts most of the existing notions of superoscillations into a common framework. The paper [21] considers superoscillations in one dimension; the natural two-dimensional extension, which we will need in this paper, reads as follows:
**Definition 1.1** (Superoscillations).: _A sequence of functions of the form_
\[F_{n}(\mathbf{z})=\int_{|\mathbf{k}|\leq k_{0}}e^{i\mathbf{k}\mathbf{z}}d\mu_ {n}(\mathbf{k}),\quad\mathbf{z}\in\mathbb{C}^{2}, \tag{1.5}\]
_with a common maximal frequency \(k_{0}>0\) and complex Borel measures \(\mu_{n}\) on the closed ball \(\overline{B_{k_{0}}(\mathbf{0})}\subseteq\mathbb{R}^{2}\) of radius \(k_{0}\), is called superoscillating, if there exists some \(\mathbf{a}\in\mathbb{R}^{2}\) with \(|\mathbf{a}|>k_{0}\), such that_
\[\lim_{n\to\infty}F_{n}(\mathbf{z})=e^{i\mathbf{a}\mathbf{z}}\quad\text{in }\mathcal{A}_{1}(\mathbb{C}^{2}). \tag{1.6}\]
_Here the product \(\mathbf{k}\mathbf{z}:=k_{1}z_{1}+k_{2}z_{2}\) of real or complex vectors is understood in the usual bilinear sense._
**Remark 1.2**.: _We point out that any function \(F_{n}\) of the form (1.5) is automatically contained in \(\mathcal{A}_{1}(\mathbb{C}^{2})\). The exponential bound is given by the estimate_
\[|F_{n}(\mathbf{z})|\leq\int_{|\mathbf{k}|\leq k_{0}}e^{|\mathbf{k}\mathbf{z}|} d|\mu_{n}|(\mathbf{k})\leq|\mu_{n}|\big{(}\overline{B_{k_{0}}(\mathbf{0})} \big{)}e^{k_{0}|\mathbf{z}|},\quad\mathbf{z}\in\mathbb{C}^{2},\]
_with \(|\mu_{n}|\) the total variation of the complex measure \(\mu_{n}\). The holomorphicity follows from this locally uniform upper bound and a version of the dominated convergence theorem, which allows one to interchange derivative and integral and shows that \(F_{n}(\mathbf{z})\) is holomorphic._
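To make Definition 1.1 concrete, the following minimal Python sketch evaluates the classical one-dimensional prototype \(F_{n}(x)=(\cos(x/n)+ia\sin(x/n))^{n}=\sum_{j=0}^{n}C_{j}(n)e^{i(1-2j/n)x}\) (our choice of standard example, not a construction from this paper) and shows the convergence to \(e^{iax}\) with \(a>1\), although all frequencies \(1-2j/n\) lie in \([-1,1]\):

```python
import numpy as np
from scipy.special import comb

def F_n(x, n, a):
    """Prototype superoscillating function (cos(x/n) + i a sin(x/n))^n."""
    return (np.cos(x / n) + 1j * a * np.sin(x / n))**n

def F_n_planewaves(x, n, a):
    """Same function expanded as sum_j C_j(n) exp(i k_j(n) x), k_j(n) = 1 - 2j/n."""
    j = np.arange(n + 1)
    C = comb(n, j) * ((1 + a) / 2)**(n - j) * ((1 - a) / 2)**j
    k = 1 - 2 * j / n
    return np.sum(C[:, None] * np.exp(1j * np.outer(k, x)), axis=0)

x = np.linspace(-1.0, 1.0, 5)
a = 4.0
assert np.allclose(F_n(x, 12, a), F_n_planewaves(x, 12, a))  # band-limited expansion
for n in (10, 100, 1000):
    print(n, np.max(np.abs(F_n(x, n, a) - np.exp(1j * a * x))))  # -> 0 as n grows
```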
To study the evolution of superoscillating functions we will introduce for every \(t>0\), \(\mathbf{x}\in\Omega\) an infinite order differential operator of the form
\[U_{{}_{D,N}}(t,\mathbf{x})=\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t, \mathbf{x})\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2} ^{n_{2}}},\]
where the coefficients \(c_{n_{1},n_{2}}(t,\mathbf{x})\) depend on the potential, and the derivatives act on the auxiliary complex variables \(z_{1}\) and \(z_{2}\). These operators, applied to the initial datum \(F=F_{n}\) of (1.1), analytically extended to a holomorphic function, give the solution \(\Psi_{{}_{D,N}}(t,\mathbf{x})\) as
\[\Psi_{{}_{D,N}}(t,\mathbf{x})=U_{{}_{D,N}}(t,\mathbf{x})F(\mathbf{z})\Big{|}_ {\mathbf{z}=\mathbf{0}}.\]
As we will see, the continuity of the above operator \(U_{{}_{D,N}}\) on spaces of entire functions is of crucial importance in the investigation of the time evolution of superoscillations.
_Plan of the paper._ In Section 2 we consider the Green's function of the half-plane barrier and the integral representation of the solution of the Cauchy problem for the Schrodinger equation using Fresnel integrals.
In Section 3 we identify suitable infinite order differential operators associated with half-plane barrier that will be the key tools to study, in Section 4, the time persistence of superoscillations and the supershift property of the solution of Schrodinger equation with superoscillatory initial datum.
## 2. The Green's function of the half-plane barrier
The strategy to solve the Schrodinger equation of the half-plane barrier is based on Green's functions techniques, i.e., we write the solution of the Schrodinger equation with initial condition \(F\) as an integral of the form
\[\Psi(t,\mathbf{x})=\int_{\Omega}G(t,\mathbf{x},\mathbf{y})F(\mathbf{y})d \mathbf{y},\quad t>0,\,\mathbf{x}\in\Omega. \tag{2.1}\]
The Green's function \(G\) for the particular problem of the half-plane barrier is calculated in [29]; using polar coordinates \(\mathbf{x}=r\big{(}\begin{smallmatrix}\cos\varphi\\ \sin\varphi\end{smallmatrix}\big{)}\), \(\mathbf{y}=\rho\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\) with \(r,\rho>0\), \(\varphi,\theta\in(-\frac{\pi}{2},\frac{3\pi}{2})\), it is explicitly given by
\[G_{\!D}(t,\mathbf{x},\mathbf{y}) =\frac{e^{-\frac{(r+\rho)^{2}}{4it}}}{8i\pi t}\bigg{(}\Lambda \bigg{(}\frac{\sqrt{r\rho}\,\cos(\frac{\varphi-\theta}{2})}{\sqrt{it}}\bigg{)} -\Lambda\bigg{(}-\frac{\sqrt{r\rho}\,\sin(\frac{\varphi+\theta}{2})}{\sqrt{it }}\bigg{)}\bigg{)}, \tag{2.2a}\] \[G_{\!N}(t,\mathbf{x},\mathbf{y}) =\frac{e^{-\frac{(r+\rho)^{2}}{4it}}}{8i\pi t}\bigg{(}\Lambda \bigg{(}\frac{\sqrt{r\rho}\,\cos(\frac{\varphi-\theta}{2})}{\sqrt{it}}\bigg{)} +\Lambda\bigg{(}-\frac{\sqrt{r\rho}\,\sin(\frac{\varphi+\theta}{2})}{\sqrt{it }}\bigg{)}\bigg{)}. \tag{2.2b}\]
Here the indices \(D\) and \(N\) indicate the type of boundary conditions (Dirichlet or Neumann) in (1.1), and for a shorter notation we used the entire function
\[\Lambda(z):=\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}e^{-s^{2}-2zs}ds,\quad z\in \mathbb{C}. \tag{2.3}\]
It can be shown that \(\Lambda(z)=e^{z^{2}}(1-\operatorname{erf}(z))\) is a modification of the well-known error function. Since we allow initial conditions \(F\in\mathcal{A}_{1}(\mathbb{C}^{2})\), which may grow exponentially at \(\infty\), some regularization is needed in the integral (2.1), such that the solution of (1.1) can be written as
\[\Psi_{\!D,N}(t,\mathbf{x})=\lim_{\varepsilon\to 0^{+}}\int_{\Omega}e^{- \varepsilon|\mathbf{y}|^{2}}G_{\!D,N}(t,\mathbf{x},\mathbf{y})F(\mathbf{y})d \mathbf{y},\qquad t>0,\,\mathbf{x}\in\Omega. \tag{2.4}\]
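For later use it is convenient to have the function \(\Lambda\) available numerically; a minimal sketch (assuming SciPy, with helper names ours) checks the closed form \(\Lambda(z)=e^{z^{2}}(1-\operatorname{erf}(z))\) mentioned above against the defining integral (2.3):

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def Lambda_integral(z: complex) -> complex:
    """Lambda(z) = (2/sqrt(pi)) * int_0^inf exp(-s^2 - 2 z s) ds, cf. Eq. (2.3)."""
    re = quad(lambda s: np.exp(-s**2) * np.real(np.exp(-2 * z * s)), 0, np.inf)[0]
    im = quad(lambda s: np.exp(-s**2) * np.imag(np.exp(-2 * z * s)), 0, np.inf)[0]
    return 2 / np.sqrt(np.pi) * (re + 1j * im)

z = 0.4 + 0.3j
closed_form = np.exp(z**2) * (1 - erf(z))  # scipy's erf supports complex arguments
assert np.isclose(Lambda_integral(z), closed_form)
```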
The aim of this section is to find an absolutely convergent integral representation of the wave functions in (2.4), which will then be needed in the sequel to prove the main results of the paper. The key ingredient will be the so-called Fresnel integral technique, which, roughly speaking, rotates the domain of integration into the complex plane and in this way produces an absolutely convergent integrand. Note that the following Lemma 2.1 is a special version of [4, Proposition 2.1], where also a proof can be found.
**Lemma 2.1** (Fresnel integral).: _Let \(a>0\) and \(f:\mathbb{C}_{\mathrm{Re}>0}\to\mathbb{C}\) be holomorphic on_
\[\mathbb{C}_{\mathrm{Re}>0}:=\left\{\,z\in\mathbb{C}\mid\mathrm{Re}(z)>0\, \right\},\]
_and satisfies the estimate_
\[|f(z)|\leq Ae^{B|z|},\qquad z\in\mathbb{C}_{\mathrm{Re}>0}, \tag{2.5}\]
_for some and \(A,B\geq 0\). Then, for every \(\alpha\in(0,\frac{\pi}{2})\), we get_
\[\lim_{\varepsilon\to 0^{+}}\int_{0}^{\infty}e^{-\varepsilon y^{2}}e^{iay^{2}}f(y) dy=e^{i\alpha}\int_{0}^{\infty}e^{ia(ye^{i\alpha})^{2}}f(ye^{i\alpha})dy.\]
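The effect of the contour rotation can be seen in a simple numerical experiment with \(f\equiv 1\) and \(a=1\) (a minimal sketch, assuming SciPy; for \(\alpha=\pi/4\) the rotated integrand decays like a Gaussian, and both sides converge to the Fresnel value \(\sqrt{\pi/(4a)}\,e^{i\pi/4}\)):

```python
import numpy as np
from scipy.integrate import quad

a, alpha = 1.0, np.pi / 4   # f == 1 trivially satisfies the bound (2.5)

# Left-hand side: regularized Fresnel integral, small eps and a large finite cutoff.
eps, R = 5e-3, 60.0
re = quad(lambda y: np.exp(-eps * y**2) * np.cos(a * y**2), 0, R, limit=2000)[0]
im = quad(lambda y: np.exp(-eps * y**2) * np.sin(a * y**2), 0, R, limit=2000)[0]
lhs = re + 1j * im

# Right-hand side: rotated contour; for alpha = pi/4 the integrand is exp(-a y^2).
rhs = np.exp(1j * alpha) * quad(lambda y: np.exp(-a * y**2), 0, np.inf)[0]

print(lhs, rhs, np.sqrt(np.pi / (4 * a)) * np.exp(1j * np.pi / 4))
```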
The following Theorem 2.2 now uses this Fresnel integral technique to rewrite the regularized integral (2.4) as an absolutely convergent integral in the complex plane.
**Theorem 2.2**.: _Let \(F\in\mathcal{A}_{1}(\mathbb{C}^{2})\). Then, for every \(\alpha\in(0,\frac{\pi}{2})\) the functions \(\Psi_{\!{}_{D}}\) and \(\Psi_{\!{}_{N}}\) in (2.4) can be written as the absolutely convergent integral_
\[\Psi_{\!{}_{D,N}}(t,\mathbf{x})=e^{2i\alpha}\int_{0}^{\infty}\int_{-\frac{\pi}{ 2}}^{\frac{3\pi}{2}}G_{\!{}_{D,N}}\big{(}t,\mathbf{x},\rho e^{i\alpha}\big{(} \begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}\rho e^{i\alpha}\big{(} \begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\rho\,d\theta d\rho. \tag{2.6}\]
Proof.: First, we transform the integral (2.4) into polar coordinates, which is
\[\Psi_{\!{}_{D,N}}(t,\mathbf{x})=\lim_{\varepsilon\to 0^{+}}\int_{0}^{\infty} \int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}e^{-\varepsilon\rho^{2}}G_{\!{}_{D,N}} \big{(}t,\mathbf{x},\rho\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}\rho\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\rho\,d\theta d\rho. \tag{2.7}\]
Next, we note that the function \(\rho\mapsto G_{\!{}_{D,N}}\big{(}t,\mathbf{x},\rho\big{(}\begin{smallmatrix}\cos \theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\), defined on \((0,\infty)\), extends holomorphically to \(\mathbb{C}_{\mathrm{Re}>0}\) by simply replacing \(\rho\in(0,\infty)\) by \(z\in\mathbb{C}_{\mathrm{Re}>0}\) in (2.2). Note that it could also be extended to \(\mathbb{C}\setminus(-\infty,0]\), which is the maximal domain of the square root in (2.8), but this is not necessary for our purposes. Furthermore, we can decompose this extended Green's function into
\[G_{\!{}_{D,N}}\big{(}t,\mathbf{x}, z\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\] \[=e^{-\frac{z^{2}}{4it}}\underbrace{\frac{e^{-\frac{r^{2}+2rx}{4it }}}{8i\pi t}\bigg{(}\Lambda\bigg{(}\frac{\sqrt{rz}\,\cos(\frac{\varphi-\theta} {2})}{\sqrt{it}}\bigg{)}\mp\Lambda\bigg{(}-\frac{\sqrt{rz}\,\sin(\frac{\varphi +\theta}{2})}{\sqrt{it}}\bigg{)}\bigg{)}}_{=:\widetilde{G}_{\!{}_{D,N}}\big{(}t,\mathbf{x},z\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}}, \tag{2.8}\]
and hence write the integral (2.7) as
\[\Psi_{\!{}_{D,N}}(t,\mathbf{x})=\lim_{\varepsilon\to 0^{+}}\int_{0}^{ \infty}e^{-\varepsilon\rho^{2}}e^{-\frac{\rho^{2}}{4it}}\int_{-\frac{\pi}{2}}^ {\frac{3\pi}{2}}\widetilde{G}_{\!{}_{D,N}}\big{(}t,\mathbf{x},\rho\big{(} \begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}\rho\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\rho\,d\theta d\rho.\]
The idea is now to apply the Fresnel integral technique of Lemma 2.1 with respect to the radial part of the above integral. To do so, we have to check that the function
\[z\mapsto\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}\widetilde{G}_{\!{}_{D,N}} \big{(}t,\mathbf{x},z\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}z\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}z\,d\theta \tag{2.9}\]
is holomorphic on \(\mathbb{C}_{\mathrm{Re}>0}\) and exponentially bounded as in (2.5). It is obvious that the integrand
\[z\mapsto\widetilde{G}_{\!{}_{D,N}}\big{(}t,\mathbf{x},z\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}z\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\]
is holomorphic in \(\mathbb{C}_{\mathrm{Re}>0}\). Also knowing that
\[\theta\mapsto\frac{d}{dz}\widetilde{G}_{\!{}_{D,N}}\big{(}t,\mathbf{x},z \big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}z\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)},\]
is continuous, it follows that also the integral (2.9) is holomorphic on \(\mathbb{C}_{\mathrm{Re}>0}\) for every fixed \(t>0\), \(\mathbf{x}\in\Omega\). To verify the exponential bound (2.5) for the mapping (2.9), we first estimate
the reduced Green's function \(\widetilde{G}_{D,N}\) by
\[\big{|}\widetilde{G}_{D,N}\big{(}t,\mathbf{x},z\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\big{|} =\frac{e^{-\frac{r\operatorname{Im}(z)}{2t}}}{8\pi t}\bigg{|} \Lambda\bigg{(}\frac{\sqrt{rz}\,\cos(\frac{\varphi-\theta}{2})}{\sqrt{it}} \bigg{)}\mp\Lambda\bigg{(}-\frac{\sqrt{rz}\,\sin(\frac{\varphi+\theta}{2})}{ \sqrt{it}}\bigg{)}\bigg{|}\] \[\leq\frac{e^{\frac{r|z|}{2t}}}{4\pi t}\Big{(}e^{\frac{r|z|}{t}\cos ^{2}(\frac{\varphi-\theta}{2})}+e^{\frac{r|z|}{t}\sin^{2}(\frac{\varphi+ \theta}{2})}\Big{)}\] \[\leq\frac{1}{2\pi t}e^{\frac{3r|z|}{2t}},\quad z\in\mathbb{C}_{ \operatorname{Re}>0}, \tag{2.10}\]
where we used the estimate
\[|\Lambda(z)|\leq\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}e^{-s^{2}-2 \operatorname{Re}(z)s}ds\leq\frac{2e^{|z|^{2}}}{\sqrt{\pi}}\int_{\mathbb{R}}e ^{-(s+\operatorname{Re}(z))^{2}}ds=2e^{|z|^{2}},\quad z\in\mathbb{C},\]
of the function \(\Lambda\) in (2.3). Since \(F\in\mathcal{A}_{1}(\mathbb{C}^{2})\), there exist constants \(A,B\geq 0\), such that
\[\big{|}F\big{(}z\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\big{|}\leq Ae^{B\sqrt{|z\cos\theta|^{ 2}+|z\sin\theta|^{2}}}=Ae^{B|z|},\quad z\in\mathbb{C}. \tag{2.11}\]
Combining now (2.10), (2.11) as well as \(|z|\leq e^{|z|}\), which is an immediate consequence of the power series expansion of the exponential, the integral in (2.9) admits the estimate
\[\bigg{|}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}\widetilde{G}_{D,N} \big{(}t,\mathbf{x},z\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}z\big{(}\begin{smallmatrix} \cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}z\,d\theta\bigg{|}\leq\frac{A}{t}e^{( \frac{3r}{2t}+B+1)|z|},\quad z\in\mathbb{C}_{\operatorname{Re}>0}. \tag{2.12}\]
Hence, we verified that the assumptions of Lemma 2.1 for the mapping (2.9) are satisfied and so we can write the integral (2.7) for any \(\alpha\in(0,\frac{\pi}{2})\) in the form
\[\Psi_{D,N}(t,\mathbf{x})=e^{i\alpha}\int_{0}^{\infty}e^{-\frac{( \rho e^{i\alpha})^{2}}{4it}}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}\widetilde{G }_{D,N}\big{(}t,\mathbf{x},\rho e^{i\alpha}\big{(}\begin{smallmatrix}\cos \theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}F\big{(}\rho e^{i\alpha}\big{(} \begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\rho e^{i\alpha}d\theta d\rho,\]
which, after substituting the definition of \(\widetilde{G}_{D,N}\) from (2.8), is exactly the stated representation (2.6).
## 3. The infinite order differential operators associated with the half-plane barrier
In this section we introduce, based on the Green's function integral (2.6), another representation of the solution \(\Psi_{D,N}(t,\mathbf{x})\) of (1.1), using some infinite order differential operator acting on the initial condition \(F\). More precisely, we use the two-dimensional power series representation
\[F(\mathbf{z})=\sum_{n_{1},n_{2}=0}^{\infty}\frac{\partial_{z_{1}}^{n_{1}} \partial_{z_{2}}^{n_{2}}F(\mathbf{0})}{n_{1}!n_{2}!}z_{1}^{n_{1}}z_{2}^{n_{2}},\quad\mathbf{z}=\big{(}\begin{smallmatrix}z_{1}\\ z_{2}\end{smallmatrix}\big{)}\in\mathbb{C}^{2},\]
to rewrite (for the moment formally) the function \(\Psi_{D,N}(t,\mathbf{x})\) in (2.6) as
\[\Psi_{D,N}(t,\mathbf{x})=e^{2i\alpha}\int_{0}^{\infty}\int_{-\frac{ \pi}{2}}^{\frac{3\pi}{2}}G_{{}_{\!D,N}}\big{(}t,\mathbf{x},\rho e^{i\alpha} \big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\times\sum_{n_{1},n_{2}=0}^{ \infty}\frac{\partial_{z_{1}}^{n_{1}}\partial_{z_{2}}^{n_{2}}F(\mathbf{0})}{n _{1}!n_{2}!}(\rho e^{i\alpha}\cos\theta)^{n_{1}}(\rho e^{i\alpha}\sin\theta)^{n _{2}}\rho\,d\theta d\rho\] \[=\sum_{n_{1},n_{2}=0}^{\infty}\frac{e^{(n_{1}+n_{2}+2)i\alpha}}{n _{1}!n_{2}!}\int_{0}^{\infty}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}G_{{}_{\!D, N}}\big{(}t,\mathbf{x},\rho e^{i\alpha}\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times(\cos\theta)^{n _{1}}(\sin\theta)^{n_{2}}\rho^{n_{1}+n_{2}+1}d\theta d\rho\frac{\partial^{n_{1 }+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F(\mathbf{z})\Big{|}_{ \mathbf{z}=\mathbf{0}}\] \[=\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t,\mathbf{x})\frac{ \partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F( \mathbf{z})\Big{|}_{\mathbf{z}=\mathbf{0}}, \tag{3.1}\]
using the coefficients
\[c_{n_{1},n_{2}}(t,\mathbf{x}):=\frac{e^{(n_{1}+n_{2}+2)i\alpha}}{n_{1}!n_{2}! }\int_{0}^{\infty}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}G_{{}_{\!D,N}}\big{(} t,\mathbf{x},\rho e^{i\alpha}\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}(\cos\theta)^{n_{1}}(\sin\theta)^{n _{2}}\rho^{n_{1}+n_{2}+1}d\theta d\rho. \tag{3.2}\]
The above computations mean that, using the _infinite order differential operator_
\[U_{{}_{\!D,N}}(t,\mathbf{x}):=\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t, \mathbf{x})\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2 }^{n_{2}}}, \tag{3.3}\]
we can write the solution \(\Psi_{D,N}(t,\mathbf{x})\) as
\[\Psi_{D,N}(t,\mathbf{x})=U_{{}_{\!D,N}}(t,\mathbf{x})F(\mathbf{z})\Big{|}_{ \mathbf{z}=\mathbf{0}},\qquad t>0,\,\mathbf{x}\in\Omega. \tag{3.4}\]
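In practice, (3.4) can be evaluated by truncating the double series. The following minimal sketch (with a placeholder coefficient function standing in for the integrals (3.2), which would have to be computed numerically) illustrates the mechanics of applying such an operator to an entire function through its derivatives at \(\mathbf{z}=\mathbf{0}\):

```python
from math import factorial

def apply_U(coeffs, F_derivs_at_0, N: int) -> complex:
    """Truncation of Eq. (3.4):
    Psi ~ sum_{n1,n2<=N} c_{n1,n2}(t,x) * (d^{n1+n2} F / dz1^{n1} dz2^{n2})(0)."""
    return sum(coeffs(n1, n2) * F_derivs_at_0(n1, n2)
               for n1 in range(N + 1) for n2 in range(N + 1))

# Example: F(z) = exp(i k.z) has derivatives (i k1)^{n1} (i k2)^{n2} at z = 0.
k1, k2 = 0.3, -0.5
F_derivs = lambda n1, n2: (1j * k1)**n1 * (1j * k2)**n2

# Placeholder coefficients (in practice the integrals (3.2), evaluated numerically):
coeffs = lambda n1, n2: (0.5j)**(n1 + n2) / (factorial(n1) * factorial(n2))
print(apply_U(coeffs, F_derivs, N=30))
```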
The main advantage of the representation (3.4) via the operator \(U_{{}_{\!D,N}}(t,\mathbf{x})\) will be that many properties of \(\Psi_{{}_{D,N}}\), such as the continuous dependence result or the supershift property discussed in the sequel, turn out to be simple consequences of the continuity of this operator in the space \(\mathcal{A}_{1}(\mathbb{C}^{2})\). However, in order to prove this continuity and also to make the calculations in (3.1) rigorous, we need the following lemma about the exponential boundedness of the derivatives of functions in \(\mathcal{A}_{1}(\mathbb{C}^{2})\).
**Lemma 3.1**.: _If a function \(F\in\mathcal{A}_{1}(\mathbb{C}^{2})\) satisfies the estimate \(|F(\mathbf{z})|\leq Ae^{B|\mathbf{z}|}\), for some \(A\geq 0\), \(B>0\), then for every \(n_{1},n_{2}\in\mathbb{N}_{0}\), the derivatives of \(F\) can be estimated as_
\[\Big{|}\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{ 2}}}F(\mathbf{z})\Big{|}\leq A(eB)^{n_{1}+n_{2}}e^{B|\mathbf{z}|},\qquad \mathbf{z}=\big{(}\begin{smallmatrix}z_{1}\\ z_{2}\end{smallmatrix}\big{)}\in\mathbb{C}^{2}. \tag{3.5}\]
Proof.: Let us first consider the case \(n_{1},n_{2}\neq 0\). By the Cauchy formula we can represent the derivative \(\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F(\mathbf{z})\), for every fixed \(\mathbf{z}\in\mathbb{C}^{2}\), by the integral
\[\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F(\mathbf{z})=\frac{n_{1}!n_{2}!}{(2\pi i)^{2}}\int_{|\xi_{2}-z_{2}|=r_{2}}\int_{|\xi_{1}-z_{1}|=r_{1}}\frac{F\big{(}\begin{smallmatrix}\xi_{1}\\ \xi_{2}\end{smallmatrix}\big{)}}{(\xi_{1}-z_{1})^{n_{1}+1}(\xi_{2}-z_{2})^{n_{2}+1}}d\xi_{1}d\xi_{2}, \tag{3.6}\]
where \(r_{1},r_{2}>0\) are for the moment arbitrary and will be specified later. Hence we can estimate the derivative by
\[\Big{|}\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F(\mathbf{z})\Big{|} =\frac{n_{1}!n_{2}!}{4\pi^{2}}\bigg{|}\int_{0}^{2\pi}\int_{0}^{2\pi}\frac{F\big{(}\begin{smallmatrix}z_{1}+r_{1}e^{i\varphi_{1}}\\ z_{2}+r_{2}e^{i\varphi_{2}}\end{smallmatrix}\big{)}}{(r_{1}e^{i\varphi_{1}})^{n_{1}}(r_{2}e^{i\varphi_{2}})^{n_{2}}}d\varphi_{1}d\varphi_{2}\bigg{|}\] \[\leq\frac{An_{1}!n_{2}!}{4\pi^{2}r_{1}^{n_{1}}r_{2}^{n_{2}}}\int_{0}^{2\pi}\int_{0}^{2\pi}e^{B\sqrt{|z_{1}+r_{1}e^{i\varphi_{1}}|^{2}+|z_{2}+r_{2}e^{i\varphi_{2}}|^{2}}}d\varphi_{1}d\varphi_{2}\] \[\leq\frac{An_{1}!n_{2}!}{r_{1}^{n_{1}}r_{2}^{n_{2}}}e^{B\sqrt{|z_{1}|^{2}+|z_{2}|^{2}}+B\sqrt{r_{1}^{2}+r_{2}^{2}}}. \tag{3.7}\]
Assume now for the moment that \(z_{1},z_{2}\neq 0\). Then we choose the radii \(r_{1},r_{2}\) as
\[r_{1}:=\frac{(n_{1}!)^{\frac{1}{n_{1}}}}{B}\qquad\text{and}\qquad r_{2}:= \frac{(n_{2}!)^{\frac{1}{n_{2}}}}{B}. \tag{3.8}\]
Using the inequality \(n!\leq n^{n}\) for every \(n\geq 1\), we can then estimate these radii by
\[r_{1}\leq\frac{n_{1}}{B}\qquad\text{and}\qquad r_{2}\leq\frac{n_{2}}{B}. \tag{3.9}\]
Using now the values (3.8) of \(r_{1}\) and \(r_{2}\) in the denominator of (3.7) and the estimates (3.9) in the exponent of (3.7) leads to the stated estimate
\[\Big{|}\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F(\mathbf{z})\Big{|}\leq A(eB)^{n_{1}+n_{2}}e^{B|\mathbf{z}|},\]
whenever \(z_{1},z_{2}\neq 0\). However, since both sides of this inequality are continuous functions of \(z_{1},z_{2}\), the estimate extends to every \(z_{1},z_{2}\in\mathbb{C}\) by continuity. This proves (3.5) in the case \(n_{1},n_{2}\neq 0\).
For the case where both \(n_{1}\) and \(n_{2}\) vanish, the estimate (3.5) is trivial. If exactly one of the numbers \(n_{1},n_{2}\) vanishes, let us say \(n_{1}\neq 0\) and \(n_{2}=0\), we can do a similar computation, only replacing the Cauchy formula (3.6) by
\[\frac{\partial^{n_{1}}}{\partial z_{1}^{n_{1}}}F(\mathbf{z})=\frac{n_{1}!}{2\pi i }\int_{|\xi_{1}-z_{1}|=r_{1}}\frac{F\big{(}\begin{smallmatrix}\xi_{1}\\ z_{2}\end{smallmatrix}\big{)}}{(\xi_{1}-z_{1})^{n_{1}+1}}d\xi_{1}.\qed\]
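As a quick sanity check of (3.5) (our addition, not part of the original argument), consider \(F(\mathbf{z})=e^{i\mathbf{a}\mathbf{z}}\) for some \(\mathbf{a}\in\mathbb{C}^{2}\), which satisfies the assumption with \(A=1\) and \(B=|\mathbf{a}|\). Then

\[\Big{|}\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}e^{i\mathbf{a}\mathbf{z}}\Big{|}=|a_{1}|^{n_{1}}|a_{2}|^{n_{2}}\big{|}e^{i\mathbf{a}\mathbf{z}}\big{|}\leq|\mathbf{a}|^{n_{1}+n_{2}}e^{|\mathbf{a}||\mathbf{z}|}\leq\big{(}e|\mathbf{a}|\big{)}^{n_{1}+n_{2}}e^{|\mathbf{a}||\mathbf{z}|},\]

which is exactly the bound (3.5); this special case is the one needed for the plane waves \(e^{i\mathbf{k}\mathbf{z}}\) in Section 4.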
Next we prove the main property of the operator \(U_{\!D,N}(t,\mathbf{x})\), namely that it is a continuous operator on the space \(\mathcal{A}_{1}(\mathbb{C}^{2})\).
**Theorem 3.2**.: _For every fixed \(t>0\), \(\mathbf{x}\in\Omega\), the operator \(U_{\!D,N}(t,\mathbf{x}):\mathcal{A}_{1}(\mathbb{C}^{2})\to\mathcal{A}_{1}( \mathbb{C}^{2})\) is continuous. Moreover, there exists some constant \(C(t,\mathbf{x})\geq 0\), continuously depending on \(t\) and \(\mathbf{x}\), such that_
\[|U_{\!D,N}(t,\mathbf{x})F(\mathbf{z})|\leq AC(t,\mathbf{x})e^{B|\mathbf{z}|},\qquad \mathbf{z}\in\mathbb{C}^{2}, \tag{3.10}\]
_whenever \(F\in\mathcal{A}_{1}(\mathbb{C}^{2})\) satisfies \(|F(\mathbf{z})|\leq Ae^{B|\mathbf{z}|}\) for some \(A\geq 0\), \(B>0\)._
Proof.: Using the decomposition (2.8) of the Green's function \(G_{\!D,{ N}}(t,{\bf x},{\bf z})\) and the estimate (2.10) of the reduced Green's function \(\widetilde{G}(t,{\bf x},{\bf z})\), we can estimate the coefficients (3.2) by
\[|c_{n_{1},n_{2}}(t,{\bf x})| \leq\frac{1}{2\pi tn_{1}!n_{2}!}\int_{0}^{\infty}\int_{-\frac{ \pi}{2}}^{\frac{3\pi}{2}}e^{-\frac{\rho^{2}\sin(2\alpha)}{4t}}e^{\frac{3r\rho} {2t}}|\cos\theta|^{n_{1}}|\sin\theta|^{n_{2}}\rho^{n_{1}+n_{2}+1}d\theta d\rho\] \[\leq\frac{1}{tn_{1}!n_{2}!}\int_{0}^{\infty}e^{-\frac{\rho^{2} \sin(2\alpha)}{4t}}e^{\frac{3r\rho}{2t}}\rho^{n_{1}+n_{2}+1}d\rho\] \[=\frac{1}{tn_{1}!n_{2}!}\Big{(}\frac{8t}{\sin(2\alpha)}\Big{)}^{ \frac{n_{1}+n_{2}+2}{2}}\int_{0}^{\infty}e^{-2\rho^{2}+\frac{3\sqrt{2}r\rho}{ \sqrt{t\sin(2\alpha)}}}\rho^{n_{1}+n_{2}+1}d\rho\] \[\leq\frac{1}{tn_{1}!n_{2}!}\Big{(}\frac{8t}{\sin(2\alpha)}\Big{)} ^{\frac{n_{1}+n_{2}+2}{2}}e^{\frac{9r^{2}}{2t\sin(2\alpha)}}\int_{0}^{\infty}e ^{-\rho^{2}}\rho^{n_{1}+n_{2}+1}d\rho\] \[=\frac{\Gamma(\frac{n_{1}+n_{2}+2}{2})}{2tn_{1}!n_{2}!}\Big{(} \frac{8t}{\sin(2\alpha)}\Big{)}^{\frac{n_{1}+n_{2}+2}{2}}e^{\frac{9r^{2}}{2t \sin(2\alpha)}}\] \[\leq\frac{\pi^{2}}{2t\Gamma(\frac{n_{1}+1}{2})\Gamma(\frac{n_{2} +1}{2})}\Big{(}\frac{16t}{\sin(2\alpha)}\Big{)}^{\frac{n_{1}+n_{2}+2}{2}}e^{ \frac{9r^{2}}{2t\sin(2\alpha)}}, \tag{3.11}\]
where we used the estimates
\[\Gamma(a+b+1)\leq 2^{a+b+1}\Gamma\Big{(}a+\frac{1}{2}\Big{)} \Gamma\Big{(}b+\frac{1}{2}\Big{)}\quad\text{and}\] \[\frac{\Gamma(\frac{a+1}{2})^{2}}{\Gamma(a+1)}=B\Big{(}\frac{a+1}{ 2},\frac{a+1}{2}\Big{)}\leq B\Big{(}\frac{1}{2},\frac{1}{2}\Big{)}=\pi,\]
of the \(\Gamma\)-function, which are true for every \(a,b\geq 0\). Hence, if we assume that \(|F({\bf z})|\leq Ae^{B|{\bf z}|}\), we can use the estimate (3.5) of the derivatives of \(F\) to estimate the action of the operator \(U_{\!D,{ N}}\) by
\[\big{|}U_{\!D,{ N}}(t,{\bf x})F({\bf z})\big{|} =\bigg{|}\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t,{\bf x}) \frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}F( {\bf z})\bigg{|}\] \[\leq\frac{8A\pi^{2}}{\sin(2\alpha)}e^{\frac{9r^{2}}{2t\sin(2 \alpha)}}\sum_{n_{1},n_{2}=0}^{\infty}\frac{1}{\Gamma(\frac{n_{1}+1}{2}) \Gamma(\frac{n_{2}+1}{2})}\Big{(}\frac{4eB\sqrt{t}}{\sqrt{\sin(2\alpha)}} \Big{)}^{n_{1}+n_{2}}e^{B|{\bf z}|}\] \[=\frac{8A\pi^{2}}{\sin(2\alpha)}e^{\frac{9r^{2}}{2t\sin(2\alpha) }}E_{\frac{1}{2},\frac{1}{2}}\Big{(}\frac{4eB\sqrt{t}}{\sqrt{\sin(2\alpha)}} \Big{)}^{2}e^{B|{\bf z}|},\qquad{\bf z}\in\mathbb{C}^{2}.\]
Hence \(U_{\!D,{ N}}(t,{\bf x})F\in\mathcal{A}_{1}(\mathbb{C}^{2})\) and the inequality (3.10) is satisfied with the constant
\[C(t,{\bf x})=\frac{8\pi^{2}}{\sin(2\alpha)}e^{\frac{9r^{2}}{2t\sin(2\alpha)}}E _{\frac{1}{2},\frac{1}{2}}\Big{(}\frac{4eB\sqrt{t}}{\sqrt{\sin(2\alpha)}} \Big{)}^{2}.\]
For the proof of the continuity let \(F,(F_{n})_{n\in\mathbb{N}}\in\mathcal{A}_{1}(\mathbb{C}^{2})\) such that \(\lim_{n\to\infty}F_{n}=F\) in \(\mathcal{A}_{1}(\mathbb{C}^{2})\). By (1.3) this means that there exists some \(B\geq 0\) such that
\[A_{n}:=\sup_{\mathbf{z}\in\mathbb{C}^{2}}|F_{n}(\mathbf{z})-F(\mathbf{z})|e^{-B|\mathbf{z}|}\stackrel{{ n\to\infty}}{{\longrightarrow}}0.\]
With these constants \(A_{n}\), the difference admits the estimate \(|F_{n}(\mathbf{z})-F(\mathbf{z})|\leq A_{n}e^{B|\mathbf{z}|}\), and using (3.10) we get
\[\sup_{\mathbf{z}\in\mathbb{C}^{2}}\big{|}U_{\!{}_{D,N}}(t,\mathbf{x})F_{n}( \mathbf{z})-U_{\!{}_{D,N}}(t,\mathbf{x})F(\mathbf{z})\big{|}e^{-B|\mathbf{z}|} \leq A_{n}C(t,\mathbf{x})\stackrel{{ n\to\infty}}{{\longrightarrow}}0. \tag{3.12}\]
This proves the convergence \(\lim_{n\to\infty}U_{\!{}_{D,N}}(t,\mathbf{x})F_{n}=U_{\!{}_{D,N}}(t,\mathbf{x })F\) in \(\mathcal{A}_{1}(\mathbb{C}^{2})\) and hence the continuity of \(U_{\!{}_{D,N}}(t,\mathbf{x})\).
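To get a rough feeling for the size of the continuity constant \(C(t,\mathbf{x})\), the following minimal Python sketch (our addition; the values of the angle \(\alpha\), the radius \(r=|\mathbf{x}|\) and the growth bound \(B\) are illustrative placeholders) evaluates the truncated Mittag-Leffler series \(E_{\frac{1}{2},\frac{1}{2}}(x)=\sum_{k\geq 0}x^{k}/\Gamma(\frac{k}{2}+\frac{1}{2})\) in log-space:

```python
import math

def mittag_leffler_half(x, terms=2000):
    """Truncated series E_{1/2,1/2}(x) = sum_k x^k / Gamma(k/2 + 1/2),
    with terms computed in log-space to avoid overflow of x**k."""
    total = math.exp(-math.lgamma(0.5))  # k = 0 term, i.e. 1/sqrt(pi)
    for k in range(1, terms):
        total += math.exp(k * math.log(x) - math.lgamma(k / 2 + 0.5))
    return total

def continuity_constant(t, alpha, r, B):
    """Numerical value of C(t, x) from Theorem 3.2, with r playing the role of |x|."""
    s = math.sin(2 * alpha)
    ml = mittag_leffler_half(4 * math.e * B * math.sqrt(t) / math.sqrt(s))
    return 8 * math.pi ** 2 / s * math.exp(9 * r ** 2 / (2 * t * s)) * ml ** 2

# Illustrative parameters only; the constant blows up as t -> 0 or alpha -> 0,
# reflecting the factor exp(9 r^2 / (2 t sin(2 alpha))).
print(continuity_constant(t=1.0, alpha=math.pi / 4, r=0.5, B=1.0))
```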
The next corollary uses the operator \(U_{\!{}_{D,N}}(t,\mathbf{x})\) to prove the continuous dependence of the solution \(\Psi_{\!{}_{D,N}}\) on the initial value \(F\). In order to emphasize the initial value, we will use the notation \(\Psi_{\!{}_{D,N}}(t,\mathbf{x};F)\) in the following.
**Corollary 3.3**.: _Let \(F,(F_{n})_{n\in\mathbb{N}}\in\mathcal{A}_{1}(\mathbb{C}^{2})\) be such that \(\lim_{n\to\infty}F_{n}=F\) in \(\mathcal{A}_{1}(\mathbb{C}^{2})\). Then_
* \(\Psi_{\!{}_{D,N}}(t,\mathbf{x};F)=U_{\!{}_{D,N}}(t,\mathbf{x})F(\mathbf{z}) \big{|}_{\mathbf{z}=\mathbf{0}}\)_,_
* \(\lim_{n\to\infty}\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})=\Psi_{\!{}_{D,N}}(t, \mathbf{x};F)\) _uniformly on compact subsets of_ \((0,\infty)\times\Omega\)_._
Proof.: The representation i) follows from the calculations in (3.1), where the differentiation \(\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}\) and the summation \(\sum_{n_{1},n_{2}=0}^{\infty}\) can be carried outside the integral due to the estimate
\[\bigg{|}G_{\!{}_{D,N}}\big{(}t,\mathbf{x},\rho e^{i\alpha}\big{(} \begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\frac{\partial_{z_{1}}^{n_{1}}\partial_{z_{2}}^{n_{2}}F(\mathbf{0})}{n_{1}!n_{2}!}(\rho e^{i\alpha}\cos\theta)^{n_{1}}(\rho e^{i\alpha}\sin\theta)^{n_{2}}\rho\bigg{|}\] \[\leq\frac{A(eB)^{n_{1}+n_{2}}}{2\pi tn_{1}!n_{2}!}e^{-\frac{\rho^{2}\sin(2\alpha)}{4t}}e^{\frac{3r\rho}{2t}}\rho^{n_{1}+n_{2}+1},\]
following from (2.10) and (3.5), which ensures the absolute convergence
\[\sum_{n_{1},n_{2}=0}^{\infty}\int_{0}^{\infty}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}\bigg{|}G_{\!{}_{D,N}}\big{(}t,\mathbf{x},\rho e^{i\alpha}\big{(}\begin{smallmatrix}\cos\theta\\ \sin\theta\end{smallmatrix}\big{)}\big{)}\frac{\partial_{z_{1}}^{n_{1}}\partial_{z_{2}}^{n_{2}}F(\mathbf{0})}{n_{1}!n_{2}!}(\rho e^{i\alpha}\cos\theta)^{n_{1}}(\rho e^{i\alpha}\sin\theta)^{n_{2}}\rho\bigg{|}d\theta d\rho<\infty.\]
In order to prove the convergence in ii), we note that by i) and (3.12) we get
\[\big{|}\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})-\Psi_{\!{}_{D,N}}(t,\mathbf{x};F )\big{|}=\Big{|}U_{\!{}_{D,N}}(t,\mathbf{x})(F_{n}(\mathbf{z})-F(\mathbf{z}) )\Big{|}_{\mathbf{z}=\mathbf{0}}\leq A_{n}C(t,\mathbf{x}),\]
with the constants
\[A_{n}:=\sup_{\mathbf{z}\in\mathbb{C}^{2}}|F_{n}(\mathbf{z})-F(\mathbf{z})|e^{-B|\mathbf{z}|}\stackrel{{ n\to\infty}}{{\longrightarrow}}0.\]
Since the constant \(C(t,\mathbf{x})\) moreover depends continuously on \((t,\mathbf{x})\) by Theorem 3.2, the convergence
\[\lim_{n\to\infty}\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})=\Psi_{\!{}_{D,N}}(t, \mathbf{x};F)\]
is uniform on compact subsets of \((0,\infty)\times\Omega\).
## 4. Time persistence of superoscillations and supershift
In this section, we investigate the evolution of superoscillating functions as initial conditions in the Schrödinger equation (1.1). The expectation that for any sequence \((F_{n})_{n}\) of superoscillating initial conditions, the sequence of solutions \(\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})\) will again be superoscillating (for fixed times \(t>0\)) can easily be refuted. In fact, one way of reasoning is that the convergence (1.6) of the initial conditions \(F_{n}\) to a plane wave \(e^{i\mathbf{a}\mathbf{x}}\) implies that the solutions converge as
\[\lim_{n\to\infty}\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})=\Psi_{\!{}_{D,N}}\big{(} t,\mathbf{x};e^{i\mathbf{a}\cdot}\big{)},\]
see, e.g., the continuous dependency result in Corollary 3.3 ii). However, for \(\Psi_{D,N}(t,\mathbf{x};F_{n})\) to be superoscillating, the limit function \(\Psi_{D,N}(t,\mathbf{x};e^{i\mathbf{a}\,\cdot\,})\) would have to be a plane wave by the definition in (1.6), which is not possible since the boundary condition forces the wave function (or its derivative) to vanish on \(\Gamma\). Moreover, the solution \(\Psi_{D,N}(t,\mathbf{x};e^{i\mathbf{a}\,\cdot\,})\) will no longer be a holomorphic function in the \(\mathbf{x}\)-variable and hence not an element of the space \(\mathcal{A}_{1}(\mathbb{C}^{2})\). These considerations show that the precise mathematical notion of superoscillations is too narrow to persist in time.
This motivates the following notion of a _supershift_, which essentially replaces the holomorphic exponentials \(e^{i\mathbf{a}\mathbf{x}}\) in Definition 1.1 of superoscillations by arbitrary continuous functions \(\varphi_{\mathbf{a}}(\mathbf{x})\).
**Definition 4.1** (Supershift).: _Let \(X\) be a metric space and_
\[\varphi_{\mathbf{k}}:X\to\mathbb{C},\qquad\mathbf{k}\in\mathbb{C}^{2}, \tag{4.1}\]
_be a family of complex valued functions such that \(\mathbf{k}\mapsto\varphi_{\mathbf{k}}(s)\) is continuous for every \(s\in X\). We say that a sequence of the form_
\[\Phi_{n}(s):=\int_{|\mathbf{k}|\leq k_{0}}\varphi_{\mathbf{k}}(s)d\mu_{n}( \mathbf{k}),\qquad s\in X, \tag{4.2}\]
_for some \(k_{0}>0\) and complex Borel measures \(\mu_{n}\) on the closed ball \(\overline{B_{k_{0}}(0)}\subseteq\mathbb{C}^{2}\), admits a supershift, if there exists some \(\mathbf{a}\in\mathbb{C}^{2}\) with \(|\mathbf{a}|>k_{0}\), such that_
\[\lim_{n\to\infty}\Phi_{n}(s)=\varphi_{\mathbf{a}}(s),\qquad s\in X, \tag{4.3}\]
_converges uniformly on compact subsets of \(X\)._
**Remark 4.2**.: _With the special choice \(X=\mathbb{C}^{2}\) and \(\varphi_{\mathbf{k}}(\mathbf{z})=e^{i\mathbf{k}\mathbf{z}}\), it turns out that the notion of supershift in Definition 4.1 is a generalization of the notion of superoscillations in Definition 1.1. Indeed, with this choice the integrals (1.5) and (4.2) coincide and since the \(\mathcal{A}_{1}\)-convergence (1.3) is stronger than the convergence on compact sets, the convergence (1.6) implies the convergence (4.3)._
In the following theorem we now prove that after the interaction of superoscillations with the half-plane barrier, the superoscillatory property turns into a supershift property, which then persists for all times \(t>0\).
**Theorem 4.3**.: _Let \((F_{n})_{n\in\mathbb{N}}\) be a superoscillating sequence according to Definition 1.1, i.e._
\[F_{n}(\mathbf{z})=\int_{|\mathbf{k}|\leq k_{0}}e^{i\mathbf{k}\mathbf{z}}d\mu _{n}(\mathbf{k})\stackrel{{ n\to\infty}}{{\longrightarrow}}e^{i \mathbf{a}\mathbf{z}},\quad\text{in }\mathcal{A}_{1}(\mathbb{C}^{2}). \tag{4.4}\]
_Then the sequence \(\Psi_{D,N}(t,\mathbf{x};F_{n})\), \(n\in\mathbb{N}\), of solutions admits a supershift according to Definition 4.1. In particular we have_
\[\Psi_{D,N}(t,\mathbf{x};F_{n})=\int_{|\mathbf{k}|\leq k_{0}}\Psi_{D,N}\big{(} t,\mathbf{x};e^{i\mathbf{k}\,\cdot\,}\big{)}d\mu_{n}(\mathbf{k})\stackrel{{ n\to\infty}}{{\longrightarrow}}\Psi_{D,N}\big{(}t,\mathbf{x},e^{i \mathbf{a}\,\cdot\,}\big{)}, \tag{4.5}\]
_where the convergence is uniform on any compact subset of \((0,\infty)\times\Omega\)._
Proof.: For the first identity in (4.5), we use the representation of the wave function via the infinite order differential operator in Corollary 3.3 i). Then we get
\[\Psi_{D,N}(t,\mathbf{x};F_{n}) =U_{D,N}(t,\mathbf{x})\int_{|\mathbf{k}|\leq k_{0}}e^{i\mathbf{k} \mathbf{z}}d\mu_{n}(\mathbf{k})\Big{|}_{\mathbf{z}=\mathbf{0}}\] \[=\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t,\mathbf{x})\frac{ \partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}\int_{| \mathbf{k}|\leq k_{0}}e^{i\mathbf{k}\mathbf{z}}d\mu_{n}(\mathbf{k})\Big{|}_{ \mathbf{z}=\mathbf{0}}\] \[=\int_{|\mathbf{k}|\leq k_{0}}\sum_{n_{1},n_{2}=0}^{\infty}c_{n_ {1},n_{2}}(t,\mathbf{x})\frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}} \partial z_{2}^{n_{2}}}e^{i\mathbf{k}\mathbf{z}}\Big{|}_{\mathbf{z}=\mathbf{0 }}d\mu_{n}(\mathbf{k})\] \[=\int_{|\mathbf{k}|\leq k_{0}}U_{D,N}(t,\mathbf{x})e^{i\mathbf{ k}\mathbf{z}}d\mu_{n}(\mathbf{k})=\int_{|\mathbf{k}|\leq k_{0}}\Psi_{D,N} \big{(}t,\mathbf{x};e^{i\mathbf{k}\cdot}\,\big{)}d\mu_{n}(\mathbf{k}). \tag{4.6}\]
Here, in the third equation we were allowed to interchange the sum and the derivative with the integral because from (3.11) we conclude the estimate
\[\Big{|}c_{n_{1},n_{2}}(t,\mathbf{x})\frac{\partial^{n_{1}+n_{2}}} {\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}e^{i\mathbf{k}\mathbf{z}}\Big{|} =\Big{|}c_{n_{1},n_{2}}(t,\mathbf{x})(ik_{1})^{n_{1}}(ik_{2})^{n_{ 2}}e^{i\mathbf{k}\mathbf{z}}\Big{|}\] \[\leq\frac{\pi^{2}}{2t\Gamma(\frac{n_{1}+1}{2})\Gamma(\frac{n_{2}+ 1}{2})}\Big{(}\frac{16t}{\sin(2\alpha)}\Big{)}^{\frac{n_{1}+n_{2}+2}{2}}e^{ \frac{9r^{2}}{2t\sin(2\alpha)}}|k_{1}|^{n_{1}}|k_{2}|^{n_{2}}e^{|\mathbf{k} \mathbf{z}|}\] \[\leq\frac{8\pi^{2}}{\sin(2\alpha)\Gamma(\frac{n_{1}+1}{2})\Gamma (\frac{n_{2}+1}{2})}\Big{(}\frac{4k_{0}\sqrt{t}}{\sqrt{\sin(2\alpha)}}\Big{)} ^{n_{1}+n_{2}}e^{\frac{9r^{2}}{2t\sin(2\alpha)}}e^{k_{0}|\mathbf{z}|},\]
and hence the sum
\[\sum_{n_{1},n_{2}=0}^{\infty}\big{|}c_{n_{1},n_{2}}(t,\mathbf{x})\frac{ \partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}e^{i \mathbf{k}\mathbf{z}}\big{|}<\infty\]
is absolutely convergent, and interchanging sum and integral is allowed by the dominated convergence theorem and the fact that the measure \(\mu_{n}\) is complex and hence finite.
Secondly, the convergence in (4.5) has already been proven in Corollary 3.3 ii). Finally, the representation (4.6) is exactly of the form (4.2) of a supershift, with the metric space \(X=(0,\infty)\times\Omega\) and the functions
\[\varphi_{\mathbf{k}}(t,\mathbf{x}):=\Psi_{D,N}\big{(}t,\mathbf{x};e^{i\mathbf{ k}\cdot}\,\big{)}.\qed\]
In the following we consider the special case of superoscillating functions of the form (1.4) and show that the resulting wave functions admit a supershift.
**Corollary 4.4**.: _Let \(F_{n}\) be functions of the form_
\[F_{n}(\mathbf{z})=\sum_{j=0}^{n}C_{j}(n)e^{i\mathbf{k}_{\mathbf{j}}(n) \mathbf{z}},\quad\mathbf{z}\in\mathbb{C}^{2},\]
_with coefficients \(C_{j}(n)\in\mathbb{C}\) and wave vectors \(\mathbf{k}_{\mathbf{j}}(n)\in\mathbb{R}^{2}\) satisfying \(|\mathbf{k}_{\mathbf{j}}(n)|\leq 1\). If_
\[\lim_{n\to\infty}F_{n}(\mathbf{z})=e^{i\mathbf{a}\mathbf{z}}\]
converges in \(\mathcal{A}_{1}(\mathbb{C}^{2})\) for some \(\mathbf{a}\in\mathbb{R}^{2}\) with \(|\mathbf{a}|>1\), then the sequence of solutions \(\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})\) converges as_
\[\lim_{n\to\infty}\Psi_{\!{}_{D,N}}(t,\mathbf{x};F_{n})=\lim_{n\to\infty}\sum_{j= 0}^{n}C_{j}(n)\Psi_{\!{}_{D,N}}\big{(}t,\mathbf{x};e^{i\mathbf{k}_{\mathbf{j}}( n)\,\cdot\,}\big{)}=\Psi_{\!{}_{D,N}}\big{(}t,\mathbf{x};e^{i\mathbf{a}\,\cdot\,}\big{)}, \tag{4.7}\]
_uniformly for \((t,\mathbf{x})\) in compact subsets of \((0,\infty)\times\Omega\)._
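A classical one-dimensional prototype of such a sequence (recalled here only for illustration; the two-dimensional setting of this paper is analogous, e.g. via products in the two variables) is

\[F_{n}(x)=\Big{(}\cos\frac{x}{n}+ia\sin\frac{x}{n}\Big{)}^{n}=\sum_{j=0}^{n}C_{j}(n)e^{ik_{j}(n)x},\quad C_{j}(n)=\binom{n}{j}\Big{(}\frac{1+a}{2}\Big{)}^{n-j}\Big{(}\frac{1-a}{2}\Big{)}^{j},\quad k_{j}(n)=1-\frac{2j}{n},\]

with \(|k_{j}(n)|\leq 1\), which converges to \(e^{iax}\) in \(\mathcal{A}_{1}(\mathbb{C})\) for every fixed \(a>1\).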
For the final part of this paper let us now fix \(t>0\) and \(\mathbf{x}\in\Omega\) and look at equation (4.7) in terms of the mapping \(\mathbf{k}\mapsto\Psi_{\!{}_{D,N}}(t,\mathbf{x};e^{i\mathbf{k}\,\cdot\,})\). One then sees that the value of this mapping at a point \(\mathbf{a}\) with \(|\mathbf{a}|>1\), located outside the unit ball, is determined solely by values \(\mathbf{k}_{\mathbf{j}}(n)\) with \(|\mathbf{k}_{\mathbf{j}}(n)|\leq 1\) inside the unit ball. This property looks very much like analyticity. The following proposition shows that this is indeed the case.
**Proposition 4.5**.: _For every fixed \(t>0\), \(\mathbf{x}\in\Omega\), the mapping_
\[\mathbf{k}\mapsto\Psi_{\!{}_{D,N}}\big{(}t,\mathbf{x};e^{i\mathbf{k}\,\cdot\,} \big{)}\quad\text{is holomorphic on $\mathbb{C}^{2}$}.\]
Proof.: Using the representation of the wave function via the infinite order differential operator in Corollary 3.3 i) gives
\[\Psi_{\!{}_{D,N}}\big{(}t,\mathbf{x};e^{i\mathbf{k}\,\cdot\,} \big{)} =U_{\!{}_{D,N}}(t,\mathbf{x})e^{i\mathbf{k}\mathbf{z}}\Big{|}_{ \mathbf{z}=\mathbf{0}}\] \[=\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t,\mathbf{x}) \frac{\partial^{n_{1}+n_{2}}}{\partial z_{1}^{n_{1}}\partial z_{2}^{n_{2}}}e^{ i\mathbf{k}\mathbf{z}}\Big{|}_{\mathbf{z}=\mathbf{0}}\] \[=\sum_{n_{1},n_{2}=0}^{\infty}c_{n_{1},n_{2}}(t,\mathbf{x})(ik_{1 })^{n_{1}}(ik_{2})^{n_{2}}.\]
Since this is an everywhere convergent power series in \(\mathbf{k}\), the mapping \(\mathbf{k}\mapsto\Psi_{\!{}_{D,N}}(t,\mathbf{x};e^{i\mathbf{k}\,\cdot\,})\) is holomorphic on \(\mathbb{C}^{2}\).
|
2307.16511 | Classifying multilingual party manifestos: Domain transfer across
country, time, and genre | Annotating costs of large corpora are still one of the main bottlenecks in
empirical social science research. On the one hand, making use of the
capabilities of domain transfer allows re-using annotated data sets and trained
models. On the other hand, it is not clear how well domain transfer works and
how reliable the results are for transfer across different dimensions. We
explore the potential of domain transfer across geographical locations,
languages, time, and genre in a large-scale database of political manifestos.
First, we show the strong within-domain classification performance of
fine-tuned transformer models. Second, we vary the genre of the test set across
the aforementioned dimensions to test for the fine-tuned models' robustness and
transferability. For switching genres, we use an external corpus of transcribed
speeches from New Zealand politicians while for the other three dimensions,
custom splits of the Manifesto database are used. While BERT achieves the best
scores in the initial experiments across modalities, DistilBERT proves to be
competitive at a lower computational expense and is thus used for further
experiments across time and country. The results of the additional analysis
show that (Distil)BERT can be applied to future data with similar performance.
Moreover, we observe (partly) notable differences between the political
manifestos of different countries of origin, even if these countries share a
language or a cultural background. | Matthias Aßenmacher, Nadja Sauter, Christian Heumann | 2023-07-31T09:16:13Z | http://arxiv.org/abs/2307.16511v1 | # Classifying multilingual party manifestos:
###### Abstract
Annotating costs of large corpora are still one of the main bottlenecks in empirical social science research. On the one hand, making use of the capabilities of domain transfer allows re-using annotated data sets and trained models. On the other hand, it is not clear how well domain transfer works and how reliable the results are for transfer across different dimensions. We explore the potential of domain transfer across geographical locations, languages, time, and genre in a large-scale database of political manifestos. First, we show the strong within-domain classification performance of fine-tuned transformer models. Second, we vary the genre of the test set across the aforementioned dimensions to test for the fine-tuned models' robustness and transferability. For switching genres, we use an external corpus of transcribed speeches from New Zealand politicians while for the other three dimensions, custom splits of the Manifesto database are used. While BERT achieves the best scores in the initial experiments across modalities, DistilBERT proves to be competitive at a lower computational expense and is thus used for further experiments across time and country. The results of the additional analysis show that (Distil)BERT can be applied to future data with similar performance. Moreover, we observe (partly) notable differences between the political manifestos of different countries of origin, even if these countries share a language or a cultural background.
## 1 Introduction
Publishing party manifestos in the time frame leading up to an election is a common procedure in most parliamentary democracies around the globe. Summarizing the parties' political agendas for the upcoming electoral period, the published manifestos are intended to serve as guides for voters to reach their decision (Suiter and Farrell, 2011). Since the content of these manifestos also constitutes the foundation for the process of building government coalitions, analyzing them can be very insightful. Janda et al. (1995), for instance, investigate the common assumption that political parties often try to change their images following a poor election result. Other researchers examine if parties learn from successful foreign parties (Böhmelt et al., 2016). Tavits and Letki (2009) and Tsebelis (1999) also investigate their research questions based on political manifestos.
The Manifesto Project1 covers programs of over 1000 political parties from more than 50 countries over a time frame from 1945 until today (Lehmann, 2022). The database provides access to the raw content of all documents as well as additional annotation for further analysis. Human annotators from over 50 different countries contributed by splitting the documents into quasi-sentences and subsequently classifying each of them according to a coding scheme covering 54 thematic categories. On a more coarse-grained scale, these 54 categories were further summarized into eight topics. Since manual annotation is extremely time- and labor-intensive, requiring annotator training to ensure reliability, (partial) automation of the process could yield enormous potential for savings.
Footnote 1: [https://manifesto-project.wxb.eu/](https://manifesto-project.wxb.eu/)
Our research explores how methods from the field of Natural Language Processing (NLP), which are more and more frequently used in social science research (Wankmüller, 2021), can be used to classify the quasi-sentences of the political manifestos into the eight topics of the Manifesto coding scheme. Therefore, different NLP methods, namely TF-IDF + logistic regression (LR) as a comparative baseline (cf. Osnabrugge et al. (2023)) and different monolingual and multilingual versions of BERT (Devlin et al., 2019), are used to process and subsequently classify the sequences. In the following, first, the related work (cf. Sec. 2.1) and the data extraction process (cf. Sec. 2.2) will be explained in further detail, followed by
the experimental setup (cf. Sec. 3), where we delve deeper into the concept of cross-domain classification and motivate the different cross-domain scenarios. The predictive performances of each evaluated model for each of the different scenarios are compared and discussed in Section 4. We conclude the experiments by fine-tuning a multilingual model on the whole corpus.
**Contribution:** Our main contributions can be summarized as follows: We extend the cross-domain setting introduced by Osnabrugge et al. (2023) along multiple axes. We not only measure transfer across genre (manifestos \(\rightarrow\) speeches) but also across time (2018 \(\rightarrow\) 2022) and country (leave-one-country-out, LOCO). Instead of relying on simple machine learning classifiers, we fine-tune pre-trained language models (Devlin et al., 2019; Sanh et al., 2019), achieving superior performance to simple models. We do not only rely on English texts but leverage the whole Manifesto database by employing multilingual pre-trained models. This enables us to train one single model which can be used for all languages and countries. The code for our experiments and the trained models are publicly available to nurture further research: [https://github.com/slds-lmu/manifesto-domaintransfer](https://github.com/slds-lmu/manifesto-domaintransfer) (code) and [https://huggingface.co/assenmacher](https://huggingface.co/assenmacher) (models).
## 2 Materials and Methods
### Related work
We draw inspiration for our work from the research article "Cross-Domain Topic Classification for Political Texts" (Osnabrugge et al., 2023). The authors employ supervised machine learning (logistic regression, LR) alongside feature engineering techniques for text (TF-IDF w/ n-grams) for the classification of political manifestos and speeches. The analysis was performed on two (labeled) data sets, where each utterance was assigned one of the eight possible categories "freedom and democracy", "fabric of society", "economy", "political system", "welfare and quality of life", "social groups", "external relations" and "no topic". The source corpus consists of manifestos, collected between 1984 and 2018, which were extracted from the Manifesto Project (Krause et al., 2018) for the following seven English-speaking countries: Australia, Canada, Ireland, New Zealand, South Africa, the UK, and the USA. Each document was split into quasi-sentences (\(n_{source}=115,410\)) and then labeled by a trained human annotator from the Manifesto Project. In most cases, one quasi-sentence roughly equals one sentence; however, some long sentences containing several statements were split into multiple quasi-sentences. Osnabrugge et al. (2023) use this source corpus for training and for measuring the within-domain performance. The target corpus (\(n_{target}=4,165\)) consists of English speeches held by members of the New Zealand Parliament in the time period from 1987 to 2002. The speeches were extracted from the official record of the New Zealand Parliament (Hansard) and manually annotated according to the same schema by Osnabrugge et al. (2023), who then use it for measuring the cross-domain classification performance.
After the hyperparameter tuning using grid search, they achieve an accuracy of 0.641 on the held-out set of the source corpus and an accuracy of 0.507 on the speeches, showing that cross-domain classification is a reasonable approach. Additionally, the authors create their own, more fine-grained coding scheme with 44 topic categories, for which they report lower performance values for both the within- (0.538) and the cross-domain (0.410) setting. It is important to note that our performance scores are not perfectly comparable to Osnabrugge et al. (2023), since we download the data ourselves (with slight differences, cf. Sec. 2.2) and thus have a different train/validation/test split.
### Data extraction from Manifesto Project
For conducting the experiments described in Sec. 3, we extract the manifestos ourselves from the Manifesto Project database using its dedicated R-package _manifestoR_(Lewandowski et al., 2020). Thus, as opposed to Osnabrugge et al. (2023), our corpus also includes additional information on the year and country of origin for each utterance. Our data sets include the 2018-2 version of the corpus (Krause et al., 2018), similar to Osnabrugge et al. (2023), as well as the most recent version (2022-1, Lehmann et al., 2022), resulting in \(n_{2018,en}=114,523\) for the seven English-speaking countries mentioned in Sec. 2.1 and \(n_{2018,all}=996,008\) in total. For the 2022 corpus, there are in total \(158,601\) English observations and \(1,504,721\) for all languages, respectively. Among those, \(n_{2022,en}=27,764\) observations from the period between 2019 and 2022 constitute our test set for the experiments across time for
the English language. We observe a difference of 887 samples between the data from Osnabrugge et al. (2023) (\(n_{source}=115,410\)) and our data set (\(n_{2018,en}=114,523\)), which is probably due to potential changes in the 2018 version of the database.
Figure 2 (Appendix A) visualizes the different label distributions for (a) the source corpus of Osnabrugge et al. (2023), (b) our extraction of the 2018-2 corpus, (c) our extraction of the 2022-1 corpus, and (d) the target corpus of the New Zealand speeches (Osnabrugge et al., 2023). While the former three roughly follow the same distribution, with about 57% of the observations assigned to either "_welfare and quality of life_" or "_economy_", the most common class of the latter is "_political system_" (\(\sim\)26%) followed by "_welfare and quality of life_" (\(\sim\)19%). Thus, the two main challenges aside from the domain transfer are the overall class imbalance as well as the differences between the source and target domain with respect to the label distribution. Further, Figure 3 (Appendix A) shows the distribution of the target classes separated by the language the manifestos are written in. We display the three most frequent languages, which we use for conducting experiments across country (cf. Sec. 3.1), against the distribution in the entire 2018-2 corpus of all manifestos. Here we observe some minor differences, as "_welfare and quality of life_" and "_political system_" are more frequently addressed in German-speaking countries (compared to the overall corpus), "_welfare and quality of life_" and "_economy_" in French-speaking ones, and "_political system_" and "_economy_" in English-speaking ones. Notably, for all three languages, the topics "_freedom and democracy_" and "_external relations_" are addressed less often than in the whole 2018-2 corpus.
## 3 Experimental Setup
In this section, we introduce the concept of domain transfer in general and in particular the cross-domain classification settings for our application. Further, the methodological background for the employed model architectures will be laid out as follows: First, we briefly review common feature engineering techniques for text data and elaborate on the advantages and disadvantages. These techniques include term-frequency inverse-document-frequency (TF-IDF) weighting, as well as dense word or document embeddings. Second, we introduce two state-of-the-art NLP architectures that we employ in our analysis, namely BERT (Devlin et al., 2019) and DistilBERT (Sanh et al., 2019), both of which do not require prior feature engineering steps but accommodate the whole pipeline in one single model. Finally, we briefly sketch the individual experiments which were carried out over the course of this study.
### Cross-Domain Classification
When talking about _classification_ in the context of machine learning, researchers commonly implicitly refer to within-domain/within-distribution classification, implying that the trained model is tested on data from the same origin/distribution as the training data (i.e. the _source domain_). Cross-domain classification, on the other hand, explicitly considers a shift in the domain/distribution/source of the data, i.e. the data-generating process is assumed to be different. Frequently examined cases of domain shift in NLP include a change in language (i.e. training the model on text from one language and evaluating it in another, cf. Conneau et al. (2018, 2019)), topic (e.g. training the model on reviews on restaurants and evaluating it on reviews on laptops, cf. Pontiki et al. (2014)) or genre (e.g. training on texts and evaluating on transcribed audio data, cf. Osnabrugge et al. (2023)). In our experiments, we contribute to this body of research by considering the following different cross-domain settings:
**Transfer across genre:** We consider party manifestos from all seven (English-speaking) countries as our source corpus \(C_{source}=C_{2018,en}\) and evaluate the trained model on a target corpus \(C_{target}\) of transcribed parliamentary speeches from New Zealand. This setting is equivalent to the work of Osnabrugge et al. (2023), yet we rely on more elaborate model architectures.
**Transfer across time:** We use the party manifestos from all countries for all years up until 2018 as source corpus \(C_{source}\)2, while the target corpus \(C_{target}\) consists of party manifestos from the years 2019-2022. This setting is intended to test the temporal robustness of the fine-tuned models.
Footnote 2: \(C_{source}\) is either \(C_{2018,en}\), \(C_{2018,de}\) or \(C_{2018,fr}\)
**Transfer across country:** This setup comprises three distinct experiments for different languages (English, German, French), for each of which we include data from all3 countries where manifestos in
the given language exist in the 2018-2 corpus. The setting for each language consists again of seven (five and four, respectively) different individual experiments, since for each language we include all but one country as source corpus \(C_{source}\) and evaluate the model on a target corpus \(C_{target}\) including only the manifestos from the single held-out country. Further, we also inspect a true multilingual model trained on data from all available countries.
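To make the leave-one-country-out protocol concrete, the following minimal Python sketch (ours, not taken from the project repository; the column names `text`, `label`, `country`, and `language` are assumptions) shows the split logic on a pandas data frame of quasi-sentences:

```python
import pandas as pd

def loco_splits(df: pd.DataFrame):
    """Yield (held-out country, training frame, test frame) triples:
    train on n-1 countries, evaluate on the single remaining one."""
    for country in sorted(df["country"].unique()):
        yield country, df[df["country"] != country], df[df["country"] == country]

# Hypothetical toy corpus with assumed column names.
corpus = pd.DataFrame({
    "text": ["We will lower taxes.", "More money for schools.", "We invest in health."],
    "label": ["economy", "welfare and quality of life", "welfare and quality of life"],
    "country": ["Ireland", "Austria", "New Zealand"],
    "language": ["en", "de", "en"],
})
for country, train, test in loco_splits(corpus[corpus["language"] == "en"]):
    print(country, len(train), len(test))
```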
**Metrics and Training:** We compare our results, which we measure in terms of Accuracy and Macro-F1 Score, from the cross-domain experiments to the performance we obtain for the within-domain setting. We opt for reporting the macro-averaged version of the F1 Score in order to take into account the class imbalance (cf. Fig. 2). For model training, we conduct a train/validation/test split with proportions .8/.1/.1; all reported performance values are measured on the test set. Note that, depending on the cross-domain setting, also different test sets than the random split are used. Table 1 summarizes the different investigated scenarios in a comprehensive manner, provides an overview of the respectively used corpora for training and evaluation, and specifies with which procedure the respective test sets were created or selected.
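A minimal sketch of this evaluation setup (ours; the arrays are toy placeholders) illustrating both the chained .8/.1/.1 split and the macro-averaged F1 Score, which weighs each of the eight topics equally regardless of its frequency:

```python
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# .8/.1/.1 split via two chained splits (indices stand in for quasi-sentences).
data = list(range(100))
train, rest = train_test_split(data, test_size=0.2, random_state=42)
val, test = train_test_split(rest, test_size=0.5, random_state=42)

# Macro-F1 averages per-class F1 scores, so rare classes count as much as large ones.
y_true = ["economy", "economy", "welfare and quality of life", "no topic"]
y_pred = ["economy", "welfare and quality of life",
          "welfare and quality of life", "economy"]
print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
```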
### Model architectures
Early feature engineering techniques relying on the bag-of-words (BoW) assumption have in recent years been replaced by more elaborate representation learning algorithms. BoW refers to counting the occurrences of words (or n-grams) in a document and representing it as a \(V\)-dimensional vector, where \(V\) is the vocabulary size. This representation can be enhanced via TF-IDF, as done by Osnabrugge et al. (2023), via a re-weighting using corpus-level occurrence statistics.
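The baseline of Osnabrugge et al. (2023) combines exactly these two steps; a minimal scikit-learn sketch of such a pipeline (our simplification, with illustrative hyperparameters rather than the tuned ones from the original study) could look as follows:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy quasi-sentences with topic labels; the real corpus contains ~115k of them.
texts = ["We will invest in public hospitals.", "Tax cuts for small businesses."]
labels = ["welfare and quality of life", "economy"]

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # TF-IDF-weighted uni- and bigrams
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)
print(baseline.predict(["More funding for care homes."]))
```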
With the advent of representation learning, it became possible to represent words (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2016) and documents (Le and Mikolov, 2014) by dense vectors of a comparably low, fixed dimensionality. These representations were used in conjunction with a classifier in a similar fashion as BoW-based representations. BERT (Devlin et al., 2019) enabled the coupling of these two steps, i.e. it provided one single end-to-end trainable model for learning (contextual) representations and training the classifier. The commonality of BERT and all subsequent architectures is that they all rely on the Transformer architecture (Vaswani et al., 2017). Based on BERT, DistilBERT models can be trained using model distillation (Bucilua et al., 2006; Hinton et al., 2015), a training process during which the smaller student model (DistilBERT) is trained to mimic the larger teacher model's (BERT) behavior. In the case of DistilBERT, the student model, while having half the size of its teacher model, is able to retain approximately 95% of the teacher model's performance on the GLUE benchmark (Sanh et al., 2019).
We use bert-base-cased as well as distilbert-base-cased for English. For further experiments, we employ distilbert-base-german-cased, flaubert_small_cased (as no French DistilBERT is available) and distilbert-base-multilingual-cased.
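A condensed sketch of the fine-tuning procedure with the Hugging Face `transformers` library (ours; the label encoding, hyperparameters, and toy examples are illustrative and not the exact settings behind the reported results):

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-cased"  # or e.g. distilbert-base-multilingual-cased
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=8)

texts = ["We will invest in public hospitals.", "Tax cuts for small businesses."]
labels = [4, 2]  # integer-encoded topics (illustrative mapping)

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class ManifestoDataset(torch.utils.data.Dataset):
    """Wraps tokenized quasi-sentences and their topic labels for the Trainer."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {key: val[idx] for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args,
        train_dataset=ManifestoDataset(enc, labels)).train()
```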
### Experiments
In the first step, we stick to the setup from Osnabrugge et al. (2023), extracting similar data, re-running their experiments, and comparing against their LR+TF-IDF baseline. We further compare
| Scenario | Corpus | Language(s) | Training set | Test set | Training set size | Test set size |
| --- | --- | --- | --- | --- | --- | --- |
| within-domain | 2018-2 | En, De, Fr | random split\({}^{a}\) | random split\({}^{b}\) | 91,618 / 104,710 / 17,885 | 11,452 / 13,089 / 2,236 |
| manifestos \(\rightarrow\) speeches | 2018-2 | En | random split\({}^{a}\) | speeches | 91,618 | 4,165 |
| 2018 \(\rightarrow\) 2022 | 2018-2 | En, De, Fr | random split\({}^{a}\) | future\({}^{c}\) | 91,618 / 104,710 / 17,885 | 27,764 / 30,542 / 343 |
| across country | 2018-2 | En, De, Fr | \(n-1\) countries | held-out country | –\({}^{d}\) | –\({}^{d}\) |
| Multilingual | 2018-2 | 38 languages | random split\({}^{a}\) | random split\({}^{b}\) | 796,806 | 99,601 |

\({}^{a}\) Here: .8/.1/.1, i.e. 80% of the 2018-2 data. \({}^{b}\) Here: .8/.1/.1, i.e. 10% of the 2018-2 data. \({}^{c}\) _"future"_ refers to all the data from the 2022-1 corpus recorded after the 2018-2 cut-off. \({}^{d}\) Multiple different scenarios; the test set contains one single country in each experiment.

Table 1: Overview of the different cross-domain scenarios investigated in this work, alongside the respectively used corpora, test sets, and examined languages.
the performance of BERT against the cheaper DistilBERT for the English within-domain setting and the English cross-domain settings (manifestos \(\rightarrow\) speeches, 2018 \(\rightarrow\) 2022, and across country) to assess the competitiveness of the latter one. For the cross-domain scenarios in the other languages (German, French) we thereafter conduct all experiments with DistilBERT, since it is the cheaper model. The concluding multilingual experiments on the complete corpus are also conducted using a DistilBERT model, fine-tuning the model on the train set of a random split of _the whole_ 2018-2 data set.
## 4 Results
This section will be structured as follows: First, we will show the superior within-domain performance of pre-trained BERT-based models over the simple baseline from Osnabrugge et al. (2023) and will closely inspect the per-class within-domain performances of the different models. In conjunction with this, we also compare our models to Osnabrugge et al. (2023) on the manifestos \(\rightarrow\) speeches scenario, since we adopt it from their work. This scenario we can, however, only inspect for the English language, as the corpus of speeches is from New Zealand. Second, we will verify if and how well experiments across genre and time work for the different monolingual models and the multilingual one. Third, we inspect closely how well performance can be transferred across different countries speaking the same language. Subsequently, we delve deeper into a truly multilingual setting by fine-tuning a pre-trained multilingual model on the entirety of the corpus and examining its performance for the different countries and languages.
**Within-domain performance:** The results of our experiments comparing different models for within-domain classification, manifestos \(\rightarrow\) speeches, and 2018 \(\rightarrow\) 2022 classification are presented in Table 2. For within-domain classification, the TF-IDF + LR model is clearly outperformed by the deep learning models, where the English models perform better than the German, French, and Multilingual ones. It is notable that, in general, the French model exhibits rather low performance values4 (within-domain as well as across time) compared to all other models, which may for one reason be caused by the relatively small corpus size for this language compared to all other ones (cf. Tab. 1). We also observe the expectedly higher performance of the English BERT model compared to the English DistilBERT, since it generally outperforms DistilBERT in all scenarios except for the accuracy in _manifestos \(\rightarrow\) speeches_ transfer. However, the performance gaps between these two models are rather small, which very well justifies the use of DistilBERT for the remainder of the experiments, trading some performance for saving computational expenses.5
Footnote 4: Note that this cannot be compared to the English TF-IDF + LR baseline due to different training and test sets.
Footnote 5: While training BERT for one epoch took roughly 1h 11min, DistilBERT nearly halved this training time per epoch to about 38min. Adding this up over three epochs amounts to time savings of nearly 100min.
When further considering the predictive performance separately for each of the eight classes (cf. Tab. 3), we learn that for none of the languages and none of the investigated scenarios was any of the monolingual DistilBERT models able to predict a single case of the highly underrepresented "_no topic_" class. The obvious reasons for this are the low number of observations as well as the potential ambiguity, heterogeneity, and fuzziness of the manifestos that could not be classified into one coherent class even by the human annotators but
| Model | Acc. (within-domain) | Macro-F1 (within-domain) | Acc. (manifestos \(\rightarrow\) speeches) | Macro-F1 (manifestos \(\rightarrow\) speeches) | Acc. (2018 \(\rightarrow\) 2022) | Macro-F1 (2018 \(\rightarrow\) 2022) |
| --- | --- | --- | --- | --- | --- | --- |
| TF-IDF + LR | 0.6413 | 0.5195 | 0.5059 (\(\downarrow\) 0.1354) | 0.4474 (\(\downarrow\) 0.0721) | – | – |
| English BERT | 0.6977 | 0.5841 | 0.5613 (\(\downarrow\) 0.1364) | 0.5046 (\(\downarrow\) 0.0795) | 0.6841 (\(\downarrow\) 0.0136) | 0.5707 (\(\downarrow\) 0.0134) |
| English DistilBERT | 0.6866 | 0.5694 | 0.5669 (\(\downarrow\) 0.1197) | 0.5026 (\(\downarrow\) 0.0668) | 0.6784 (\(\downarrow\) 0.0082) | 0.5620 (\(\downarrow\) 0.0074) |
| German DistilBERT | 0.6583 | 0.5628 | – | – | 0.6559 (\(\downarrow\) 0.0024) | 0.5485 (\(\downarrow\) 0.0143) |
| FlauBERT | 0.6087 | 0.5159 | – | – | 0.6093 (\(\uparrow\) 0.0006) | 0.4783 (\(\downarrow\) 0.0376) |
| Multilingual DistilBERT | 0.6748 | 0.5941 | – | – | 0.6311 (\(\downarrow\) 0.0437) | 0.5278 (\(\downarrow\) 0.0663) |

Table 2: Performance values (Accuracy and Macro-F1 Score) of TF-IDF + LR (Osnabrugge et al., 2023) versus English BERT and DistilBERT models (upper part) as well as for German DistilBERT and French FlauBERT models (middle part) and the multilingual DistilBERT model (lower part). Absolute in-/decrease versus the within-domain performance values is appended in parentheses.
were assigned to this catch-all category. This peculiarity of the results should always be taken into account when interpreting them, since the macro-averaged F1 Score tends to be a rather conservative performance measure, as it weighs the performance of this class similarly to all other classes. This also largely explains the quite notable gap between the Accuracies and Macro-F1 Scores (cf. Tab. 2).
The largest class (in terms of the number of observations) was easiest to classify for the DistilBERT models across all languages, i.e. for "_welfare and quality of life_" the overall highest values in \(P\), \(R\), and \(F1\) are observed. Interestingly, it is not the second largest class ("_economy_") where the models perform next best, but rather one of the smallest classes ("_external relations_"), which is nicely visualized by the highlighting in Table 3. Nevertheless, the models are capable of predicting also the "_economy_" class quite well. Further, it is interesting to observe that for the classes exhibiting high F1 Scores, the gap between recall and precision is (a) rather small and (b) sometimes even in favor of the recall, while for the low-performance classes, the recall often appears to be notably worse than the precision. This is most consistently observable for the class "_social groups_".
When compared to the monolingual models, the multilingual one stands out for two distinct reasons (cf. Tab. 3): First, it is the only one of the four models to detect at least _any_ true "_no topic_" observations in its test set. Although the performance for this particular class still is not great, it seems as if learning from more (and more diverse) data helps in this respect. Second, and probably also related to the first advantage, the performance seems to be more stable when comparing the scores across the different classes. While the ranges (excluding "_no topic_") of the F1 Score were 0.2290 and 0.1957 for the English and French models, respectively, this metric is, with a value of only 0.1556, comparably small for the multilingual model, similar to 0.1666 for the German language.
**Transfer across genre and time:** Inspecting the two cross-domain settings in Table 2 more closely, we see that transfer across the temporal axis works better than across the genre axis. While for the English DistilBERT model the performance on the New Zealand speeches drops by quite a margin (\(\downarrow\) 0.1197 / \(\downarrow\) 0.0668), it merely changes when evaluated on the data from a different time period (\(\downarrow\) 0.0082 / \(\downarrow\) 0.0074). Again, comparing BERT to DistilBERT, the latter even seems to be more stable over time, since the performance decrease is slightly less pronounced. For the cross-modal transfer scenario, we provide the confusion matrix (cf. Fig. 4 in Appendix B) to enable further error analysis. While the two most frequent classes are still very accurately predicted, the model severely struggles when it comes to distinguishing many of the other classes from the "_political system_" category. Even for the two largest classes, a notable amount of the instances were misclassified into this category. Further, the model's error of confusing a certain category with "_political system_" is even worse for the smaller classes, e.g. "_freedom and democracy_", with fewer samples.
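This kind of error analysis boils down to a confusion matrix over the eight topics; a minimal sketch (ours, with toy arrays in place of the real predictions on the New Zealand speeches):

```python
from sklearn.metrics import confusion_matrix

classes = ["no topic", "freedom and democracy", "external relations",
           "social groups", "political system", "fabric of society",
           "economy", "welfare and quality of life"]

y_true = ["economy", "freedom and democracy", "welfare and quality of life"]
y_pred = ["political system", "political system", "welfare and quality of life"]

# Rows correspond to true topics, columns to predicted topics.
print(confusion_matrix(y_true, y_pred, labels=classes))
```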
While this comparison of the scenarios across genre and across time cannot be made for the other languages and the multilingual scenario, we also observe only very minor drops in performance for the latter scenario there. For the two monolingual models, we record a decrease in accuracy of 0.24 percentage points for the German model and even a marginal increase for the accuracy of the French FlauBERT model, as well as decreases of 1.43 (German) and 3.76 (French) percentage points for
| Class | P (En) | R (En) | F1 (En) | P (De) | R (De) | F1 (De) | P (Fr) | R (Fr) | F1 (Fr) | P (Multi) | R (Multi) | F1 (Multi) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| No Topic | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.4142 | 0.1394 | 0.2086 |
| Freedom / Democracy | 0.6258 | 0.5318 | 0.5750 | 0.6631 | 0.6133 | 0.6372 | 0.6533 | 0.5868 | 0.6183 | 0.6165 | 0.5787 | 0.5970 |
| External Relations | **0.7395** | 0.7517 | _0.7456_ | **0.7429** | _0.7067_ | **0.7243** | **0.6688** | _0.6913_ | **0.6799** | **0.7357** | 0.7068 | _0.7209_ |
| Social Groups | 0.5794 | 0.5488 | 0.5637 | 0.6040 | 0.5370 | 0.5685 | 0.6034 | 0.4506 | 0.5160 | 0.6242 | 0.5372 | 0.5774 |
| Political System | 0.5629 | 0.4773 | 0.5166 | 0.6088 | 0.5145 | 0.5577 | 0.4407 | 0.5372 | 0.4842 | 0.6012 | 0.5646 | 0.5823 |
| Fabric of Society | 0.6463 | 0.6727 | 0.6592 | 0.5909 | 0.6496 | 0.6189 | 0.5485 | 0.4837 | 0.5140 | 0.6212 | 0.6092 | 0.6151 |
| Economy | 0.7269 | _0.7570_ | 0.7416 | _0.6882_ | 0.7009 | 0.6945 | 0.6270 | 0.6449 | 0.6358 | 0.6934 | _0.7449_ | 0.7182 |
| Welfare / Quality of Life | _0.7293_ | **0.7793** | **0.7534** | 0.6686 | **0.7379** | _0.7015_ | _0.6604_ | **0.6990** | _0.6791_ | _0.7151_ | **0.7517** | **0.7330** |

Table 3: A detailed performance report for per-class within-domain performance, measured in terms of Precision (P), Recall (R), and F1 Score, for the DistilBERT models in English and German, the French FlauBERT as well as for the multilingual DistilBERT. Best scores (per language and metric) in **bold**, runner-up in _italics_.
Macro-F1. The multilingual model, however, exhibits somewhat larger drops in performance of 4.37 percentage points for accuracy and 6.63 percentage points for Macro-F1, respectively.
**Transfer across countries:** The results of our LOCO experiments using the monolingual DistilBERT models for English and German, and a FlauBERT model for French, are presented in Table 4. We support the results by visualizations (cf. Fig. 1) of how the performance on manifestos from a certain country changes depending on whether we (a) evaluate on its portion of the random test split or (b) on all manifestos of this country as a hold-out set. The most important takeaway from these illustrations is the fact that completely withholding data from a certain country hurts model performance on data from this specific country, but not in equal parts for the different languages. For German-speaking countries (cf. Fig. 1, middle) the decrease from left to right is less pronounced than for the other two languages (Fig. 1, top/bottom).
The overall takeaway from the previous experiments (better performance for English) is not entirely confirmed by these results, which show a much more nuanced picture with interesting inter-country differences per language. For the LOCO scenario within the English-speaking countries, Australia and New Zealand exhibit the highest values for accuracy, while South Africa and Canada outperform the others with respect to Macro-F16. The two European countries and the United States overall show the worst performance with respect to both metrics. Further, it is worth noting that there is a rather high variation among these performance values compared to German and French. Excluding the "_no topic_" class, the values for accuracy exhibit a range of 0.0560, while the Macro-F1 Score has a range of 0.0686. On a final note, it is interesting to see that the performance on New Zealand _manifestos_ is among the top-ranking countries in accuracy, while the domain transfer across modalities (to New Zealand _parliamentary speeches_) shows a little bit of a performance decrease.
Footnote 6: Canada has better Macro-F1 Scores than most other countries (except for the top two), but comparably low accuracy.
The German LOCO classification experiments using DistilBERT exhibit somewhat different results compared to the English experiments. While the overall averages are comparable, the ranges (0.0415 for accuracy and 0.0344 for Macro-F1) indicate that the values for all countries are relatively similar, with Luxembourg having the highest accuracy of 0.6114 as well as the highest Macro-F1 Score of 0.5134. We speculate that the reason for this observation might lie (a) in the similarity of the political systems7 of all these countries and (b) in their geographical and cultural closeness. However, being no experts in political science, we would leave the definite interpretation of such matters to the experts. Regarding the overall performance, the German model performs no worse than the English model(s), which was not necessarily to be expected given our conclusions drawn from Tables 2 and 3.
Footnote 7: Despite Luxembourg being a parliamentary monarchy, the country still has a similar landscape of political parties compared to its neighbors, including i.a. social and Christian democrats, liberals, a Green party, as well as different smaller left- and right-wing parties.
A rather distinct picture emerges when inspecting the results for the French LOCO classification (still bearing in mind that the performance estimates for Switzerland, which are based on only 19 observations, might be rather unreliable). The range for accuracy is 0.2739 and 0.3466 for Macro-F1, which is notably larger than the ranges for both the English-speaking countries and the German-speaking countries. Switzerland exhibits by far the highest values, but it should again be
Figure 1: Comparison of the performance on data from specific English- (top), German- (middle), and French-speaking (bottom) countries via the Accuracy (left) and Macro-F1 Score (right). On the left-hand side of each subfigure, performance is measured on the portion of each country in the random test set, while on the right side, the country-specific LOCO performance is displayed. Lines are drawn between the respective points to visualize the connection within one country. Switzerland is excluded, since there is only one sample in the random test split.
noted that they are based on only 19 observations. The average values are comparable to, although a bit lower than, those of the other two languages, but again strongly influenced by the seemingly strong performance on Swiss manifestos. Regarding the other three countries, France itself stands out from the other two, exhibiting both the highest accuracy as well as the highest Macro-F1 Score among them.
## 5 Discussion and Limitations
The advent of large language models (LLMs), in particular ChatGPT (OpenAI, 2022; Bubeck et al., 2023), resulted in a paradigm change in NLP research. Since then, we can loosely categorize existing and newly introduced classification models into several bins: "pre-train/fine-tune", "prompting", and "chatting". While "pre-train/fine-tune" has been (and still widely is) the pre-dominant research paradigm in applied NLP research since \(\sim 2018\), "prompting" has, upon the introduction of GPT-3 (Brown et al., 2020), become an exciting approach for tackling (a) multi-task learning and (b) low-resource scenarios via few-/zero-shot learning. Further, accessing a model via prompting might be considered more "human-like" / "natural" than training a model on class labels via gradient descent.
On the other hand, there are still also numerous reasons not to abandon architectures relying on the "pre-train/fine-tune" paradigm (Yang et al., 2023), several of which we consider fulfilled as far as our research question is concerned. First, given the large, annotated training corpus, there is no need to rely on few-shot learning but rather to use all of the available data points to achieve maximum model performance. Prompting models would struggle with this amount of data due to context length constraints. Second, given the very custom-defined label set of political topics for this political corpus, for general-purpose prompting models this label set would always have to be in some way appended to the prompt for the model to be informed about the granularity in the first place. On the one hand, this would probably lead to the model struggling with learning the underlying concepts; on the other hand, it would lead to better adaptive capabilities in case the granularity changes. Third, for domain-specific research questions like this, it might not always be feasible for researchers to access the computational resources for running or prompting such large models, and hence a task-specific, parameter-efficient model that does the trick equally well might be preferable.
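For contrast, a prompting-style alternative to our fine-tuned classifiers would be zero-shot classification with an off-the-shelf NLI model; the sketch below (ours, using the publicly available `facebook/bart-large-mnli` checkpoint) only illustrates the paradigm and is not meant to reproduce our results:

```python
from transformers import pipeline

topics = ["freedom and democracy", "fabric of society", "economy",
          "political system", "welfare and quality of life",
          "social groups", "external relations", "no topic"]

zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = zero_shot("We will invest in public hospitals.", candidate_labels=topics)
print(result["labels"][0])  # highest-scoring topic
```

Note that the full label set has to be passed with every query, which is exactly the granularity issue discussed above.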
\begin{table}
\begin{tabular}{l c c|c c|c c|c c} \hline \hline & & & \multicolumn{2}{c}{English-LOCO} & \multicolumn{2}{c}{German-LOCO} & \multicolumn{2}{c}{French-LOCO} \\ & & & \multicolumn{2}{c}{(DistilBERT)} & \multicolumn{2}{c}{(DistilBERT)} & \multicolumn{2}{c}{(FlauBERT)} \\ \cline{2-9} & \(n_{random}\) & \(n_{country}\) & Accuracy & Macro-F1 & Accuracy & Macro-F1 & Accuracy & Macro-F1 \\ \hline Australia & 1,861 & 18,480 & **0.6304** & 0.4877 & – & – & – & – \\ Canada & 322 & 3,047 & 0.5829 & **0.5441** & – & – & – & – \\ Ireland & 2,548 & 25,357 & 0.5962 & 0.4895 & – & – & – & – \\ New Zealand & 2,840 & 28,561 & 0.6268 & 0.4761 & – & – & – & – \\ South Africa & 628 & 6,423 & 0.5997 & 0.4954 & – & – & – & – \\ United Kingdom & 2,182 & 21,836 & 0.6080 & 0.4924 & – & – & – & – \\ United States & 1,071 & 10,819 & 0.5744 & 0.4755 & – & – & – & – \\ \hline Austria & 3,361 & 33,818 & – & – & 0.6071 & 0.5077 & – & – \\ Germany & 6,452 & 63,413 & – & – & 0.6039 & 0.5060 & – & – \\ Italy & 63 & 651 & – & – & 0.5699 & 0.4733 & – & – \\ Luxembourg & 1,850 & 19,291 & – & – & **0.6114** & **0.5134** & – & – \\ Switzerland & 1,390 & 13,715 & – & – & 0.5754 & 0.4878 & – & – \\ \hline Canada & 517 & 5,386 & – & – & – & – & 0.4629 & 0.3822 \\ France & 850 & 8,290 & – & – & – & – & 0.5624 & 0.4511 \\ Luxembourg & 868 & 8,662 & – & – & – & – & 0.5179 & 0.3993 \\ Switzerland & 1 & 19 & – & – & – & – & **0.7368** & **0.7288** \\ \hline \hline
**Average** & & & **0.6026** & **0.4944** & **0.5935** & **0.4976** & **0.5700** & **0.4904** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Leave-one-country-out performance (Accuracy and Macro-F1) for English (7 countries), German (5 countries), and French (4 countries). Best scores per language in **bold**, runner-up underlined. We report both \(n_{random}\) for the number of observations in the random test split and \(n_{country}\) for the number of observations when the respective country is used as held-out set.
We further acknowledge that the performance could potentially still be increased using more elaborate models following the "pre-train/fine-tune" paradigm, e.g. variants of the T5 model family (Raffel et al., 2020; Xue et al., 2020). Using these models, however, comes at the cost of a higher computational expense, potentially requiring much more VRAM than the average practitioner has access to. The models we employ can, on the other hand, be fine-tuned comfortably on smaller GPUs with around 16GB of VRAM in an acceptable amount of time. Given the ever-increasing model sizes and thus also the computational requirements, this is an important issue to keep an eye on.
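To make the setup concrete, the following is a minimal sketch of this kind of fine-tuning with the Hugging Face transformers library; the checkpoint name, the two-sentence toy dataset, and the number of topic labels are illustrative placeholders of ours, not the actual Manifesto corpus or label scheme:

```python
# Minimal sketch of the "pre-train/fine-tune" setup discussed above.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

NUM_TOPICS = 8  # placeholder; the actual topic label set is project-specific
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=NUM_TOPICS)

# Tiny in-memory toy dataset standing in for the annotated corpus.
raw = Dataset.from_dict({
    "text": ["We will invest in renewable energy.",
             "Taxes on small businesses must be cut."],
    "label": [0, 1],
})
# Truncate to the model's maximum context length during tokenization.
ds = raw.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

args = TrainingArguments(output_dir="ckpt", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=ds,
        tokenizer=tokenizer).train()
```

Such a setup fits comfortably on the class of GPUs mentioned above, which is precisely the practical argument for this paradigm.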
## 6 Conclusion and Future Work
We showed in a series of extensive experiments that domain transfer along three different axes (genre, time, country) in principle works for this sort of political text. We observed the largest performance drops when attempting to generalize across genres; the models, however, tend to generalize very well across time. While the first finding might be foreseeable, the latter result is somewhat interesting, since after the time point we chose for splitting the data (2018), quite a few new topics, e.g. the global COVID-19 pandemic or the war in Ukraine, emerged. Regarding the generalization across countries, even within languages (and hence to some extent also cultural backgrounds), there seem to be notable differences between political communication in the different countries, as evidenced by the large performance differences. To conclude, we can state that a true multilingual approach towards classifying political text looks promising, yielding good and stable performance across numerous countries with different languages.
Interesting starting points for future work are to examine the capacities of the emerging, ever more powerful LLMs on challenging tasks like this, and to make use of the continuously extending data pool of the Manifesto project. Since new countries and time points are added constantly, there is definitely potential to extend our work in future research.
## Ethical considerations
To the best of our knowledge, no ethical considerations are implied by our work. The only aspect that is affected in a broader sense is the environmental impact of computationally expensive experiments. This issue naturally comes with pre-training large language models and is a concern that has to be raised in every work dealing with this sort of model. Our work, however, rather counteracts an increasing environmental impact, since we "only" focus on reusing existing pre-trained models and performing the cheap(er) fine-tuning step. Further, we also provide access to our fine-tuned models, which can be used by other researchers.
## Acknowledgements
This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of BERD@NFDI - grant number 460037581.
|
2309.09189 | Shuffling posets on trajectories (technical report) | Choreographies describe possible sequences of interactions among a set of
agents. We aim to join two lines of research on choreographies: the use of the
shuffle on trajectories operator to design more expressive choreographic
languages, and the use of models featuring partial orders, to compactly
represent concurrency between agents. Specifically, in this paper, we explore
the application of the shuffle on trajectories operator to individual posets,
and we give a characterisation of shuffles of posets which again yield an
individual poset. | Luc Edixhoven | 2023-09-17T07:30:17Z | http://arxiv.org/abs/2309.09189v1 | # Shuffling posets on trajectories
###### Abstract
Choreographies describe possible sequences of interactions among a set of agents. We aim to join two lines of research on choreographies: the use of the shuffle on trajectories operator to design more expressive choreographic languages, and the use of models featuring partial orders, to compactly represent concurrency between agents. Specifically, in this paper, we explore the application of the shuffle on trajectories operator to individual posets, and we give a characterisation of shuffles of posets which again yield an individual poset.
Keywords: Posets · Shuffle on trajectories · Concurrency

Technical report of a paper to be published in the proceedings of iFM 2023.
## 1 Introduction
Distributed systems are becoming ever more important. However, designing and implementing them is difficult. The complexity resulting from concurrency and dependencies among agents makes the process error-prone and debugging non-trivial. As a consequence, much research has been dedicated to analysing communication patterns, or protocols, among sets of agents in distributed systems. Examples of such research goals are to show the presence or absence of certain safety properties in a given system, to automate such analysis, and to guarantee the presence of desirable properties by construction.
Part of this research deals with _choreographies_. Choreographies can be used as global specifications for asynchronously communicating agents, and contain certain safety properties by construction. As a drawback, choreographic languages typically have limitations on their expressiveness, since they rely on grammatical constructs for their safety properties, which exclude some communication patterns. We have recently shown that the _shuffle on trajectories_ operator can be used to specify choreographies without compromising expressiveness [2]. Consequently, it could serve as a basis for more expressive choreographic languages.
Other recent work on choreographies includes the use of models featuring partial orders, such as event structures [1] and pomsets [6, 3], to represent and
analyse the behaviour of choreographies. By using a partial order to explicitly capture causal dependencies between pairs of actions, these models avoid the exponential blowup from, e.g., parallel composition of finite state machines.
We aim to join these two lines of research by extending the shuffle on trajectories operator from words, i.e., totally ordered traces, and languages to partially ordered traces and sets thereof. In this paper, as a first step, we explore the application of the shuffle on trajectories operator to individual partially ordered sets, or posets. The main challenge is that the resulting behaviour cannot always be represented as one poset and may require a set of them. In particular, we give a characterisation of shuffles of posets which again yield an individual poset.
OutlineWe recall the concept and definition of the shuffle on trajectories operator in Section 2. We briefly discuss posets in Section 3. In Section 4 we discuss how to apply the shuffle on trajectories operator to posets, and specifically which shuffles of posets will yield an individual poset as a result. Finally, we briefly discuss future work in Section 5.
The proofs of Proposition 1 and Lemma 1 can be found in the appendix.
## 2 Shuffle on trajectories
We recall the basic definitions from [2]. The shuffle on trajectories operator is a powerful variation of the traditional shuffle operator3, which adds a control trajectory (or a set thereof) to restrict the permitted orders of interleaving. This allows for fine-grained control over orderings when shuffling words or languages. The binary operator was defined -- and its properties thoroughly studied -- by Mateescu et al. [4]; a multiary variant was introduced slightly later [5].
Footnote 3: In concurrency theory, the shuffle operator is also known as free interleaving, non-communication merge, or parallel composition.
When defined on words, the shuffle on trajectories takes \(n\) words and a _trajectory_, which is a word over the alphabet \(\{1,\ldots,n\}\). This trajectory specifies the exact order of interleaving of the shuffled words: in Figure 1, the trajectory \(1221112112\) specifies that the result should first take a symbol from the first word, then from the second, then again from the second and so on.
Formally, let \(w_{1},\ldots,w_{n}\) be finite words over some alphabet and let \(t\) be a finite word over the alphabet \(\{1,\ldots,n\}\). Let \(\varepsilon\) be the empty word. Then:
\[\shuffle_{t}^{n}(w_{1},\ldots,w_{n})=\begin{cases}a\,\shuffle_{t^{\prime}}^{n}(w_{1},\ldots,w_{i}^{\prime},\ldots,w_{n})&\text{if }t=it^{\prime}\text{ and }w_{i}=aw_{i}^{\prime}\\ \varepsilon&\text{if }t=w_{1}=\ldots=w_{n}=\varepsilon\end{cases}\]
We note that \(\shuffle_{t}^{n}(w_{1},\ldots,w_{n})\) is only defined if the number of occurrences of \(i\) in \(t\) precisely matches the length of \(w_{i}\) for every \(i\). We then say that \(t\)_fits_\(w_{i}\).
Example 1:

* \(\shuffle_{121332}^{3}(ab,cd,\mathit{ef})=\mathit{acbefd}\), since \(121332\) fits every word.
* \(\shuffle_{121}^{2}(ab,cd)\) is undefined, since \(121\) does not fit \(cd\).
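This recursive definition translates directly into code. The following Python sketch (function name ours) computes the shuffle of \(n\) words along a trajectory and rejects trajectories that do not fit; it reproduces both Example 1 and the shuffle of Figure 1:

```python
# A direct transcription of the recursive definition of the shuffle on
# trajectories: consume the trajectory symbol by symbol, each time taking
# the next letter from the indicated word.
def shuffle_on_trajectory(t, *words):
    ws = [list(w) for w in words]
    out = []
    for i in t:  # t is a word over the alphabet {'1', ..., 'n'}
        k = int(i) - 1
        if not ws[k]:
            raise ValueError(f"trajectory does not fit word {k + 1}")
        out.append(ws[k].pop(0))
    if any(ws):  # some word still has unconsumed symbols
        raise ValueError("trajectory does not fit all words")
    return "".join(out)

assert shuffle_on_trajectory("121332", "ab", "cd", "ef") == "acbefd"
assert shuffle_on_trajectory("1221112112", "banana", "pear") == "bpeanaanar"
```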
The shuffle on trajectories operator naturally generalises to languages: the shuffle of a number of languages on a set (i.e., a language) of trajectories is defined as the set of all valid shuffles of words in the languages for which the trajectory fits all the words. Formally:
\[\shuffle_{T}^{n}(L_{1},\ldots,L_{n})=\{\shuffle_{t}^{n}(w_{1},\ldots,w_{n})\mid t\in T,w_{1}\in L_{1},\ldots,w_{n}\in L_{n}\}\]
As the operator's arity is clear from its operands, we typically omit it.
## 3 Posets
Partially ordered sets, or posets for short, consist of a set of nodes \(E\) (events), and a partial order1 \(\leq\) defining dependencies between pairs of events -- i.e., an event can only fire if all events preceding it in the partial order have already fired. We write \(a<b\) to denote that \(a\leq b\) and \(a\neq b\). We write \(a\geq b\) resp. \(a>b\) to denote that \(b\leq a\) resp. \(b<a\). We write \(a\not\gtrless b\) to denote that \(a\nleq b\) and \(b\nleq a\); we then say that \(a\) and \(b\) are _concurrent_. We occasionally write \(E_{P}\), \(\leq_{P}\), \(<_{P}\), \(\geq_{P}\), \(>_{P}\) and \(\not\gtrless_{P}\) to specify that the set of events or relation belongs to poset \(P\), but where this is clear from context we typically omit the subscript.
Footnote 1: Recall that a partial order is reflexive, transitive and antisymmetric.
The behaviour (or language) of a poset \(P\), written \(L(P)\), is the set of all maximal traces, i.e., maximal sequences of its events, that abide by \(\leq\). In this sense, posets can be considered a generalisation of words with concurrency: they feature a fixed set of symbols (events)2, but they can allow multiple orderings of them instead of only a single one. Concurrent events can happen in any order. Consequently, all traces obtained from a trace in \(L(P)\) by swapping adjacent concurrent events must also be in \(L(P)\). In fact, any trace in \(L(P)\) can be obtained from any other trace in \(L(P)\) in this fashion.
Figure 1: The shuffle of ‘banana’ and ‘pear’ over a trajectory \(1221112112\): ‘bpeanaanar’.
Example 2: For poset \(P_{ex}\) in Figure 2, \(E=\{a,b,c,d\}\) and the partial order consists of \(a\leq a\), \(a\leq c\), \(a\leq d\), \(b\leq b\), \(b\leq d\), \(c\leq c\) and \(d\leq d\). Its language \(L(P_{ex})\) consists of the traces \(abcd\), \(abdc\), \(acbd\), \(bacd\), and \(badc\).
We note that the dependencies in a poset can also be observed in its set of traces. For example, if \(a<b\) then \(a\) will precede \(b\) in every trace, and if \(a\not\gtrless b\) then there will both be traces where \(a\) precedes \(b\) and traces where \(b\) precedes \(a\). Formally, we can extract the following relation \(\leq_{L}\) from a set of traces \(L\subseteq E^{*}\):
\[\frac{\exists x,y,z\in E^{*}:xaybz\in L\qquad\forall\hat{x},\hat{y},\hat{z}\in E^{*}:\hat{x}b\hat{y}a\hat{z}\notin L}{a\leq_{L}b}\qquad\frac{a\leq_{L}b\qquad b\leq_{L}c}{a\leq_{L}c}\qquad\frac{}{a\leq_{L}a}\]
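As an illustration, the rules above can be executed directly. The following Python sketch (function name ours) extracts \(\leq_{L}\) from a finite set of traces; on the traces of Example 2 it recovers exactly the partial order of \(P_{ex}\):

```python
# Extract the relation <=_L from a finite set of maximal traces, following
# the rules above: consistent precedence, plus explicit transitive and
# reflexive closure.
from itertools import product

def extract_order(traces, events):
    leq = {(a, a) for a in events}                       # reflexivity
    for a, b in product(events, repeat=2):
        if a != b:
            before = any(t.index(a) < t.index(b) for t in traces)
            after = any(t.index(b) < t.index(a) for t in traces)
            if before and not after:
                leq.add((a, b))
    changed = True                                       # transitivity
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(leq), repeat=2):
            if b == c and (a, d) not in leq:
                leq.add((a, d))
                changed = True
    return leq

# Example 2: the five traces of P_ex recover exactly its partial order.
order = extract_order(["abcd", "abdc", "acbd", "bacd", "badc"], "abcd")
assert ("a", "c") in order and ("a", "d") in order and ("b", "d") in order
assert ("c", "d") not in order and ("d", "c") not in order  # c, d concurrent
```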
We then propose the following:
Proposition 1: _Let \(P=\langle E_{P},\leq_{P}\rangle\) be a poset. Then \(\leq_{L(P)}=\leq_{P}\)._
To model trajectories, which require duplicate symbols, we must also introduce _labelled_ posets, or _lposets_. In these, every event is assigned a label, which is not necessarily unique. Its traces then use these labels instead of the events.
## 4 Shuffling posets
As a first step towards shuffling posets, we first reinterpret shuffles on words as posets. In other words: we consider the case where all posets, including the trajectory, are totally ordered and thus consist of a single trace. This is shown in Figure 3, which features the shuffle from Figure 1 interpreted as a poset. The traces 'banana' and 'pear' are present as totally ordered parts of the poset, and the trajectory adds additional dependencies between the two, as shown by the vertical (and diagonal) arrows.
Generalising this to arbitrary posets and lposets is not trivial, but we have some knowledge to assist us. Crucially, since we can determine the language of a
Figure 2: Graphical representation of a number of posets and lposets, where an arrow from \(a\) to \(b\) should be read as \(a\leq b\). The partial order is the reflexive and transitive closure of the dependencies depicted by the arrows. For the lposets, the labels are shown rather than their events.
poset, it must be so that the result of shuffling posets yields the same language as the shuffle of the languages of these posets, which is defined in Section 2:
\[L(\shuffle_{P_{t}}(P_{1},\ldots,P_{n}))=\shuffle_{L(P_{t})}(L(P_{1}),\ldots,L(P_{n}))\]
If the result is an individual poset, by Proposition 1 it must then be:
\[\shuffle_{P_{t}}(P_{1},\ldots,P_{n})=\langle E_{P_{1}}\cup\ldots\cup E_{P_{n}},\leq_{\shuffle_{L(P_{t})}(L(P_{1}),\ldots,L(P_{n}))}\rangle\]
For example, consider \(\shuffle_{LP_{t_{1}}}(P_{1},P_{2})\), with \(LP_{t_{1}}\), \(P_{1}\) and \(P_{2}\) as in Figure 2. \(LP_{t_{1}}\) has traces \(11121\) and \(11112\), \(P_{1}\) has traces \(abcd\), \(acbd\) and \(cabd\), and \(P_{2}\) has a single trace \(e\). Shuffling these languages yields \(L_{1}=\{abced,acbed,cabed,abcde,acbde,cabde\}\). From this we extract \(\leq_{L_{1}}\), which contains all the dependencies present in \(P_{1}\) and \(P_{2}\) and, additionally, \(a\leq_{L_{1}}e\), \(b\leq_{L_{1}}e\) and \(c\leq_{L_{1}}e\). This corresponds to poset \(P_{r_{1}}\) in Figure 2, which indeed yields the language \(L_{1}\).
However, now consider \(\shuffle_{LP_{t_{2}}}(P_{1},P_{2})\), again as in Figure 2. \(LP_{t_{2}}\) has traces \(11211\), \(11121\) and \(11112\), which yields \(L_{2}=L_{1}\cup\{abecd,acebd,caebd\}\). From this we extract \(\leq_{L_{2}}\), which still contains all the dependencies in \(P_{1}\) and \(P_{2}\), but otherwise only \(a\leq_{L_{2}}e\): the traces \(abecd\) and \(acebd\) imply that \(b\) and \(c\) are concurrent with \(e\). However, then the trace \(aebcd\) should also be in \(L_{2}\), which it is not. We can then conclude from Proposition 1 that there exists no poset \(P\) such that \(L(P)=L_{2}\). In fact, \(L_{2}\) corresponds to a set of two posets, namely \(P_{r_{2a}}\) and \(P_{r_{2b}}\) in Figure 2.
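Reusing the two sketches above (shuffle_on_trajectory and extract_order), this computation can be replayed mechanically: enumerate the shuffled language, extract the candidate order, and compare its linear extensions against the language itself, as Proposition 1 demands. The check succeeds for \(L_{1}\) and fails for \(L_{2}\):

```python
# Enumerate all linear extensions of a finite order: repeatedly append an
# event whose strict predecessors have all been placed already.
def linear_extensions(events, leq, prefix=""):
    if len(prefix) == len(events):
        return [prefix]
    out = []
    for e in events:
        if e not in prefix and all(d in prefix for d in events
                                   if d != e and (d, e) in leq):
            out.extend(linear_extensions(events, leq, prefix + e))
    return out

L1 = {shuffle_on_trajectory(t, w, "e")
      for t in ("11121", "11112") for w in ("abcd", "acbd", "cabd")}
L2 = L1 | {shuffle_on_trajectory("11211", w, "e")
           for w in ("abcd", "acbd", "cabd")}
for L in (L1, L2):
    leq = extract_order(sorted(L), "abcde")
    print(set(linear_extensions("abcde", leq)) == L)  # True, then False
```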
We proceed by giving a characterisation of shuffles of posets for which the result corresponds to an individual poset. A key insight is that, if the result must correspond to an individual poset, then any two events which are concurrent in one of the operands of the shuffle must, in the resulting poset, have the same relation (\(<\), \(>\) or \(\not\gtrless\)) to any third event originating from another operand:
Lemma 1: _Let \(LP_{t}\) be an lposet and \(P_{1},\ldots,P_{n},P\) posets such that \(L(\shuffle_{LP_{t}}(P_{1},\ldots,P_{n}))=L(P)\) and \(L(P)\neq\emptyset\). If \(a,b\in E_{P_{i}}\) such that \(a\not\gtrless_{P_{i}}b\) and \(c\in E_{P_{j}}\) with \(i\neq j\), then either \(a,b<_{P}c\), or \(a,b>_{P}c\), or \(a,b\not\gtrless_{P}c\)._
We can then group the events in every \(P_{i}\) according to the reflexive and transitive closure of the concurrency relation \(\not\gtrless_{P_{i}}\); two events which are related in this closure then belong to the same group. Note that, while the events in a group are partially ordered, the groups of every \(P_{i}\) are, by construction, totally ordered. It follows from Lemma 1 that two events in the same group, even when not concurrent, must have the same relation to any event outside of their group
Figure 3: The figure on the left shows the shuffle from Figure 1 interpreted as a shuffle of posets. Indices have been added to duplicate symbols to make them unique. Some of the arrows are redundant but are kept to illustrate the general idea. The figure on the right shows the trajectory, \(1221112112\), as an lposet.
in \(P\). This in turn implies a similar condition on the trajectory lposet: any two \(i\)-labelled events in \(LP_{t}\) that can match two events from the same group of \(P_{i}\) must have the same relation to any \(j\)-labelled event in \(LP_{t}\) (where \(j\) is not necessarily unequal to \(i\)) that can match an event outside of their group.
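To make the grouping step concrete, the following Python sketch (names ours) computes the groups of a poset as the connected components of its concurrency relation, using a small union-find. For the poset \(P_{1}\) of Figure 2, whose order follows from its traces \(abcd\), \(acbd\) and \(cabd\), the events \(a\), \(b\) and \(c\) fall into one group (via \(a\not\gtrless c\) and \(b\not\gtrless c\)) and \(d\) into a second, and the two groups are indeed totally ordered:

```python
# Group the events of a poset by the reflexive-transitive closure of its
# concurrency relation, via a small union-find: concurrent events are
# merged, and the resulting components are the groups described above.
def concurrency_groups(events, leq):
    parent = {e: e for e in events}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a in events:
        for b in events:
            # a and b are concurrent iff neither a <= b nor b <= a holds
            if a != b and (a, b) not in leq and (b, a) not in leq:
                parent[find(a)] = find(b)
    groups = {}
    for e in events:
        groups.setdefault(find(e), set()).add(e)
    return list(groups.values())

# Order of P_1 (from its traces abcd, acbd, cabd): a<=b, a<=d, b<=d, c<=d.
leq = {(e, e) for e in "abcd"} | {("a", "b"), ("a", "d"),
                                  ("b", "d"), ("c", "d")}
print(concurrency_groups("abcd", leq))  # [{'a', 'b', 'c'}, {'d'}]
```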
Figure 4 shows \(P_{r_{1}}\), corresponding to \(\sqcup_{LP_{t_{1}}}(P_{1},P_{2})\), and \(LP_{t_{1}}\) (from Figure 2), both restructured to show the groups of \(P_{1}\) and \(P_{2}\). This demonstrates an interesting parallel with Figure 3: both feature horizontal traces with additional arrows specifying dependencies between components of these traces. However, in Figure 3 the components consist of individual events, whereas in Figure 4 the components consist of posets. In this sense, shuffles resulting in individual posets generalise shuffles on traces.
Concluding, we can then characterise shuffles on posets which result in individual posets as those where the trajectory lposet is structured along the operand posets' groups, as in Figure 4, possibly with dependencies between different operands' groups.
## 5 Future work
Now that we have studied shuffles of posets resulting in individual posets, there are two evident avenues for future work: (1) shuffles of lposets, where one label may occur multiple times rather than just considering orderings of unique events, and (2) shuffles of posets resulting in sets of posets and shuffles of sets of posets, where the main challenge may be to minimise the resulting number of posets.
|
2307.16505 | Emergence of stable meron quartets in twisted magnets | The investigation of twist engineering in easy-axis magnetic systems has
revealed the remarkable potential for generating topological spin textures,
such as magnetic skyrmions. Here, by implementing twist engineering in
easy-plane magnets, we introduce a novel approach to achieve fractional
topological spin textures such as merons. Through atomistic spin simulations on
twisted bilayer magnets, we demonstrate the formation of a stable double meron
pair in two magnetic layers, which we refer to as the "Meron Quartet" (MQ).
Unlike merons in a single pair, which is unstable against pair annihilation,
the merons within the MQ exhibit exceptional stability against pair
annihilation due to the protective localization mechanism induced by the twist
that prevents the collision of the meron cores. Furthermore, we showcase that
the stability of the MQ can be enhanced by adjusting the twist angle, resulting
in increased resistance to external perturbations such as external magnetic
fields. Our findings highlight the twisted magnet as a promising platform for
investigating the intriguing properties of merons, enabling their realization
as stable magnetic quasiparticles in van der Waals magnets. | Kyoung-Min Kim, Gyungchoon Go, Moon Jip Park, Se Kwon Kim | 2023-07-31T09:02:09Z | http://arxiv.org/abs/2307.16505v1 | # Emergence of stable meron quartets in twisted magnets
###### Abstract
The investigation of twist engineering in easy-axis magnetic systems has revealed the remarkable potential for generating topological spin textures, such as magnetic skyrmions. Here, by implementing twist engineering in easy-plane magnets, we introduce a novel approach to achieve fractional topological spin textures such as merons. Through atomistic spin simulations on twisted bilayer magnets, we demonstrate the formation of a stable double meron pair in two magnetic layers, which we refer to as the "Meron Quartet" (MQ). Unlike merons in a single pair, which is unstable against pair annihilation, the merons within the MQ exhibit exceptional stability against pair annihilation due to the protective localization mechanism induced by the twist that prevents the collision of the meron cores. Furthermore, we showcase that the stability of the MQ can be enhanced by adjusting the twist angle, resulting in increased resistance to external perturbations such as external magnetic fields. Our findings highlight the twisted magnet as a promising platform for investigating the intriguing properties of merons, enabling their realization as stable magnetic quasiparticles in van der Waals magnets.
**Keywords:** van der Waals magnet, moire magnet, twist engineering, magnetic vortex, meron, topological spin texture
###### Contents
* 1 Introduction
* 2 Results
* 2.1 Antiferromagnetic domain array
* 2.2 Emergence of stable merons: meron quartets
* 2.3 Stability of meron quartet states
* 2.4 Diverse forms of meron quartets
* 3 Discussion
* 4 Methods
* 4.1 Moire superlattice
* 4.2 Interlayer Heisenberg exchange interactions
* 4.3 Iterative optimization method
* 4.4 Local magnetic energy maps
* 4.5 Determination of FM-MD phase boundary
* 4.6 Critical twist angle formula for FM-MD transition
* 4.7 Determination of MD-MQ phase boundary
* 4.8 Determination of critical field strengths for MQ-MD transition
* 4.9 MQ states with different skyrmion numbers
* 5 Supplementary information
* 5.1 Supplementary Video 1
* 5.2 Supplementary Video 2
List of Figures
* 1 Schematic illustration
* 2 Emergence of AFM domain array
* 3 Emergence of stable meron-antimeron pairs
* 4 Comparison of MQ and MD
* 5 Stability of MQ
* 6 Ext. Data: Interlayer exchange interactions
* 7 Ext. Data: Relaxation method
* 8 Ext. Data: FM-MD transition
* 9 Ext. Data: MD-MQ transition
* 10 Ext. Data: Critical field for MQ-MD transition
* 11 Ext. Data: MQ states with different skyrmion numbers
## 1 Introduction
The search for novel spin configurations motivated by fundamental interest and technological applications has led to the discovery of intriguing textures with nontrivial topology in various magnetic systems [1]. Specifically, vortex-type topological spin textures, so-called merons, have been observed in confined magnetic disks [2, 3, 4, 5, 6] and continuous thin films [7, 8, 9, 10, 11, 12, 13]. Recently, monolayer chromium trichloride (CrCl\({}_{3}\)), a two-dimensional (2D) van der Waals (vdW) magnetic crystal, has emerged as a promising candidate for achieving merons in this novel atomically thin limit [14, 15, 16]. The intrinsic easy-plane magnetic anisotropy in such a system offers a pathway to attain in-plane swirling spin textures for merons. However, these merons are inherently unstable against pair annihilation and possess only a limited lifespan [16]. The mechanism responsible for stabilizing such merons has yet to be definitively determined, hampering future exploration of merons in 2D vdW magnets.
The field of twist engineering has opened up a fascinating realm of possibilities in generating topological spin textures in 2D vdW magnets. By harnessing moire patterns, researchers have demonstrated the creation of skyrmion spin textures in lattice-mismatched heterostructures [17, 18, 19] and twisted homo-bilayer systems [20, 21, 22, 23, 24, 25], with a primary focus on Ising-type easy-axis magnetic systems. However, extending this approach to XY-type easy-plane magnets holds tremendous intrigue, as it enables exploration of captivating phenomena such as the Berezinskii-Kosterlitz-Thouless transition [26, 27], the emergence of merons [15, 16], and the potential discovery of hidden magnetic phases driven by strong spin fluctuations [28]. The recent discovery of twisted magnets further enhances the allure, prompting a continued investigation into the captivating moire effects in these systems [29, 30, 31, 32].
In this study, we investigate twist engineering in vdW magnets as a promising avenue to realize stable merons. By conducting atomistic spin simulations, we demonstrate that antiferromagnetic domain arrays in the twisted magnet [23, 24, 29, 30, 31, 32, 33, 34] can be utilized to localize the cores of merons along the boundaries of their respective domains (Fig. 1**a-b**). This localization mechanism effectively preserves the stable spin configuration of the meron pair by separating their cores (Fig. 1**c**). Furthermore, we show that the stability of merons can be tuned by adjusting the twist angle, providing controllable resistance against external magnetic fields. These findings present a promising avenue for achieving stable merons as magnetic quasiparticles in vdW magnets, offering the opportunity to explore their captivating properties with significant flexibility through external stimuli [35, 36] or the creation of heterostructures [37].
Figure 1: **Schematic illustration of a stable meron-antimeron pair in a twisted magnet.****a:** Twist-induced antiferromagnetic (AFM) domain array in a ferromagnetic (FM) order background. Red and blue colors indicate parallel and antiparallel spin alignments between the top and bottom layers, respectively. **b:** Emergence of a stable meron-antimeron pair. Arrows and circles depict their in-plane winding textures and core positions, respectively. **c:** Schematic energy landscape for localizing the cores within different AFM domains (left), and its schematic profile along the dashed line (right).
## 2 Results
### Antiferromagnetic domain array
We construct twisted bilayer magnets by rotating two magnetic layers in a honeycomb lattice with a relative twist angle (Fig. 2**a**). These twisted magnets can be effectively described using a Heisenberg spin model given by [24]:
\[H=-\frac{J}{2}\sum_{l=t,b}\sum_{\langle i,j\rangle}\mathbf{S}_{i}^{l}\cdot\mathbf{S}_{j}^{l}+A\sum_{l=t,b}\sum_{i}\left(\mathbf{S}_{i}^{l}\cdot\hat{z}\right)^{2}+\sum_{i,j}J_{ij}^{\perp}\mathbf{S}_{i}^{t}\cdot\mathbf{S}_{j}^{b}. \tag{1}\]
Here, \(\mathbf{S}_{i}^{l}\) represents the spin at site \(i\) on the top layer (\(l=t\)) and the bottom layer (\(l=b\)). \(J\) represents the intralayer FM exchange interactions between nearest-neighbor spins. \(A=0.1\) meV represents the single-ion
Figure 2: **Emergence of AFM domain array.** **a**: Moiré superlattice for a twist angle \(\theta=5.08\)°. The colored circles denote local stacking patterns, including AA (green), AB (blue), BA (cyan), and monoclinic (red). The yellow rhombus and black arrows denote the unit cell and lattice vectors of the moiré superlattice, respectively. **b**: Modulation of the local interlayer exchange energy (\(J_{i}^{\perp}\)) computed for \(\theta=1.02\)°, with blue (red) color representing FM (AFM) coupling. **c**: Zero-temperature magnetic phase diagram depicting the FM phase (white) and the magnetic domain phase (green), as a function of twist angle (\(\theta\)) and intralayer exchange (\(J\)). The markers represent the phase boundary obtained from numerical simulations, while the dotted line indicates the fitting curve computed from an effective continuum model (see Methods 4.6). **d-f**: Ground-state spin configuration in the magnetic domain phase for \(J=2\) meV and \(\theta=1.61\)°. **d-e**: The color scales denote the phase angles (\(\phi_{t,b}\)) of the normalized spin vectors \(\mathbf{n}_{t,b}=(\cos\phi_{t,b},\sin\phi_{t,b},0)\) in the top (**d**) and the bottom (**e**) layers, respectively. In the magnified images, the arrows denote the direction of \(\mathbf{n}_{t}\) and \(\mathbf{n}_{b}\), and the yellow areas highlight the domain walls with \(90\)° or \(-90\)° spin rotations. Here, "t" and "b" represent the top and bottom layers, respectively. **f**: The color scale denotes the relative orientation of the spin vectors between the two layers (\(\mathbf{n}_{t}\cdot\mathbf{n}_{b}\)), where red (blue) represents parallel (antiparallel) alignment. The magnified image highlights the correspondence between the interlayer exchange and the magnetic domain structure.
anisotropy energy favoring in-plane magnetization. \(J_{ij}^{\perp}\) represent the interlayer exchange interactions, which are adopted from previous ab-initio calculations on bilayer CrI\({}_{3}\)[24]. Due to so-called stacking-dependent interlayer magnetism [21, 38, 39, 40, 41, 42], \(J_{ij}^{\perp}\) switch from FM to AFM coupling depending on the local stacking pattern between the two magnetic layers (Methods 4.2). Consequently, the interlayer exchange coupling exhibits the coexistence of AFM and FM interactions in the moire superlattice accommodating various local stacking patterns (Fig. 2**a**) [20, 21, 22, 23, 24, 25, 34]. We illustrate this behavior in Fig. 2**b** through the map of the local interlayer exchange energy \(J_{i}^{\perp}=\sum_{j}J_{ij}^{\perp}\) computed in an FM configuration \(\mathbf{S}_{i}^{l}=S\hat{z}\)[20, 22, 24, 34]. Specifically, in the monoclinic stacking region (red patches), \(J_{i}^{\perp}\) exhibits AFM character (\(J_{i}^{\perp}>0\)), indicating a tendency for the spins in the top and bottom layers to align antiparallel to each other. Conversely, in the other stacking regions, \(J_{i}^{\perp}\) exhibits FM character (\(J_{i}^{\perp}<0\)), signifying a preference for parallel alignment. As a result, the twisted magnet embeds local AFM patches in a background of FM coupling (Fig. 2**b**).
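For reference, a minimal NumPy sketch of evaluating Eq. (1) for a given configuration is shown below; the bond list and the sparse interlayer couplings are toy placeholders that would be precomputed for the moiré superlattice, both layers are assumed to share the same intralayer bond list in their own site indexing, and spins are taken as unit vectors:

```python
import numpy as np

def energy(S_t, S_b, bonds, J, A, J_perp):
    """Evaluate Eq. (1) for unit spins S_t, S_b of shape (N, 3).
    bonds: nearest-neighbour pairs (i, j), each unordered pair listed once
    (the 1/2 in Eq. (1) compensates double counting, so each bond here
    contributes -J once). J_perp: sparse dict {(i, j): value}."""
    E = 0.0
    for S in (S_t, S_b):
        for i, j in bonds:                    # intralayer FM exchange
            E -= J * np.dot(S[i], S[j])
        E += A * np.sum(S[:, 2] ** 2)         # easy-plane anisotropy
    for (i, j), Jp in J_perp.items():         # interlayer exchange
        E += Jp * np.dot(S_t[i], S_b[j])
    return float(E)

# Toy usage on a random 4-site configuration with made-up couplings.
rng = np.random.default_rng(0)
S = rng.normal(size=(4, 3)); S /= np.linalg.norm(S, axis=1, keepdims=True)
print(energy(S, S.copy(), bonds=[(0, 1), (1, 2), (2, 3)],
             J=2.0, A=0.1, J_perp={(0, 0): -1.0, (2, 2): +1.0}))
```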
In this work, we investigate the influence of the AFM patches on the stabilization of merons. We first identify the magnetic phases of twisted easy-plane magnets. Our atomistic simulations on Eq. (1), conducted using an iterative optimization method (Methods 4.3), reveal the zero-temperature magnetic phase diagram shown in Fig. 2**c**. Within this diagram, we observe two distinct magnetic phases: an FM phase and a magnetic domain (MD) phase. The FM phase exhibits a uniform spin configuration with parallel alignment between the spins of the top and bottom layers. On the other hand, the MD phase exhibits antiparallel alignment within the AFM patches, while maintaining parallel alignment outside these patches to minimize the interlayer exchange energy (Fig. 2**d**-**f**). This contrasting AFM-FM order results in the formation of AFM domains within each AFM patch as well as domain walls surrounding the domains, characterized by spin rotations of \(90^{\circ}\) and \(-90^{\circ}\). Furthermore, the AFM domains are arranged into an array structure across the superlattice, resembling a Kagome lattice. We dub this distinctive magnetic structure an AFM domain array.
We attribute the emergence of the AFM domain array to the amplified effect of interlayer exchange in the small twist angle regime (\(\theta<\theta_{c1}\sim\sqrt{\bar{J}_{\perp}/J}\)). In this regime, the formation of an AFM domain becomes energetically favorable as the reduction in the interlayer exchange energy (\(\Delta E_{\rm inter}\sim-\bar{J}_{\perp}\frac{L^{2}}{a^{2}}\sim-\bar{J}_{\perp}\theta^{-2}\)) outweighs the increase in the intralayer exchange energy (\(\Delta E_{\rm intra}\sim J\)). Here, we consider \(\bar{J}_{\perp}\) as the average value of \(J_{i}^{\perp}\) within the AFM patch. The patch is approximated as a disk with a radius denoted by \(L\sim a/\theta\), where \(a\) represents the lattice constant of the honeycomb lattice. Despite interlayer exchange being weaker than intralayer exchange, its effect is significantly amplified by the large size of the AFM patch (\(\pi L^{2}\)). Consequently, the formation of the AFM domain array is expected when the twist angle is sufficiently small. This phenomenon manifests in the phase diagram, which exhibits a consistent relationship \(\theta_{c1}\sim\sqrt{\bar{J}_{\perp}/J}\) for the phase boundary between the FM and MD phases (dashed line in Fig. 2**c**).
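The ground states above are obtained with the iterative optimization of Methods 4.3, which is not reproduced in this excerpt; the following is therefore only a plausible sketch, under our own data layout, of one local-field relaxation sweep, in which each spin is aligned with \(\mathbf{h}_{i}=-\partial H/\partial\mathbf{S}_{i}\) derived from Eq. (1):

```python
# A plausible sketch (our data layout, not the authors' method) of one
# relaxation sweep over the top layer; the bottom-layer sweep is analogous
# with the transposed interlayer coupling lists.
import numpy as np

def relax_sweep_top(S_t, S_b, nbrs_t, J, A, J_perp_rows):
    """S_t, S_b: (N, 3) unit spins. nbrs_t[i]: neighbour indices of top
    site i. J_perp_rows[i]: list of (bottom_index, coupling) pairs."""
    for i in range(len(S_t)):
        h = J * S_t[nbrs_t[i]].sum(axis=0)     # intralayer exchange field
        h[2] -= 2.0 * A * S_t[i, 2]            # easy-plane anisotropy term
        for j, Jp in J_perp_rows[i]:           # interlayer exchange field
            h -= Jp * S_b[j]
        norm = np.linalg.norm(h)
        if norm > 0.0:
            S_t[i] = h / norm                  # align spin with -dH/dS_i
```

Sweeping both layers repeatedly until the configuration stops changing drives the system into a local energy minimum such as the MD state of Fig. 2**d-f**.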
### Emergence of stable merons: meron quartets
A meron is a vortex-like topological spin texture, characterized by an integer winding number. In contrast to conventional magnetic vortices, the core of a meron exhibits an out-of-plane polarization, resulting in a unique half-skyrmion number denoted as [12]:
\[Q=\frac{1}{4\pi}\int dx\int dy(\partial_{x}\mathbf{n}\times\partial_{y} \mathbf{n})\cdot\mathbf{n}=p\cdot w=-\frac{1}{2}. \tag{2}\]
Here, the vector field \(\mathbf{n}=(\sin\vartheta\cos\varphi,\sin\vartheta\sin\varphi,\cos\vartheta)\) represents the orientation of spins. The polarity \(p\) and vorticity \(w\) of \(\mathbf{n}\) are defined by \(p=\frac{1}{2}[\cos\vartheta(r=\infty)-\cos\vartheta(r=0)]\) and \(w=\frac{1}{2\pi}\oint_{\gamma}d\mathbf{l}\cdot\nabla\varphi\), respectively, where \(\gamma\) is any contour that encircles the core. Merons possess two distinct characteristics, corresponding to the combinations \((w,p)=\left\{(+1,-\frac{1}{2}),(-1,+\frac{1}{2})\right\}\). Antimerons are counterparts to merons, possessing an opposing skyrmion number of \(Q=+\frac{1}{2}\) with two distinct characteristics \((w,p)=\left\{(-1,-\frac{1}{2}),(+1,+\frac{1}{2})\right\}\).
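Eq. (2) can be evaluated on a discretized texture with finite differences; the following NumPy sketch (names ours) does so, and applying it to a synthetic vortex with an out-of-plane core indeed yields a charge of magnitude \(1/2\):

```python
# Discretized evaluation of Eq. (2): per-pixel finite differences make the
# grid spacing cancel, so the plain sum already approximates the integral.
import numpy as np

def skyrmion_number(n):
    dx = np.gradient(n, axis=1)   # dn/dx (per pixel)
    dy = np.gradient(n, axis=0)   # dn/dy (per pixel)
    density = np.einsum("ijk,ijk->ij", np.cross(dx, dy), n)
    return density.sum() / (4.0 * np.pi)

# Synthetic vortex with vorticity w = +1 and an out-of-plane core,
# in-plane beyond a core radius of 0.5 (a meron-like texture).
y, x = np.mgrid[-1:1:201j, -1:1:201j]
r, phi = np.hypot(x, y), np.arctan2(y, x)
theta = 0.5 * np.pi * np.minimum(r / 0.5, 1.0)
n = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print(skyrmion_number(n))  # magnitude ~ 0.5: the half-integer meron charge
```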
In continuous magnetic systems, merons and antimerons typically exist as pairs with opposite winding numbers (\(w=+1\) and \(w=-1\)) [7, 8, 9, 10, 11, 12, 13]. The formation of such pairs allows their swirling spin textures to cancel out away from the cores, resulting in localized spin configurations with finite energy. However, the mutual attraction between the cores renders the pairs inherently unstable, leading to pair annihilation during magnetization dynamics [43]. Consequently, in conventional untwisted magnetic systems, these magnetic textures are usually observed as transient states with a limited lifespan [7, 8, 13, 16].
In this work, we discover that the merons and antimerons in twisted magnets can evade pair annihilation by forming a double meron pair in two magnetic layers. To illustrate the emergence of such stable merons, we present a typical relaxation process of the magnetic state in Fig. 3, which is obtained through the relaxation of a random initial configuration (Methods 4.7). In the intermediate magnetic state (Fig. 3**a**-**b**),
we observe the spontaneous formation of four merons (M\({}_{\rm t1}\), M\({}_{\rm t2}\), M\({}_{\rm b1}\), and M\({}_{\rm b2}\)) and four antimerons (\(\overline{\rm M}_{\rm t1}\), \(\overline{\rm M}_{\rm t2}\), \(\overline{\rm M}_{\rm b1}\), and \(\overline{\rm M}_{\rm b2}\)) on the top and bottom layers. Upon subsequent relaxation, the intra-patch pairs, i.e. the meron-antimeron pairs occupying the same AFM patch within the same layer such as M\({}_{\rm t2}\)-\(\overline{\rm M}_{\rm t2}\) and M\({}_{\rm b2}\)-\(\overline{\rm M}_{\rm b2}\), undergo pair annihilation due to the attractive interactions driven by the intralayer exchange interactions. However, the inter-patch pairs, i.e. the meron-antimeron pairs occupying different AFM patches within the same layer such as M\({}_{\rm t1}\)-\(\overline{\rm M}_{\rm t1}\) and M\({}_{\rm b1}\)-\(\overline{\rm M}_{\rm b1}\), remain robust against pair annihilation due to their mutual correlation facilitated by the interlayer coupling. As a result, the fully relaxed state (Fig. 3**c-d**) accommodates only this correlated double meron pair.
We attribute the stabilization of the double meron pair to the localization of their cores. During the relaxation process (Supplementary Video 1), we observe that the cores shift exclusively along the boundaries of the AFM patches. This behavior arises from the bulk energy minimization condition: the requirement to minimize the interlayer exchange energy over the bulk region by maintaining the AFM domain configuration, i.e. antiparallel and parallel alignments inside and outside the AFM patches as depicted in Fig. 2**f**. Enforcing
Figure 3: **Snapshots depicting the relaxation process and the formation of stable meron-antimeron pairs.** **a-b:** Intermediate state illustrating the spontaneous generation of merons (M\({}_{\rm t1}\), M\({}_{\rm t2}\), M\({}_{\rm b1}\), and M\({}_{\rm b2}\)) and antimerons (\(\overline{\rm M}_{\rm t1}\), \(\overline{\rm M}_{\rm t2}\), \(\overline{\rm M}_{\rm b1}\), and \(\overline{\rm M}_{\rm b2}\)). **c-d:** Fully-relaxed state displaying stabilized meron-antimeron pairs (M\({}_{\rm t1}\)–\(\overline{\rm M}_{\rm t1}\), M\({}_{\rm b1}\)–\(\overline{\rm M}_{\rm b1}\)), with the annihilation of pairs M\({}_{\rm t2}\)–\(\overline{\rm M}_{\rm t2}\) and M\({}_{\rm b2}\)–\(\overline{\rm M}_{\rm b2}\). Panels **a,c** and **b,d** represent the top and bottom layers, respectively, corresponding to a magnified area shown in Fig. 2**d-e**. Arrows indicate in-plane components, while the color scale in the markers represents out-of-plane components. Marker sizes are adjusted for better visibility. Shaded areas indicate AFM patches. The parameters \(J=2\) meV and \(\theta=1.61\)° are utilized.
such a condition localizes the cores within their respective AFM patches by constraining their motions along the patch boundaries and prohibiting them from transferring to other patches. This localization mechanism preserves the inter-patch pairs by ensuring the separation of their cores and protecting them against pair annihilation. However, the localization mechanism cannot preserve the intra-patch pairs, as their cores reside within the same patch (Supplementary Video 2).
The localization mechanism of the meron pair can be understood by considering the effective confining potential arising from the interlayer coupling (Fig. 1**c**). The cores of merons in one layer (e.g., the top layer) experience an effective potential generated by the other layer (e.g., the bottom layer) through the interlayer exchange coupling (Fig. 4**m**). This potential reaches its minimum energy along the boundaries of the AFM patches due to the bulk energy minimization condition. Any displacement of the cores away from these boundaries results in increased energy relative to the minimum, creating potential wells along the AFM patch boundaries. These wells act as confining forces, effectively localizing the cores within them. Furthermore, the establishment of such potential wells requires the presence of two counterpart merons in the bottom layer to facilitate the bulk energy minimization condition. Consequently, the creation of four merons is required to protect the merons via the confining potential.
Based on our findings, we introduce a novel magnetic state dubbed the "Meron Quartet" (MQ) state, which consists of four merons, two for each layer, as depicted in Fig. 4. This state exhibits two key characteristics: Firstly, each layer contains two vortices with opposite winding numbers (\(w=+1\) and \(w=-1\)), ensuring the total winding number cancels out. Secondly, each occupied AFM patch harbors two vortices with the same winding numbers (\(w=+1\) or \(w=-1\)) in both the top and bottom layers. These specific arrangements enable the MQ state to realize stable meron pairs through the implementation of the confining potential. Furthermore, the MQ state minimizes the interlayer exchange energy over the bulk region (Fig. 4**c,i**), as observed in the ground MD state (Fig. 4**f,l**). The distinction between the MQ and MD states lies solely in the localized core energy (Fig. 4**g-h** vs. **j-k**). As a result, the MQ state exhibits a low magnetic energy of -3.064 meV per spin, comparable to the energy of the ground MD state (-3.071 meV), despite its
Figure 4: **Comparison of meron quartet (MQ) state to magnetic domain (MD) state.** **a-c/d-f**: Spin configurations of the MQ (**a-c**) and MD (**d-f**) state, corresponding to Fig. 3**b,d** and Fig. 2**d-f**, respectively. **a-b/d-e**: Phase angles (\(\phi_{t,b}\)) of the normalized spin vectors (\(\mathbf{n}_{t,b}=(\cos\phi_{t,b},\sin\phi_{t,b},0)\)) in the top (**a/d**) and bottom (**b/e**) layers. In **a-b**, the black color indicates out-of-plane polarization. **c/f**: Relative orientation (\(\mathbf{n}_{t}\cdot\mathbf{n}_{b}\)) between the top and bottom layers, with red (blue) denoting parallel (antiparallel) alignment. **g-i/j-l**: Local magnetic energy maps corresponding to the spin configurations (**a-c/d-f**). **g-h/j-k**: Intralayer exchange energy plus single-ion anisotropy energy (E\({}_{\text{t,b}}\)) in the top (**g/j**) and bottom (**h/k**) layers, respectively. **i/l**: Interlayer exchange energy (E\({}_{\perp}\)). In **a-i**, the dashed lines denote the boundaries of AFM patches. **m/n**: Schematic illustration of the MQ (**m**) and MD (**n**) states across a single AFM patch, corresponding to the dotted lines shown in **a-b** and **d-e**, respectively. Red and blue colors depict the AFM patch region (\(J_{i}^{\perp}>0\)) and FM coupling background (\(J_{i}^{\perp}<0\)), respectively. Yellow and blue arrows represent spin orientations in the top and bottom layers, respectively.
intricate spin textures arising from the presence of merons (Fig. 4**a-b**). In other words, the MQ state naturally accommodates the stable correlated pairs of merons, i.e. the meron quartet, while satisfying the bulk energy minimization condition (Fig. 4**m**), similar to the ground MD state (Fig. 4**n**).
### Stability of meron quartet states
We observe that the MQ state acquires metastability in the small twist angle regime (green area in Fig. 5**a**). This phenomenon is attributed to the enhancement of the confining potential in such a regime. To elucidate this, we consider the transition of the MQ state to the MD state. This transition necessitates the mutual attractions of the meron cores. However, such attraction leads to an inevitable increase in the interlayer exchange energy, as it disrupts the bulk energy minimization condition, as illustrated in Fig. 4**c,i**. We estimate this energy increase as \(\Delta E_{\rm tb}\sim J_{\perp}^{\rm FM}\frac{dL}{a^{2}}\), where \(J_{\perp}^{\rm FM}\) represents the FM interlayer exchange near an AFM patch, and \(d\) signifies the displacement of a core from its equilibrium position due to the attraction. The term \(\frac{dL}{a^{2}}\sim\frac{d}{a\theta}\) corresponds to the number of spins that undergo unfavorable ordering for the interlayer exchange, which increases as the twist angle \(\theta\) decreases. Consequently, this energy cost escalates as the twist angle \(\theta\) decreases, creating a substantial energy barrier for the transition to the MD state, which corresponds to the confining potential mentioned before.
This high energy barrier surpasses the attractive force between merons and antimerons, stabilizing the MQ state. The attractive force is mediated by the intralayer exchange interaction energy between a meron and an antimeron within the same layer. This energy is akin to the Coulomb energy between a vortex and an antivortex in the XY model, denoted as \(E_{\rm C}\sim J\ln{(R/a)}\)[43], where \(R\sim a/\theta\) represents the distance between the cores. The attraction of the cores by a displacement of \(d\) can potentially reduce the Coulomb energy by \(\Delta E_{\rm C}\sim-Jd/R\sim-Jd\theta/a\). However, this energy reduction \(\Delta E_{\rm C}\) is surpassed by the energy increase \(\Delta E_{\rm tb}\) in the small twist angle regime (\(\theta<\theta_{c2}\sim\sqrt{J_{\perp}^{\rm FM}/J}\)). As a result, the attraction of the cores is effectively prohibited in such a regime, leading to the stabilization of the MQ state. We find that this phenomenon is evident in the phase diagram, which exhibits a consistent relationship \(\theta_{c2}=\sqrt{J_{c2}/J}\) (dashed line in Fig. 5**a**), where \(J_{c2}\) is a fitting parameter proportional to \(J_{\perp}^{\rm FM}\).
The high energy barrier also indicates the enhanced stability of the MQ state against external perturbations, such as external magnetic fields, in the small twist angle regime. To further illustrate this stability, we incorporate the Zeeman term:
\[H_{\rm Zeeman}=-g\mu_{B}B\sum_{l=t,b}\sum_{i}S_{i,z}^{l}, \tag{3}\]
where \(B\) represents an external magnetic field applied in the out-of-plane direction. Through the systematic examination of the behavior of the MQ state under the influence of the Zeeman term (Methods 4.8), we identify the critical field strength for the destruction of the MQ state, as illustrated in Fig. 5**b**. The critical field strength differs depending on the relative orientation of the applied field with respect to the
Figure 5: **a:** Magnetic phase diagram illustrating the meron quartet phase (green) and the magnetic domain phase (white), as a function of twist angle (\(\theta\)) and intralayer exchange (\(J\)). The markers represent the phase boundary determined through numerical simulations, while the dashed line represents the phenomenological fitting curve \(\theta_{c2}=\sqrt{J_{c2}/J}\), where the fitting parameter \(J_{c2}\) is found to be 8 meV. **b:** Out-of-plane net magnetization (\(M_{z}=\frac{1}{N}\sum_{l=t,b}\sum_{i}n_{i,z}^{l}\)) as a function of an external magnetic field in the out-of-plane direction (\(B\)), with the solid and dashed lines corresponding to the MQ and MD states, as depicted in Fig. 4**a-c** and **d-f**, respectively. The arrows mark the critical field strengths (\(B_{c1}\) and \(B_{c2}\)) that signify the degradation of the MQ state to the MD state, as shown by the merging of the two curves at \(B_{c1}\) and \(-B_{c2}\). **c:** Evolution of \(B_{c1}\) and \(B_{c2}\) with twist angle (\(\theta\)). Error bars denote standard errors from different samples. The parameter \(J=2\) meV is utilized.
polarity of the meron and antimeron cores, with the antiparallel field exhibiting a much higher critical field strength (\(B_{c2}\)) compared to the parallel field (\(B_{c1}\)). Nevertheless, both critical field strengths are significantly enhanced as the twist angle decreases (Fig. 5**c**). This corroborates the enhanced stability of the MQ state in the small twist angle regime.
### Diverse forms of meron quartets
We discover that the twisted magnet can realize stable merons in diverse forms with a different total skyrmion number (\(Q_{\rm tot}=0,\pm 1,\pm 2\)) and distinct combinations of merons and antimerons between two magnetic layers in addition to the specific configuration shown in Fig. 4**a-c**. Due to the condition of the vanishing of the total winding number, the two vortices constituting each inter-patch pair must exhibit opposite winding numbers \(w=+1\) and \(w=-1\), respectively. However, the cores have the flexibility to possess their own polarity, which can be either \(p=-\frac{1}{2}\) or \(p=+\frac{1}{2}\). This gives rise to four potential configurations for each pair: (i) M-\(\overline{\rm M}\), (ii) \(\overline{\rm M}\)-M, (iii) M-M, and (iv) \(\overline{\rm M}\)-\(\overline{\rm M}\). Furthermore, these configurations can independently occur in the top and bottom layers, resulting in a total of sixteen potential configurations for the MQ state, with distinct numbers of merons and antimerons in each layer. Our investigation employing general random initial configurations confirms the emergence of such diverse configurations (Fig. 11). This observation highlights the flexibility in achieving merons and antimerons in the twisted magnet, setting it apart from conventional magnetic systems, which typically have fixed skyrmion numbers [1, 13].
## 3 Discussion
We have shown that the AFM domain array induced by the twist provides a favorable environment for hosting stable merons. Once these topological defects form, their destruction is impeded due to the constraints imposed by the bulk energy minimization condition. Moreover, our theory suggests the feasibility of realizing such stable merons and their enhanced robustness in the small twist angle regime.
We propose our theory can be applied to CrCl\({}_{3}\), which exhibits two essential factors for hosting the meron quartet: easy-plane magnetic anisotropy [15] and stacking-dependent interlayer magnetism [21]. Notably, merons have been observed in monolayer CrCl\({}_{3}\)[15, 16]. In this context, twist engineering offers an effective approach to realizing the meron quartet. Another potential application lies in CrI\({}_{3}\), where the modification of its intrinsic easy-axis magnetic anisotropy to easy-plane anisotropy can be achieved through experimental control, such as gate-voltage tuning [44].
For experimental observations, we propose utilizing scanning magnetometry techniques with nitrogen-vacancy centers [29], as well as Lorentz transmission electron microscopy [45] and magnetic transmission soft X-ray microscopy [13], to directly observe meron pairs ranging in size from 80 to 20 nm at different twist angles (\(\theta\) = 0.5\({}^{\circ}\) to 2\({}^{\circ}\)). Indirect measurements can involve detecting anomalous kinks in the magnetization curve, which can serve as an indication of the presence of merons. Techniques such as the magneto-optical Kerr effect, commonly employed in the study of 2D vdW magnets [30, 31], can offer valuable insights for such indirect measurements.
Future research should consider incorporating various magnetic interactions present in vdW magnets that were overlooked in our current model, such as exchange anisotropy, interactions beyond nearest neighbors [16], the Dzyaloshinskii-Moriya interaction, and magnetic dipole-dipole interactions [15]. An important research question to explore is how these additional interactions impact the stabilization of the meron quartet and their behaviors in vdW magnets.
We highlight that the discovery and realization of merons in 2D vdW magnets via twist open a unique avenue for investigating their fascinating properties with remarkable flexibility, through either external stimuli [35, 36] or the creation of heterostructures [37]. This significant breakthrough not only deepens our understanding of these fractionalized topological spin textures in magnets but also holds great promise for future technological advancements.
|
2310.20478 | Unveiling Black-boxes: Explainable Deep Learning Models for Patent
Classification | Recent technological advancements have led to a large number of patents in a
diverse range of domains, making it challenging for human experts to analyze
and manage. State-of-the-art methods for multi-label patent classification rely
on deep neural networks (DNNs), which are complex and often considered
black-boxes due to their opaque decision-making processes. In this paper, we
propose a novel deep explainable patent classification framework by introducing
layer-wise relevance propagation (LRP) to provide human-understandable
explanations for predictions. We train several DNN models, including Bi-LSTM,
CNN, and CNN-BiLSTM, and propagate the predictions backward from the output
layer up to the input layer of the model to identify the relevance of words for
individual predictions. Considering the relevance score, we then generate
explanations by visualizing relevant words for the predicted patent class.
Experimental results on two datasets comprising two-million patent texts
demonstrate high performance in terms of various evaluation measures. The
explanations generated for each prediction highlight important relevant words
that align with the predicted class, making the prediction more understandable.
Explainable systems have the potential to facilitate the adoption of complex
AI-enabled methods for patent classification in real-world applications. | Md Shajalal, Sebastian Denef, Md. Rezaul Karim, Alexander Boden, Gunnar Stevens | 2023-10-31T14:11:37Z | http://arxiv.org/abs/2310.20478v1 | # Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification
###### Abstract
Recent technological advancements have led to a large number of patents in a diverse range of domains, making it challenging for human experts to analyze and manage. State-of-the-art methods for multi-label patent classification rely on deep neural networks (DNNs), which are complex and often considered black-boxes due to their opaque decision-making processes. In this paper, we propose a novel deep explainable patent classification framework by introducing layer-wise relevance propagation (LRP) to provide human-understandable explanations for predictions. We train several DNN models, including Bi-LSTM, CNN, and CNN-BiLSTM, and propagate the predictions backward from the output layer up to the input layer of the model to identify the relevance of words for individual predictions. Considering the relevance score, we then generate explanations by visualizing relevant words for the predicted patent class. Experimental results on two datasets comprising two-million patent texts demonstrate high performance in terms of various evaluation measures. The explanations generated for each prediction highlight important relevant words that align with the predicted class, making the prediction more understandable. Explainable systems have the potential to facilitate the adoption of complex AI-enabled methods for patent classification in real-world applications.
Footnote †: This is the “_Submitted Manuscript_” to the \(1^{st}\) World Conference on eXplainable Artificial Intelligence (xAI2023), Lisbon, Portugal. The published manuscript by Springer can be found here [https://doi.org/10.1007/978-3-031-44067-0_24](https://doi.org/10.1007/978-3-031-44067-0_24)
## 1 Introduction
Patent classification is an important task in the field of intellectual property management, involving the categorization of patents into different classes based on their technical content [1]. Traditional approaches to patent classification have relied on manual categorization by experts, which can be time-consuming and subjective [2]. However, due to the exponential growth of
patent applications in recent times, it has become increasingly challenging for human experts to classify patents. The International Patent Classification (IPC) system, which consists of 645 labels for the general classes and over 67,000 labels for the sub-groups, reflects the magnitude of the challenges in multi-level patent classification tasks [1]. Furthermore, patent texts are generally lengthy and contain irregular scientific terms, making them a challenging field of application for text classification approaches, as patents often include highly technical and scientific terms that are not commonly used in everyday language, and authors often use jargon to make their patents appear unique and innovative [3]. These factors contribute to the significant challenges associated with patent classification, making it a formidable task.
However, recent advancements in machine learning (ML) and deep neural networks (DNNs) have made significant progress in automating the patent classification process. In the past, classical ML models, such as support vector machines (SVM), K-nearest neighbours, and naive Bayes, were widely used to automatically classify patent texts [4]. More recently, several DNN models have been proposed to address the challenges associated with patent classification. Generally, these models represent patent text using word embeddings and transformer-based pre-trained models [5, 6, 1, 2, 7]. DNN models, including convolutional neural networks (CNN), recurrent neural networks (RNN), and RNN variants such as long short-term memory networks (LSTM), bidirectional LSTM (Bi-LSTM), and gated recurrent units (GRU), can learn to classify patents based on their textual content [5, 7, 2, 8, 9]. Hence, these models enable faster and more reliable categorization of patents and scientific articles.
DNN-based classification approaches are often architecturally complex, and their decision-making procedures can be opaque [10, 11]. While these approaches may classify patents efficiently, the decisions they make are often not understandable to patent experts, or even to practitioners of artificial intelligence (AI). As a result, it is crucial that the methods and decision-making procedures used in patent classification are transparent and trustworthy, with clear explanations provided for the reasons behind each prediction. This is particularly important because patents are legal documents, and it is essential to comprehend the reasoning behind the classification decisions made by the model. Therefore, patent classification models should be designed to be explainable, allowing the reasons and priorities behind each prediction to be presented to users. This will help build trust in the predictive models and promote transparency among users and stakeholders.
For text-based uni-modal patent classification tasks, explanations can be provided by highlighting relevant words and their contribution to the prediction, thus increasing users' trust in the predictions. In recent years, there has been growing interest in developing explainable artificial intelligence (XAI) to unveil the black-box decision-making process of DNN models in diverse fields, including image processing [12], text processing, finance [13, 14], and health applications [15, 16]. XAI methods can provide insights into the decision-making process, explaining the reasoning behind specific predictions and the model's overall priorities in decision making, thereby enhancing the transparency and trustworthiness of the application [11, 17, 10, 18, 12].
In this paper, our goal is to develop a patent classification framework that not only predicts the classes of patents but also provides explanations for the predicted classes. To achieve this, we propose a new explainable method for patent classification based on layer-wise relevance propagation (LRP). This method can break down the contribution of patent terms that are crucial
in classifying a given patent into a certain class. We start by representing the patent terms using a high-dimensional distributed semantic feature vector obtained from pre-trained word-embedding models. Next, we proceed to train several DNN-based models, including Bi-LSTM, CNN, and CNN-BiLSTM, which are capable of predicting the patent class. Finally, the LRP-enabled explanations interface highlights relevant words that contributed to the final prediction, providing an explanation for the model's decision.
We conducted experiments using two benchmark patent classification datasets, and the experimental results demonstrated the effectiveness of our approach in both classifying patent documents and providing explanations for the predictions. Our contributions in this paper are twofold:
1. We propose an LRP-based explainability method that generates explanations for predictions by highlighting relevant patent terms that support the predicted class.
2. Our developed DNN models show effective performance in terms of multiple evaluation metrics on two different benchmark datasets, and performance comparison with existing works confirms their consistency and effectiveness.
Overall, explainable DNN models offer promising solutions for patent classification, enabling faster and more accurate categorization while providing insights into the decision-making process. With the increasing volume of patent applications, the development of such explainable models could be beneficial in automatically categorizing patents with efficiency and transparency.
The rest of the paper is structured as follows: section 2 presents a summary of existing research on patent classification. Our proposed explainable deep patent classification framework is presented in section 3. We demonstrate the effectiveness of our methods in classifying patents and explaining the predictions in section 4. Finally, section 5 concludes our findings with some future directions in explainable patent classification research.
## 2 Related Work
In recent years, the patent classification task has gained significant attention in the field of natural language processing (NLP) research, as evidenced by several notable studies [19, 2, 3]. Various methods have been employed for classifying and analyzing patent data, and the methods can be categorized based on different factors such as the techniques utilized, the tasks' objectives (e.g., multi-class or multi-level classification), and the type of resources used to represent the patent data (i.e., uni-modal vs multi-modal) [20, 7, 9]. However, traditional approaches have relied on classical ML and bag-of-words (BoW)-based text representation, which is limited to lexical information and cannot capture the semantic and contextual information of the text. With the advent of word-embedding techniques such as _word2vec_ by Mikolov et al. [21, 22], _Glove_ by Pennington et al. [23], and _FastText_ by Bojanowski et al. [24], NLP research has been revolutionized by the ability to represent text using high-dimensional semantic vector representations [25, 26, 27]. More recently, there has been a growing trend of employing transformer-based pre-trained models, including deep bidirectional transformers (BERT) [28], robustly optimized BERT (RoBERTa) [29], distilled BERT (DistilBERT) [30], and XLNet [31], for text representation in NLP tasks.
Shaobo et al. [2] introduced a deep patent classification framework that utilized convolutional neural networks (CNNs). They started by representing the text of patents, which was extracted
from the title and abstract of the USPTO-2 patent collection, using a skip-gram-based word-embedding model [2]. They then used the resulting high-dimensional semantic representations to train a CNN model. Lee et al. [3] also employed a CNN-based neural network model; however, they fine-tuned a pre-trained BERT model for text representation. A DNN-based framework employing Bi-LSTM-CRF and Bi-GRU-HAN models has been introduced to extract semantic information from patent texts [7].
A multi-level classification framework [9] has been proposed utilizing fine-tuned transformer-based pre-trained models, such as BERT, XLNet, RoBERTa, and ELECTRA [32]. Their findings revealed that XLNet outperformed the baseline models in terms of classification accuracy. In another study, Roudsari et al. [20] addressed multi-level (sub-group level) patent classification tasks by fine-tuning a DistilBERT model for representing patent texts. Jiang et al. [6] presented a multi-modal technical document classification technique called _TechDoc_, which incorporated NLP techniques, such as word-embedding, to extract features from the text and descriptive images of technical documents. They modelled the classification task using CNNs, RNNs, and graph neural networks (GNNs). Additionally, Kang et al. [33] employed a multi-modal embedding approach for searching patent documents.
A patent classification method called _Patent2vec_ has been introduced, which leverages multi-view patent graph analysis to capture low-dimensional representations of patent texts [8]. Pujari et al. [34] proposed a transformer-based multi-task model (TMM) for hierarchical patent classification, and their experimental results showed higher precision and recall compared to existing non-neural and neural methods. They also proposed a method to evaluate neural multi-field document representations for patent text classification. Similarly, Aroyehu et al. [35] introduced a hierarchical transfer and multi-task learning approach for patent classification, following a similar methodology. Roudsari et al. [36] compared different word-embedding methods in terms of patent classification performance. Li et al. [37] proposed a contrastive learning framework called _CoPatE_ for patent embedding, aimed at capturing high-level semantics of very large-scale patent collections. An automated ensemble learning-based framework for single-level patent classification was introduced by Kamateri et al. [38].
However, to the best of our knowledge, none of the existing patent classification methods are explainable. Given the complexity of the multi-level classification task, it is crucial for users and patent experts to understand the reasoning behind the AI-enabled method's predictions, as it classifies patents into one of more than 67,000 classes (including sub-group classes). Therefore, the aim of this paper is to generate explanations that highlight relevant words, helping users understand the rationale behind the model's predictions. Taking inspiration from the effectiveness and interpretability of layer-wise relevance propagation (LRP) in other short-text classification tasks [39, 40, 41], we have adopted LRP [12] as the method for explaining the complex neural networks-based patent classification model.
## 3 Explainable Patent Classification
Our proposed explainable patent classification framework consists of two major components: i) training a DNN-based classification model using the semantic representation of the patent text, and ii) an explanation generation component leveraging layer-wise relevance propagation (LRP). The conceptual diagram with the major components is depicted in Fig. 1. Our method first represents
preprocessed patent texts semantically as high-dimensional vectors leveraging pre-trained word-embedding models. The semantic representations of the patent text are then fed to train multiple DNN-based classification models including Bi-LSTM, CNN, and CNN-BiLSTM. For a particular deep patent classification model, the LRP algorithm computes the relevance score towards a certain class for a given patent by redistributing the prediction score backward from the output layer to the input layer. Eventually, we obtain a score for each patent term that quantifies its relevance to the predicted class of the given input patent.
### Training deep neural models
Before training any specific DNN-based patent classification model, we employ the _FastText_ word-embedding model to represent each word of the patent text with a high-dimensional feature vector whose elements carry the semantic and contextual information of that word. _FastText_ is a character n-gram-based embedding technique. Unlike _Glove_ and _Word2Vec_, it can provide a word vector for out-of-vocabulary (OOV) words. Patent texts contain rarely used scientific terms and words that are highly context-specific. For example, a patent in the field of chemistry contains many reagent and chemical names, and for some new patents the reagent names might be completely new, coined by the inventors. Based on this intuition, we chose _FastText_ embedding instead of _Glove_ and _word2vec_. We form a sequence of word embeddings for each patent and then feed it into the deep-learning model. We train three different neural network models: bidirectional LSTM (Bi-LSTM), convolutional neural network (CNN), and CNN-BiLSTM, a combination of CNN and Bi-LSTM.
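As an illustration of this embedding step, the following minimal sketch uses gensim's FastText loader (the file path, sequence length, and vector dimensionality are illustrative assumptions, not values reported in this paper):

```python
import numpy as np
from gensim.models.fasttext import load_facebook_vectors

# Pre-trained FastText vectors; character n-grams yield vectors even for
# OOV terms such as newly coined reagent names (path is illustrative).
vectors = load_facebook_vectors("cc.en.300.bin")

MAX_LEN, DIM = 200, 300  # assumed padding length / vector dimensionality

def embed_patent(tokens):
    """Map a tokenized patent text to a (MAX_LEN, DIM) float array."""
    seq = np.zeros((MAX_LEN, DIM), dtype=np.float32)
    for i, tok in enumerate(tokens[:MAX_LEN]):
        seq[i] = vectors[tok]  # OOV words handled via char n-grams
    return seq

x = embed_patent("a polymer composition comprising an alkyl acrylate".split())
```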
### Explaining predictions with LRP
Let \(c\) denote the predicted class for the input patent \(p\). The LRP algorithm applies the layer-wise conservation principle to calculate the relevance score of the features. The computation starts from the output layer and then redistributes the relevance, eventually back-propagating it to the input layer [40, 39]. In other words, the relevance score is computed at each layer of the DNN model. Following a specific rule, the relevance score is redistributed from higher-layer neurons to lower-layer neurons, and each intermediate-layer neuron is assigned a relevance score down to the
Figure 1: A conceptual overview diagram of our explainable patent classification framework.
input layers, based on this rule. The flow of propagation for computing the relevance is depicted by the red arrow that goes from the output towards the input layers in Fig. 1.
The prediction score \(f_{c}(p)\) of our deep patent classification model is a scalar value corresponding to the patent class \(c\). Using LRP, our aim is to identify the relevance score of each dimension \(d\) of a given patent vector \(p\) for the target patent class \(c\); that is, to compute for each input feature (i.e., word) a score that indicates how positively (or negatively) it contributes to classifying the patent as class \(c\) (or another class). Let \(z_{j}\) be a neuron of the upper layer, computed as
\[z_{j}=\sum_{i}z_{i}\cdot w_{ij}+b_{j}, \tag{1}\]
where \(w_{ij}\) is the weight matrix and \(b_{j}\) denotes the bias [40]. The relevance score of an upper-layer neuron \(z_{j}\) is denoted \(R_{j}\), and we move towards the lower-layer neurons to distribute that relevance. At the output layer, the relevance of the neuron corresponding to the predicted class is initialized with the prediction score \(f_{c}(p)\). The redistribution of the relevance to the lower layers requires computing relevance messages that flow from upper-layer to lower-layer neurons [40].
Let \(i\) be the immediately lower layer, with neurons denoted by \(z_{i}\). The relevance messages \(R_{i\leftarrow j}\) are computed as follows [40].
\[R_{i\leftarrow j}=\frac{z_{i}\cdot w_{ij}+\frac{\epsilon\cdot\operatorname{sign}(z_{j})+\delta\cdot b_{j}}{N}}{z_{j}+\epsilon\cdot\operatorname{sign}(z_{j})}\cdot R_{j}. \tag{2}\]
The total number of neurons in layer \(i\) is denoted by \(N\), and \(\epsilon\) is a stabilizer, a small positive real number (e.g., 0.001). By summing up all the relevance messages received by neuron \(z_{i}\), we obtain its relevance in layer \(i\), \(R_{i}=\sum_{j}R_{i\leftarrow j}\). \(\delta\) can be either 0 or 1 (we use \(\delta=1\)) [40, 41]. With the relevance messages, we can calculate the amount of relevance that circulates from one layer's neurons to the next layer's neurons. The relevance distribution in the fully connected layers is computed as \(R_{j\to k}=\frac{z_{jk}}{\sum_{j}z_{jk}}R_{k}\) [39]. The relevance score of each term lies in \([0,1]\); the higher the score, the more relevant the term is to the predicted class.
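For concreteness, the redistribution rule of Eq. (2) for a single linear layer can be sketched in numpy as follows (a minimal sketch in the spirit of the `lrp_linear` routine of [40]; the variable names and the sign(0) convention are ours):

```python
import numpy as np

def lrp_linear(z_i, w, b, z_j, R_j, eps=0.001, delta=1.0):
    """Redistribute upper-layer relevance R_j to the lower-layer neurons,
    implementing Eq. (2) for one linear layer z_j = z_i @ w + b.

    z_i: (I,) lower-layer activations;  w: (I, J) weights;  b: (J,) bias;
    z_j: (J,) upper-layer pre-activations;  R_j: (J,) upper-layer relevance.
    Returns R_i: (I,) lower-layer relevance.
    """
    N = z_i.shape[0]                         # number of neurons in layer i
    sign_zj = np.where(z_j >= 0, 1.0, -1.0)  # sign(z_j), with sign(0) := 1
    numer = z_i[:, None] * w + (eps * sign_zj + delta * b) / N  # (I, J)
    denom = z_j + eps * sign_zj              # stabilized denominator
    R_msg = numer / denom * R_j              # messages R_{i <- j}
    return R_msg.sum(axis=1)                 # R_i = sum over j of R_{i <- j}
```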
## 4 Experiments
This section presents the datasets, the experimental results, and a discussion of the explanations generated with LRP.
### Dataset
_AI-Growth-Lab patent dataset:_ We conducted experiments on a dataset containing 1.5 million patent claims annotated with patent classes1 [42]. According to the CPC patent system, the classification is hierarchical with multiple levels including section, class, subclass, and group. For example, there are 667 labels at the subclass level [42]. However, for a better understanding of the generated explanations and the reasons behind a prediction for a given patent, we modeled the patent classification task with 9 general classes including _Human necessities, Performing
operations; transporting, Chemistry; metallurgy, Textiles; paper, Fixed constructions, Mechanical engineering; lighting; heating; weapons; blasting engines or pumps, Physics, Electricity and General._
_BigPatent dataset:_ The BigPatent2 dataset was prepared by processing 1.3 million patent texts [43]. The classification dataset contains a total of 35K patent texts with the 9 above-mentioned classes as labels. It is provided pre-split into training, validation, and testing sets of 25K, 5K, and 5K samples, respectively. There are two different texts for each patent: the raw text of the patent claims, and a human-generated abstract summarizing the claims.
Footnote 2: Dataset: [https://huggingface.co/datasets/ccdv/patent-classification/tree/main](https://huggingface.co/datasets/ccdv/patent-classification/tree/main)
Figure 2: The distribution of the patents over the different classes in the AI-Growth-Lab data
Figure 3: The distribution of the patents over the different classes in the BigPatent data
However, the number of samples per patent class varies widely for both datasets, which means both are imbalanced. The horizontal bar charts in Figs. 2 and 3 show the level of imbalance for each dataset. This imbalanced distribution of samples per class poses an additional challenge in the classification task.
### Experimental setup
We conducted experiments using three different DNN models, namely Bi-LSTM, CNN, and CNN-BiLSTM, utilizing the _FastText_ pre-trained word-embedding model for text representation in the embedding layers. The Bi-LSTM model consists of a Bi-LSTM layer with 64 units after the embedding layer, followed by another Bi-LSTM layer with 32 units, and then two fully-connected layers with 64 and 9 units, respectively (a sketch of this architecture is given after the footnotes below). We applied the rectified linear unit (ReLU) activation function in the hidden dense layer, and the softmax activation function in the output layer. For the CNN model, after the embedding layer we have a 1-dimensional convolutional layer followed by a global average pooling layer, and finally the output layer is a fully-connected layer with 9 units. The CNN-BiLSTM model has a convolutional layer followed by a global average pooling layer, and then a Bi-LSTM part similar to the above-mentioned Bi-LSTM model. The activation functions in the fully-connected hidden and output layers are ReLU and softmax, respectively. We implemented our methods using _scikit-learn_ and _Keras_, and represented the patent text using the _FastText_ pre-trained word-embedding model3. For implementing LRP for the Bi-LSTM network, we followed the method described in [40]4. For the BigPatent dataset, the training, testing, and validation sets are already split. For the AI-Growth-Lab data, the ratio of the training to testing set is 80% to 20%.
Footnote 3: [https://fasttext.cc/docs/en/crawl-vectors.html](https://fasttext.cc/docs/en/crawl-vectors.html)
Footnote 4: [https://github.com/ArrasL/LRP_for_LSTM](https://github.com/ArrasL/LRP_for_LSTM)
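A minimal sketch of the Bi-LSTM variant described above (Keras; the sequence length, vector dimensionality, per-direction interpretation of the unit counts, and optimizer settings are our assumptions, and the pre-computed _FastText_ sequences are fed in directly, so no trainable embedding layer appears):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense, Input

MAX_LEN, DIM = 200, 300  # assumed sequence length / FastText dimension

model = Sequential([
    Input(shape=(MAX_LEN, DIM)),                     # FastText word vectors
    Bidirectional(LSTM(64, return_sequences=True)),  # first Bi-LSTM layer
    Bidirectional(LSTM(32)),                         # second Bi-LSTM layer
    Dense(64, activation="relu"),                    # hidden dense layer
    Dense(9, activation="softmax"),                  # 9 patent classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```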
### Performance analysis
The performance of the proposed classification models was evaluated on two datasets using three metrics: Precision, Recall, and F1-Score, as shown in Table 1. The results demonstrate consistent performance across most of the deep classification models. Among them, the Bi-LSTM model exhibited the best performance in terms of all evaluation metrics on both datasets. The performance of the other two models, CNN and CNN-BiLSTM, was also consistent and effective, though slightly lower than that of the Bi-LSTM model. Specifically, for the first dataset, CNN-BiLSTM performed equally well in terms of Precision (0.69) and F1-Score (0.69),
| Dataset | Method | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| AI-Growth-Lab | Bi-LSTM | **0.69** | **0.70** | **0.69** |
| AI-Growth-Lab | CNN | 0.62 | 0.63 | 0.62 |
| AI-Growth-Lab | CNN-BiLSTM | **0.69** | 0.68 | **0.69** |
| BigPatent | Bi-LSTM | **0.79** | **0.78** | **0.78** |
| BigPatent | CNN | 0.75 | 0.76 | 0.76 |
| BigPatent | CNN-BiLSTM | 0.77 | 0.76 | 0.76 |

Table 1: The performance of different deep patent classification models on two datasets in terms of precision, recall and F1-score. The best result per dataset is in **bold**.
while the performance of the CNN-based model was comparatively lower for the AI-Growth-Lab dataset, with a Precision of 62%, which was 7% lower than the best-performing Bi-LSTM model. However, for the BigPatent dataset, the CNN model exhibited considerably better performance, with a Precision of 75%, which was only 4% lower than the Bi-LSTM model. The performance difference between the models for the other two metrics was even lower, at 2%.
The performance of all DNN-based classifiers on the BigPatent dataset is significantly superior to that on the first dataset. This may be attributed to the fact that the BigPatent dataset includes fine-grained abstracts of the patents, generated by human assessors from the patent texts. As a result, the semantic representation of the curated text in the BigPatent dataset is enriched compared to the raw patent claims in the other dataset. We further examine the Bi-LSTM model by showcasing its class-wise performance on the BigPatent dataset. Table 2 displays the performance across the nine patent classes. The Bi-LSTM model demonstrates favorable and consistent performance across most patent classes, with the exception of the _general_ category. It is hypothesized that the patents in the _"general"_ category may contain more commonly used terms than patents in the other, area-specific categories. Consequently, the captured semantic information may not be sufficient, potentially resulting in lower performance in terms of recall and F1-Score for the _"general"_ class compared to other classes.
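Class-wise figures of the kind shown in Table 2 are standard per-class metrics and can be produced, for example, with scikit-learn (a sketch; the toy labels below are illustrative, not our test data):

```python
from sklearn.metrics import classification_report

class_names = ["Human necessities", "Performing_operations", "Chemistry",
               "Textiles", "Fixed_constructions", "Mechanical_engineering",
               "Physics", "Electricity", "General"]

y_test = [0, 2, 2, 7, 8, 5, 1, 3, 4, 6]  # toy true labels (illustrative)
y_pred = [0, 2, 1, 7, 8, 5, 1, 3, 4, 6]  # toy predicted labels (illustrative)
print(classification_report(y_test, y_pred, labels=list(range(9)),
                            target_names=class_names, digits=2, zero_division=0))
```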
We compared the performance of our models with similar models that used _FastText_ embeddings for patent text representation. Compared to the existing works of Roudsari et al. [9] and Shaobo et al. [2], the performance of our trained models is effective. Roudsari et al. also trained similar models with semantic text representations from a pre-trained _FastText_ word-embedding model, and developed similar DNN models including Bi-LSTM and CNN-BiLSTM. Shaobo et al. [2]
| Patent Class | Label | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| Human necessities | 1 | 0.79 | 0.91 | 0.85 |
| Performing_operations | 2 | 0.74 | 0.66 | 0.70 |
| Chemistry | 3 | 0.75 | 0.88 | 0.81 |
| Textiles | 4 | 0.71 | 0.74 | 0.73 |
| Fixed_constructions | 5 | 0.65 | 0.70 | 0.67 |
| Mechanical_engineering | 6 | 0.60 | 0.84 | 0.70 |
| Physics | 7 | 0.75 | 0.82 | 0.78 |
| Electricity | 8 | 0.78 | 0.86 | 0.82 |
| General | 9 | 0.71 | 0.46 | 0.41 |

Table 2: Class-wise performance of the Bi-LSTM model on the BigPatent dataset
| Method | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| Our Method | 0.79 | **0.78** | **0.78** |
| Roudsari et al. [9] (Bi-LSTM) | 0.7825 | 0.6421 | 0.6842 |
| Roudsari et al. [9] (CNN-BiLSTM) | 0.7930 | 0.6513 | 0.6938 |
| Shaobo et al. [2] (DeepPatent) | **0.7977** | 0.6552 | 0.6979 |

Table 3: Performance comparison with related works
introduced a CNN-based deep patent model employing the _FastText_ word-embedding model. The performance of our method on the BigPatent data is higher than that of their models for all evaluation metrics except Precision. The comparison shows the effectiveness of our methods in classifying patents.
### Generated explanations for predictions
We attempted to open up the black-box nature of the deep patent classification model by adopting the layer-wise relevance propagation technique, computing the relevance score of each term by back-propagating the prediction score from the output layer to the input layer. To present the explanation for the predicted class of a given patent text, we highlight the related words that contributed to the classifier's prediction. As an example explanation, a patent is classified as _Chemistry_, and the related words that contributed to the prediction are highlighted in red in Fig. 4. The intensity of the color represents the contribution of a particular word: the higher the intensity of the red, the more relevant the word.
We can see from the figure that the most relevant words include _alkali, alkyl, monomer, acid, acrylate, acrylonitrile, acetate, polymer, ether_. The highlighted words are all terms used in organic chemistry, and the explanation makes it clear why this patent has been classified as a chemical patent. The next most relevant words are _soluble, water, stiffness, enhancing, etc._ These words are directly related to chemistry except for _stiffness_ and _enhancing_; since _enhancing_ the _stiffness_ of the paper or paperboard is the objective of this patent, these words are also selected as relevant.
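Such a heatmap can be rendered directly from the per-word relevance scores, e.g., by mapping each score in [0, 1] to the opacity of a red background (a minimal sketch; the tokens and scores are illustrative):

```python
def highlight(tokens, relevances):
    """Render tokens as HTML, shading each word red by its relevance in [0, 1]."""
    spans = []
    for tok, r in zip(tokens, relevances):
        alpha = max(0.0, min(1.0, r))  # clamp the relevance score to [0, 1]
        spans.append(
            f'<span style="background-color: rgba(255,0,0,{alpha:.2f})">{tok}</span>')
    return " ".join(spans)

html = highlight(["alkali", "soluble", "polymer"], [0.9, 0.4, 0.8])
```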
Figure 4: An example explanation for a patent classified as a _Chemistry_ patent, highlighting relevant words. The higher the intensity of the color, the more relevant the words contributing to the prediction.
For another example patent, in the field of _Electricity_, Fig. 6 illustrates the explanation highlighting relevant words that led the classifier to decide that the patent is from the _Electricity_ field. The most relevant words in this case include _power, channel, modem, device, bonded, bandwidth, data,_ etc. All of the identified related words are used in the electricity literature. The word _device_ is in common use in some other fields as well, but it can also denote an electrical instrument in an electricity-related explanation. However, some words selected as relevant in both examples are not specific to those fields and can appear in the literature of any field. One plausible reason is that such words might still carry considerable
Figure 5: Another example explanation for a patent classified as a _Chemistry_ patent, highlighting relevant words. The higher the intensity of the color, the more relevant the words contributing to the prediction.
Figure 6: An example explanation for a patent classified as an _Electricity_ patent, highlighting relevant words. The higher the intensity of the color, the more relevant the words contributing to the prediction.
importance in describing any scientific object (e.g., explaining a chemical reaction) and capture good contextual and semantic information in the _FastText_ embedding.
### Limitations
Our model can explain predictions for the multi-class classification task. Since patents are classified at different levels and the patent classification system has a huge set of classes at each level, the framework should be explainable for multi-level classification as well; explaining predictions for the different sub-group-level classes will be more challenging. Another limitation is that the pre-trained word-embedding model we utilized was not trained on a patent corpus. A local word-embedding model trained on a patent corpus might capture better contextual and semantic information for scientific terms and jargon, and hence the performance might improve over the current approach.
## 5 Conclusion and Future Direction
This paper aimed at explaining the predictions of DNN-based patent classification models with the layer-wise relevance propagation technique, identifying the relevance of different words in the patent texts for a certain predicted class. The layer-wise relevance propagation technique can capture context-specific explanatory and relevant words to explain the predictions for certain predicted classes. The experimental results demonstrated the effectiveness of our models in classifying patent documents, with promising performance compared to existing works. We observed that the explanations generated by the LRP technique make it easier to understand why a certain patent is classified as a specific patent class. Most of the captured words are highly relevant to the patent domain, even though a few words marked as related are not that relevant (which, however, should also provide useful information to human experts in assessing the predictions). Even though our approach still needs to be evaluated with users, we can observe that the explanations are helpful for understanding why a certain patent was classified into a specific class, and for assessing the results of complex deep-learning-based artificial intelligence-enabled models.
Since patents contain many scientific and uncommon words and phrases (i.e., jargon) that are not often used in other texts, we plan to train a local word-embedding model on patent texts to obtain better representations in our future work. It would also be interesting to apply a transformer-based approach for the same purpose. Explanations for sub-group-level predictions, capturing the sub-group context, will be even more informative. However, the generated explanations will need to be evaluated by human experts in the patent industry. Therefore, we plan a user-centric evaluation of the generated explanations, and to elicit more human-centric requirements to be addressed in the future for better adoption in real-world applications.
## Acknowledgment
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 955422. |
2309.14866 | On Dissecting Polygons into Rectangles | What is the smallest number of pieces that you can cut an n-sided regular
polygon into so that the pieces can be rearranged to form a rectangle? Call it
r(n). The rectangle may have any proportions you wish, as long as it is a
rectangle. The rules are the same as for the classical problem where the
rearranged pieces must form a square. Let s(n) denote the minimum number of
pieces for that problem. For both problems the pieces may be turned over and
the cuts must be simple curves. The conjectured values of s(n), 3 <= n <= 12,
are 4, 1, 6, 5, 7, 5, 9, 7, 10, 6. However, only s(4)=1 is known for certain.
The problem of finding r(n) has received less attention. In this paper we give
constructions showing that r(n) for 3 <= n <= 12 is at most 2, 1, 4, 3, 5, 4,
7, 4, 9, 5, improving on the bounds for s(n) in every case except n=4. For the
10-gon our construction uses three fewer pieces than the bound for s(10). Only
r(3) and r(4) are known for certain. We also briefly discuss q(n), the minimum
number of pieces needed to dissect a regular n-gon into a monotile. | N. J. A. Sloane, Gavin A. Theobald | 2023-09-26T11:53:32Z | http://arxiv.org/abs/2309.14866v1 | # On Dissecting Polygons into Rectangles
###### Abstract
What is the smallest number of pieces that you can cut an \(n\)-sided regular polygon into so that the pieces can be rearranged to form a rectangle? Call it \(r(n)\). The rectangle may have any proportions you wish, as long as it is a rectangle. The rules are the same as for the classical problem where the rearranged pieces must form a square. Let \(s(n)\) denote the minimum number of pieces for that problem. For both problems the pieces may be turned over and the cuts must be simple curves. The conjectured values of \(s(n),3\leq n\leq 12\), are \(4,1,6,5,7,5,9,7,10,6\). However, only \(s(4)=1\) is known for certain. The problem of finding \(r(n)\) has received less attention. In this paper we give constructions showing that \(r(n)\) for \(3\leq n\leq 12\) is at most \(2,1,4,3,5,4,7,4,9,5\), improving on the bounds for \(s(n)\) in every case except \(n=4\). For the 10-gon our construction uses three fewer pieces than the bound for \(s(10)\). Only \(r(3)\) and \(r(4)\) are known for certain. We also briefly discuss \(q(n)\), the minimum number of pieces needed to dissect a regular \(n\)-gon into a monotile.
## 1 Introduction
Two polygons are said to be _equidecomposable_ if one can be cut into a finite number of pieces that can be rearranged to form the other. Pieces may be turned over, and the cuts must be along simple plane curves. The Bolyai-Gerwien theorem from the 1830s states that any two polygons of the same area are equidecomposable, and the dissection can be carried out using only triangular pieces. Furthermore, the dissection can be carried out using only a straightedge and compass. Boltianskii [3] gives an excellent survey.
A much-studied special case of this problem asks for the minimum number of pieces (\(s(n)\), say) of any shape needed to dissect a regular polygon with \(n\) sides into a square of the same area. Despite its long history [2, 7, 9, 10, 11, 12, 13, 18, 19, 23, 26], surprisingly little is known
Figure 1: A 4-piece dissection of an equilateral triangle into a square.
about this problem. The best upper bounds currently known for \(s(n),n=3,4,5,\ldots,16\), are shown in Table 1. The only exact value known appears to be the trivial result that \(s(4)=1\). Even the value of \(s(3)\) is not known. The four-piece dissection of an equilateral triangle into a square shown in Fig. 1 is at least 120 years old (see the discussions in [2], [9, Ch. 12]), but there is no proof that it is optimal. The conjecture that it is impossible to dissect an equilateral triangle into three pieces that can be rearranged to form a square must be one of the oldest unsolved problems in geometry.
In the present article we consider a weaker constraint: what is the minimum number of pieces (\(r(n)\), say) needed to dissect a regular polygon with \(n\) sides into a _rectangle_ of the same area? The proportions of the rectangle can be anything you want. It is clear that \(r(3)=2\) and \(r(6)\leq 3\) (Figs. 2, 3). (Surely it should be possible to prove that no two-piece dissection of a regular hexagon into a rectangle is possible?)
Lindgren [18, 19] gives examples of regular \(n\)-gon to non-square rectangle dissections, but none have fewer pieces than the corresponding \(n\)-gon to square dissections. Frederickson [9, pp.
| \(n\) | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(s(n)\leq\) | 4 | 1 | 6 | 5 | 7 | 5 | 9 | 7 | 10 | 6 | 11 | 9 | 11 | 10 |
| \(r(n)\leq\) | 2 | 1 | 4 | 3 | 5 | 4 | 7 | 4 | 9 | 5 | 10 | 7 | 10 | 9 |
| \(q(n)\leq\) | 1 | 1 | 2 | 1 | 3 | 2 | 3 | 2 | 4 | 3 | 4 | 3 | 5 | 4 |

Table 1: \(s(n)\) (resp. \(r(n)\), \(q(n)\)) is the minimum number of pieces needed to dissect a regular \(n\)-sided polygon into a square (resp. rectangle, monotile). Only the values of \(s(4)\), \(r(3)\), \(r(4)\) and \(q(n)\) for \(n=3,4,5,6,8,10\) are known to be exact.
Figure 2: A 2-piece dissection of an equilateral triangle into a rectangle. One piece must be turned over.
Figure 3: A 3-piece dissection of a regular hexagon into a rectangle.
150-151] mentions that in 1926 H. E. Dudeney found a 4-piece octagon to rectangle dissection (probably that shown in Fig. 21).
In April 2023, before the current investigation was begun, the second author (G.A.T.)'s _Geometric Dissections_ database [26] contained several examples of regular \(n\)-gon to rectangle dissections that had fewer pieces than the best square dissections known, including a 5-piece dissection of a pentagon, a 4-piece dissection of an octagon, a 6-piece dissection of a 10-gon, and a 5-piece dissection of a 12-gon. The database also contained the star polygon and Greek cross dissections shown below in §11 and §12.1
Footnote 1: [26] now includes all the dissections mentioned in this article.
In May 2023, Adam Gsellman [15] wrote to N.J.A.S. enclosing dissections showing that \(r(5)\leq 4\), \(r(7)\leq 6\), and \(r(8)\leq 4\). As mentioned above, the third of these results was already known, and G.A.T. was aware that \(r(5)\leq 4\), although that result had not yet been mentioned in the literature, but the upper bound on \(r(7)\) appeared to be new. We have since shown that \(r(7)\leq 5\) (see §4), but Gsellman's dissections of the pentagon and octagon are shown in §3 and §5.
Table 1 shows the current best upper bounds on \(s(n)\) and \(r(n)\) for \(n\leq 16\), although to avoid making the paper too long we shall say very little about the cases when \(n>12\).
The recent remarkable discovery [25] of a single polygon, or _monotile_, that tiles the plane, but can only do so in a non-periodic way, reminded us of a question asked by Grunbaum and Shephard in 1986 [14, SS2.6]: what is the minimum number of pieces needed to dissect a regular \(n\)-gon into a monotile that tiles the plane (allowing periodic tilings)? Call this number \(q(n)\). Of course squares and rectangles themselves are monotiles, so \(q(n)\leq r(n)\leq s(n)\). We have included bounds on \(q(n)\) in Table 1. All of \(s(n)\), \(r(n)\), and \(q(n)\) are fundamental geometric quantities, with \(q(n)\) perhaps the most basic of the three. We will give some examples of these monotiles in Remark 1.6, including a dissection of a 9-gon which improves on Grunbaum and Shephard's.
The paper is arranged as follows. The remainder of this section gives some general remarks about our dissections. Section 2 defines some parameters and coordinates that will be generally useful for our dissections. Subsequent sections deal in turn with pentagons, heptagons, octagons, up through 16-gons, followed by sections on star polygons and the Greek cross. Finally, Section 13 gives some examples of dissections where curved cuts appear essential if one wishes to minimize the number of pieces. Dissections given without attribution are believed to be new.
**Remark 1.1**.: _Certifying the dissections._
We have attempted to give detailed descriptions of most of the dissections in the main body of the paper (§3–§10), enough at any rate to convince the reader that the dissections are correct. If the dissection begins by cutting up the regular polygon, for example, we have to make sure that the rearranged pieces form a proper rectangle. The pieces must not overlap, there can be no holes; when pieces fit together at a vertex, the sum of the angles must be \(2\pi\) at an interior point, or \(\pi\) or \(\pi/2\) at a boundary point, and so on.
In simple cases the correctness can be checked "by hand", like solving a jigsaw puzzle. The first pentagon dissection, in §3.1, is an example.
Many of our dissections were obtained by one of the standard strip or superposition constructions. There are a great many versions of these constructions, and they are described in
most of the books on the subject, and in the _Methods_ section of [26], so we shall not say much about them here.2
Footnote 2: For the mathematical theory underlying these constructions, see [1, 21, 24].
A simple example of a superposition, used to dissect a polygon \(A\) into a polygon \(B\), is to overlay a strip tiled with pieces from \(A\) and a second strip tiled with pieces from \(B\). Then with luck the intersection of the two strips will provide the desired dissection (see for example [9, Ch. 11], [18, Chaps. 2-5].) Since for our problem we can assume \(B\) is a rectangle, we can often dispense with the second strip. All we need then is a strip tiled with pieces from \(A\). We obtain the dissection by cutting out a rectangle of the correct length from the strip. Examples are shown in Figs. 18 and 26.
If a dissection is obtained by one of these standard constructions, it can generally be assumed to be correct. However, one must be careful: with complicated strips like those shown in the sections from §6 onwards, it is easy to be mistaken about points coinciding, or when triangular regions shrink to a point (in order to save a piece).
For this and other reasons, we have therefore tried to give _ab initio_ descriptions of the dissections. In most cases we are able to give a straightedge and compass construction, and to sketch a proof that it is correct.
**Remark 1.2**.: _Straightedge and compass constructions._
Given an initial drawing of a regular \(n\)-gon, our dissections can be constructed using only a straightedge and compass. That is, there is no need for a ruler: the construction does not require locating a point which is at some arbitrary irrational distance from another point.3 We need to be given the initial \(n\)-gon, since, for example, a regular heptagon cannot be obtained with only a straightedge and compass.
Footnote 3: Hadlock [16] contains an excellent introduction to straightedge and compass constructions.
Besides its aesthetic appeal, the advantage of a straightedge and compass construction is that it enables us to give explicit coordinates for every vertex in the construction.4 We usually start from the vertices (2.1) of the \(n\)-gon, and every subsequent vertex is then determined. In §7 we start from the rectangle, which we assume has width \(\sqrt{5}\) and height \(\cos(\pi/10)\), and again all subsequent vertices are determined.
Footnote 4: Although the Bolyai-Gerwien theorem guarantees that a straightedge and compass dissection of a regular \(n\)-gon to a square exists, we don’t know that this is true for a dissection with the minimum number of pieces.
Although in theory we could find exact expressions for all the coordinates in a straightedge and compass construction in this way, the expressions rapidly become unwieldy. In practice we have found it better to use computer algebra systems such as WolframAlpha and Maple to guess expressions for the coordinates based on 20-digit decimal expansions, knowing that they could be justified if necessary. We shall see examples of this in §6.
**Remark 1.3**.: _Turning pieces over._
Although the definitions of \(s(n)\) and \(r(n)\) allow pieces to be turned over, this is deprecated by purists. Fortunately all the dissections \(s(n)\) and \(r(n)\) for \(3\leq n\leq 16\) mentioned in Table 1 can be accomplished without turning pieces over, with the single exception of \(r(3)\) (see Fig. 2), which seems to require three pieces if turning over is forbidden.
Another example where turning pieces over appears to be essential to achieve the minimum number of pieces is the seven-piece dissection of \(\{6\}\) into \(\{8\}\) given in [26].
**Remark 1.4**.: _Convex pieces._
The dissections in Figs. 1-4, 11, 12, 21, 43 and 50 use only convex pieces. Other things being equal, we prefer convex pieces, of roughly equal size, that do not need to be turned over. The primary goal however is always to minimize the number of pieces.
**Remark 1.5**.: _Improving on classic dissections._
We were surprised to find that \(r(8)\) is apparently less than \(s(8)\), and \(r(12)\) apparently less than \(s(12)\), since the best octagon to square5 and 12-gon to square constructions are so striking (Figs. 4, 5). One feels that they could not possibly be improved on. Yet if we only want a rectangle, there is a four-piece dissection of the octagon that has essentially the same symmetry as Fig. 4, as we shall see in §5. Likewise, for the 12-gon, we can save a piece if we only want a rectangle, at the cost however of giving up all symmetry (see §9).
Footnote 5: The octagon in the well-known Chase Bank logo is different from the octagon in Fig. 4. The Chase octagon has a square surrounded by four trapezoids, whereas Fig. 4 has a square surrounded by four pentagons. The pieces in the Chase logo can be rearranged to form a rectangle, but not a square.
**Remark 1.6**.: _Monotiles._
By generalizing the constructions for \(n=8,10,12,14,\ldots\), we can show that \(q(2t)\leq\lfloor t/2\rfloor\) for \(t\geq 2\), and it may even be true that \(q(2t)=\lfloor t/2\rfloor\) holds for all \(t\geq 2\). No similar conjecture is known for \(q(2t+1)\).
**Remark 1.7**.: _Lower bounds._
It seems to be difficult to obtain nontrivial lower bounds on any of \(s(n)\), \(r(n)\), or \(q(n)\).6 References [5, 17] do give some lower bounds, but only for more restricted classes of dissections (only allowing polygonal cuts, or alternatively what are called "glass cuts"). Two negative results we do know of are the easily-proved fact7 that \(s(3)\) cannot be \(2\), and the nontrivial
Figure 6: A two-piece dissection of a pentagon into a monotile, illustrating \(q(5)=2\).
Figure 7: A three-piece dissection of a heptagon into a monotile, illustrating \(q(7)\leq 3\) [18].
Figure 8: A three-piece dissection of a \(9\)-gon into a monotile, illustrating \(q(9)\leq 3\). This improves on the dissection in [14, Fig. 2.6.1].
theorem [6] that a circular disk cannot be dissected into a polygon.
We cannot resist mentioning that the latter question has been in the news recently, because of progress on the problem of dissecting a circular disk into a square _if fractal pieces are allowed_ [20]. The new construction involves at most \(10^{200}\) pieces.
**Remark 1.8**.: _The OEIS entry._
The best upper bounds currently known for \(s(n)\), \(r(n)\), and \(q(n)\) for \(n\) up to around 16 or 20 are listed in the _On-Line Encyclopedia of Integer Sequences_ (or _OEIS_) database [22] as sequences A110312, A362939, and A362938, respectively. This is an exception to the usual OEIS policy of requiring that all terms in sequences must be known exactly, but these sequences are included because of their importance and in the hope that someone will establish the truth of some of the conjectured values.
**Remark 1.9**.: _Applications_
These polygon to rectangle dissections have potential applications to lossless source coding (cf. [24]). If a source (a lens, perhaps) repeatedly produces an output which is a point in a regular 12-gon, say, then the dissection could be used to map the point to a more convenient pair \((x,y)\) of rectangular coordinates (compare Figs 43 and 44).
**Remark 1.10**.: _For further information._
The database [26] is the best reference for drawings of dissections mentioned in Table 1 but not included in the present article.
**Remark 1.11**.: _Notation_
For up to eight sides we will use the names triangle, \(\ldots\), hexagon, heptagon, and octagon, but for nine or more sides we will say 9-gon, 10-gon, \(\ldots\). The symbol \(\{n\}\) denotes a regular \(n\)-sided polygon, and \(\{n/m\}\) is a regular star polygon (cf. [4]). \(L_{k,n}\) is the length of the chord joining two vertices of \(\{n\}\) that are \(k\) edges apart (2.3). Decimal expansions have been truncated not rounded.
Figure 9: A two-piece dissection of a 10-gon into a monotile, illustrating \(q(10)=2\).
## 2 Regular polygons: coordinates and metric properties
In later sections we will usually begin with a regular \(n\)-sided polygon, with \(n\geq 3\), having sides of length 1, with the goal of cutting it into as few pieces as possible which can be rearranged to form a rectangle.
We often take the polygon to have center \(P_{0}=(0,0)\) at the origin, and to have vertices \(P_{1},P_{2},\ldots,P_{n}\) labeled counterclockwise, starting with \(P_{1}\) at the apex of the figure (Fig. 10). The angle subtended at the center by an edge is \(2\pi/n\), and we define \(\theta=\pi/n\) and \(\phi=\pi/2-\theta=\frac{n-2}{2n}\pi\), which will be the principal angles used in the formulas.
Since the sides have length 1, the radius of the polygon is \(R=\frac{1}{2\sin\theta}\), and the distance from the center to the midpoint of an edge is \(d=\frac{1}{2\tan\theta}\). The vertices have coordinates
\[\begin{split} P_{1}&=(0,R)\,,\\ P_{k}&=(-R\sin(2(k-1)\theta),R\cos(2(k-1)\theta) )\,,\\ P_{n+2-k}&=(R\sin(2(k-1)\theta),R\cos(2(k-1)\theta ))\,,\end{split} \tag{2.1}\]
for \(k=2,\ldots,\lfloor(n+2)/2\rfloor\). The polygon has area
\[\frac{nd}{2}\ =\ \frac{n}{4}\cot\frac{\pi}{n}. \tag{2.2}\]
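As a quick numerical sanity check of (2.1) and (2.2) (a sketch in the spirit of our computer-assisted verifications, not part of any construction):

```python
import numpy as np

def vertices(n):
    """Vertices of a regular n-gon with unit sides, per Eq. (2.1)."""
    R = 1 / (2 * np.sin(np.pi / n))
    ang = np.pi / 2 + 2 * np.pi * np.arange(n) / n  # P_1 at the apex
    return np.column_stack([R * np.cos(ang), R * np.sin(ang)])

def shoelace(P):
    """Polygon area via the shoelace formula."""
    x, y = P[:, 0], P[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

for n in range(3, 17):
    assert np.isclose(shoelace(vertices(n)), n / (4 * np.tan(np.pi / n)))  # (2.2)
```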
Many dissections involve a cut along a chord \(P_{i}P_{i+k}\) joining two vertices. Assuming \(n\geq 3\), \(k\geq 0\), let \(L_{k}=L_{k,n}\) denote the length of the chord joining two vertices that are \(k\) edges apart in \(\{n\}\) and are in the same semicircle. Then \(L_{0}=0,L_{1}=1\), \(L_{k}=L_{k-2}+2\cos(k-1)\theta\), and for \(t\geq 1\),
\[L_{2t}=2\sum_{j=0}^{t-1}\cos(2j+1)\theta\,,\quad L_{2t+1}=1+2\sum_{j=1}^{t} \cos 2j\theta\,. \tag{2.3}\]
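For example, for the pentagon (\(n=5\), \(\theta=36^{\circ}\)), (2.3) gives

\[L_{2,5}=2\cos 36^{\circ}=\frac{\sqrt{5}+1}{2}\approx 1.6180\,,\]

the golden-ratio diagonal that reappears in the pentagon dissections of §3.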
Figure 10: Vertices and principal angles and distances for a regular \(n\)-gon.
## 3 Four-piece dissections of a pentagon
In 1891 Robert Brodie discovered a six-piece dissection of a regular pentagon into a square ([9, p. 120], [18, Fig. 3.1], [26]), and there has been no improvement since then, so it seems likely that \(s(5)=6\). In this section we give four different four-piece dissections of a regular pentagon into a rectangle, showing that \(r(5)\leq 4\). Rather surprisingly, this result does not seem to have been mentioned in the literature before now. The fact that there are at least four ways to get \(r(5)\leq 4\) makes us wonder if \(r(5)\) might actually be equal to 3.
The first two four-piece dissections (§3.1, §3.2) were found by the authors (although they can hardly be new), and have the property that the pieces are convex; the other two (§3.3, §3.4) are due to Adam Gsellman [15].
We use the notation introduced in the previous section (taking \(n=5\)). The pentagon has vertices \(P_{1},\ldots,P_{5}\) with sides of length 1. The key angles are \(\theta=\angle\,P_{1}P_{2}P_{5}=36^{\circ}\), \(\phi=54^{\circ}\), \(\angle\,P_{2}P_{1}P_{5}=2\phi\), and \(\angle\,P_{2}P_{3}Q_{4}=\theta/2\). We note the values of
\[\sin 36^{\circ}\ =\ \cos 54^{\circ}\ =\ \sqrt{\frac{5-\sqrt{5}}{8}}\,, \cos 36^{\circ}\ =\ \sin 54^{\circ}\ =\ \frac{\sqrt{5}+1}{4}\,,\] \[\sin 72^{\circ}\ =\ \cos 18^{\circ}\ =\ \sqrt{\frac{5+\sqrt{5}}{8}}\,, \cos 72^{\circ}\ =\ \sin 18^{\circ}\ =\ \frac{\sqrt{5}-1}{4}\,. \tag{3.1}\]
### Pentagon #1
To construct this dissection (see Fig. 11) we draw a perpendicular from \(P_{1}\) to the mid-point \(Q_{1}\) of the opposite side, and draw the chord \(P_{2}-P_{5}\). Let \(Q_{2}\) be the intersection of these two lines, and place \(Q_{3}\) on \(P_{1}-P_{5}\) so that \(Q_{2}P_{5}Q_{3}\) is an isosceles triangle. Finally, draw a perpendicular \(P_{3}-Q_{4}\) from \(P_{3}\) to \(P_{2}-P_{5}\).
To form the rectangle, we first rotate the quadrilateral \(P_{1}P_{2}Q_{2}Q_{3}\) by \(36^{\circ}\) and move it to \(P_{5}P_{4}Q_{6}Q_{5}\), then the triangle \(Q_{2}P_{5}Q_{3}\) is moved to \(Q_{6}P_{4}Q_{7}\), and the triangle \(P_{2}P_{3}Q_{4}\) is
Figure 11: The pentagon \(P_{1}P_{2}\ldots P_{5}\) is cut into four convex pieces which can be rearranged to form the rectangle \(Q_{4}P_{3}Q_{7}Q_{8}\).
moved horizontally to \(Q_{5}Q_{7}Q_{8}\). The two isosceles triangles (blue) have long sides of length \(L_{2,5}/2=(1+\sqrt{5})/4\) and base \(1/2\). The two right triangles (yellow) have sides of lengths \(\sin\theta\), \(\cos\theta\), and \(1\).
To prove that this dissection is correct, we must check that, after the rearrangement, the result is indeed a rectangle, with area equal to that of the pentagon. In particular, we must check that the points \(P_{2},Q_{4},Q_{2},P_{5},Q_{5},Q_{8}\) are collinear, as are \(P_{3},P_{4},Q_{7}\), and \(Q_{5},Q_{6},Q_{7}\), that \(Q_{6}\) bisects \(Q_{5}-Q_{7}\), and also that the difference between the \(x\)-coordinates of \(P_{2}\) and \(P_{3}\) is equal to the difference between the \(x\)-coordinates of \(Q_{5}\) and \(Q_{8}\). Since we have complete information about the points, these checks are easily carried out. The pentagon has area \((5^{3/4}/(8\sqrt{2}))(\sqrt{5}+1)^{3/2}\), and we can check that this is equal to \(|P_{3}Q_{4}|\cdot|P_{3}Q_{7}|\). This completes the proof of the dissection.
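For instance, the final area identity can be confirmed numerically (a sketch):

```python
import numpy as np

theta = np.pi / 5
area_by_2_2 = 5 / (4 * np.tan(theta))                        # Eq. (2.2), n = 5
closed_form = 5**0.75 / (8 * np.sqrt(2)) * (np.sqrt(5) + 1)**1.5
assert np.isclose(area_by_2_2, closed_form)                  # both ~ 1.72048
```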
It is worth pointing out that the trapezoid \(P_{5}P_{4}Q_{7}Q_{5}\) is symmetric about its vertical axis.
We chose this relatively simple example to illustrate the steps needed to prove that a dissection is correct. In later examples we will just give the basic information needed for the proof and leave the detailed verification to the reader.
### Pentagon #2
This is very similar to the first dissection. Two of the pieces are the same, only now the rest of the pentagon is divided into two pieces that are reflections of each other. The trapezoid from Fig. 11 has moved to the center of the rectangle.
Although logically we are dissecting the polygon _into_ a rectangle, many of our colored illustrations have the rectangle on the left of the picture, as in Figs. 12, 19, 21, etc. This is because of the convention in [26] of starting with the figure having the smaller number of vertices. Also, in the case of strip or tessellation constructions, one often proceeds from the rectangle to the polygon.
### Pentagon #3
The first of Adam Gsellman's four-piece dissections of a pentagon is shown in Fig. 13. We do not know how Gsellman discovered it, but we have found that it can be obtained by a simple slide construction. (It can also be obtained from a strip dissection.)
Figure 12: Similar to the first pentagon dissection, only now the pieces are more nearly equal in size.
Cut the pentagon down the middle into two quadrilaterals \(A\) and \(B\), reflect \(A\) in a vertical mirror, and rotate \(B\) by \(180^{\circ}\) (see Fig. 14). Now slide the pieces towards each other until they overlap in a parallelogram whose diagonal is equal in length to the gap in the top (and the bottom) edge. Cut the parallelogram in the \(A\) piece into two equal isosceles triangles which can be rotated to complete the rectangle.
For the proof that this dissection is correct we label the points as in Fig. 15. \(Q_{1}\) is the midpoint of the side \(P_{3}P_{4}\). The key parameters are \(\theta=\pi/5\), \(R=1/(2\sin\theta)\), and \(d=R\cos\theta=(\sqrt{5}+1)^{3/2}/(4\sqrt{2}\,5^{1/4})\). The pentagon has height \(h=R+d=(5^{1/4}/(4\sqrt{2}))(\sqrt{5}+1)^{3/2}\). This is also the height of the final rectangle, which (since we know the area) has width \(w=\sqrt{5}/2\).
We label the vertices of the rectangle \(a_{1},\ldots,a_{4}\), and let \(b_{1},\ldots,b_{5}\), \(a_{5}\) denote the points in the center of the figure. Finally, let \(s\) and \(b\) denote the side and base of the small isosceles triangles.
Note that \(a_{1}b_{1}b_{3}a_{4}\) is a rotated copy of \(Q_{1}P_{4}P_{5}P_{1}\), and \(a_{2}a_{3}b_{5}a_{5}\) is a reflected copy of \(P_{1}P_{2}P_{3}Q_{1}\). In particular, \(|a_{1}b_{1}|=1/2\), so \(s=w-1/2=(\sqrt{5}-1)/2\). The base of the isosceles triangle is therefore \(b=(3-\sqrt{5})/2\). This implies \(s+b=1\), and we can now check that all the pieces fit together correctly.
### Pentagon #4
Gsellman's second pentagon dissection is shown in Fig. 16. It can be obtained by a similar slide construction.
There is a third version of Gsellman's dissection which has the zigzag cut through the
Figure 13: Gsellman’s first pentagon dissection.
Figure 14: A construction that produces the dissection in Fig. 13 (see text for details).
center of the rectangle going the other way (Fig. 17). These are elegant dissections, but all three have the slight defect of requiring a piece to be turned over.
## 4 A five-piece dissection of a heptagon
Even the great master Harry Lindgren [18] could only show that \(s(7)\leq 9\), but around 1995 G.A.T. found a seven-piece dissection of a heptagon into a square, and it is reasonable to conjecture that indeed \(s(7)=7\). This dissection is described in [9, pp. 128-129] and [26].
For the rectangle problem, we start from a heptagon strip (shown in Fig. 18) that is a modification of a strip used in the square case (compare [9, Fig. 11.28]). By cutting a
Figure 15: Names for points and lengths in dissection of Fig. 13.
Figure 16: Gsellman’s second pentagon dissection.
rectangle from this strip, we obtain a five-piece dissection of a heptagon to a rectangle, shown in Fig. 19.
Rather than following the path by which it was discovered, we will construct this dissection directly from the heptagon, something that can be done using only a straightedge and compass. The vertices of the 7-gon are labeled \(P_{1},\ldots,P_{7}\) (see Fig. 20). Drop a perpendicular from \(P_{1}\) to the midpoint \(Q_{6}\) of \(P_{4}P_{5}\), and draw chords \(P_{3}P_{5}\), \(P_{4}P_{7}\), and \(P_{5}P_{7}\). Let \(Q_{2}\) be the intersection of \(P_{3}P_{5}\) and \(P_{4}P_{7}\), and let \(Q_{3}\) be the midpoint of \(P_{4}Q_{2}\). Draw a line \(Q_{3}Q_{5}Q_{4}\) through \(Q_{3}\) parallel to \(P_{4}P_{5}\).
Using these lines as guides, we get the five pieces by cutting \(P_{7}P_{5}P_{6}P_{7}\); \(P_{1}Q_{5}Q_{4}P_{7}P_{1}\); \(P_{1}P_{2}P_{3}Q_{2}Q_{3}Q_{5}P_{1}\); \(P_{3}P_{4}Q_{2}P_{3}\); and \(Q_{3}P_{4}P_{5}Q_{4}Q_{3}\).
These pieces can then be rearranged to form the rectangle as shown in Fig. 19, left.
Figure 19: A five-piece dissection of a heptagon to a rectangle.
Figure 17: Another version of Fig. 16.
Figure 18: A heptagon strip used for the heptagon to rectangle dissection.
To help anyone who wishes to verify the correctness of this dissection, we list some key angles and lengths. We set \(\theta=\pi/7\) and note that \(\cos(\theta)=0.9009\ldots\) has minimal polynomial \(8x^{3}-4x^{2}-4x+1\). Then \(\angle\,P_{1}P_{2}P_{3}=5\theta\), \(\angle\,P_{7}P_{4}P_{5}=2\theta\), \(\angle\,P_{4}P_{3}P_{5}=\angle\,P_{5}P_{7}P_{6}=\theta\), and \(\angle\,P_{4}P_{5}P_{7}=4\theta\). The chord \(P_{7}P_{5}\) has length \(L_{2,7}=2\cos\theta\), and \(|P_{4}Q_{3}|=|Q_{3}Q_{2}|=1/2-(\sec\theta)/4\). The trapezoidal piece has cross-section \(|Q_{5}Q_{6}|=(4\cos\theta-3)/(8\sin\theta)\). The rectangle has height \(7/(8\sin\theta)\) and width \(2\cos\theta\).
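These values can be checked in a few lines; the following Python sketch (ours) verifies the minimal polynomial of \(\cos\theta\) and that the stated rectangle has the same area as the unit-side heptagon.

```python
# Numerical checks for the heptagon dissection data above (our sketch).
from math import pi, sin, cos, tan, isclose

theta = pi / 7
C = cos(theta)
# cos(pi/7) = 0.9009... is a root of 8x^3 - 4x^2 - 4x + 1:
assert isclose(8 * C**3 - 4 * C**2 - 4 * C + 1, 0, abs_tol=1e-12)

area = 7 / (4 * tan(theta))            # area of the unit-side heptagon
height = 7 / (8 * sin(theta))          # rectangle height
width = 2 * cos(theta)                 # rectangle width = chord length L_{2,7}
assert isclose(height * width, area)   # the rectangle has the correct area

print(0.5 - 1 / (4 * cos(theta)))      # |P4 Q3| = |Q3 Q2| = 0.2225...
```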
## 5 Four-piece dissections of an octagon
It appears that five pieces are needed to dissect an octagon into a square (see Fig. 4), whereas four pieces are enough if we only want a rectangle (Fig. 21). The former has cyclic four-fold symmetry, while the latter has the symmetry of a Klein 4-group.
Adam Gsellman [15] has found two other four-piece dissections, shown in Figs. 22-24. The description in Fig. 23 is self-explanatory (the angles are multiples of \(\pi/8\) and the only irrationality needed is \(\sqrt{2}\)). The second dissection (Fig. 24) is very similar.
These examples suggest that non-convex dissections may be needed to get the minimum number of pieces.
## 6 A seven-piece dissection of a 9-gon
Figure 25 shows a seven-piece dissection of a 9-gon into a rectangle, which is two fewer pieces than the best dissection into a square presently known. It was obtained from the strip dissection of a 9-gon shown in Fig. 26, by cutting a rectangle from the strip.
Figure 27: Labels for points in the 9-gon.
Figure 25: A seven-piece dissection of a 9-gon into a rectangle.
As with the heptagon, we will construct this dissection directly from the 9-gon, using only a straightedge and compass.
The vertices of the 9-gon are labeled \(P_{1},\ldots,P_{9}\) (see Fig. 27), and we use the coordinates established in §2. Also \(\theta=\pi/9\), \(C_{1}=\cos\theta\) has minimal polynomial \(8x^{3}-6x-1\), and \(S_{1}=\sin\theta\) has minimal polynomial \(64x^{6}-96x^{4}+36x^{2}-3\). (For this reason, we write our expressions as rational functions of \(\cos\theta\), with at most linear terms in \(\sin\theta\).)
To obtain the dissection we first draw chords \(P_{2}-P_{4}\), \(P_{3}-P_{7}\), and \(P_{6}-P_{9}\). Then \(Q_{2}\) is the intersection of \(P_{3}-P_{7}\) and \(P_{6}-P_{9}\), \(Q_{3}\) is the midpoint of \(Q_{2}-P_{9}\), and \(Q_{7}\) is the midpoint of \(Q_{2}-P_{7}\). Draw a line segment \(Q_{7}-Q_{6}\) of length \(1/2\) parallel to \(P_{7}-P_{8}\), join \(Q_{3}\) to \(Q_{6}\), and locate \(Q_{4}\) at the intersection of \(Q_{3}-Q_{6}\) and a perpendicular drawn from \(P_{8}\) to the midpoint of \(P_{3}-P_{4}\). Finally \(Q_{5}\) is on \(P_{1}-P_{2}\) at distance \(|Q_{4}Q_{6}|\) from \(P_{1}\), and \(Q_{5}-Q_{1}\) is perpendicular to \(P_{5}-P_{6}\).
To assist in the analysis we define a further point \(Q_{8}\) at the intersection of \(Q_{7}-Q_{6}\) (extended) and \(Q_{4}-P_{8}\). Then \(Q_{4}Q_{6}Q_{8}\) is an isosceles triangle and \(Q_{3}Q_{2}Q_{7}Q_{8}\) is a parallelogram.
The seven pieces of the dissection can now be found by making the cuts indicated by the colored regions on the right of Fig. 25. To complete the proof that the dissection is correct, we must verify that the pieces can be rearranged to form the rectangle on the left of Fig. 25. We will not take the space to do that here, but to assist the reader we give two key lengths. The length
\[|Q_{2}Q_{7}|=|Q_{2}P_{7}|/2=|Q_{4}Q_{6}|=|Q_{4}Q_{8}|=|Q_{8}Q_{3}|=|P_{1}Q_{5} |=|Q_{2}Q_{3}|-\frac{1}{2}=\frac{C_{1}}{2C_{1}+1}=0.3263\ldots \tag{6.1}\]
plays a central role, as does \(|P_{8}Q_{4}|=3/(8S_{1}(C_{1}+1))=0.5652\). The rectangle has width \(2C_{1}=1.8793\ldots\) and height \(9/(8S_{1})=3.2892\ldots\).
Some of the equalities in (6.1) are by no means obvious. They do not follow directly from the geometry, but depend on the fact that \(\cos\theta\) satisfies a cubic equation. As discussed in Remark 1.2, we can find exact expressions for the coordinates of the points. We _could_ do this by solving the appropriate equations, using a computer algebra system such as Maple, but we have found it a lot easier to use another computer algebra system, WolframAlpha, and ask it to find the coordinates for us.
For example, the first step in finding the present dissection is to find \(Q_{2}\). Using Maple, and working to 20 decimal places, we find that
\[Q_{2}\ =\ \left(0.65270364466613930216,-0.50771330594287249271\right).\]
We now ask WolframAlpha to express these two numbers in terms of \(\cos\theta\), \(1/\cos\theta\), \(\sin\theta\), and \(1/\sin\theta\). The result (setting \(C_{1}=\cos\theta\), \(S_{1}=\sin\theta\)) is
\[Q_{2}\ =\ \left(\frac{2C_{1}}{2C_{1}+1},\ 2S_{1}-\frac{1}{S_{1}}+\sqrt{3}\right).\]
We do this for all the points. Another example is
\[Q_{8} =\ \left(1.1028685319524432095,0.19446547835755153996\right)\] \[=\ \left(\frac{C_{1}}{2}+\frac{1}{8C_{1}}+\frac{1}{2},\ \frac{1}{8S_{1}}-\frac{S_{1}}{2} \right). \tag{6.2}\]
The verification of the equalities in (6.1) is then a routine calculation (using Maple's simplify(..., trig); command).
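The same workflow can be reproduced with open-source tools; the sympy sketch below (ours, standing in for Maple and WolframAlpha) recovers the minimal polynomials quoted above and confirms that the closed forms reproduce the 20-digit numerical values.

```python
# Reproducing the computer-algebra computations with sympy (our sketch).
import sympy as sp

theta = sp.pi / 9
C1, S1 = sp.cos(theta), sp.sin(theta)

print(sp.minimal_polynomial(C1))    # 8*x**3 - 6*x - 1
print(sp.minimal_polynomial(S1))    # 64*x**6 - 96*x**4 + 36*x**2 - 3

Q2 = (2 * C1 / (2 * C1 + 1), 2 * S1 - 1 / S1 + sp.sqrt(3))
print(sp.N(Q2[0], 20), sp.N(Q2[1], 20))
# 0.65270364466613930216 -0.50771330594287249271

Q8 = (C1 / 2 + 1 / (8 * C1) + sp.Rational(1, 2), 1 / (8 * S1) - S1 / 2)
print(sp.N(Q8[0], 20), sp.N(Q8[1], 20))
# 1.1028685319524432095 0.19446547835755153996

print(sp.N(C1 / (2 * C1 + 1), 10))  # 0.3263..., the length in (6.1)
```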
## 7 A four-piece dissection of a 10-gon
In the final appendix ("Recent Progress") to his 1964 book [18], Lindgren gave a new strip based on the 10-gon (shown in Fig. 29 below), and used it to obtain several new dissections, including an eight-piece dissection of a 10-gon to a square, and a seven-piece dissection to a golden rectangle. As Frederickson reports in [9, Ch. 11], G.A.T. was then able to show that in the dissection to a square, two of Lindgren's pieces could be merged, leading to a seven-piece dissection to a square, still the record. This dissection is also described in the _Variable Strips_ section of [26]. If we draw vertical lines across Lindgren's strip (without changing it), we obtain a five-piece dissection of a 10-gon to a (non-golden) rectangle, as shown in Fig. 30. There is a small range of possibilities for the positions of these vertical lines. In Fig. 30 they have been placed in the middle of their range, in order to obtain the most symmetric dissection.
Remarkably, if the goal is only to obtain a rectangle, it is possible to modify Lindgren's strip (Fig. 30) to get a four-piece dissection. The modified strip is shown in Fig. 31, and the dissection itself in Fig. 28. To go from Fig. 30 to Fig. 31 we merge the two right-most pieces of the rectangle, forming a church-shaped piece, and compensate by dividing the large piece into two by a zig-zag cut.
This is one of the most complicated dissections in the article, and we give a precise straightedge and compass construction starting from the rectangle in Fig. 31.
We first construct an intermediate rectangle with five pieces, and then shift it slightly to save a piece. The angles involved are \(\theta=\pi/10=18^{\circ}\), \(2\theta\), and \(\phi=4\theta\).
The intermediate rectangle has vertices labeled \(2,14,19,5\) in Fig. 32; the final rectangle has vertices \(1,13,18,4\). We place the origin of coordinates for the rectangle near the bottom left corner, at the point \(14=(0,0)\). The 10-gon has area \(\frac{5}{2\tan\theta}\) (see (2.2)), and we take the
Figure 28: A four-piece dissection of a 10-gon into a rectangle.
Figure 29: Lindgren’s 1964 10-gon strip [18].
width of the strip to be \(w=\sqrt{5}\,\cos\theta\). The other dimension of the rectangle is its height \(h=2\sqrt{5}\,\cos 2\theta\). (After a series of relabelings, the rectangle as drawn in Fig. 31 has ended up with height \(w\) and width \(h\). We hope the reader will forgive us!)
The coordinates of the points \(2,14,19,5\) are therefore \((0,w)\), \((0,0)\), \((h,0)\), and \((h,w)\). We draw a network of lines as follows. Starting at point 2, we draw line segments of length 1 from 2 to 6 to 7 to 3, and from 6 to 8, 3 to 21, and 7 to 12 to 17 to 15 to 11 (the angles are indicated in the figure). We then complete the line 3 to 15. We also draw line segments of length 1/2 from 8 to 9 to 14. For the two final lines we join 9 to 21 and draw the perpendicular from 11 to 15. We then draw the parallel line of length 1 from 15 to 10.

The coordinates have been chosen so that several coincidences occur.8 The points 8, 11, 12, and 20 (in the adjacent rectangle in the strip) are collinear. Also the distance from 9 to 10 turns out to be equal to \(w\). The angle \(\angle 8,9,21\) is a right angle. The central point 21 has coordinates \((2+\sin\theta,2\sin 2\theta)\). The distance from 10 to 11 is \(x:=\frac{3-\sqrt{5}}{4}\), and we get the final rectangle by shifting the intermediate rectangle to the left by that amount.
Footnote 8: Similar to those in (6.1), but less dramatic, since \(\sin\pi/10\) is only a quadratic irrationality.
We get the four pieces in the dissection as follows. The quadrilateral piece (the "dish") is obtained by cutting along the path \(2,6,7,3,2\). For the hexagon (the "church"), cut along \(3,7,12,17,18,4,3\). For the first 9-gon (the "hammer"), cut along \(6,8,9,10,11,15,17,12,7,6\), and for the second 9-gon (the "triangle"), cut along \(2,1,13,15,11,10,9,8,6,2\). By moving the edge of the rectangle to the left so that it no longer passes through the point 8 we have reduced the number of pieces from five to four.
## 8 Dissecting an 11-gon to a square and to a rectangle
### A ten-piece dissection of an 11-gon to a square
Before the appearance of [26] there had been little work on dissections of the 11-gon: this polygon is not mentioned in any of [9, 18, 19]. G.A.T.'s ten-piece dissection of an 11-gon into a square was given in [26], and was described by Frederickson in [10, 12]. It is shown here in Fig. 33. It can be obtained by taking the 11-gon and constructing the two superpositions of strips shown in Figs. 34 and 35. When the quadrilateral outlined in red in Fig. 35 is combined with the hexagon outlined in red in Fig. 34, the result is the dissected square on the left of Fig. 33.
### Our first nine-piece dissection of an 11-gon to a rectangle
A piece can be saved if our goal is only to dissect the 11-gon into a rectangle. We have found several examples of nine-piece 11-gon to rectangle dissections, two of which are described here and in the next section. Our first construction is similar to the 11-gon to square dissection of §8.1. The proof of correctness involves an interesting interplay between the two superpositions.
Figure 33: A ten-piece dissection of an 11-gon into a square [26].
A second reason for including this proof is that similar arguments can be used to give a proof of the 11-gon to square dissection mentioned above.
We start from the 11-gon and construct two superpositions of strips (see Figs. 37 and 38), and when the quadrilateral outlined in red in Fig. 37 is combined with the hexagon outlined in red in Fig. 38, the result is the dissected rectangle on the left of Fig. 39.
The five pieces of Fig. 36 are obtained by first cutting the 11-gon along a chord of length \(L_{11,2}=2\cos\theta\), where \(\theta=\pi/11\). The long cut through the middle of Fig. 36 begins at a point three-fifths of the way along an edge. This parameter could be changed, but \(3/5\) seems to be a good choice. This cut is parallel to the top edge of the polygon, and has length \(L_{11,4}=2(\cos\theta+\cos 3\theta)\).
The heptagonal tadpole-shaped structure at the top of Fig. 36 was the repeating element in Fig. 35 and will be used again in the second superposition (Fig. 38). It can be formed from three pieces cut from the 11-gon along chords, as follows:
The edges in this "tadpole" have length 1, except for \(|GC|\) and \(|AD|\), which have lengths \(L_{11,2}\) and \(L_{11,3}\), respectively. The angle \(\angle\,CGB=\theta\), and the angles \(\angle\,GBA=\angle\,BAF\) are \(4\theta\). The area of the tadpole is \(\frac{5}{2}\sin 2\theta+\sin 4\theta\). The dissection of Fig. 36 and the remaining steps in the formation of Fig. 39 can all be carried out with straightedge and compass (assuming, of course, that we are given an 11-gon to start with).
The wide, almost horizontal, strip in the first superposition (Fig. 37) is actually a double strip. Copies of the large heptagonal piece in Fig. 36 are placed along both the bottom edge of the strip (for example, \(35,30,29,28,26,25,32\)) and the top edge \((5,9,14,15,16,12,8)\). The interior of the strip is filled with copies of two other pieces from Fig. 36. From Fig. 36 we see that \(|32,35|=|18,22|=L_{11,4}\).
We then superimpose a vertical strip, bounded by the lines \(26,13\) and \(24,16\). The angle \(\angle\,26,33,36\) between the two strips is \(5\theta\), and \(\angle\,36,33,37=\pi/2-5\theta=\theta/2=\angle\,18,20,19\). Along the vertical line \(26,13\), the segment \(33,26\) is the side of a quadrilateral \(33,26,25,32\) with internal angles \(2\theta\), \(8\theta\), \(6\theta\), and \(6\theta\), so \(|33,26|=4\cos^{2}\theta-2\cos\theta-\frac{3}{5}\). From Fig. 36, \(|26,18|=L_{11,2}-(1-3/5)\), and we know \(|18,13|=3/5\). Adding up the lengths of the segments, we get \(|33,4|=8\cos^{2}\theta-2\cos\theta-1\).
Figure 36: A 5-piece dissection of an 11-gon used to form the superposition of Fig. 37.
The vertical strip has width \(|19,21|=L_{11,4}\cos(\theta/2)=d_{1}\) (say) \(=3.1958\ldots\). This is the width of the final rectangle on the left of Fig. 39. The height \(|1,19|\) of this rectangle is then determined by the area of the 11-gon. This height is \(11\cot\theta/(4d_{1})=d_{2}\) (say) \(=2.9305\ldots\). We easily determine the positions of the points \(6,4^{\prime},7^{\prime},20,19\), and \(21\). The lengths \(|2,10|\) and \(|3,11|\) will be obtained from the second superposition.9 We now have full information about the coordinates in Fig. 37.
Footnote 9: We need the second superposition to find the angle \(\gamma\).
The second superposition (Fig. 38) contains a diagonal double strip, formed from copies of the "tadpole", with a horizontal strip superimposed on it. We start by constructing the strip of tadpoles, and construct the horizontal strip from it. The points are labeled as in Fig. 38. The diagonal strip has period \(2\cos\theta\) along the strip (e.g. \([21,22]\)), and width \(3\sin\theta+2\sin 3\theta\) (as can be easily found from the properties of the tadpole) in the perpendicular direction. The product \(A_{2}\) (say) of these two quantities is the area of a fundamental region for the double strip.
The points 15, 16, and 17 are the midpoints of sides of the tadpoles. With 15 as center, we draw a circle of radius \(d_{1}/2\) (taken from the first superposition), which meets the diagonal strip at points 1 and 24. The line \(3,26\) is constructed similarly. The trapezoid \(1,3,28,27\) will replace the region \(2,10,11,3\) in the first superposition. Let \(\gamma\) denote the angle \(\angle 15,1,3\). The area of this trapezoid can now be found in several ways: it is half of \(A_{2}\), that is, \(\cos\theta(3\sin\theta+2\sin 3\theta)\); it is also \(d_{1}\cos\theta\sin\gamma\); and it is also (area of 11-gon) \(-\,d_{1}|6,20|\). Equating the first two expressions, we
Figure 37: First superposition used to produce the 9-piece rectangle dissection of Fig. 39.
find that
\[\sin\gamma\ =\ \frac{3\sin\theta+2\sin 3\theta}{d_{1}}\ =\ 0.7374\ldots\,.\]
We can now give the two sides of the trapezoid that were needed for the first superposition: they are \(|1,3|=2\cos\theta\) and \(|27,28|=|1,3|\sin\gamma\).
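These quantities are easy to check numerically; the Python sketch below (ours) recomputes \(d_{1}\), \(d_{2}\) and \(\sin\gamma\) directly from the definitions above.

```python
# Numerical checks for the 11-gon superpositions (our sketch).
from math import pi, sin, cos, tan, sqrt, isclose

theta = pi / 11
L_11_4 = 2 * (cos(theta) + cos(3 * theta))
d1 = L_11_4 * cos(theta / 2)              # width of the final rectangle
d2 = 11 / (4 * tan(theta)) / d1           # height, fixed by the 11-gon area
assert isclose(d1, 3.1958, abs_tol=1e-4)
assert isclose(d2, 2.9305, abs_tol=1e-4)

sin_gamma = (3 * sin(theta) + 2 * sin(3 * theta)) / d1
assert isclose(sin_gamma, 0.7374, abs_tol=1e-4)

cos_gamma = sqrt(1 - sin_gamma ** 2)
print((cos(theta) + 2 * cos(3 * theta) - d1 * cos_gamma) / 2)
# 0.05532..., the short hexagon side quoted below
```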
We now also have full information about the coordinates in Fig. 38. For example, by following around the boundary of the region \(0,5,8,12,15,1\), we find that the very short side
Figure 38: Second superposition used to produce the 9-piece rectangle dissection of Fig. 39.
Figure 39: Nine-piece dissection of an 11-gon into a rectangle, obtained by combining superpositions in Figs. 37 and 38.
\(0,1\) of that hexagon has length \(\frac{1}{2}(\cos\theta+2\cos 3\theta-d_{1}\cos\gamma)=\ 0.05532\ldots\). This is the (dark blue) hexagon at the top of the dissected 11-gon in Fig. 39.
### A second nine-piece dissection of an 11-gon to a rectangle
Our second nine-piece 11-gon to rectangle dissection is slightly simpler, as it only uses a single superposition. We describe the construction, and give some of the distances and angles, but leave the detailed verification of its correctness to the reader. The starting point is the simple four-piece dissection of the 11-gon shown in Fig. 40. We draw chords from \(P_{3}\) to \(P_{8}\) and from \(P_{5}\) to \(P_{10}\), intersecting at \(Q\), say. \(Q\) is located at a distance \(\sin(2\theta)/(\cos(\theta)+\cos(2\theta))=0.3002\ldots\) below the center of the 11-gon. The segments \(P_{3}Q\) and \(P_{10}Q\) have length \(2\cos\theta\). We replace \(P_{3}Q\) by a pair of line segments of length 1, \(P_{3}Q_{1}\) and \(Q_{1}Q\), where \(\angle Q_{1}P_{3}Q=\angle Q_{1}QP_{3}=\theta\), with a similar construction for \(Q_{2}\).
Figure 41: Tessellation of plane built from pieces from Fig. 40.
Figure 40: Four-piece dissection of an 11-gon used for tessellation in Fig. 41.
The angles in Fig. 40 are remarkably nice: they are all multiples of \(\theta\). For example, \(\angle\,Q_{1}QP_{5}=5\theta\), \(\angle\,P_{5}QP_{8}=7\theta\), \(\angle\,P_{8}QQ_{2}=3\theta\), and so on. \(Q\) seems to be an auspicious interior point in the 11-gon.
We now use these four pieces to build a tessellation of the plane, as shown in Fig. 41, where some of the points have been labeled. We then cut out the rectangle outlined in red from the tessellation. The vertical edges of this rectangle pass through the points \(A_{6}\) and \(A_{7}\), so the width of the rectangle (see Fig. 40) is \(2\cos\theta\). The height is therefore \(11/(8\sin\theta)\). The rectangle is bounded at the top by a horizontal line through \(A_{11}\), the midpoint of \(A_{10},A_{12}\), and at the bottom by a line through the midpoint \(A_{2}\) of \(A_{1}-A_{3}\), and the midpoint \(A_{4}\) of \(A_{3}-A_{5}\). This is the rectangle on the left of Fig. 42. Finally, the nine pieces in the rectangle can be rearranged to form an 11-gon, as shown on the right in Fig. 42.
## 9 Five-piece dissections of a 12-gon
We start with a 12-gon with edge-length 1 (see Fig. 43). Draw chords from \(P_{1}\) to \(P_{4}\) and \(P_{8}\). Draw a perpendicular from \(P_{4}\) to \(P_{1}-P_{8}\), meeting it at \(R\), and draw an equilateral triangle \(P_{1}QP_{12}\) that touches \(P_{1}-P_{8}\).
The angle \(\angle\,P_{1}P_{4}R\) is \(\pi/6=30^{\circ}\). The lengths of the line segments are as follows: \(|P_{1}P_{4}|=|QP_{8}|=L_{3,12}=1+\sqrt{3}\), \(|P_{4}R|=|RP_{8}|=(3+\sqrt{3})/2\), and \(|P_{1}R|=(1+\sqrt{3})/2\).
After the pieces are rearranged (Fig. 44), the resulting rectangle has width \(w=3+\sqrt{3}\) and height \(h=(3+\sqrt{3})/2\). We then easily check that the product \(wh\) is equal to the area \(3\cot 15^{\circ}\).
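The check can also be carried out exactly; the brief sympy computation below (ours) verifies the chord length \(L_{3,12}\) and the area identity.

```python
# Exact verification of the 12-gon rectangle (our sketch, using sympy).
import sympy as sp

# chord spanning three edges: L_{3,12} = sin(3*pi/12)/sin(pi/12) = 1 + sqrt(3)
L = sp.sin(3 * sp.pi / 12) / sp.sin(sp.pi / 12)
assert sp.simplify(L - (1 + sp.sqrt(3))) == 0

w, h = 3 + sp.sqrt(3), (3 + sp.sqrt(3)) / 2
# area of the unit-side 12-gon is 3*cot(pi/12)
assert sp.simplify(w * h - 3 * sp.cot(sp.pi / 12)) == 0
```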
A second dissection of the 12-gon (although with a non-convex piece) is shown in Fig. 45.
Figure 42: A second nine-piece dissection of an 11-gon into a rectangle.
## 10 A seven-piece dissection of a 14-gon and a nine-piece dissection of a 16-gon
It is known that \(s(14)\leq 9\) and \(s(16)\leq 10\)[26]. Figure 46 shows a tessellation of the plane based on a 14-gon, and a rectangle superimposed on it which leads to the seven-piece dissection shown in Fig. 47. Figures 48 and 49 play similar roles for the 16-gon.
Further examples with larger numbers of sides may be found in the _Rectangle Dissections_ section of [26], and in a projected sequel to the present work.
Figure 47: A seven-piece dissection of a 14-gon into a rectangle.
Figure 46: A tessellation based on a 14-gon, with the rectangle that leads to the dissection in Fig. 47.
## 11 Selected dissections of star polygons to rectangles
We give four examples of especially elegant dissections of star polygons to rectangles. These are taken from [26], where many further examples can be found.
## 12 Three-piece dissections of a Greek cross
Most authors who study dissections of polygons include the Greek cross, so we briefly discuss it here. The classical four-piece dissection of a Greek cross into a square can be seen for example
Figure 49: A nine-piece dissection of a 16-gon into a rectangle.
Figure 48: A tessellation based on a 16-gon, with the rectangle that leads to the dissection in Fig. 49.
in [18, Fig. 10.1]. The dissected square has 4-fold rotational symmetry.
Three pieces seem to be the minimal number needed to form a rectangle from a Greek cross. The simplest three-piece construction cuts off two opposite arms from the cross and places them at the ends of the other two arms, forming a \(1\times 5\) rectangle. Eppstein [8] gives a three-piece dissection into non-convex pieces, shown in Fig. 54, and the database [26] gives another (Fig. 55), similar in spirit to the four-piece dissection into a square.
## 13 Curved cuts are sometimes essential
We know of no theorem which will guarantee that polygonal cuts are sufficient to achieve \(s(n)\) or \(r(n)\). The following are three examples of other situations where it seems clear that
minimal dissections can _not_ be achieved using only polygonal cuts.
1. Take a square with a smaller square attached to it, and cut out three small square holes at random positions in the interior. Call this figure \(A\). For figure \(B\), make a circular cut enclosing the three holes, and rotate the interior of the circle by a small random angle. This gives a two-piece dissection of \(A\) to \(B\) which surely cannot be accomplished with a single polygonal cut. This example was suggested by Richard C. Schroeppel and Andy Latto (personal communication).
2. Figure 57 shows an example due to David desJardins (personal communication) of a three-piece dissection between two simply connected polygonal regions that appears to require a curved piece.
3. Greg Frederickson [9, Fig. 13.6] gives an example of a 6-piece dissection of a hexagon into a hexagram which requires that two of the pieces be turned over. This can be modified to avoid turning the pieces over at the cost of adding an extra piece. But if curved cuts are used, this can be accomplished without adding the extra piece, as shown in Fig. 58. We conjecture that a 6-piece hexagon to hexagram dissection that avoids turning pieces over cannot be constructed using only polygonal cuts.
4. In this regard, it is worth pointing out that a rotation by any desired angle that uses a single circular cut (see Fig. 59) can be accomplished by two square cuts and turning a piece over (Fig. 60).
Figure 57: A three-piece dissection that appears to require a curved piece (David desJardins).
Figure 56: A two-piece dissection that can be accomplished by a single circular cut, but surely not by a polygonal cut. The polygon contains three small square holes (Richard C. Schroeppel and Andy Latto).
## 14 Acknowledgments
Thanks to Adam Gsellman for telling us about his polygon to rectangle dissections, which was the seed that led to the present paper. Thanks also to David desJardins, Andy Latto, Richard C. Schroeppel, and Allan C. Wechsler for helpful comments. The writing of this paper depended heavily on PostScript, LaTeX, Tikz, Maple, WolframAlpha, and email.
|
2302.12803 | PiPar: Pipeline Parallelism for Collaborative Machine Learning | Collaborative machine learning (CML) techniques, such as federated learning,
have been proposed to train deep learning models across multiple mobile devices
and a server. CML techniques are privacy-preserving as a local model that is
trained on each device instead of the raw data from the device is shared with
the server. However, CML training is inefficient due to low resource
utilization. We identify idling resources on the server and devices due to
sequential computation and communication as the principal cause of low resource
utilization. A novel framework PiPar that leverages pipeline parallelism for
CML techniques is developed to substantially improve resource utilization. A
new training pipeline is designed to parallelize the computations on different
hardware resources and communication on different bandwidth resources, thereby
accelerating the training process in CML. A low overhead automated parameter
selection method is proposed to optimize the pipeline, maximizing the
utilization of available resources. The experimental results confirm the
validity of the underlying approach of PiPar and highlight that when compared
to federated learning: (i) the idle time of the server can be reduced by up to
64.1x, and (ii) the overall training time can be accelerated by up to 34.6x
under varying network conditions for a collection of six small and large
popular deep neural networks and four datasets without sacrificing accuracy. It
is also experimentally demonstrated that PiPar achieves performance benefits
when incorporating differential privacy methods and operating in environments
with heterogeneous devices and changing bandwidths. | Zihan Zhang, Philip Rodgers, Peter Kilpatrick, Ivor Spence, Blesson Varghese | 2022-12-01T20:51:47Z | http://arxiv.org/abs/2302.12803v2 | # PipeLearn: Pipeline Parallelism for Collaborative Machine Learning
###### Abstract
Collaborative machine learning (CML) techniques, such as federated learning, were proposed to collaboratively train deep learning models using multiple end-user devices and a server. CML techniques preserve the privacy of end-users as it does not require user data to be transferred to the server. Instead, local models are trained and shared with the server. However, the low resource utilisation of CML techniques makes the training process inefficient, thereby limiting the use of CML in the real world. Idling resources both on the server and devices due to sequential computation and communication is the principal cause of low resource utilisation. A novel framework PipeLearn that leverages pipeline parallelism for CML techniques is developed to improve resource utilisation substantially. A new training pipeline is designed to parallelise the computations on different hardware resources and communication on different bandwidth resources, thereby accelerating the training process in CML. The pipeline is further optimised to ensure maximum utilisation of available resources. The experimental results confirm the validity of the underlying approach of PipeLearn and highlight that when compared to federated learning: (i) the idle time of the server can be reduced by 2.2x \(-\) 28.5x, (ii) the network throughput can be increased by 56.6x \(-\) 321.3x, and (iii) the overall training time can be accelerated by 1.5x \(-\) 21.6x under varying network conditions for two popular convolutional models without sacrificing accuracy. PipeLearn is available for public download from [https://github.com/blessonvar/PipeLearn](https://github.com/blessonvar/PipeLearn).
Collaborative machine learning, resource utilisation, pipeline parallelism, edge computing.
## I Introduction
Deep learning has found application across a range of fields, including computer vision [1, 2], natural language processing [3, 4] and speech recognition [5, 6]. However, there are important data privacy and regulatory concerns in sending data generated on devices to geographically distant cloud servers for training deep learning models. A new class of machine learning techniques has therefore been developed under the umbrella of collaborative machine learning (CML) to mitigate these concerns [7]. CML does not require data to be sent to a server for training deep learning models. Rather the server shares models with devices that are then locally trained on the device.
There are three notable CML techniques, namely federated learning (FL) [8, 9, 10, 11], split learning (SL) [12, 13] and split federated learning (SFL) [14, 15]. However, these techniques are performance inefficient since they underutilise resources (both compute and network), which results in training times that are impractical for real-world use. The cause of resource under-utilisation in the three CML techniques is considered next.
In FL, each device trains a local model of a deep neural network (DNN) using the data it generates. Local models are uploaded to the server and aggregated as a global model at a pre-defined frequency. However, the workload of the devices and the server is usually imbalanced [16, 17, 15]. This is because the server resources are only employed when the local models are aggregated and remain idle for the remaining time.
In SL, a DNN is usually decomposed into two parts, such that the initial layers of the DNN are deployed on a device and the remaining layers on the server. A device trains the partial model and sends the intermediate outputs to the server where the rest of the model is trained. The training of the model on devices occurs in a round-robin fashion. Hence, only one device or the server will utilise its resources while the other devices or server are idle [14, 7].
In SFL, which is a hybrid of FL and SL, the DNN is split across devices and the server; the devices, however, unlike SL, train the local models concurrently. Nevertheless, the server is required to wait while the devices train the model and transfer data, and vice versa.
Therefore, the following two challenges need to be addressed for improving the resource utilisation of CML techniques:
_Sequential execution on devices and server causes resource under-utilisation_: Since device-side and server-side computations in CML techniques occur in sequence, there are long idle times on both the devices and server.
_Communication between devices and server results in resource under-utilisation_: Data transfer in CML techniques is time-consuming [18, 19, 20], and while it takes place no training occurs on either the server or the devices. This increases the overall training time.
Although low resource utilisation of CML techniques makes training inefficient, there is currently limited research that is directed at addressing this problem. The motivation of this paper is to address the above challenges by developing a framework, PipeLearn, that leverages _pipeline parallelism_ to improve the resource utilisation of devices and servers in CML techniques when training DNNs, thereby increasing training efficiency. The framework distributes the computation of DNN layers on the server and devices, balances the workload on both the server and devices, and reorders the computation for different inputs in the training process. PipeLearn overlaps the device and server-side computation and communication
between the devices and server, thereby improving resource utilisation, which accelerates CML training.
PipeLearn redesigns the training process of DNNs. Traditionally, training a DNN involves the forward propagation pass (or forward pass) and backward propagation pass (or backward pass). In the forward pass, one batch of input data (also known as a mini-batch) is used as input for the first input layer and the output of each layer is passed on to subsequent layers to compute the loss function. In the backward pass, the loss function is passed from the last DNN layer to the first layer to compute the gradients of the DNN model parameters.
PipeLearn divides the DNN into two parts and deploys them on the server and devices as in SFL. Then the forward and backward passes are reordered for multiple mini-batches. Each device executes the forward pass for multiple mini-batches in sequence. The intermediate result of each forward pass (smashed data or activations) is transmitted to the server, which runs the forward and backward passes for the remaining layers and sends the gradients of the activations back to the device. The device then sequentially performs the backward passes for the mini-batches. The devices operate in parallel, and the local models are aggregated at a set frequency. Since many forward passes occur sequentially on the device, the communication for each forward pass overlaps the computation of the following forward passes. Also, in PipeLearn, the server and device computations occur simultaneously for different mini-batches. Thus, PipeLearn reduces the idle time of devices and servers by overlapping server and device-side computations and server-device communication.
This paper makes the following contributions:
1. The development of a novel framework PipeLearn that accelerates collaborative training of DNNs by improving resource utilisation. To the best of our knowledge, PipeLearn is the first work to reduce the idling of resources in CML by reordering training tasks across the server and devices.
2. Pipeline parallelism is leveraged to the benefit of CML for the first time to overlap device and server computations and device-server communication, thereby reducing resource idle time. Experiments in a lab-based testbed demonstrate that, compared to FL, the training process can be accelerated by 1.5x - 21.6x, and the utilisation of idle hardware resources and bandwidth resources is increased by up to 28.5x and 321.3x, respectively.
3. Development of an optimised strategy for partitioning and scheduling CML workloads across devices and servers to maximise overall training efficiency. Experimental studies demonstrate that our approach can find optimal or near-optimal strategies.
The remainder of this paper is organised as follows. Section II provides the background and related works of this research. The PipeLearn framework and the two approaches that underpin the framework are detailed in Section III. Experiments in Section IV demonstrate the effectiveness of the PipeLearn framework under varying network conditions. Section V concludes this article.
## II Background and Related Work
Section II-A provides the background of collaborative machine learning (CML), and Section II-B introduces the related research on improving the training efficiency in CML.
### _Background_
The training process of the three popular CML techniques, namely federated learning (FL), split learning (SL) and split federated learning (SFL), and their limitation due to resource under-utilisation are presented.
#### II-A1 Federated Learning
FL [8, 9, 10, 11] uses a federation of devices coordinated by a central server to train deep learning models collaboratively.
Assume \(K\) devices are involved in the training process as shown in Figure 1(a). In Step 1, the devices train the complete model \(M^{k}\) locally, where \(k=1,2,...,K\). In each iteration, the local model is fed a mini-batch of data, completes the forward and backward passes to compute gradients of all model parameters, and then updates the parameters with the gradients. A training epoch involves training over the entire dataset, which consists of multiple iterations. In Step 2, after a certain number of local epochs, the devices send the local models \(M^{k}\) to the server, where \(k=1,2,...,K\). In Step 3, the server aggregates the local models to obtain a global model \(M\), using the FedAvg algorithm [11]: \(M=\sum_{k}\frac{|\mathcal{D}^{k}|}{\sum_{k}|\mathcal{D}^{k}|}M^{k}\), where \(\mathcal{D}^{k}\) is the local dataset on device \(k\) and \(|\cdot|\) is the function to obtain the set size. In Step 4, the global model is then downloaded to the devices and continues the next round of training until the model converges.
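For concreteness, the aggregation in Step 3 is just a weighted average of the model parameters; the sketch below is our illustration (PyTorch-style state dicts assumed), not code from the paper.

```python
# A minimal FedAvg aggregation sketch: local models M^k are averaged,
# weighted by the sizes of the local datasets D^k.
import torch

def fedavg(local_states, dataset_sizes):
    """local_states: list of model state_dicts; dataset_sizes: list of |D^k|."""
    total = float(sum(dataset_sizes))
    return {
        name: sum((n / total) * state[name].float()
                  for state, n in zip(local_states, dataset_sizes))
        for name in local_states[0]
    }
```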
Typically, local model training on devices (Step 1) takes most of the time, while the resources with better compute performance on the server are idle. Therefore, PipeLearn utilises the idle resources on the server during training.
#### II-A2 Split Learning
SL [12, 13] is another privacy-preserving CML method. Since a DNN consists of consecutive layers, SL splits the complete DNN \(M\) into two parts at the granularity of layers and deploys them on the server (\(M^{s}\)) and the devices (\(M^{c_{k}}\), where \(k=1,2,...,K\)).
As shown in Figure 1(b), the devices train the initial layers of the DNN and the server trains the remaining layers, and the devices work in a round-robin fashion. In Step 1, the first device runs the forward pass of \(M^{c_{1}}\) on its local data, and in Step 2, the intermediate results (also known as activations) are sent to the server. In Step 3, the server uses the activations to complete the forward pass of \(M^{s}\) to obtain the loss. The loss is then used for the backward pass on the server to compute the gradients of the parameters of \(M^{s}\) and the gradients of the activations. In Step 4, the gradients of the activations are sent back to the device, and in Step 5, the gradients of the parameters of \(M^{c_{1}}\) are computed in the device-side backward pass. Next, the parameters of the server-side model and device-side model are updated by their gradients. In Step 6, after a device trains for a certain number of epochs, the next device gets the latest model from the previous device, and starts training its model in Step 7.
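Steps 1 to 5 for one mini-batch can be summarised in code; the sketch below is our PyTorch-style illustration (not the paper's implementation), with the device-server transfers reduced to a detached copy for clarity.

```python
# One SL training step for the currently active device (a sketch).
import torch

def sl_step(device_model, server_model, x, y, opt_c, opt_s, loss_fn):
    activations = device_model(x)                         # Step 1: device forward
    smashed = activations.detach().requires_grad_(True)   # Step 2: "upload"
    loss = loss_fn(server_model(smashed), y)              # Step 3: server forward
    opt_s.zero_grad()
    loss.backward()                                       # ... and server backward
    opt_s.step()
    opt_c.zero_grad()                                     # Step 4: "download" grads
    activations.backward(smashed.grad)                    # Step 5: device backward
    opt_c.step()
    return loss.item()
```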
Compared to FL, the device-side computation is significantly reduced, because only a few layers are trained on
devices. However, since the devices work in sequence (instead of in parallel as in FL), the overall training efficiency decreases as the number of devices increases.
#### II-A3 Split Federated Learning
Since FL is computationally intensive on devices and SL works inefficiently on the device side, SFL [14] was developed to alleviate both limitations. Similar to SL, SFL also splits the DNN across the devices (\(M^{c_{k}}\), where \(k=1,2,...,K\)) and the server (\(M^{s}\)) and collaboratively trains the DNN. However, in SFL, the devices train in parallel and utilise a 'Main' server for training the server-side model and a 'Fed' server for aggregation.
The training process is shown in Figure 1(c). In Step 1, the forward passes of \(M^{c_{k}}\), where \(k=1,2,...,K\), are executed on the devices in parallel, and in Step 2, the activations are uploaded to the main server. In Step 3, the main server trains \(M^{s}\), and in Step 4, the gradients are sent back to all devices, before they complete the backward pass in Step 5. At a pre-defined frequency, \(M^{c_{k}}\), where \(k=1,2,...,K\), are uploaded to the Fed server in Step 6. In Step 7, the models are aggregated into the global model \(M^{c}\). In Step 8, \(M^{c}\) is downloaded to the devices and used for the next round of training.
SFL utilises device parallelism to improve the training efficiency of SL [7]. However, the server still waits while the devices are training the model (Step 1) and transmitting data (Step 2), and vice versa, which leads to resource underutilisation. PipeLearn solves this problem by parallelising the steps performed on the server and the devices.
### _Related work_
Existing research aimed at improving the training efficiency of CML techniques focuses on the following four aspects.
#### II-B1 Accelerating Model Convergence
Model convergence is usually slow when the data from different devices is not independent and identically distributed (non-i.i.d). To mitigate this problem, recent works proposed new optimisation algorithms for federated learning. FedAc [21] reduced the rounds of synchronisation required for model convergence to one-third of FedAvg. FedReg [22] indicated that the slow convergence is mainly caused by a forgetting issue during the local training stage, and alleviated it by regularising local parameters with previous training data. Momentum [23] and weighting methods [24] were used in the gradient descent to accelerate convergence. Devices selection methods based on data similarity [25] and data distribution [26] were adopted in the aggregation stage to improve model convergence.
These methods accelerate the training process of CML techniques by improving model convergence. However, there is no focus on improving the resource utilisation of the server and devices by reducing idle time.
#### II-B2 Reducing the Impact of Stragglers
Stragglers among the devices participating in training increase the overall training time of CML techniques. A device selection method was proposed based on the resource conditions of devices to rule out the stragglers [27]. Some neurons in a straggler's model are masked to accelerate its computation [28]. Local gradients are aggregated hierarchically to accelerate FL for heterogeneous devices [29]. FedAdapt [15] balances the workloads on heterogeneous devices by offloading some layers of the DNN to the server.
These methods alleviate the impact of stragglers but do not address the fundamental challenge of sequential computation and communication between the devices and server that results in low resource utilisation.
#### II-B3 Reducing Communication Overhead
In limited bandwidth environments, communication overhead limits the training efficiency of CML techniques. To reduce the communication traffic in FL, a relay-assisted two-tier network was used [30]. Models and gradients were transmitted simultaneously and aggregated on the relay nodes. Pruning, quantisation and selective updating were used to reduce the model size and thus reduce the computation and communication overhead [31]. The communication involved in the backward pass of SFL is improved by averaging the gradients on the server
Fig. 1: The training process of CML methods, assuming that we have \(K\) devices. The training steps are explained in Section II-A.
side and broadcasting them to the devices instead of unicasting the unique gradients to devices [32].
These methods are effective in reducing the volume of data transferred over the network, thus reducing the communication overhead. However, the network throughput (the volume of data transferred in a given time frame) remains unchanged since the network is not effectively used.
#### II-B4 Improving Resource Utilisation by Parallelisation
Although the above methods improve the training efficiency of CML techniques, they do not surmount the challenge of under-utilisation of resources. Parallelisation techniques have therefore been proposed to improve the utilisation of compute and bandwidth resources. GPipe [33] and PipeDream [34] proposed pipeline parallelism to distribute a deep model to multiple computing nodes and parallelise the computations on different nodes. Both reduced the idle time of computing resources. However, the network topology of computing nodes is sequential rather than centralised as in CML techniques, where all the raw data is fed into one node and flows to the others. Thus, they cannot be used in CML techniques where data resides on every device. In addition, they can only operate in a homogeneous environment, where all compute nodes have similar hardware and assumes the availability of significant computing power. Thus, they are less suitable for use in relatively resource-constrained IoT environments. Overlap-FedAvg [35] was proposed to decouple the computation and communication during training and overlap them to reduce idle resources. However, the use of computing resources located at the server is not fully leveraged.
Given the above limitations, we therefore, propose a novel framework PipeLearn that fully utilises the computing resources on the server and devices and the bandwidth resources between them, thereby significantly improving the training efficiency of CML.
## III PipeLearn
This section develops PipeLearn, a framework that improves the resource utilisation of CML, such as in the context of FL and SFL. PipeLearn accelerates the execution of sequential DNNs for the first time by leveraging pipeline parallelism to improve the overall resource utilisation (of both compute and network) in centralised CML.
The PipeLearn framework is underpinned by two approaches, namely _pipeline construction_ and _pipeline optimisation_. The first approach constructs a training pipeline to balance the overall training workload by (a) reallocating the computations for different layers in DNN on the server and devices, and (b) reordering the forward and backward passes for multiple mini-batches of data and schedules them onto idle resources. Consequently, not only is the resource utilisation improved by using PipeLearn, but also the overall training of the DNN is accelerated. The second approach of PipeLearn enhances the performance of the first approach by automatically selecting the optimal control parameters (such as the point at which the DNN must be split across the device and the server and the number of mini-batches that can be executed concurrently in the pipeline).
### _Motivation_
The following three observations in relation to low resource utilisation in the training process of DNNs in CML motivate the development of PipeLearn.
_The server and devices need to work simultaneously_: The devices and server work in an alternating manner in the current CML methods, which is a limitation that must be addressed for improving resource utilisation. In FL, the server starts to aggregate local models only after all devices have completed training their local models. In SL/SFL, the sequential computation of DNN layers results in the sequential working of the devices and the server. To reduce the resulting idle time on the resources, the dependencies between server-side and device-side computations need to be eliminated. PipeLearn attempts to make the server and the devices work simultaneously by reallocating and reordering training tasks.
_Compute-intensive and I/O-intensive tasks need to be overlapped_: Compute-intensive tasks, such as model training, involve large-scale computation that needs to be performed by computing units (CPU/GPU), while I/O-intensive tasks refer to input and output tasks of disk or network, such as data transmission, which usually do not have a high CPU requirement. A compute-intensive and an I/O-intensive task can be executed in parallel on the same resource without mutual dependencies. However, in the current CML methods, both server-side and device-side computations are paused when communication is in progress, which creates idle time on compute resources. PipeLearn improves this by overlapping compute-intensive and I/O-intensive tasks.
_Workloads on the server side and client side need to be balanced_: Idle time on resources is also caused by unbalanced workloads on the server and devices. In current CML methods, either the server or the clients may have a heavier workload than the other. PipeLearn balances the workloads on the server side and device side by splitting the neural network carefully.
### _Pipeline Construction_
Assume that \(K\) devices and a server train a sequential DNN collaboratively by using data residing on each device. Conventionally, the dataset on each device is divided into multiple mini-batches that are fed to the DNN in sequence. Training on each mini-batch involves a forward pass that computes a loss function and a backward pass that computes the gradients of the model parameters. A training epoch ends after the entire dataset has been fed to the DNN. To solve the problem of low resource utilisation faced by the current CML methods, PipeLearn constructs a training pipeline that reduces the idle time on resources during collaborative training.
Each forward and backward pass of CML methods comprises four tasks: (i) the device-side compute-intensive task, such as model training; (ii) the server-side compute-intensive task, such as model training (only in SL and SFL) and model aggregation; (iii) the device-to-server I/O-intensive task, such as data uploading; (iv) the server-to-device I/O-intensive task, such as data downloading. In current CML methods these four tasks can only be executed in sequence, resulting in idle resources. To solve this problem, a pipeline is developed to balance and
parallelise the above tasks. The pipeline construction approach involves three phases, namely neural network splitting, training stage reordering and multi-device parallelisation.
_Phase 1 - Neural Network Splitting_: The aim of the approach is to overlap the above four tasks to reduce idle time on computing resources on the server and devices as well as idle network resources. Since this approach does not reduce the actual computation and communication time in each task, it needs to balance the time required by the four tasks to avoid straggler tasks from increasing the overall training time. For example, in FL the device-side compute-intensive task is the most time-consuming, while the other three tasks consume relatively less time. In this case, overlapping the four tasks will not significantly reduce the overall training time. Therefore, it is more suitable to split the DNN and divide the training task across the server and the devices (similar to previous works [14, 15]). In addition, since the output of each DNN layer has a variable size, different split points of the DNN will result in different volumes of transmitted data. Thus, changing the splitting point based on the computing resources and bandwidth can also balance the I/O-intensive tasks with compute-intensive tasks. The selection of the best splitting point is presented in Section III-C.
Splitting neural networks does not affect model accuracy, since it does not alter the computations, only the resource on which they are executed. In FL, each device \(k\), where \(k=1,2,...,K\), trains a complete model \(M^{k}\). \(\mathtt{PipeLearn}\) splits \(M^{k}\) into a device-side model \(M^{ck}\) and a server-side model \(M^{sk}\) represented as:
\[M^{k}=M^{sk}\oplus M^{ck} \tag{1}\]
where the binary operator \(\oplus\) stacks the layers of two partitions of a deep learning model as a complete model.
There are \(K\) pairs of \(\{M^{ck},M^{sk}\}\), where \(M^{ck}\) is deployed on device \(k\) while all of \(M^{sk}\) are deployed on the server. This is different from SL and SFL where only one model is deployed on the server side. Assume the complete model \(M^{k}\) contains \(Q\) layers, \(M^{ck}\) contains the initial \(P\) layers and \(M^{sk}\) contains the remaining layers, where \(1\leq P\leq Q\).
Splitting the neural network maintains the consistency of the training process and does not change the model accuracy (refer to Appendix A in Supplementary Material).
_Phase 2 - Training Stage Reordering_: After splitting the neural networks and balancing the four tasks, the idle resources in the training process need to be utilised. This is achieved by reordering the computations for different mini-batches of data.
Figure 2(a) shows the pipeline of one training iteration of a split neural network for one pair of \(\{M^{sk},M^{ck}\}\) (the device index \(k\) is not shown). Any forward pass (\(f\)), backward pass (\(b\)), upload task (\(u\)) and download task (\(d\)) for each mini-batch is called a _training stage_.
The idle time on the device exists between the forward pass \(f^{c}\) and the backward pass \(b^{c}\) of the device-side model. Thus, \(\mathtt{PipeLearn}\) inserts the forward pass of the next few mini-batches into the device-side idle time to fill up the pipeline. As shown in Figure 2(b), in each training iteration, the forward passes for \(N\) mini-batches, \(f_{1}^{c}\) to \(f_{N}^{c}\), are performed on the device in sequence. The activations of each mini-batch are sent to the server (\(u_{1}\) to \(u_{N}\)) once the corresponding forward pass is completed, which utilises idle network resources. Once the activations of any mini-batch arrive, the server performs the forward and backward passes, \((f_{1}^{s},b_{1}^{s})\) to \((f_{N}^{s},b_{N}^{s})\), and sends the gradients of the activations back to the device (\(d_{1}\) to \(d_{N}\)). After completing the forward passes of the mini-batches and receiving the gradients, the device performs the backward passes, \(b_{1}^{c}\) to \(b_{N}^{c}\). Then the model parameters are updated and the training iteration ends. A training epoch ends when the entire dataset has been processed, which involves multiple training iterations.
Figure 2(b) shows that, compared to conventional training (Figure 2(a)), the four tasks overlap to a high degree, making it possible to significantly reduce the idle time of the server and the devices.
To guarantee a similar model accuracy as in classic FL, it must be ensured that the gradients are obtained from the same number of data samples when the model is updated. This requires that the number of data samples involved in each training iteration in \(\mathtt{PipeLearn}\) should be the same as the original batch size in FL. Since \(N\) mini-batches are used in each training iteration, the size of each mini-batch \(B^{\prime}\) is reduced to \(1/N\) of the original batch size \(B\) in FL.
\[B^{\prime}=\lfloor B/N\rfloor \tag{2}\]
Reordering training stages does not impact model accuracy (refer to Appendix B in Supplementary Material).
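A simplified Python/PyTorch rendering of the device side of this reordered iteration is sketched below. It executes the stages sequentially in pipeline order (a real implementation would overlap them with threads or asynchronous I/O); `send` and `recv` are placeholder communication primitives, not part of any specific library.

```python
def device_iteration(device_model, mini_batches, optimizer, send, recv):
    """One PipeLearn training iteration on the device side (sequential sketch)."""
    activations = []
    # f_1 .. f_N: forward passes of the N mini-batches; each upload u_n can
    # start as soon as the corresponding forward pass finishes.
    for x, y in mini_batches:
        a = device_model(x)
        activations.append(a)
        send((a.detach(), y))          # u_n: upload activations and labels
    # b_1 .. b_N: backward passes once the gradients d_1 .. d_N arrive.
    optimizer.zero_grad()
    for a in activations:
        grad_a = recv()                # d_n: gradient of the split activation
        a.backward(grad_a)             # gradients accumulate over the N mini-batches
    optimizer.step()                   # one parameter update per iteration
                                       # (the 1/N factor can be folded into the lr)
```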
_Phase 3 - Multi-Device Parallelisation:_ The workloads of the multiple devices involved in collaborative training need to be coordinated. On the device side, each device \(k\) is responsible for training its model \(M^{ck}\), and \(\mathtt{PipeLearn}\) allows the devices to train in parallel for efficiency. On the server side, the counterpart \(K\) models (\(M^{s_{1}}\) to \(M^{s_{K}}\)) are deployed and trained simultaneously. However, this may result in contention for compute resources.
Fig. 2: Pipelines for one training iteration in conventional training and \(\mathtt{PipeLearn}\) when using a split neural network. "Comp" is an abbreviation for "computation". \(f\), \(b\), \(u\) and \(d\) represent forward pass, backward pass, upload and download, respectively. The superscripts indicate server-side (\(s\)) or client-side (\(c\)) computation or communication.

Figure 3(a) shows the case of a single device, which is the same as Figure 2(b) but omits communication, while Figures 3(b) and 3(c) show the case of multiple devices. Figure 3(b) gives one straightforward solution: training the server-side models sequentially. However, the server-side models that are trained relatively late will delay the backward passes of the corresponding device-side models, for example, \(b_{n}^{c_{2}}\), where \(n=1,2,...,N\), in Figure 3(b).
Alternatively, data parallelism can be employed. The activations from different devices are deemed as different inputs and the server-side models are trained in parallel on these inputs. This is shown in Figure 3(c). It is worth noting that, compared to training a single model, training multiple models at the same time may result in longer training time for each model on a resource-limited server. This approach, nonetheless, mitigates stragglers on devices.
At the end of each training epoch, the device-side models \(M^{c_{k}}\) are uploaded to the server and combined with the server-side models \(M^{s_{k}}\) to constitute the complete models \(M^{k}\) (Equation 1). The complete models \(M^{k}\) of all devices are aggregated into a global model \(M\) using the FedAvg algorithm [11].
\[M=\sum_{k=1}^{K}\frac{|\mathcal{D}^{k}|}{\sum_{k^{\prime}=1}^{K}|\mathcal{D}^{k^{\prime}}|}M^{k} \tag{3}\]
where \(\mathcal{D}^{k}\) is the local dataset on device \(k\) and \(|\cdot|\) is the function to obtain the size of the given dataset. The server-side global model \(M^{s}\) and device-side global model \(M^{c}\) are split from \(M\), using
\[M=M^{s}\oplus M^{c} \tag{4}\]
The devices download \(M^{c}\) to update the local models for the subsequent training epochs, and the server-side models are updated by \(M^{s}\).
It has been shown in the previous phases that the model accuracy of each local model \(M^{k}\) in PipeLearn is not affected. In this phase, the FedAvg algorithm is used in PipeLearn to generate the global model \(M\) by aggregating the \(M^{k}\), where \(k=1,2,...,K\), exactly as in classic FL. Therefore, PipeLearn maintains a model accuracy similar to FL.
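A minimal sketch of the aggregation of Equation 3 in PyTorch follows; averaging `state_dict` entries (including buffers) is an illustrative simplification, not the paper's published implementation.

```python
import copy

def fedavg(models, dataset_sizes):
    """Weighted average of the K complete models M^k (Equation 3)."""
    total = sum(dataset_sizes)
    global_state = copy.deepcopy(models[0].state_dict())
    for key in global_state:
        # Weight each device's parameters by its local dataset size |D^k|.
        global_state[key] = sum(
            (size / total) * m.state_dict()[key].float()
            for m, size in zip(models, dataset_sizes)
        )
    return global_state  # load into a model with model.load_state_dict(...)
```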
_Training Process Overview:_ The entire training process of PipeLearn is shown in Algorithms 1 and 2.
All devices train simultaneously using Algorithm 1. On device \(k\), the device-side model \(M^{c_{k}}\) is initially built given the split point (Line 1). Line 2 to Line 19 show the complete training process until the model converges. In each training epoch (Line 3 to Line 18), the entire dataset is processed. A training epoch consists of multiple training iterations, each processing \(B^{\prime}N^{k}\) data samples. In each training iteration (Line 4 to Line 13), the forward passes of \(N^{k}\) mini-batches are executed in sequence (Line 6), and the activations are sent to the server (Line 7). Their gradients are then received from the server (Line 10), and the backward passes are executed sequentially to compute the gradients of the weights of \(M^{c_{k}}\) (Line 11). At the end of a training iteration, the model is updated based on the gradients (Line 13). After all training iterations are completed, the "stop epoch" signal and \(M^{c_{k}}\) are sent to the server (Line 15 to Line 16). The device then receives a global device-side model \(M^{c_{k}}{}^{\prime}\) from the server (Line 17) and uses it to update the current model (Line 18). When the model converges, the client sends a "stop training" signal to the server, thus completing the training process (Line 20).
Algorithm 2 is executed on the server side. The server first builds \(K\) models \(M^{s_{k}}\), where \(k=1,2,...,K\) (Line 1), and trains the models until a "stop training" signal is received from all devices (Line 2). In each training epoch (Line 3 to Line 24), the \(K\) models are trained simultaneously (Line 3 to Line 19) and aggregated into a global model (Line 20 to Line 23). A training epoch of model \(k\), which involves multiple training iterations, does not end until a "stop epoch" signal is received from device \(k\) (Line 5). During a training iteration (Line 6 to Line 15), the server receives the activations and labels from device \(k\) (Line 7), and uses them to compute the loss function (Line 9 to Line 10). After that, the gradients of the activations and model weights are computed (Line 11 to Line 12). The former are then sent to device \(k\) (Line 13), and the latter are used to update \(M^{s_{k}}\) at the end of the training iteration (Line 15). After receiving the "stop epoch" signal, the server receives the device-side model \(M^{c_{k}}\) from device \(k\) (Line 17) and assembles the complete model \(M^{k}\) (Line 18). The \(K\) models \(M^{k}\), where \(k=1,2,...,K\), are aggregated into a global model \(M\) (Line 20). \(M\) is then split into a server-side model \(M^{s_{k}}{}^{\prime}\) and a device-side model \(M^{c_{k}}{}^{\prime}\) (Line 21). \(M^{c_{k}}{}^{\prime}\) is sent to device \(k\) (Line 22), and \(M^{s_{k}}{}^{\prime}\) is used to update \(M^{s_{k}}\) (Line 23). The training epoch then ends. Training is completed when the "stop training" signal has been received from all devices.
### _Pipeline Optimisation_
To ensure that the pipeline can efficiently utilise the idle resources, we propose an approach that optimises the pipeline. Two important parameters that significantly impact the performance of the pipeline need to be considered:
Fig. 3: PipeLearn using single and multiple devices. "Comp" is an abbreviation for "computation". \(f\), \(b\), \(u\) and \(d\) represent forward pass, backward pass, upload and download, respectively. The superscripts \(s_{k}\) and \(c_{k}\) represent the index of the model \(M^{s_{k}}\) and \(M^{c_{k}}\), \(k=1,2\), respectively.

_a) Split point:_ The split point of a neural network is denoted as \(P\). All layers with indices less than or equal to \(P\) are deployed on the device and the remaining layers are deployed on the server. The number of layers determines the amount of computation on the server/device, and the volume of data output from the split layer determines the communication traffic. Therefore, finding the most suitable value of \(P\) for each device balances the time required for computation on the server and the device as well as the communication between them.
```
/* Run on Client k. */
Input:  local dataset D^k; batch size B'; learning rate eta;
        model split point P^k; number of mini-batches in each iteration N^k
Output: device-side model M^{c_k}

1   Build M^{c_k} based on P^k
2   while model has not converged do                  // start a training epoch
3     for i = 1 to floor(|D^k| / (B' * N^k)) do       // start a training iteration
4       for n = 1 to N^k do
5         Load a mini-batch x_n of size B' from D^k
6         Compute the activation a_n (Equation 3 in the Supplementary Material)
7         Send a_n and the labels y_n to the server
8       end for
9       for n = 1 to N^k do
10        Receive g(a_n) from the server
11        Compute the weight gradients g(M^{c_k} | g(a_n)) (Equation 8 in the Supplementary Material)
12      end for
13      Update M^{c_k} <- M^{c_k} - (eta / N^k) * sum_{n=1..N^k} g(M^{c_k} | g(a_n))
14    end for
15    Send the "stop epoch" signal to the server
16    Send M^{c_k} to the server
17    Receive M^{c_k}' from the server
18    Update M^{c_k} <- M^{c_k}'
19  end while
20  Send the "stop training" signal to the server
21  Return M^{c_k}
```

**Algorithm 1** Device-Side Training in PipeLearn
_b) Parallel batch number:_ The number of mini-batches used to concurrently train the model in each training iteration is called the parallel batch number and is denoted by \(N\). It is important that the computations for the mini-batches fill up the pipeline. Therefore, the number of mini-batches involved in each training iteration must be decided.
The above two parameters, namely the split point and the parallel batch number, significantly influence the efficiency of \(\mathtt{PipeLearn}\), and an approach is developed for obtaining their optimal values. These parameters change for different DNNs, server/device combinations and network conditions. An exhaustive search for the optimal values based on actual model training would be time and resource consuming. Therefore, the approach developed relies on estimating the training time for different parameters.
This approach selects the best pair \(\{N^{k},P^{k}\}\) for each device \(k\) to minimise idle resources in three phases. Firstly, a number of training iterations is profiled to identify the output data size and the training time of each layer of the DNN. Secondly, given a pair \(\{N^{k},P^{k}\}\), the training time of each epoch is estimated using the above information. Thirdly, the candidates for \(\{N^{k},P^{k}\}\) are shortlisted. Since the training time can be estimated for every candidate, the one with the lowest training time is selected. The three phases are explained in detail as follows.
_Phase 1 - Profiling:_ In this phase, the complete model is trained on each device and the server for a pre-defined number of iterations. The following information is empirically collected:
_Time spent in the forward/backward pass of each layer deployed on each device and the server._ Assume that \(\tilde{f}_{q}^{c_{k}}\), \(\tilde{b}_{q}^{c_{k}}\), \(\tilde{f}_{q}^{s}\) and \(\tilde{b}_{q}^{s}\) denote the forward and backward passes of layer \(q\) on device \(k\) and on the server, and \(t(\cdot)\) denotes time. Then, \(t(\tilde{f}_{q}^{c_{k}})\), \(t(\tilde{b}_{q}^{c_{k}})\), \(t(\tilde{f}_{q}^{s})\) and \(t(\tilde{b}_{q}^{s})\) are the times taken for the forward and backward passes on the devices and the server.
_Output data volume of each layer in the forward and backward pass._ \(\tilde{v}_{q}^{f}\) and \(\tilde{v}_{q}^{b}\) denote the output data volume of layer \(q\) in the forward and backward pass, respectively.
_Phase 2 - Training Time Estimation:_ To estimate the time spent in each training epoch of \(\{M^{c_{k}},M^{s_{k}}\}\), given the pairs \(\{N^{k},P^{k}\}\) for each device \(k\), where \(k=1,2,...,K\), the time of each training stage must be estimated first.
Assume that \(f_{n}^{c_{k}}\), \(b_{n}^{c_{k}}\), \(f_{n}^{s_{k}}\) and \(b_{n}^{s_{k}}\) denote the forward and backward passes of \(M^{c_{k}}\) and \(M^{s_{k}}\) for mini-batch \(n\), where \(n=1,2,...,N^{k}\). The time spent in each stage is the sum of the time spent in all relevant layers. Since the size of each mini-batch in \(\mathtt{PipeLearn}\) is reduced to \(1/N^{k}\) of the original batch size, the time required for each layer is reduced to around \(1/N^{k}\) of the profiled value. The time of each training stage is estimated by the following equations:
\[t(f_{n}^{c_{k}})=\sum_{q=1}^{P^{k}}\frac{t(\tilde{f}_{q}^{c_{k}})}{N^{k}} \tag{5}\]
\[t(f_{n}^{s_{k}})=\sum_{q=P^{k}+1}^{Q}\frac{t(\tilde{f}_{q}^{s_{k}})}{N^{k}} \tag{6}\]
\[t(b_{n}^{c_{k}})=\sum_{q=1}^{P^{k}}\frac{t(\tilde{b}_{q}^{c_{k}})}{N^{k}} \tag{7}\]
\[t(b_{n}^{s_{k}})=\sum_{q=P^{k}+1}^{Q}\frac{t(\tilde{b}_{q}^{s_{k}})}{N^{k}} \tag{8}\]
Assume that \(u_{n}^{k}\) and \(d_{n}^{k}\) denote the uploading and downloading between device \(k\) and the server for mini-batch \(n\), where \(n=1,2,...,N^{k}\), and \(w_{u}^{k}\) and \(w_{d}^{k}\) are the uplink and downlink bandwidths. Since the size of the transmitted data is reduced to \(1/N^{k}\):
\[t(u_{n}^{k})=\frac{\tilde{v}_{P^{k}}^{f}}{w_{u}^{k}N^{k}} \tag{9}\]
\[t(d_{n}^{k})=\frac{\tilde{v}_{P^{k}}^{b}}{w_{d}^{k}N^{k}} \tag{10}\]
The time consumption of all training stages is estimated using the above equations; the training time of each epoch can then be estimated using dynamic programming. Within each training iteration, a training stage has previous stages and next stages (except for the first and last stages). Table I shows the previous and next stages of all stages in each training iteration. The first stage is \(f_{1}^{c_{k}}\) and the last stage is \(b_{N}^{c_{k}}\). We use \(T(r)\) to denote the total time from the beginning of the training iteration to the end of stage \(r\), and \(t(r)\) to denote the time spent in stage \(r\). Thus, the overall training time is \(T(b_{N}^{c_{k}})\). Since any stage can start only after all of its previous stages have completed, we have
\[T(b_{N}^{c_{k}})=t(b_{N}^{c_{k}})+\max_{r\inprev(b_{N}^{c_{k}})}T(r) \tag{11}\]
\[T(r)=t(r)+\max_{r^{\prime}\in prev(r)}T(r^{\prime}) \tag{12}\]
\[T(f_{1}^{c_{k}})=t(f_{1}^{c_{k}}) \tag{13}\]
where \(prev()\) is the function that returns all previous stages of the input stage. Since the time \(t(r)\) of every stage has already been estimated above, Equations 11 to 13 can be solved by recursion, and the overall time of one training iteration can then be estimated.
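The recursion of Equations 11 to 13 amounts to a longest-path computation over the stage-dependency graph of Table I, which can be sketched with memoisation as follows; the stage names and the toy dependency table are purely illustrative.

```python
from functools import lru_cache

def estimate_iteration_time(stage_time, prev_stages, last_stage):
    """Solve Equations 11-13 by memoised recursion over the stage graph.

    `stage_time` maps a stage r to t(r); `prev_stages` maps r to the
    stages that must finish before r can start (the role of Table I).
    """
    @lru_cache(maxsize=None)
    def T(r):
        preds = prev_stages.get(r, ())
        return stage_time[r] + (max(T(p) for p in preds) if preds else 0.0)
    return T(last_stage)

# Toy example with N = 2 mini-batches on one device; server forward and
# backward passes are merged into stages s1, s2 for brevity.
times = {'f1': 1, 'f2': 1, 'u1': 2, 'u2': 2, 's1': 3, 's2': 3,
         'd1': 2, 'd2': 2, 'b1': 1, 'b2': 1}
prev = {'f2': ('f1',), 'u1': ('f1',), 'u2': ('f2', 'u1'),
        's1': ('u1',), 's2': ('u2', 's1'), 'd1': ('s1',),
        'd2': ('s2', 'd1'), 'b1': ('f2', 'd1'), 'b2': ('b1', 'd2')}
print(estimate_iteration_time(times, prev, 'b2'))  # -> 12
```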
_Phase 3 - Parameter Determination:_ In this phase, the candidates for \(\{N^{k},P^{k}\}\) are shortlisted. Since the training time can be estimated for each candidate, the one with the lowest training time can be selected.
Assuming that the DNN has \(Q\) layers, including dense layers, convolutional layers, pooling layers, etc., the range of \(P^{k}\) is \(\{P^{k}|1\leq P^{k}\leq Q,P^{k}\in\mathbb{Z}^{+}\}\), where \(\mathbb{Z}^{+}\) is the set of all positive integers.
Given \(P^{k}\), the idle time of device \(k\) between the forward pass and backward pass of each mini-batch (the blank between \(f^{c}\) and \(b^{c}\) in Figure 2(a)) needs to be filled up by the forward passes of the following mini-batches. As a result, the original mini-batch and the following mini-batches are executed concurrently in one training iteration.
For example, as shown in Figure 2(a), the device idle time between \(f^{c}\) and \(b^{c}\) is equal to \(t(u)+t(f^{s})+t(b^{s})+t(d)\). Thus, the forward or backward passes of the subsequent \(\lceil\frac{t(u)+t(f^{s})+t(b^{s})+t(d)}{\min\{t(f^{c}),t(b^{c})\}}\rceil\) mini-batches can be used to fill in the idle time, making the parallel batch number \(N=1+\lceil\frac{t(u)+t(f^{s})+t(b^{s})+t(d)}{\min\{t(f^{c}),t(b^{c})\}}\rceil\). Since the batch size used in \(\mathtt{PipeLearn}\) is reduced to \(1/N\), the time required for the forward and backward pass of each layer, uploading and downloading is reduced to around \(1/N\). The parallel batch number for device \(k\) is estimated as:
\[N^{k}=1+\left\lceil\frac{t(u_{n}^{k})+t(f_{n}^{s_{k}})+t(b_{n}^{s_{k}})+t(d_{n}^{k})}{\min\{t(f_{n}^{c_{k}}),t(b_{n}^{c_{k}})\}}\right\rceil=1+\left\lceil\frac{\frac{\tilde{v}_{P^{k}}^{f}}{w_{u}^{k}}+\sum_{q=P^{k}+1}^{Q}t(\tilde{f}_{q}^{s})+\sum_{q=P^{k}+1}^{Q}t(\tilde{b}_{q}^{s})+\frac{\tilde{v}_{P^{k}}^{b}}{w_{d}^{k}}}{\min\left\{\sum_{q=1}^{P^{k}}t(\tilde{f}_{q}^{c_{k}}),\sum_{q=1}^{P^{k}}t(\tilde{b}_{q}^{c_{k}})\right\}}\right\rceil \tag{14}\]

where the common factor \(1/N^{k}\) appearing in all stage times (Equations 5 to 10) cancels between the numerator and the denominator.
For each device \(k\), the best \(\{N^{k},P^{k}\}\) can be selected from the shortlisted candidates by estimating the training time.
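Combining Equation 14 with the Phase-2 estimator gives a straightforward candidate search. The sketch below makes assumptions about the data layout (`profile` lists indexed by layer, index 0 unused) and uses illustrative function names.

```python
import math

def select_parameters(profile, w_up, w_down, Q, estimate_epoch_time):
    """Shortlist {P, N} candidates via Equation 14 and keep the fastest.

    `profile` holds the Phase-1 measurements: per-layer forward/backward
    times on the device ('f_c', 'b_c') and server ('f_s', 'b_s'), and the
    output volumes of each layer in both passes ('v_f', 'v_b').
    """
    candidates = []
    for P in range(1, Q + 1):
        upload = profile['v_f'][P] / w_up        # t(u): split-layer activations
        download = profile['v_b'][P] / w_down    # t(d): their gradients
        server = sum(profile['f_s'][P + 1:Q + 1]) + sum(profile['b_s'][P + 1:Q + 1])
        device = min(sum(profile['f_c'][1:P + 1]), sum(profile['b_c'][1:P + 1]))
        N = 1 + math.ceil((upload + server + download) / device)
        candidates.append((P, N))
    # Estimate the epoch time of every shortlisted candidate (Phase 2)
    # and select the pair with the lowest estimate.
    return min(candidates, key=lambda c: estimate_epoch_time(*c))
```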
Since the training time of \(\tt{PipeLearn}\) with parameter pair \(\{N^{k},P^{k}\}\) is estimated based on profiling data from training
complete models with the original batch size, this approach does not guarantee to select the optimal parameters. However, our experiments in Section IV-D show that the parameters selected by this approach are similar to optimal values.
## IV Experiments
This section quantifies the benefits of PipeLearn and demonstrates its superiority over existing CML techniques. The training efficiency (Section IV-B) and the model accuracy and convergence (Section IV-C) of PipeLearn are compared against existing CML techniques. The performance of the optimisation technique is then evaluated in Section IV-D.
### _Experimental Setup_
The test platform consists of one server and four devices. An 8-core i7-11850H processor with 32GB RAM is used as the server that collaboratively trains DNNs with four Raspberry Pi 3B devices each with 1GB RAM.
Three network conditions are considered: _a) 4G:_ 10Mbps uplink bandwidth and 25Mbps downlink bandwidth; _b) 4G+:_ 20Mbps uplink bandwidth and 40Mbps downlink bandwidth; _c) WiFi:_ 50Mbps uplink bandwidth and 50Mbps downlink bandwidth.
The neural networks used in these experiments are VGG5 [36] and ResNet18 [37]. Their architectures are shown in Table II. "CONV-A-B-C" represents a convolutional layer with a kernel size of A\(\times\)A and B output channels, followed by a pooling layer C if applicable. "MaxPool" and "AvgPool" denote a max pooling layer and an average pooling layer, respectively. "FC-A" represents a fully connected layer with output size A. "RES-A-B-C" denotes a residual block that consists of two convolutional layers with a kernel size of A\(\times\)A and B output channels, followed by a pooling layer C if applicable. The output of each residual block is the output of the last inner convolutional layer plus the input of the residual block. For convenience, the activation functions and batch normalisation functions are not shown in the table.
The dataset used in these experiments is CIFAR-10 [38, 39]. All images in CIFAR-10 have the shape \(32\times 32\) and are classified into 10 classes. Each device owns a training dataset with 10,000 data samples. The validation and test datasets have 2,000 and 8,000 data samples, respectively. In the training process, the data samples are input into the neural network in the form of mini-batches. The size of each mini-batch (batch size) is 100 for each device in FL and SFL. In PipeLearn, the batch size is \(\lfloor 100/N^{k}\rfloor\), where \(N^{k}\) is the parallel batch number for device \(k\) and \(k=1,2,3,4\).
### _Efficiency Results_
The experiments in this section compare the efficiency of PipeLearn with FL and SFL. Although SL is a popular CML technique, it is significantly slower than SFL since each device operates sequentially. Hence, SL is not considered in these experiments. All possible split points for SFL are benchmarked (similar to the benchmarking method adopted in Scission [40]), and the efficiency of SFL with the best split point is reported. The split point and parallel batch number for PipeLearn are selected by the optimisation technique proposed in Section III-C.
#### IV-B1 Comparing Efficiency
The efficiency of the CML techniques is measured by _training time per epoch_. Since a fixed number of images is processed in each training epoch, the training time per epoch effectively captures the time taken by a CML technique to process one image.
Figure 4 shows the training time per epoch of VGG5 and ResNet18 for FL, SFL and PipeLearn under 4G, 4G+ and WiFi network conditions. It is immediately evident that the training time per epoch for PipeLearn is lower than FL and SFL in all cases.
When training VGG5 models (Figure 4(a)), FL requires similar time under the three network conditions, because the devices only upload and download once at the end of each training epoch (requiring less communication than SFL and PipeLearn). However, FL trains the entire model on each device, which requires longer computation time. When the bandwidth is low (for example, 4G), FL outperforms SFL, because the latter requires more communication time. However, under 4G+ and WiFi, SFL has shorter training times because of fewer device-side computations. Under all network conditions, PipeLearn outperforms FL and SFL. Notably, the benefits of PipeLearn are most evident when training occurs in a limited-bandwidth environment, since more computation can be overlapped with communication (which takes more time under limited bandwidth). PipeLearn accelerates FL by 1.5x - 2.3x and SFL by 1.1x - 1.6x.
FL is slow when training ResNet18 (Figure 4(b)), because it is a deeper network with more layers that need to be trained on the devices. Both SFL and PipeLearn significantly outperform FL. PipeLearn has the shortest training time per epoch under all network conditions. PipeLearn accelerates FL by 19.52x - 21.55x and SFL by 1.1x - 1.42x.
#### IV-B2 Comparing Resources Utilisation
Two metrics are used to compare the utilisation of hardware and bandwidth resources:
Idle time of server/device (seconds) is the time during which the server/device does not contribute to training models in each training epoch. The device-side idle time is the average idle time over all devices. Lower idle time means higher hardware resource utilisation. Since the devices are homogeneous, the impact of stragglers is assumed to be negligible.
Average network throughput (Mbps) is the average amount of data transmitted through the network per second when the model is trained. A higher network throughput means higher bandwidth resource utilisation.
As shown in Figure 5, PipeLearn reduces the server-side idle time under all network conditions when training both VGG5 and ResNet18. Since the server has more computing resources than the devices, model training is faster on the server. Hence, reducing the server-side idle time takes precedence over reducing the device-side idle time. Since FL trains complete models on the devices, the devices are rarely idle. However, the server is idle for a large proportion of the time during training. Compared to FL, SFL utilises more resources on the server, because multiple layers are trained by the server. PipeLearn further reduces the server-side idle time by parallelising the server-side computation with the device-side computation and the communication between the server and the devices. Compared to FL and SFL, the server-side idle time using PipeLearn is reduced by up to 28.5x and 1.81x, respectively. PipeLearn also reduces the device-side idle time of SFL by up to 12.87x in all cases.
In terms of bandwidth utilisation (Figure 6), PipeLearn outperforms FL and SFL under all network conditions. FL has a low average network throughput, because communication is only required at the aggregation stage. A higher amount of data is transferred between the server and the devices in SFL than in PipeLearn. However, by parallelising communication and computation, PipeLearn increases the network throughput by 321.29x and 1.52x compared to FL and SFL, respectively.
### _Model Accuracy and Convergence Results_
It is theoretically shown in Section III that PipeLearn achieves model accuracy and convergence similar to FL. It is empirically demonstrated here that PipeLearn does not adversely impact the convergence and accuracy of models.
The convergence curves and test accuracy of VGG5 and ResNet18 using FL, SFL and PipeLearn are reported. Since the network conditions do not affect model convergence and accuracy in FL and SFL, the results for only 4G are reported. PipeLearn selects different split points and parallel batch numbers under different network conditions, so the results are reported for all network conditions.
#### IV-C1 Comparing Convergence
As shown in Figure 7, the validation loss curves of the five configurations (FL, SFL, and PipeLearn under the three network conditions) have very similar trends for both VGG5 and ResNet18. Both VGG5 and ResNet18 converge within 50 epochs. Figure 8 shows that the five accuracy curves overlap significantly, demonstrating that PipeLearn does not impact model convergence.
#### IV-C2 Comparing Accuracy
Table III reports the test accuracies of VGG5 and ResNet18 trained using FL, SFL and PipeLearn under different network conditions after 50 epochs. PipeLearn under the three network conditions achieves an accuracy similar to FL and SFL for both VGG5 and ResNet18 on the test dataset. In short, PipeLearn does not sacrifice model accuracy.
Fig. 4: Training time per epoch for FL, SFL and PipeLearn under different network conditions.
Fig. 5: Idle time per epoch on the server and devices in FL, SFL and PipeLearn under different network conditions. Server-side idle time are the upward bars, and device-side idle time are the downward bars. The device-side idle time is the average of the idle time for all participating devices.
Fig. 8: Validation accuracy for FL, SFL and PipeLearn under different network conditions.
Fig. 6: Average network throughput for FL, SFL and PipeLearn under different network conditions.
Fig. 7: Validation loss for FL, SFL and PipeLearn under different network conditions.
### _Optimisation Experiments_
The results presented here demonstrate the effectiveness of the pipeline optimisation approach in \(\mathtt{PipeLearn}\). The experiments in this section exhaustively benchmark all possible parameters and show that the parameters selected by \(\mathtt{PipeLearn}\) are similar to the optimal values.
The control parameters of \(\mathtt{PipeLearn}\) that affect the training efficiency, namely the split point \(P^{k}\) and parallel batch number \(N^{k}\) for device \(k\), where \(k=1,2,...,K\), were presented in Section III-C. Section IV-B demonstrated that \(\mathtt{PipeLearn}\) with the parameters selected by our optimisation approach outperforms FL and SFL in terms of training efficiency.
Since the experiments are carried out with homogeneous devices, the split point \(P\) and parallel batch number \(N\) are the same for every device.
The number of split points is limited. As shown in Table II, VGG5 and ResNet18 consist of 5 and 10 sequential parameterised layers, respectively. Hence, \(P\in[1,5]\) for VGG5 and \(P\in[1,10]\) for ResNet18. The parallel batch numbers are also limited. As mentioned in Section IV-A, the batch size for \(\mathtt{PipeLearn}\) is \(\lfloor 100/N\rfloor\), where \(N\) is the parallel batch number for the homogeneous devices. To guarantee that the batch size is at least 1, \(N\in[1,100]\). Due to the limited number of split points and parallel batch numbers, the optimal pair \(\{P_{opt},N_{opt}\}\) is determined by exhaustive search. Specifically, VGG5 and ResNet18 are trained using all possible \(\{N,P\}\) pairs in \(\mathtt{PipeLearn}\), and the pair with the shortest training time is considered optimal.
\(T_{P,N}\) denotes the training time for each epoch given \(P\) and \(N\). Equation 15 is used to score the selected parameters.
\[score=\frac{T_{P_{opt},N_{opt}}}{T_{P,N}} \tag{15}\]
where \(0<score\leq 1\). The higher the score the better. When the selected parameters are optimal, the score is 1.
The experimental results for the parameters selected by our approach and the optimal parameters are shown in Table IV. \(\mathtt{PipeLearn}\) selects the optimal parameters in 4 out of 6 experiments. In only two experiments (VGG5 under WiFi and ResNet18 under 4G) are the parameters selected by our approach not optimal. However, in these cases, our approach selects near-optimal split points, and their scores are close to 1 (0.96 and 0.98).
It is worth noting that the optimal value of \(P\) is always 1 under the different conditions. This is because the devices have substantially fewer computing resources than the server. As a result, more layers are deployed on the server side. To explore whether \(\mathtt{PipeLearn}\) finds the optimal parameters when \(P\neq 1\), we fixed \(P\) and searched for \(N\) using our approach. Figure 9 shows the scores of the selected parameters for different values of \(P\) under different network conditions. In most cases, the approach finds the optimal parameters (\(score=1\)). Only three exceptions are noted; the selected parameters, although not optimal, have high scores (from 0.93 to 0.98).
The experimental results highlight that the pipeline optimisation approach of \(\mathtt{PipeLearn}\) is able to find the optimal or near-optimal parameters that maximise the training efficiency.
## V Conclusion
Deep learning models are collaboratively trained using paradigms such as federated learning, split learning or split federated learning on a server and multiple devices. However, these paradigms are limited in that the computation and communication across the server and devices are inherently sequential. This results in low compute and network resource utilisation and leads to idle time on the resources. We propose a novel framework, \(\mathtt{PipeLearn}\), that addresses this problem for the first time by taking advantage of pipeline parallelism, thereby accelerating the entire training process. A novel training pipeline is developed to parallelise server-side computation, device-side computation and server-device communication. In the training pipeline, the neural network is split and deployed across the server and devices, and the training process on different mini-batches of data is reordered. An optimisation approach is then proposed to maximise the resource utilisation of the pipeline by selecting its control parameters. Consequently, when compared to existing paradigms, our pipeline significantly reduces idle time on compute resources by up to 28.5x and on network resources by up to 321.3x when training popular convolutional neural networks under different network conditions. An overall training speed-up of up to 21.6x is observed.
## Acknowledgment
This work was sponsored by Rakuten Mobile, Inc., Japan.
Fig. 9: Scores of selected parameters given the split points using VGG5 and ResNet18, under 4G, 4G+ and WiFi network conditions.

# Clustering and visualization tools to study high dimensional parameter spaces: B anomalies example

Ursula Laa, German Valencia
###### Abstract:
We describe the applications of clustering and visualization tools using the so-called neutral B anomalies as an example. Clustering permits parameter space partitioning into regions that can be separated with some given measurements. It provides a visualization of the collective dependence of all the observables on the parameters of the problem. These methods highlight the relative importance of different observables, and the effect of correlations, and help to understand tensions in global fits. The tools we describe also permit a visual inspection of high dimensional observable and parameter spaces through both linear projections and slicing.
## 1 Introduction
Many problems in physics contain large numbers of parameters and/or large numbers of predictions that are hard to visualize. Here we discuss how tours can assist with the visualization of these problems. Another issue that arises in multi-parameter problems is that of mapping different parameter regions to different prediction regions. To address this question we propose a partitioning of parameter space based on clustering predictions in observable space. To be specific we discuss the application of these tools to the so-called "neutral B-anomalies" problem, illustrating what can be learned beyond the usual global fits [1].
The results from the tour methods that we use are usually presented as movies or animations which are not visible in the pdf file. Some of the animations that we mention here can be generated by running the example in the Shiny app [https://github.com/uschiLaa/pandemonium](https://github.com/uschiLaa/pandemonium). For the remainder, you can contact one of us directly. Short movies showing the animations referenced here can also be obtained from the arXiv version of this document.
As we know, there are multiple observables (several hundred binned branching ratios and decay distributions) in B-meson decay modes originating from the quark level transition \(b\to s\ell^{+}\ell^{-}\) where the leptons are muons or electrons. These have received a considerable amount of attention due to persistent deviations from the standard model (SM), although recently the discrepancies in two of the observables (\(R_{K}\) and \(R_{K^{*}}\)) seem to have disappeared [2]. This system has been studied using global fits of the hundreds of observables in terms of between two and six parameters. The results that one can obtain from that type of exercise include
* finding the best-fit (BF) parameters
* measuring the goodness of the fit and comparing it to the SM
* model selection to find the subset of parameters that can best describe the data
* finding confidence level intervals for the fitted parameters
The latter already corresponds to a partitioning of parameter space based on a single distance to a reference point (the experimental values of the observables), as illustrated in the left panel of Figure 1. These results are very useful for physics studies to determine whether a given model is a suitable description of the data. Even for these existing studies, a visualization of the high dimensional confidence level regions can provide information beyond what is observed by considering two-dimensional projections. As an example, we show in the right panel of Figure 1, the result of a guided tour used to find the projection illustrating the largest separation between the SM point and the best fit to the data from a six-parameter fit from 2019 [3]. This view indicates that the apparent deviation from the SM occurs along the \(C_{9}\) direction in parameter space. Even more intuition can be gained from animation 1, which shows a grand tour of the 6D region near the BF and we have marked the SM, the 6D BF point and several one and two-dimensional fits described in [3].
## 2 Beyond global fits
Some of the new insights into a data set that can be obtained from clustering are
* A partitioning of parameter space into clusters uses all inter-point distances. It does not depend on a specific reference point, such as an experimental measurement that may not yet exist (or that may change, as was recently the case with \(R_{K}\)). Different clustering parameters are suitable to emphasize different aspects of the problem.
* The number of clusters, or different groups, in the space, reflects the resolving power of a specific data set.
* The clustering results can help isolate trends and effects from subsets of observables.
In addition, high-dimensional visualization tools can offer new perspectives. For example, they
* Permit a visual inspection of the collective dependence of the observables on the parameters.
* Provide a graphic display of observable spaces with more than three dimensions.
* Highlight the relative importance of different observables which can help prioritize further studies.
* Provide a visual assessment of the impact of correlations, dominant observables, tensions in global fits, and others.
### The B-anomaly example
For conceptual clarity and to simplify the visualization, we first select a subset of the observables and parameters that have been used in the literature to discuss the \(b\to s\ell^{+}\ell^{-}\) system. Most existing global fits treat the Wilson coefficients (WC) in an effective Hamiltonian as free parameters. We first illustrate our methods with a two-dimensional case where \(C_{9}^{\mu}\) and \(C_{10}^{\mu}\) are the parameters; later on, we add two more parameters, \(C_{9^{\prime}}^{\mu}\) and \(C_{10^{\prime}}^{\mu}\), for a four-dimensional example. The effective weak Hamiltonian responsible for the \(b\to s\ell^{+}\ell^{-}\) transitions at the B-mass scale is usually written
Figure 1: Left panel: partitioning of parameter space using confidence level regions from a global fit. Right panel: optimal projection of the 6d parameter space of a global fit obtained with a guided tour showing that the best fit deviates from the SM mostly along the \(C_{9}\) direction.
as
\[\mathcal{H}_{\rm eff}= -\frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{\star}\sum_{i}C_{i}^{\ell}( \mu)\mathcal{O}_{i\ell}(\mu) \tag{1}\] \[\mathcal{O}_{9}^{\ell}= \frac{e^{2}}{16\pi^{2}}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell} \gamma^{\mu}\ell),\;\mathcal{O}_{9^{\prime}}^{\ell}=\frac{e^{2}}{16\pi^{2}}( \bar{s}\gamma_{\mu}P_{R}b)(\bar{\ell}\gamma^{\mu}\ell),\] (2) \[\mathcal{O}_{10}^{\ell}= \frac{e^{2}}{16\pi^{2}}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell} \gamma^{\mu}\gamma_{5}\ell),\;\mathcal{O}_{10^{\prime}}^{\ell}=\frac{e^{2}}{16 \pi^{2}}(\bar{s}\gamma_{\mu}P_{R}b)(\bar{\ell}\gamma^{\mu}\gamma_{5}\ell). \tag{3}\]
where we have singled out the four operators we discuss here. This set of operators, with real WC, only allows CP-conserving new physics and affects only the muons. Our notation is such that these WC refer exclusively to new physics: they are 0 in the SM, and the SM effects are accounted for separately.
The dimensionality of observable space also needs to be reduced for clarity. We select a subset of fourteen observables based on the ranking analysis of [3]. These observables are listed in Table 1, where the last column gives the ID that each observable had in [3]. We note, however, that some definitions of the observables are not identical: the sign of \(P_{2}\) is reversed here, and in some cases different experimental measurements are being averaged, as we rely on flavio[4] for this study. We choose the observables marked with a \(\star\), which were singled out as the most important ones for the determination of \(C_{9}^{\mu}\) and \(C_{10}^{\mu}\) in the global fits. We also include the ones marked with a \(\star\star\), which were singled out as important for \(C_{9^{\prime}}^{\mu}\) and \(C_{10^{\prime}}^{\mu}\). The remaining \(P_{2}\) and \(P_{5}^{\prime}\) bins are chosen to complete the \(q^{2}\) distributions for these two observables. Note that \(R_{K}\) and \(R_{K^{\star}}\) are the observables whose experimental values have recently changed, and this will provide us with a chance to evaluate this change within this study. The experimental values are taken from: for \(P_{5}^{\prime}\), LHCb [5], CMS [6] and ATLAS [7]; for \(P_{2}\), LHCb [5]; for \(R_{K}\), LHCb [8] and Belle [9]; for \(R_{K^{\star}}\), LHCb [10] and Belle [11]. The corrected values of \(R_{K}\) and \(R_{K^{\star}}\) are taken from [2]. Unless specifically stated otherwise, all plots and results use the "old" values of \(R_{K}\) and \(R_{K^{\star}}\). The 2D BF to this dataset as obtained from flavio[4] is the point \((C_{9}^{\mu},C_{10}^{\mu})=(-0.8,0.1)\), and lies \(3.7\sigma\) from the SM. These two points are marked with an \(*\) and an \(\circ\) in most of the plots. After the change in \(R_{K}\) and \(R_{K^{\star}}\), the BF to this same dataset becomes \((C_{9}^{\mu},C_{10}^{\mu})=(-0.4,-0.1)\).
For our study, we will generate models (sets of 14 predictions) on a grid of values for \((C_{9}^{\mu},C_{10}^{\mu})\). The original Shiny app requires the grid to be uniform but this is not needed in general. All the predictions are generated with flavio[4] and the grid is chosen to be large enough to contain both the SM and the BF points.
## 3 Clustering
To partition the continuous parameter space we consider model points \(M_{k}\) defined by their coordinates \((C_{9}^{\mu},C_{10}^{\mu})_{k}\) in parameter space and by their coordinates \((O_{1},\cdots,O_{14})_{k}\) in observable space. It is easier (but not necessary) to use a distance function that can be calculated from coordinates. To this effect, we define the coordinates of each model point in observable space to be
\[Y_{ki}=\sum_{j}\Sigma_{ij}^{-1/2}(X_{kj}-R_{j})\approx\sum_{j}\frac{1}{\sqrt{ (\Sigma^{-1})_{ii}}}(\Sigma^{-1})_{ij}(X_{kj}-R_{j}), \tag{4}\]
where \(X_{kj}\) is the prediction of model \(k\) for observable \(O_{j}\), \(R_{j}\) is the "origin" or reference point for that observable, and \(\Sigma_{ij}\) is the total covariance matrix including both theoretical and experimental uncertainties and correlations. The origin, \(R_{j}\) is arbitrary but would typically be chosen as a special point. In this example that could be the experimentally observed point \(E_{i}\), the SM prediction, or any other preferred model. These coordinates thus measure the distance from the reference point in units of combined theoretical and experimental uncertainty. Using these coordinates, we define the (square of the) distance **between models** as
\[d_{\chi^{2}}(X_{k},X_{l})=\sum_{i,j}[X_{ki}-X_{li}](\Sigma^{exp}+\Sigma^{th})^ {-1}_{ij}[X_{kj}-X_{lj}]=\sum_{i}(Y_{ki}-Y_{li})^{2}. \tag{5}\]
The last equality follows if \(\Sigma\) does not depend on the model, which is an often-used approximation particularly when the experimental errors dominate. In this case, the clustering results will not depend on the reference point. In particular, they **would not change** as a result of the recent change in the central values of \(R_{K}\) and \(R_{K^{\star}}\).
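For illustration, Equations 4 and 5 can be implemented in a few lines of numpy, assuming (as in the text) a model-independent total covariance matrix; the function names are ours.

```python
import numpy as np
from scipy.linalg import sqrtm

def model_coordinates(X, R, sigma):
    """Equation 4: map an (n_models, n_obs) array of predictions X to
    coordinates measured from the reference point R in units of the
    combined uncertainty encoded in the covariance matrix `sigma`."""
    sigma_inv_sqrt = np.real(sqrtm(np.linalg.inv(sigma)))
    return (X - R) @ sigma_inv_sqrt  # sigma^{-1/2} is symmetric

def delta_chi2(Y):
    """Equation 5: pairwise squared distances d_chi2 between models,
    interpretable as a Delta chi^2."""
    diff = Y[:, None, :] - Y[None, :, :]
    return (diff ** 2).sum(axis=-1)
```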
This definition of distance is just the Euclidean distance with the coordinates defined by Eq. 4, and it can be interpreted as a \(\Delta\chi^{2}\). We exploit this interpretation to construct the partitioning by first defining a centroid and a radius for each cluster. The centroid \(c_{j}\) of cluster \(C_{j}\) is the member of the cluster which minimizes \(f(c,C_{j})=\sum_{x_{i}\in C_{j}}d(c,x_{i})^{2}\) and the radius of the cluster is \(r_{j}=\max_{x_{i}\in C_{j}}d(c_{j},x_{i})\). The centroids are meant to be a representative point for each cluster that can serve as a benchmark for further studies. With these definitions, one can use "one-sigma" clusters, for example, to obtain the partitioning. The interpretation, in this case, is that if a future BF to all experiments falls at one of the centroids, the corresponding cluster contains all the points lying in the \(1\sigma\) region, \(\Delta\chi^{2}\leq 2.3\) for two parameters. Note however that no one centroid is singled out
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline ID & Observable & Exp. & ID in [3] \\ \hline \(1\star\) & \(P_{5}^{\prime}(B\to K^{*}\mu\mu)[0.1-0.98]\) & \(0.52\pm 0.10\) & \(20\) \\ \(2\) & \(P_{5}^{\prime}(B\to K^{*}\mu\mu)[1.1-2.5]\) & \(0.36\pm 0.12\) & \(28\) \\ \(3\) & \(P_{5}^{\prime}(B\to K^{*}\mu\mu)[2.5-4]\) & \(-0.15\pm 0.14\) & \(36\) \\ \(4\star\) & \(P_{5}^{\prime}(B\to K^{*}\mu\mu)[4-6]\) & \(-0.39\pm 0.11\) & \(44\) \\ \(5\star\) & \(P_{5}^{\prime}(B\to K^{*}\mu\mu)[6-8]\) & \(-0.58\pm 0.09\) & \(52\) \\ \(6\) & \(P_{5}^{\prime}(B\to K^{*}\mu\mu)[15-19]\) & \(-0.67\pm 0.06\) & \(60\) \\ \(7\) & \(P_{2}(B\to K^{*}\mu\mu)[0.1-0.98]\) & \(0\pm 0.04\) & \(17\) \\ \(8\) & \(P_{2}(B\to K^{*}\mu\mu)[1.1-2.5]\) & \(-0.44\pm 0.10\) & \(25\) \\ \(9\) & \(P_{2}(B\to K^{*}\mu\mu)[2.5-4]\) & \(-0.19\pm 0.12\) & \(33\) \\ \(10\star\) & \(P_{2}(B\to K^{*}\mu\mu)[4-6]\) & \(0.10\pm 0.07\) & \(41\) \\ \(11\star\) & \(P_{2}(B\to K^{*}\mu\mu)[6-8]\) & \(0.21\pm 0.05\) & \(49\) \\ \(12\star\) & \(P_{2}(B\to K^{*}\mu\mu)[15-19]\) & \(0.36\pm 0.02\) & \(57\) \\ \(13\star\star\) & \(R_{K}(B^{+}\to K^{+})[1.1-6]\) & \(0.86\pm 0.06\) & \(98\) \\ & new value & \(0.949^{+0.047}_{-0.046}\) & \\ \(14\star\star\) & \(R_{K^{*}}(B^{0}\to K^{0*})[1.1-6]\) & \(0.73\pm 0.11\) & \(100\) \\ & new value & \(1.027^{+0.077}_{-0.073}\) & \\ \hline \end{tabular}
\end{table}
Table 1: List of observables used to cluster measurements with an underlying \(b\to s\ell^{+}\ell^{-}\) quark transition.
by a global fit: at this stage, there is no need for a fit (or even measurements) to exist. Similarly, we can require that any two centroids be separated by at least \(\Delta\chi^{2}>2.3\). Two caveats are important: there will always be points as close to each other as we want that, nevertheless, sit on different clusters, and the boundaries between clusters will shift if the parameter range that is being studied is changed. This clustering method is sketched in the left two panels of Fig. 2, and the results for our example are then shown in the third panel in observable space.
The distance between clusters is referred to as linkage, and here our focus is on Ward.D2 linkage which defines clusters by minimizing a within-cluster dissimilarity function. To decide on the number of clusters we compute both the maximum cluster radius and the minimum distance between centroids as a function of the number of clusters. The concept of a cluster as a set of points that are indistinguishable from each other at some level of confidence fixes the maximum radius and thus the minimum number of clusters. For the centroids to differ at some level of confidence, the minimal distance between them must also be fixed and this condition results in a maximum number of clusters. These combined requirements lead to there being five clusters in this example as illustrated in Fig. 3. The resolving power of a given data set depends on the parameter space volume, the range of predictions for a given observable over that region of parameter space, and the size of the uncertainty in both measurements and predictions. It is possible to increase the resolving power by adding observables or by increasing the precision of a measurement. The latter happened with the latest measurements of \(R_{K}\) and \(R_{K^{\star}}\), and including this updated experimental error would improve the resolution of this set to six clusters. These changes in \(R_{K}\) and \(R_{K^{\star}}\) have minimal effect on the results of our clustering exercise so we proceed with the results as obtained in [1]. We later show what changes occur when the new values of \(R_{K}\) and \(R_{K^{\star}}\) are used.
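The partitioning itself can be sketched with scipy's hierarchical clustering, whose `ward` method applied to the Euclidean coordinates of Equation 4 corresponds to the Ward.D2 linkage used here; the centroid and radius follow the definitions above, and the function name is ours.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def cluster_models(Y, n_clusters=5):
    """Hierarchical clustering of models in observable space."""
    Z = linkage(Y, method='ward')        # Ward linkage on Euclidean distances
    labels = fcluster(Z, t=n_clusters, criterion='maxclust')
    D = squareform(pdist(Y))             # all inter-point distances
    centroids, radii = {}, {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Centroid: the member minimising the summed squared distances
        # to the rest of its cluster; radius: its farthest member.
        within = (D[np.ix_(idx, idx)] ** 2).sum(axis=1)
        centroids[c] = idx[np.argmin(within)]
        radii[c] = D[centroids[c], idx].max()
    return labels, centroids, radii
```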
The resulting clusters are shown in Fig. 4. The left panel shows the partition of parameter space. The boundaries between clusters fall approximately along lines of constant \(R_{K}\) and the significance of this will be discussed below. The right panel is a parallel coordinate (PC) plot representation of the observable space. This PC plot has been rendered after centering the coordinates. Doing this removes the information about distance from the reference point, but allows a better comparison of the relative size of variations in the predictions for each observable. If one is more interested in following the models across the plot than in the relative size of the variations, a PC plot that is centered and scaled can be used. This option also exists in the tool pandemonium. A grand tour
Figure 2: Partitioning the (continuous) parameter space by measuring the distance between two models \(M_{1}\) and \(M_{2}\) in observable space (left two panels). The result in this example is shown in the right panel and is obtained as described in the text.
view of the clusters in observable space along with the experimental point (black dot) can be seen in animation 2. From that animation one can see, for example, that the experimental point is separated from the hyperplane of predictions for all values of the two parameters.
In Fig. 5 we illustrate how the result of the clustering exercise helps visualize the collective dependence of all observables on the parameters. In the left panel, we show the dependence of two observables, \(R_{K}\) (red lines mark constant values) and one bin of \(P^{\prime}_{5}[4-6]\) (black lines). When there are many observables a plot like that is not very useful; instead, one may want to look at combinations of observables with different weights, as illustrated in the central panel, where we show the lines with constant averages of the two. The clustering exercise shown in the right panel effectively combines all the observables with different weights that can be altered by choosing a distance function and linkage. We have superimposed on this last panel the lines of constant \(R_{K}\) and \(P^{\prime}_{5}[4-6]\) to show how the boundaries between clusters follow lines of approximately constant \(R_{K}\). This simply reflects that this observable is completely dominant in this case. This can also be seen in the PC plot of Fig. 4. The large spread seen in \(O_{13}\) in that plot reflects that, in units of uncertainty, this observable varies the most across this region of parameter space. One can also see in the same plot that \(R_{K}\) is dominant in determining the separate clusters (almost no overlap between the colors along the \(O_{13}\) coordinate). The same reasoning shows that \(O_{2},O_{8},O_{14}\) also separate the clusters cleanly.
Figure 4: Clustering result using Ward.D2 linkage (which minimizes the variance within clusters) and Euclidean distance (left panel), and the corresponding centered parallel coordinates (PC) for all 14 observables (right panel) with matching color codes. The darker line for each color in the PC plot marks the cluster benchmark (also indicated on the left, with an open diamond symbol). A projection of the 14d observable space is shown in the last panel of Fig. 2.
Figure 3: Maximum cluster radius and minimum distance between centroids as a function of the number of clusters determine the optimal choice for this example which is five clusters.
Sub-leading effects can be observed by adding a sixth cluster, for example. In Fig. 6 we see the sixth cluster in yellow separating from the light green by an approximately horizontal partition that indicates sensitivity to \(C_{10}\) in the region away from the SM. The arrow points to the PC plot where one can see that it is mostly \(O_{11,12}\) (\(P_{2}[6-8]\) and \(P_{2}[15-19]\)) that are most important for determining the separation between yellow and pink clusters. We should caution here that numerical accuracy affects small details which at some level become just noise.
Another way to study sub-leading effects is to remove the dominant observable, in this case \(R_{K}\). The result is shown in Fig. 7, where we use the fact that the resolving power has been reduced to only 3 clusters. The dominant observable in the remaining set is \(R_{K^{*}}\), but its effect is not as important as that of \(R_{K}\). This is evident both from the size of its variation in the PC plot and from the shape of the inter-cluster boundaries. Without \(R_{K}\), this observable set is mostly sensitive to \(C_{9}\). The cluster separation, in this case, can be seen in the PC plot to be a collective effect due to many observables. The brown cluster is mostly due to \(P_{5}^{\prime}\), and this can be seen in the PC plot, which shows this cluster overlapping with others for the \(P_{2}\) observables. Notice, of course, that the BF (\(*\)) has also shifted when we removed \(R_{K}\).
It is possible to enhance or suppress effects by changing the clustering parameters. To increase the importance of a dominant observable one can use maximum distance with complete linkage
Figure 5: The left panel shows how two observables vary across the parameter region, the center panel how an average of these two varies, and the right panel the collective behavior of all 14 observables captured by the clustering result.
Figure 6: When increasing the number of clusters to six we split one region, which now appears in light green and yellow. Connecting the parameter region plot (left) with the PC plot (right) we find that two observables are important for the separation of the new yellow cluster.
instead of Euclidean distance with ward linkage. The left panel of Fig. 8 illustrates this with a sketch in which two models, \(A\) and \(B\) are separated by a distance of 3 along one observable and by a distance of 1 along the other observable. Using the maximum distance removes the sub-leading observable from the picture whereas using the Manhattan distance increases its relative importance. In the center panel, we show the result of clustering our set of 13 observables (with \(R_{K}\) removed) using maximum distance and complete linkage. This increases the weight of \(R_{K^{\star}}\) as reflected by the change in boundary shape from that seen on the left panel of Fig. 7. The right panel is the result of clustering the full set of 14 observables but using the Manhattan distance (with Ward linkage), the clusters are now due to a collective effect.
We end this section by using the new values of \(R_{K}\) and \(R_{K^{\star}}\) as recently reported by LHCb [2]. According to our discussion, we do not expect the change in central value to alter the clustering as this does not depend on the reference point. On the other hand, the new numbers have smaller errors and this will enhance the importance of these two observables. Since they were already dominant, we do not expect any major differences. This is confirmed by comparing Fig. 9 to Fig. 4, the shape and size of the clusters are similar but \(R_{K}\) is even more dominant than before, the position of the BF (\(*\)) has, of course, changed.
Figure 8: Left panel: sketch illustrating the difference between different distances. Center panel: observables with \(R_{K}\) removed clustered with Chebyshev (maximum) distance and complete linkage. Right panel: all 14 observables clustered with Manhattan distance.
Figure 7: Clustering result after removing the dominant observable \(R_{K}\); the thirteenth coordinate now becomes \(R_{K^{\star}}\).
## 4 Visualization
The PC plots can be used to visualize other aspects of observable space if the coordinates are not centered or scaled. This is illustrated in Fig. 10 where the horizontal line labeled "Exp" fixes the origin to the position of the experimental measurement (central value as the uncertainties are accounted for in the definition of the coordinates). This figure allows for visual inspection of several points:
* We see which observables are in tension with model predictions, for example, \(O_{1}\) cannot match the experimental value for any values of the parameters in the region of study (within some uncertainty that we quantify in the vertical axis of Fig. 14).
* We see which observables are insensitive to the parameters \(C_{9}\) and \(C_{10}\), they are \(O_{6}\) and \(O_{7}\) as they exhibit minimal variation across the range studied.
* We observe the tensions in the fit: for example, the BF lies on the boundary between purple and light green clusters. The PC plot shows that \(O_{4}(P_{5}^{\prime}[4-6])\) and \(O_{5}(P_{5}^{\prime}[6-8])\), which are the \(P_{5}^{\prime}\) bins that show the largest discrepancy between the SM and experiment, prefer models within the light green cluster which have larger negative \(C_{9}\). Recall that the experimental value of \(P_{5}^{\prime}[4-6]=-0.39\pm 0.11\), and thus lies outside, to the left, of the parameter region plotted. On the other hand, the pre-2022 value of \(R_{K}\) prefers the purple cluster. One can further see that the model points that take \(P_{5}^{\prime}[4-6]\), \(P_{5}^{\prime}[6-8]\) closest to their experimental value, take \(R_{K}\) furthest away. Interestingly this tension has only become **worse** with the new value of \(R_{K}\) which agrees with the SM and would sit on the dark green cluster in this plot.
The sensitivity of the observable set to given directions in parameter space can be studied and correlated with the variation of specific observables across the parameter range. For example, in Fig. 11, the superimposed lines show that the set is mostly sensitive to models with \(C_{10}\approx-0.2\,C_{9}\), and that it has almost no sensitivity to models where \(C_{10}=C_{9}\). Both of these features were already known from the results of global fits, and this approach offers a clear visual picture. The right panel shows \(O_{11}\), which varies across the parameter range in an orthogonal manner (this one is selected from the interactive tool pandemonium, which displays all of them), indicating that one way to increase sensitivity to models with \(C_{10}=C_{9}\) is to improve the precision in the measurement of \(O_{11}\) (\(P_{2}[6-8]\)).
Figure 9: Clustering result matching Fig. 4 but using the new experimental values of \(R_{K}\) and \(R_{K^{\star}}\).
Tours allow us to visualize the high-dimensional (14 in this example) observable space and see how models compare to the measurement. On the left panel of Fig. 12 we illustrate a typical 2D plot in parameter space and contrast it with the corresponding 2D plot in observable space. The two convey complementary information, with the latter revealing the relative position of a model prediction and the measurements. Doing this in high dimensions is possible using PC plots, such as the one in the right panel of Fig. 10, but also using tours. Tours give a more intuitive idea of the full space, as can be seen in animation 2. In the right panel of Fig. 12 we show one projection from the grand tour of the animation. This indicates that this parameter space cannot reach the experimental point.
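A single frame of such a tour can be sketched by projecting the observable cloud onto a random orthonormal 2-plane; interactive tools animate a smooth sequence of such planes, while the minimal example below, with placeholder data, shows only one static frame.

```python
# One static frame of a grand tour: project the 14-dimensional observable
# cloud and the experimental point (the origin in these coordinates) onto a
# random orthonormal 2-plane. Placeholder data for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
pulls = rng.normal(loc=2.0, size=(500, 14))        # assumed model predictions
basis, _ = np.linalg.qr(rng.normal(size=(14, 2)))  # random 2-plane basis

proj = pulls @ basis
exp_proj = np.zeros(14) @ basis                    # experiment at the origin

plt.scatter(proj[:, 0], proj[:, 1], s=4, alpha=0.4)
plt.scatter(*exp_proj, marker="D", color="black")  # the measurement
plt.show()
```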
## 5 The case with four parameters
From the physics perspective, including the two additional parameters \(C_{9^{\prime}}^{\mu}\) and \(C_{10^{\prime}}^{\mu}\) allows the exploration of models with right-handed quark currents. These are interesting in their own right but are disfavored by global fits. From the visualization perspective, the problem is complicated by the presence of two high-dimensional (more than three) spaces. This additional complication requires
Figure 11: The left panel shows the lines \(C_{9}=-5C_{10}\), \(C_{10}=C_{9}\) and \(C_{10}=-C_{9}\) superimposed on the clustering result of Fig. 4. The right panel shows variation of \(O_{11}\) (\(P_{2}[6-8]\)) across the parameter range.
Figure 10: The left panel shows lines of constant \(P_{5}^{\prime}[4-6]\) and \(R_{K}\) superimposed on the clustering result of Fig. 4. The right panel shows the PC plot but without centering or scaling illustrating how each observable deviates from its experimental value.
the introduction of slicing tools [12, 13] to inspect 2D projections of thin slices in the orthogonal space, as suggested in Fig. 13.
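A minimal sketch of the slicing operation, under the assumption of a 4D parameter cloud and an arbitrarily chosen slice thickness: points are kept only if their orthogonal distance from the projection plane is below \(h\).

```python
# Minimal sketch of slicing: project a 4D parameter cloud onto a 2-plane and
# keep only points within orthogonal distance h of that plane (cf. Fig. 13).
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, size=(5000, 4))         # e.g. (C9, C10, C9', C10')
basis, _ = np.linalg.qr(rng.normal(size=(4, 2)))   # projection plane basis

center = np.zeros(4)                     # slice through e.g. the SM point
in_plane = (X - center) @ basis          # 2D coordinates within the plane
ortho = (X - center) - in_plane @ basis.T  # component orthogonal to the plane
dist = np.linalg.norm(ortho, axis=1)

h = 0.2                                  # assumed slice thickness
sliced = in_plane[dist < h]              # the points shown in the thin slice
```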
In our B-anomalies example, we enlarge our parameter space of study, choosing ranges for the two new parameters that cover both the SM and at least the \(1\sigma\) ranges around the BF found in global fits. With the new parameter space and the same 14 observables, the resolution is only four clusters, and we compare this case to the two-parameter case using PC plots for both cases in Fig. 14. We can immediately see that the extended range of predictions increases the overlap with the experiments (both plots have the same vertical scale). One can see, in particular, that the range of predictions for \(O_{4,5,6}\) extends towards the origin with the enlarged parameter space. This would be evidence (within errors, of course) for \(C_{9^{\prime}}^{\mu}\) and \(C_{10^{\prime}}^{\mu}\) being necessary to account for the data. Looking at \(O_{13}\) we see that \(R_{K}\) no longer cleanly separates the clusters. We also observe a reduced tension between \(P_{5}^{\prime}\) and \(R_{K}\).
We now turn to visualize the parameter space for this 4D case. In Fig. 15 we show on the left panel a \(C_{9}-C_{10}\) projection, which shows how the correlations between \(C_{9}\) and \(C_{10}\) due to \(R_{K}\) are still dominant. The right panel shows a projection from observable space where it is clear that this 4D volume of models also does not contain the experimental point. The center panel is a thin slice projected onto the \(C_{9}-C_{9^{\prime}}\) plane that illustrates correlations between these two parameters that are not visible without slicing, obtained with the tool described in [14]. The clusters in parameter and
Figure 12: The left and center panels contrast the information that can be conveyed by parameter and observable space displays. The right panel is a projection of the 14-dimensional observable space partitioned into five clusters that shows clearly how the experimental point (black \(\blacklozenge\)) is separated from all the models parameterized by this range.
Figure 13: Sketch of how a slice of high-dimensional data can be selected based on the orthogonal point distance from the projection plane.
observable spaces for this case can be better visualized with animations 3 and 4. Animation 5 shows the effect of slicing through the SM point and projecting onto the \(C_{9}-C_{10}\) plane (the interactive tool mmtour allows one to change the slice height and the projection plane). Animation 6 shows what happens when varying the slice height while projecting onto the \(C_{9}-C_{9^{\prime}}\) plane. The latter reveals correlations between these two parameters that are only visible in thin slices and obscured in any projection.
## 6 Including more observables
As we know, hundreds of observables have been discussed in connection with the \(b\to s\ell^{+}\ell^{-}\) transitions. Here we look at the 89 that we selected in [1], with the first 14 being those in Table 1. Using all of them, the resolving power of this data set is between 8 and 10 clusters. We will illustrate the main results using only five clusters. The centered PC plot of Fig. 16 can be used to select additional observables that may be important. In particular, \(O_{86}\) (\(B(B_{s}\to\mu^{+}\mu^{-})\)) stands out. If we use the average experimental error computed by flavio, \(B(B_{s}\to\mu^{+}\mu^{-})=(2.81\pm 0.24)\times 10^{-9}\), this observable alone explains most of the difference between the clusters obtained with the set of 89 observables and with only the first 14. This can be seen by comparing the left two panels in Fig. 17.
For a different application of these results, we turn our attention to \(O_{44}\), which Fig. 16 shows to have moderate importance. In the third panel of Fig. 17 we show the coordinate variation of this observable. It suggests that \(O_{44}\) can constrain directions missed by the current overall picture if its significance can be enhanced. Currently, this observable has the experimental value \(O_{44}=P_{4}^{\prime}[0.1-0.98]=0.135\pm 0.118\). We can study what happens if the uncertainty in this measurement
Figure 14: PC plots for two parameters and five clusters (left) and four parameters and four clusters (right) obtained with Ward linkage and Euclidean distance. The plots are aligned to match the vertical scale.
Figure 15: Selected projections from tours in parameter space (left and center) and observable space (right) of the clusters resulting with four parameters. The color code matches the one in the right panel of Fig. 14.
can be reduced in the future. For example, the right panel of Fig. 17 shows the effect of adding just this observable to the original set of 14 but assumes that its experimental error can be **reduced by a factor of four**.
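Since each coordinate is the deviation from experiment in units of the uncertainty, this exercise amounts to rescaling the corresponding column before re-clustering; a sketch under that assumed convention, with a placeholder coordinate array, is:

```python
# Sketch of the reduced-uncertainty exercise: shrinking sigma_44 by a factor
# of four multiplies the O_44 coordinate by four, increasing its weight in
# the clustering. `pulls` is an assumed (n_models, 15) array (14 + O_44).
import numpy as np

pulls = np.random.default_rng(3).normal(size=(500, 15))
pulls_improved = pulls.copy()
pulls_improved[:, 14] *= 4.0  # O_44 column with a 4x smaller experimental error
```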
## 7 Conclusions
Using the example of the B anomalies, we have demonstrated how to investigate the relationship between parameter and observable space using a group of related displays to interpret different clustering outcomes. This analysis is facilitated by the interactive environment of the tool pandemonium, which allows for easy comparison of clustering results with different parameter settings. By choosing different settings, specific observables can be emphasized or suppressed. The tool provides information to decide what the optimal number of partitions for a given data set is, and which observables should be emphasized to explore specific directions in parameter space. In this talk, we applied these methods to a well-known B physics problem, which provides useful feedback for applying them to other cases.
For the B anomalies example, our study highlights the importance of \(R_{K}\) and how this is connected to the precision of the measurement. With the new, more precise measurement, this observable becomes even more dominant. Even though global fits will be closer to the SM with the
Figure 16: Centered PC plot for the 89 observables listed in [1], the first 14 correspond to those in Table 1.
Figure 17: The left panel shows 5 clusters in parameter space with only two parameters, \(C_{9},C_{10}\), and 89 observables. The second panel from the left shows the 5 clusters including only the first 14 observables plus \(B(B_{s}\rightarrow\mu^{+}\mu^{-})\), as described in the text. The third panel shows the variation of \(O_{44}\) with \(C_{9},C_{10}\), and the last panel the 5 clusters that would be obtained using only the first 14 observables plus \(P_{4}^{\prime}[0.1-0.98]\) with an experimental error four times smaller than it currently is.
2305.19729 | OVNS: Opportunistic Variable Neighborhood Search for Heaviest Subgraph
Problem in Social Networks | We propose a hybrid heuristic algorithm for solving the Heaviest k-Subgraph
Problem in online social networks -- a combinatorial graph optimization problem
central to many important applications in weighted social networks, including
detection of coordinated behavior, maximizing diversity of a group of users,
and detecting social groups. Our approach builds upon an existing metaheuristic
framework known as Variable Neighborhood Search and takes advantage of
empirical insights about social network structures to derive an improved
optimization heuristic. We conduct benchmarks in both real life social networks
as well as synthetic networks and demonstrate that the proposed modifications
match and in the majority of cases supersede those of the current
state-of-the-art approaches. | Ville P. Saarinen, Ted Hsuan Yun Chen, Mikko Kivelä | 2023-05-31T10:48:10Z | http://arxiv.org/abs/2305.19729v1 | # OVNS: Opportunistic Variable Neighborhood Search for Heaviest Subgraph Problem in Social Networks
###### Abstract
We propose a hybrid heuristic algorithm for solving the Heaviest k-Subgraph Problem in online social networks - a combinatorial graph optimization problem central to many important applications in weighted social networks, including detection of coordinated behavior, maximizing diversity of a group of users, and detecting social groups. Our approach builds upon an existing metaheuristic framework known as Variable Neighborhood Search and takes advantage of empirical insights about social network structures to derive an improved optimization heuristic. We conduct benchmarks in both real life social networks as well as synthetic networks and demonstrate that the proposed modifications match and in the majority of cases supersede those of the current state-of-the-art approaches.
\({}^{1}\)Faculty of Social Sciences, University of Helsinki, Helsinki, Finland
\({}^{2}\)Department of Computer Science, Aalto University, Espoo, Finland
\({}^{3}\)Department of Environmental Science and Policy, George Mason University, Fairfax, USA
[email protected], [email protected], [email protected]
## Introduction
Identification of dense subgraphs has increasingly important applications in social networks. Contemporary application areas include detection of structures such as communities [5, 10], events [17], coordinated bot networks [1], and treatment spillover between individuals in exposure networks [1]. Another interesting development considers using dense subgraph finding for selecting a set of items such that relevant attributes are maximally diversified - a problem that has potential application in selecting a diverse set of users for panels or decision-making bodies [14].
The combinatorial graph optimization problem central to all of these applications is known as the _Heaviest k-Subgraph Problem_ (HSP), which involves finding the densest subgraph in a given weighted network. Notably, this problem has been shown to be _NP-hard_, implying that in general it is computationally infeasible to solve optimally. Particularly in large networks, approximation algorithms and heuristic frameworks are necessary for producing non-optimal yet still high-quality solutions to the problem.
_Metaheuristics_ is the area of study that focuses on generalizing heuristic approaches to frameworks that can be applied to any given optimization problem. These approaches are inspired by a diverse set of ideas and disciplines ranging from physics to biology, and chemistry to psychology [15, 16]. Generally, metaheuristics have been successfully applied in many areas of science and engineering, and many new methods have been proposed [1], but serious limitations have also been recognized, including too much focus on synthetic benchmarking data sets with limited applicability to real world problems [15].
Focusing on developing heuristic algorithms for real-world networks offers a largely unexplored avenue for improving heavy subgraph finding algorithms. More specifically, large social networks exhibit types of structures that could be exploited and targeted by the search algorithms: they are typically sparse networks with large amounts of degree heterogeneity, degree assortativity, clustering, communities, core-peripheries, specific types of link weight embedding, and many other structural features [1, 1, 13]. Algorithms which take into consideration the large heterogeneity in degrees have been particularly successful in other domains before. For example, degree heterogeneity has been utilized in finding influential substructures such as cores and communities [16], and identification of vital nodes has enabled better understanding of spreading dynamics, which has direct applications in mitigating epidemic outbreaks [14, 15].
Heuristic approaches leveraging heterogeneity in degrees and link weights are particularly interesting candidates for dense subgraph identification algorithms. The distributions of degrees and link weights in social networks have been reported to be close to power-law, log-normal, or other heavy-tailed distributions [1, 17, 18]. While the exact shape of the reported distributions has been challenged [13, 14], for the purpose of finding heavy subgraphs the interest is mostly in the amount of variation in degrees and weights. For example, power-law tails in the range of exponents typical of degree and link weight distributions in social networks would mean huge amounts of variance that, in practical terms, grows with the network size. Thus, these features seem promising candidates to consider when identifying members of
the most influential subgraphs. Further, because such features become more pronounced as networks scale, they are especially amenable to designing efficient heuristics for large networks, where further reduction in the size of the search space of the given optimization problem is required.
In this paper we propose a set of improvements to heuristic design, particularly for finding heavy subgraphs in weighted networks, which integrates established heuristic design principles with structural insights derived from empirical network science. More specifically, our approach leverages the heavy-tailed degree distributions characteristic to large social networks and is thus specifically suitable for application areas where networks with heavy-tailed degree distributions, e.g. scale-free networks, is the subject of study. Finally, in order to demonstrate the validity of our approach, we conduct extensive benchmarks in 38 empirical social networks and 41 synthetic networks. Results show that the proposed modifications lead to increased performance against prior variable neighborhood search heuristics as well as more recent state-of-the-art heuristics.
The contribution of our paper is threefold. i) We introduce a heuristic with state-of-the-art performance in large social networks, ii) we demonstrate that insights from empirical network science can be leveraged to improve the design of optimization heuristics in networks, and iii) we produce an efficient and easy-to-use open source implementation in python, which includes both the original (BVNS) algorithm (Brimberg et al., 2009) and our own improved version (OVNS) of it. By this we aim to make such dense subgraph finding heuristics more readily available and accessible to the broader community of scholars.1
Footnote 1: Source code available at [https://github.com/Decitizen/OVNS](https://github.com/Decitizen/OVNS)
## Background
### Heavy subgraph finding
In this section we introduce the necessary concepts related to finding heavy subgraphs in networks, more specifically in the context of combinatorial optimization problems.
The _heaviest k-subgraph problem_ (HSP) is a constrained combinatorial optimization problem that can be defined as follows: Given a weighted graph \(G\) with a weighted adjacency matrix \(A\), determine a subset \(U\subseteq V\) of size \(k\) such that the total edge weight \(\sum_{i,j\in U}A_{ij}\) of the subgraph induced by \(U\) on \(G\) is maximized. More formally, in HSP the task is to find a set \(U\subseteq V\) such that
\[\max_{U}\left(\sum_{i,j\in U}A_{ij}\right)\ \ s.t.\ \ |U|=k. \tag{1}\]
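As a concrete reading of Eq. (1), the following is a minimal sketch of evaluating the HSP objective for a candidate node set, assuming a dense numpy adjacency matrix; note that the double sum counts each undirected edge in both orientations, exactly as written.

```python
# Minimal sketch of the HSP objective: total edge weight of the subgraph
# induced by a candidate node set U on a weighted adjacency matrix A.
import numpy as np

def hsp_objective(A: np.ndarray, U: list[int]) -> float:
    idx = np.asarray(U)
    # Sum A[i, j] over all ordered pairs (i, j) with i, j in U.
    return float(A[np.ix_(idx, idx)].sum())
```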
HSP has been studied extensively (Billionnet, 2005), and it is closely related to multiple auxiliary graph problems such as the _Densest k-Subgraph Problem_ (DSP), which is the special case of HSP where all weights of the graph are either 0 or 1; the _Maximum Diversity Problem_ (MDP), another special case of HSP where edge weights are pairwise positive distances, such as euclidean distances in \(p\)-dimensional space; and the well-known combinatorial _Knapsack_ optimization problem. Contexts in which solving HSP would be beneficial are manyfold, which explains why the naming convention is rather loose. In the literature it is also known under the names _k-cluster problem_, _maximum edge subgraph problem_, _k-dispersion problem_, and _k-degree-sum problem_ (Billionnet, 2005).
The decision versions of DSP and its generalized version, HSP, are variants of the original Clique problem, which has been shown to be _NP-complete_ (Karp, 1972). This implies that the optimization versions of the problems are _NP-hard_ (Corneil and Perl, 1984). In practice, exact solutions of the optimization problem can be guaranteed to be tractable only for small and sparse graphs and a small range of \(k\) values, which is severely restricting for many real-life applications. Letsios et al. (2016) have proposed a _branch and bound_ algorithm which is able to output exact solutions to instances of HSP with \(k\) values up to 15 and networks with billions of edges, which translates to the order of \([10^{4},10^{5}]\) nodes in dense graphs. However, for solving larger instances one needs to turn to approximate algorithms (Asahiro et al., 2000; Letsios et al., 2016; Brimberg et al., 2009; Marti et al., 2013; Hansen et al., 2019). Marti et al. (2022; 2013) offer two comprehensive surveys of heuristic approaches to solving HSP in the context of MDP.
### Variable Neighborhood Search
A variety of metaheuristics have been applied successfully in the context of HSP, including the computationally inspired _Tabu Search_, _Variable Neighborhood Search_, and _Greedy Randomized Adaptive Search Procedure_, as well as _Simulated Annealing_, and _Scatter Search_ and _Memetic Search_(Marti et al., 2022; Marti et al., 2013).
The Variable Neighborhood Search (VNS) is a metaheuristic framework that was first introduced in Mladenovic and Hansen (1997) and has since been successfully applied to both polynomial problems with large polynomial constants and _NP-hard_ problems (Brimberg et al., 2009; Aringhieri and Cordone, 2011; Marti et al., 2013; Marti et al., 2022). VNS is attractive both because of its conceptual simplicity and because of its good general performance in related combinatorial optimization problems in graphs. In the specific domain of HSP and closely related problems such as MDP and DSP, VNS-based heuristics have been shown to have extremely good performance, superseded only by some of the more recent and more complex heuristics, most notably the _Opposition-based Memetic Search_ (Zhou, Hao, and Duval, 2017).
The core idea that characterizes VNS is the complementing of the greedy local search phase with a randomized diversification phase. Once the algorithm converges to a local optimum, the search is diversified by replacing a portion of the existing solution with randomly selected elements from the set of all elements. For each unsuccessful search attempt, the size of the replaced portion is incrementally increased until all elements in the original solution are replaced. This variability in the size of the perturbation is what gives VNS its name.
Let \(H\) be the solution with the highest known objective function value at iteration \(t\). VNS then proceeds to iteration \(t+1\)
by executing the following two steps
1. **Neighborhood change** (exploration): perturbs \(H\) with the aim of finding a solution candidate \(H^{\prime}\) in a new region of the search space, thus allowing escape from the local optimum of \(H\).
2. **Neighborhood search** (exploitation): given \(H^{\prime}\) finds a local optimum by greedily exploring the space of possible solutions in the local neighborhood of \(H^{\prime}\).
In the rest of this work, we will refer to this two-step procedure as the _optimization cycle_ (also, one iteration of the algorithm). At the end of each optimization cycle, the objective value of the new solution candidate, \(f(H^{\prime})\), is evaluated against \(f(H)\), and if an improvement is found, \(H^{\prime}\) is adopted as the currently known best solution. Then the next optimization cycle is initiated. This procedure is repeated until a stopping criterion is satisfied; this is typically implemented as either an upper bound on execution time or an upper bound on the number of iterations since the last successful update.
### Variable Neighborhood Search in the context of networks
In the context of combinatorial graph optimization problems such as HSP, both main routines of the VNS are typically implemented as node swapping operations. During neighborhood search (NS) just one node, while in neighborhood change (NC) one or more of the nodes in the current best known solution \(H\), are replaced with an equal number of nodes selected from the complement of \(H\). More specifically, in NS the replacement node is always selected from the immediate proximity of \(H\), while in NC the proximity requirement is relaxed and the domain of selection is extended to cover the whole network. Node selection is typically based on completely random choice, on heuristic approaches that aim to exploit the network's structural properties, or on a combination of the two [10, 11, 12].
Next, we will introduce some formal notation necessary for describing how VNS operates with respect to HSP. Let \(\delta_{k}(i)\) be the set of nodes \(k\) hops away from node \(i\) in the network, and the \(l\)-neighborhood \(\mathcal{N}_{l}^{H}\) the set of possible solutions that can be achieved by replacing \(l\) nodes in the solution \(H\) with \(l\) nodes from the set of nodes adjacent to \(H\), in other words \(\{\bigcup_{i\in H}\delta_{1}(i)\}\setminus H\). More formally,
\[\mathcal{N}_{l}^{H}=\{\ H^{\prime}\ |\ H^{\prime}=(\ H\setminus B\,)\cup B^{ \prime}\ \}\, \tag{2}\]
where \(B\subseteq H\), \(B^{\prime}\subset\{\bigcup_{i\in H}\delta_{1}(i)\}\setminus H\) and \(|B|=|B^{\prime}|=l\). As can be observed from Algorithm 1, a typical VNS starts by executing an NC procedure in the immediate neighborhood \(\mathcal{N}_{p}^{H}\) of \(H\) such that the size of the perturbation is \(p=1\), and then increasing \(p\) by \(p_{step}\) at each iteration until either a successful update is found or \(p=p_{max}\). At each iteration of the algorithm, both NC and NS phases are executed in succession, which yields a new solution candidate \(H^{\prime}\). The NS is typically terminated based on a selected improvement strategy: either _first improvement_, in which the first improvement over the currently known best solution is returned, or _best improvement_, in which the complete \(\mathcal{N}_{1}^{H}\)-neighborhood is explored and the solution with the best objective function value is returned.
In prior work, VNS has been applied to HSP by Brimberg et al. (2009), with an improved version proposed by Aringhieri and Cordone (2011). In a thorough comparison, Marti et al. (2013) compared competing algorithms across 315 benchmarking instances. Despite their simplicity, both Brimberg et al.'s (2009) BVNS and the modified AVNS variant by Aringhieri and Cordone (2011) achieved performance that was second only to the latest opposition-based memetic search by Zhou, Hao, and Duval (2017).
```
input : G, H, p_min, p_max, t_max, p_step
output: optimized solution H
while t < t_max do
    p <- p_min
    while p <= p_max do
        H' <- NeighborChange(G, H, p)
        H' <- NeighborSearch(G, H')
        if f(H) < f(H') then
            H <- H'
            break
        p <- p + p_step
    t <- t + dt
```
**Algorithm 1** BVNS algorithm (Brimberg et al., 2009). Input parameters include the input network \(G=(V,E)\) and the initial solution candidate \(H\subseteq V\) such that \(|H|=k\); \(H\) is assumed to have been initialised prior to the execution of BVNS. The other input parameters control the running time \(t_{max}\in\mathbb{N}\), the maximum size of the perturbation \(p_{max}\in[1,\min{(\{k,n-k\})}]\), and the step size by which the perturbation size is incremented, \(p_{step}\in[1,k-1]\).
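For readers who prefer code, a minimal and unoptimized Python rendering of Algorithm 1 might look as follows; the objective `f` and the `neighbor_change`/`neighbor_search` callables are assumed to be supplied by the caller, and this sketch is not the JIT-compiled implementation from our repository.

```python
# Minimal Python rendering of the BVNS loop (Algorithm 1); a sketch only.
import time

def bvns(G, H, f, neighbor_change, neighbor_search,
         p_min=1, p_max=10, p_step=1, t_max=600.0):
    start = time.monotonic()
    while time.monotonic() - start < t_max:
        p = p_min
        while p <= p_max:
            H_new = neighbor_change(G, H, p)   # diversification (shake)
            H_new = neighbor_search(G, H_new)  # greedy local search
            if f(H) < f(H_new):                # improving move found
                H = H_new
                break                          # restart from p_min
            p += p_step
    return H
```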
## Methods
In this section we describe our modified variant of the VNS optimization heuristic.
### Proposed improvements
In this section, we introduce our main contribution, a modification of the BVNS heuristic that we call the _Opportunistic Variable Neighborhood Search_ (OVNS). OVNS gains its name from the way it takes advantage of the well-established empirical fact that many real-world social networks exhibit heavy-tailed degree distributions, which can be explained by the _preferential attachment_ mechanism [1, 12]. To this end, we introduce the following modifications
1. **Initialization scheme** that constructs the initial solution using the _drop heuristic_.
2. **Neighborhood change scheme** that employs the _preferential attachment_ style stratified sampling for selecting new nodes into the solution.
3. **Neighborhood search scheme** that exploits the heaviest edges in the \(\delta_{1}(i)\) neighborhood.
Figure 1 uses a small undirected network to depict the internal workings of the drop heuristic (panel a), a modified NC (panel b), and a modified NS (panel c) schemes implemented in the OVNS.
For the initialization scheme (1) we employ the _drop heuristic_ (Asahiro et al., 2000). In the drop heuristic, we start with the candidate solution \(H\) consisting of all nodes in the graph and repeatedly remove the node that contributes least to the sum of weights in \(H\); the removal is repeated \(n-k\) times until \(|H|=k\) (Figure 1, panel a). For this algorithm, the worst-case approximation ratio for \(k=\frac{n}{2}\) has been shown to be bounded by \(9/4\pm\mathcal{O}(1/n)\). See Asahiro et al. (2000) for a more in-depth analysis of other cases. Brimberg et al. (2009) found empirically that this initialization strategy performs well even without further optimization, which is especially true for small and sparse graphs.
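A minimal sketch of the drop initialization, assuming the graph is given as a dense symmetric numpy adjacency matrix (implementation details in our repository may differ):

```python
# Drop heuristic sketch: start from all nodes, repeatedly remove the node
# contributing least to the induced subgraph's total weight, until |H| = k.
import numpy as np

def drop_init(A: np.ndarray, k: int) -> set[int]:
    H = set(range(A.shape[0]))
    contrib = A.sum(axis=1).astype(float)     # each node's weight within H
    while len(H) > k:
        u = min(H, key=lambda i: contrib[i])  # weakest contributor
        H.remove(u)
        contrib -= A[u]                       # removing u lowers the rest
    return H
```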
Next, we improve the neighborhood change scheme (2) by picking replacement nodes from the complement of \(H\) and stratifying the picking probability proportionally to the weighted degree of each node (Figure 1, panel b). For constructing the sampling distribution, we take advantage of the preferential attachment mechanism (Barabasi and Albert, 1999) by setting the probability \(\pi_{u}\) of a new node \(u\) being selected as
\[\pi_{u}=\frac{s_{u}}{\sum_{i\in H^{C}}s_{i}}, \tag{3}\]
where \(s_{i}\) is the weighted degree (strength) for node \(i\), \(H^{C}\) is the set \(V\setminus H\) and \(u\in H^{C}\).
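A sketch of this strength-proportional sampling, where `strength` is assumed to be a numpy array of weighted degrees:

```python
# Preferential (strength-proportional) sampling for the neighborhood change
# step: replacement nodes are drawn from the complement of H with probability
# proportional to their weighted degree s_i, as in Eq. (3).
import numpy as np

def sample_replacements(strength: np.ndarray, H: set, l: int, rng=None):
    rng = rng or np.random.default_rng()
    candidates = np.array(sorted(set(range(len(strength))) - set(H)))
    probs = strength[candidates] / strength[candidates].sum()
    return rng.choice(candidates, size=l, replace=False, p=probs)
```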
Finally, we modify the neighborhood search scheme (3) by iterating through the nodes in the solution \(H\) and, for each \(u\in H\), exploring the adjacent nodes \(v\in\delta_{1}(u)\) in descending ranking order, where the ranking is based on the edge weights \(w(u,v)\) (Figure 1, panel c). For accessing the neighboring nodes in ranking order, we produce an arg-sorted copy of the adjacency matrix during initialization of the algorithm, as shown in Algorithm 2 (line 3).
In large networks, we can further speed up the optimization cycle and reduce convergence time (at the expense of solution quality) by coupling this greedy search strategy with min-thresholding of the edge weights in the network. To this end, we introduce a quantile-based edge weight control parameter \(q=1-P(W<w_{q})\), where \(w_{q}\) is the threshold corresponding to the \(q\) value and \(W\) is a random variable distributed according to the edge weight distribution of the input network. This is applied at line 2 of Algorithm 2. For example, setting \(q=1\) will include the complete set of weights, while \(q=0.01\) only includes the top one percent of edge weights. However, evaluation of this optimization step is out of scope for this work, and thus we leave it for future inquiry.
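Both the arg-sorted neighbor index and the quantile-based threshold can be sketched as follows; the function and variable names are illustrative assumptions rather than the repository's API.

```python
# Rank-ordered neighbor access and q-based edge-weight thresholding: an
# arg-sorted copy of A lets the search visit neighbors in descending weight
# order, optionally pruned at the weight threshold w_q.
import numpy as np

def build_rank_index(A: np.ndarray, q: float = 1.0):
    order = np.argsort(-A, axis=1)        # neighbors by descending weight
    w_q = np.quantile(A[A > 0], 1.0 - q)  # q=1 keeps all positive weights,
    return order, w_q                     # q=0.01 keeps only the top 1%
```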
## Benchmarks
In order to demonstrate the effectiveness of OVNS, we run it with varied settings against two other HSP heuristics developed in the _Operations Research_ literature for the _Maximum Diversity Problem_ (MDP). The algorithms are the _Variable Neighborhood Search_ (BVNS) developed by Brimberg et al. (2009) and one of the more recently introduced top-performing MDP heuristics, _Opposition-Based Memetic Search_ (OBMA) by Zhou, Hao, and Duval (2017). In order to be
Figure 1: Operating principles of OVNS illustrated in a small network. Panel a) shows the drop initialization procedure for three iterations (\(t\in\{1,2,n-k\}\)). In small and sparse networks, the drop heuristic is generally likely to find a high quality solution by itself. Panel b) describes the neighborhood change scheme that utilizes stratified sampling with preferential attachment weighting where high degree nodes have selection probability proportional to their degree. Panel c) illustrates the greedy neighborhood search scheme that exploits the heaviest edges in the local neighborhood of the focal node. More specifically, when we iterate through the neighboring nodes of the focal node (hexagon marker), the nodes are explored in descending order based on the weight between them and the focal node, resulting in more efficient neighborhood search and faster convergence to local optima.
able to measure performance of each approach independent of implementation details, we implemented all algorithms in python. Further, to speed up the implementations, we JIT compiled the code base using the Numba library2. All benchmarks were run in the same computing environment.
Footnote 2: Source code for the implementation that uses python and numba libraries is available at [https://github.com/Decitizen/OVNS](https://github.com/Decitizen/OVNS)
First, we run the three algorithms on a set of 38 social networks with the intention of measuring the general performance of our approach (see Table 3 in the Appendix for a data description). We bound the size of these networks to the range \([10^{3},10^{5}]\) nodes. In order to measure performance across different difficulty levels, we vary the size of the targeted subgraph as \(k\in\{125,250,500,1000,2000\}\).
Second, to demonstrate performance in relatively large and dense networks, we use 5 transformed networks - two \(10^{4}\) node networks which have been generated using a weighted preferential attachment model (Barrat, Barthelemy, and Vespignani, 2004), and three differently sized real life networks that are based on Twitter retweet data (Chen et al., 2021). Before running the benchmark runs, we transform the networks into dense networks using the non-backtracking version of the Katz communicability (Arrigo et al., 2022).
Finally, we run the algorithms in 41 synthetic benchmarking instances from the mdplib2.0 library (Marti et al., 2021). These networks are commonly used in the MDP heuristics literature for comparing heuristic performance and should therefore give a standard baseline of OVNS performance. In the mdplib2.0 library, each network is accompanied by a single predetermined \(k\) value. Note that in contrast to many social networks with heavy-tailed edge weight and degree distributions, in MDP networks the edge weights are typically normally distributed.
In general, each algorithm is run 20 times per combination of \(k\) value and network, with each run taking 10 minutes. However, for larger and denser instances we accommodated longer convergence times by fine-tuning the parameter choices further. For the largest social network (\(N=33,696\)) we extended the range of \(k\) values to cover \(\{4000,8000,10000,12000\}\). Additionally, to ensure a satisfying degree of convergence, we redistributed the computing resources for this network such that we ran 15 runs, each with a 30-minute time budget. Similarly, for the dense social networks, we ran 20 runs, each with a 60-minute budget (see the Appendix for more details).
For all three benchmarking data sets, we compute both average and median relative deviations for each algorithm. Similarly, we compute rankings in terms of objective function values such that, for each combination of network and parameter setting, the runs are combined into a pool and ranked. Finally, we combine all runs at the data set level and compute median and mean ranks for each algorithm. We define the relative deviation as \(d(f^{*},f^{A}_{H})=100\%(f^{*}-f^{A}_{H})/f^{*}\), where \(f^{*}\) is the best objective function value for the given combination of network and parameter \(k\) value, and \(f^{A}_{H}\) is the objective function value for the given solution \(H\) produced by algorithm \(A\).
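In code, the relative deviation is a direct transcription of this definition:

```python
# Relative deviation d(f*, f_H) = 100% * (f* - f_H) / f*, where f* is the
# best objective value observed for the given network and k combination.
def relative_deviation(f_star: float, f_H: float) -> float:
    return 100.0 * (f_star - f_H) / f_star
```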
## Results
### Results on small \(k\) values
In both sparse and dense social networks, for \(k<1000\) OBMA and OVNS find exactly the same optima. This is indicated by median deviations that approach 0, which suggests low problem difficulty; this is further corroborated by the fact that, for \(k<10^{3}\), OVNS converges in a fraction of the available time. For OVNS, convergence to the final value takes on average \(37\pm 108\) seconds in sparse and \(266\pm 515\) seconds in dense social networks (the large variance is explained by the large variation in network sizes). In order to find demonstrable differences in performance between OBMA and OVNS, we focus our attention on large problem instances with the additional constraint that \(10^{3}\leq k\leq n/2\). This upper bound ensures that OBMA has performance comparable to the results obtained in Zhou, Hao, and Duval (2017), where the opposition-based search was not implemented for \(k>n/2\). With these constraints, the number of sparse networks considered in the final analysis is reduced to \(20\).
### Overall OVNS performance
Table 1 depicts the median and average deviations as well as median and average rankings for each algorithm in each of
the three benchmarking data sets, while Figure 2 shows the relative deviations as a function of network size. Overall, OVNS demonstrates the best performance: it scores best in 2 of the 3 benchmarking data sets, stays within 1% mean relative deviation in all data sets, and scores best in 8 out of the 12 aggregate measures across the different data sets. We can observe that it clearly excels in dense social networks, followed by sparse social networks. As expected, in MDP networks OBMA is able to outperform it, achieving the best results in all 4 related aggregate measures.
From Figure 2 we can observe the effect of network size on the performance profiles of the algorithms. In sparse networks, once the network size grows beyond \(n>5000\), OBMA's performance starts to deteriorate, leading to a visible 1-2 order of magnitude difference in relative deviation between the two algorithms. A similar but even more pronounced trend can be observed in dense networks, where OVNS consistently achieves the best solutions, with both mean and median relative deviation approaching zero, while for OBMA the corresponding measures are 2.86% and 0.02%, respectively. The distributions for the two algorithms differ significantly both in sparse networks (Wilcoxon-Mann-Whitney \(U=11,751\), \(n_{1}=n_{2}=240\), p-value \(\ll 0.05\), one-tailed) and in dense networks (Wilcoxon-Mann-Whitney \(U=52,768\), \(n_{1}=n_{2}=550\), p-value \(\ll 0.05\), one-tailed). However, in the MDP benchmarking instances OBMA still retains its performance edge, achieving mean and median relative deviations of 0.02% and 0.01%, compared to 0.47% and 0.44% for OVNS (Wilcoxon-
\begin{table}
\begin{tabular}{l r r r r} \hline \hline OVNS & \(\bar{\mu}_{r}\) & \(M_{r}\) & \(\bar{\mu}_{d}\) & \(M_{d}\) \\ \hline Sparse (\(N=20\)) & **2.44** & **2.0** & **0.97** & **0.02** \\ Dense (\(N=5\)) & **6.10** & **5.0** & **0.00** & **0.00** \\ MDP (\(N=41\)) & 21.25 & 21.00 & 0.47 & 0.44 \\ \hline OBMA & & & & \\ \hline Sparse (\(N=20\)) & 6.94 & 6.00 & 7.15 & 0.2 \\ Dense (\(N=5\)) & 15.12 & 15.00 & 2.86 & 0.02 \\ MDP (\(N=41\)) & **5.49** & **3.0** & **0.02** & **0.01** \\ \hline BVNS & & & & \\ \hline Sparse (\(N=19\)) & 22.36 & 21.0 & 36.47 & 37.21 \\ Dense (\(N=5\)) & 33.83 & 34.0 & 36.84 & 39.77 \\ MDP (\(N=41\)) & 40.72 & 41.0 & 4.92 & 5.26 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table listing the results for the three benchmarking data sets; both sparse and dense social networks (results are subsetted \(k\geq 1000\)), as well as synthetic MDP networks. For each combination of algorithm and benchmarking data set, means (\(\bar{\mu}\)) and medians (\(M\)) are reported for both ranking (\(r\)) and relative deviation (\(d\)). Measured value of best performing algorithm for each data set is highlighted by bolding. Overall, OVNS demonstrates best performance in 8 out of 12 measures, clearly excelling in large and dense social networks, while OBMA achieves the best performance in MDP networks.
Figure 2: Benchmark results in sparse social networks (top), dense social networks (middle) and synthetic MDP networks (bottom). Relative deviations (\(\Delta d\)) are shown as a function of network size \(n\). Smaller values indicate better performance. In small sparse networks OBMA and OVNS demonstrate similar performance, while for \(n>5000\), performances diverge and OVNS consistently outperforms OBMA. Differences are further pronounced in dense networks while in MDP networks OBMA outperforms OVNS. BVNS has worst performance across all network types and sizes, but has smallest deviation in MDP networks.
Mann-Whitney \(U=672,398\), \(n_{1}=n_{2}=820\), p-value \(\ll 0.05\), one-tailed). The difference between BVNS and OVNS performance is also smaller here, which further suggests that in these networks OVNS does not benefit from the stratified sampling to the same degree as in the two earlier data sets.
### Improved performance over BVNS
In terms of relative deviation, BVNS results range between 20-40% in social networks but decrease to around 5% in MDP networks. Somewhat unexpectedly, BVNS instances struggle to converge within the given time limits already at \(k<10^{3}\) in sparse social networks. Further diagnostics show difficulties in keeping up in terms of iteration speed: the number of iterations at the end of each run is on average only \(6\%\) (\(1.5\cdot 10^{5}\)) of that of OVNS.
To estimate the performance difference independent of time limit, we conducted 30 additional runs of both OVNS and BVNS algorithms with a budget constraint of 50,000 iterations in one of the sparse Twitter networks ("Left 3", \(N=1226\)). In order to control for the effect of the drop initialization, we also ran a modified version of OVNS which uses BVNS's randomized initialization routine.
Here, again controlling for the number of iterations, regular OVNS achieves the best performance with a mean relative deviation of \(0.02\%\pm 0.02\), while for BVNS the corresponding relative deviation is \(8.17\%\pm 0.82\) (Wilcoxon-Mann-Whitney \(U=0.0\), \(n_{1}=n_{2}=30\), p-value \(\ll 0.05\), one-tailed). The main finding is that, when we control for the iteration speeds, the BVNS performance difference drops from the 20-40% range to below 10%, yet it still remains significant. If we further control for the effect of the initialization by the drop heuristic, OVNS scores \(0.08\pm 0.06\), which is only a \(0.06\) pp, yet still statistically significant, difference (Wilcoxon-Mann-Whitney \(U=198\), \(n_{1}=n_{2}=30\), p-value \(\ll 0.05\), one-tailed) to the OVNS initialized with the drop heuristic. This reveals a secondary empirical finding: the effect of the drop initialization on the performance of OVNS is negligible when compared to the difference in relative deviations between BVNS and regular OVNS. Combined, these findings suggest that the updated NC and NS routines produce significant performance improvements over BVNS.
## Discussion
### Conclusions
In this paper we have proposed an improved version of the VNS metaheuristic for subgraph finding, and we have shown that these improvements result in significant performance gains. Particularly in networks with log-normal to heavy-tailed degree and edge weight distributions, OVNS exhibits a significant improvement in performance over the original BVNS algorithm, and, more importantly, it supersedes the performance of current state-of-the-art heuristic algorithms. OVNS achieves this both by systematically exploiting the heaviest ties in the local neighborhood and by successfully applying what Hussain et al. (2019) describe as _intelligent sampling_, i.e., a stratified sampling method used to restrict or guide the search in the problem space. For this, OVNS incorporates key structural insights from empirical studies of networks, namely by preferentially targeting the heterogeneity in the degree structures of the networks. However, the benefit of preferential sampling only applies where such structures exist, as demonstrated by the benchmarks in the MDP networks. In these networks, OBMA is still able to produce slightly better results, though the margins are small.
### Limitations
Although we expected an improvement over BVNS performance, the observed difference in favor of OVNS was surprisingly large. We showed that a large part of this difference is likely explained by the decreased iteration rates of BVNS. This is somewhat unexpected, since OVNS and BVNS use partially overlapping source code and call exactly the same functions at the implementation level. This might suggest that OVNS benefits from additional compiler or runtime memory optimizations. Specifically, the neighborhood search improvements and the related arg-sorted indexing matrix can be exploited for fast caching during execution (only the most frequently accessed neighbors need to be kept in memory), while BVNS does not have this feature. However, we also showed that even when we control for the iteration rates, OVNS retains a significant performance edge over BVNS. The generalizability of this finding is to be taken with caution, as the performance was evaluated on only a single network.
We should also mention that two of the dense networks were created using the weighted preferential attachment model, for which the generation mechanism is almost identical to the updated stratified sampling implemented in OVNS. It is therefore to be expected that OVNS performs well in these networks. However, we did not observe noticeable drops in performance for the other dense networks, which suggests that OVNS is able to generalize beyond the specific generative model. Also, it should be mentioned that for OBMA, we needed to considerably relax the number of iterations during each tabu search (see the Appendix). This likely affects its overall performance negatively. The fact that this relaxation was necessary in the MDP networks that were used in Zhou, Hao, and Duval (2017) indicates that our python implementation of OBMA is significantly slower than the original C++ implementation (Zhou, Hao, and Duval 2017). However, this bottleneck on performance applies to OVNS as well - i.e., we expect that implementing OVNS in high-performance languages such as Julia, C, or C++ can produce considerable speedups.
### Directions for future research
Benchmarking on even larger networks is a potential future research direction, as it would reveal more differences in the scaling of the heuristics. Other future directions include testing multiple simultaneous moves during the neighborhood search phase, implementing efficient parallelization, and guiding the search using memory structures such as tabu lists. In their survey, Hussain et al. (2019) also foresee a promising avenue of research in new adaptive techniques that allow heuristics to self-tune their parameter values based on objective function values during execution. In OVNS, we already introduce the \(q\) parameter for min-thresholding the weights of the input network. In addition, parameters such as the \(p_{step}\) size and the neighborhood search mode can be taken advantage of for fine-tuning and self-adapting the search during execution. In OVNS, we combine two heuristics (drop and BVNS) into a hybrid heuristic that surpasses both of their individual performances. Similar hybridization techniques could be further explored with the aim of producing new heuristics with state-of-the-art performance. For instance, OBMA constructs the opposite solution candidate by picking nodes uniformly at random from the rest of the network's nodes. Further work could study the potential performance gains if stratified sampling proportional to degrees were adopted in OBMA.
## Appendix
Source code for the implementation of OVNS, OBMA and BVNS is available at [https://github.com/Decitizen/OVNS](https://github.com/Decitizen/OVNS)
### Data sources
Table 3 describes the social networks used. We obtained the data sets from two sources: i) 5 social networks from the open access data repository _networkrepository.com_ (Rossi and Ahmed, 2015), and ii) 33 Twitter retweet networks introduced in (Chen et al., 2021). For networkrepository.com, we narrowed the search to weighted, multi-edged, or temporal social networks with a focus on online human communication networks such as email correspondence networks. We limited the size of the networks to the range \([10^{2},10^{5}]\) and reduced all temporal or multilayer networks to static, undirected, single-layer networks. For temporal networks, we derived weights by aggregating temporal edges between nodes. For the MDP networks, we used a subset of the mdplib 2.0 benchmarking set (Marti et al., 2021). To allow easy comparisons, we used the same MDG-a 20-40 and MDG-c 20-40 instances that Zhou et al. used in their OBMA paper (Zhou, Hao, and Duval, 2017).
### Parameters in benchmarks
Table 2 shows the parameter settings for BVNS and OVNS heuristics. Setting \(p_{step}=\frac{k}{10}\) makes the step size dependent on the problem difficulty and aims to allow rapid diversification also in cases where \(k\) is large.
For OBMA, the standard parameter values described in (Zhou, Hao, and Duval, 2017) were used with the only difference being the number of iterations \(n_{tabu}\) allowed for each
\begin{table}
\begin{tabular}{l r r r} \hline \hline & \(p_{step}\) & search & shake & \(q\) \\ \hline BVNS & 1 & _first_ & _uniform*_ & 1.00* \\ OVNS & \(\lfloor k/10\rfloor\) & _first_ & _preferential_ & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Table listing the parametrizations for BVNS and OVNS for the benchmarking runs. Additionally, OVNS uses the drop heuristic for initialization while BVNS uses random initialization (best out of 1000 draws). (* these parameter values don’t exist in BVNS, but corresponding parameter values in OVNS have been added for reference.)
\begin{table}
\begin{tabular}{l r} \hline \hline network & \(n\) \\ \hline \multicolumn{3}{c}{Twitter networks (Chen et al., 2021) (\(N=33\))} \\ \hline Centre 1 & 1600 \\ Centre 2 & 1488 \\ Centre 3 & 1030 \\ Climate 1 & 15,061 \\ Climate 2 & 6951 \\ Climate 3 & 6567 \\ Economicpolicy 1 & 3328 \\ Economicpolicy 2 & 2734 \\ Economicpolicy 3 & 2828 \\ Education 1 & 7563 \\ Education 2 & 4312 \\ Education 3 & 4323 \\ Finns 1 & 1680 \\ Finns 2 & 1685 \\ Finns 3 & 1743 \\ Green 1 & 2225 \\ Green 2 & 1537 \\ Green 3 & 1757 \\ Immigration 1 & 3373 \\ Immigration 2 & 2040 \\ Immigration 3 & 2879 \\ Left 1 & 1226 \\ Left 2 & 783 \\ Left 3 & 531 \\ National 1 & 2703 \\ National 2 & 1466 \\ National 3 & 877 \\ Sdp 1 & 2084 \\ Sdp 2 & 1424 \\ Sdp 3 & 732 \\ Socialsecurity 1 & 7629 \\ Socialsecurity 2 & 3816 \\ Socialsecurity 3 & 3597 \\ \hline \hline \multicolumn{3}{l}{Network repository (Rossi and Ahmed, 2015) (\(N=5\))} \\ \hline EMAIL-DNC-1 & 906 \\ EMAIL-ENRON-L* & 33,696 \\ EMAIL-DNC-2 & 1891 \\ SOC-WIKI-ELEC & 7118 \\ SOC-WIKI-VOTE & 889 \\ \hline \multicolumn{3}{l}{Dense transformed networks} & \multicolumn{1}{c}{(\(N=5\))} \\ \hline NBKT-Climate 1** & 15,061 \\ NBKT-Education 1** & 7563 \\ NBKT-Immigration 1** & 3373 \\ NBKT-BRBV1** & 10,000 \\ NBKT-BRBV2** & 10,000 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sparse and dense benchmarking networks and their sizes. Number of runs for each algorithm and each parameter setting was 20, time budget per run was 10 minutes. Exceptions: (*) 15 runs, 30 minutes, (**) 20 runs, 60 minutes.
round of tabu search. For the python implementation, the original default value of 50,000 reported in the paper was too high, resulting in OBMA running over the given time limit. We relaxed this parameter value and determined it dynamically based on the network size \(N\), using the following heuristic: \(n_{tabu}=1.5\cdot 10^{6}/N\).
### Broader perspectives and ethical considerations
Our work focuses on improving existing heuristic approaches for dense subgraph finding with structural insights derived from real-life social networks, and while our method offers a significant performance improvement in social networks, similar results can be achieved by committing more computational resources to the task. Therefore, it is unlikely that our approach will have any disruptive or harmful impact on society caused by threats that were made possible by this research. However, despite the potential for many beneficial societal impacts related to social dynamics, epidemiology, law enforcement, managerial processes, urban planning, logistics, ecology, and biology, the identification of groups in social networks through the use of methods such as ours must be approached with caution and requires careful consideration of potential second and third order consequences in the given application context.
When analyzing human social networks, we strongly encourage users of our approach to consider the right to privacy of the individuals represented in the network. It is crucial to properly anonymize network data to ensure that the privacy and data rights of identified individuals are not violated. We would also like to emphasize that, especially in the context of experimental application in social networks and socially embedded systems, researchers should seek informed consent whenever it is possible given the constraints of the application context. Both of these practices are of paramount importance in order to ensure that potential beneficial impacts of the research are maximized while minimizing any negative consequences.
## Acknowledgments
We would like to thank both the Aalto Science-IT project for their computational resources as well as the Aalto SciComp for technical assistance and support. This work was supported by the Academy of Finland (grant numbers 320780, 320781, and 349366).
|
2309.09837 | Frame-to-Utterance Convergence: A Spectra-Temporal Approach for Unified
Spoofing Detection | Voice spoofing attacks pose a significant threat to automated speaker
verification systems. Existing anti-spoofing methods often simulate specific
attack types, such as synthetic or replay attacks. However, in real-world
scenarios, the countermeasures are unaware of the generation schema of the
attack, necessitating a unified solution. Current unified solutions struggle to
detect spoofing artifacts, especially with recent spoofing mechanisms. For
instance, the spoofing algorithms inject spectral or temporal anomalies, which
are challenging to identify. To this end, we present a spectra-temporal fusion
leveraging frame-level and utterance-level coefficients. We introduce a novel
local spectral deviation coefficient (SDC) for frame-level inconsistencies and
employ a bi-LSTM-based network for sequential temporal coefficients (STC),
which capture utterance-level artifacts. Our spectra-temporal fusion strategy
combines these coefficients, and an auto-encoder generates spectra-temporal
deviated coefficients (STDC) to enhance robustness. Our proposed approach
addresses multiple spoofing categories, including synthetic, replay, and
partial deepfake attacks. Extensive evaluation on diverse datasets
(ASVspoof2019, ASVspoof2021, VSDC, partial spoofs, and in-the-wild deepfakes)
demonstrated its robustness for a wide range of voice applications. | Awais Khan, Khalid Mahmood Malik, Shah Nawaz | 2023-09-18T14:54:42Z | http://arxiv.org/abs/2309.09837v1 | # Frame-to-Uttterance Convergence: A Spectra-Temporal Approach for Unified Spoofing Detection
###### Abstract
Voice spoofing attacks pose a significant threat to automated speaker verification systems. Existing anti-spoofing methods often simulate specific attack types, such as synthetic or replay attacks. However, in real-world scenarios, the countermeasures are unaware of the generation schema of the attack, necessitating a unified solution. Current unified solutions struggle to detect spoofing artifacts, especially those produced by recent spoofing mechanisms. For instance, the spoofing algorithms inject spectral or temporal anomalies, which are challenging to identify. To this end, we present a spectra-temporal fusion leveraging frame-level and utterance-level coefficients. We introduce a novel local spectral deviation coefficient (SDC) for frame-level inconsistencies and employ a bi-LSTM-based network for sequential temporal coefficients (STC), which capture utterance-level artifacts. Our spectra-temporal fusion strategy combines these coefficients, and an auto-encoder generates spectra-temporal deviated coefficients (STDC) to enhance robustness. Our proposed approach addresses multiple spoofing categories, including synthetic, replay, and partial deepfake attacks. Extensive evaluation on diverse datasets (ASVspoof2019, ASVspoof2021, VSDC, partial spoofs, and in-the-wild deepfakes) demonstrated its robustness for a wide range of voice applications.
Voice Spoofing Detection, Unified Detection, Spectra-Temporal, Audio Deepfake
## I Introduction
Voice authentication methods are mainstream solutions for identity verification systems, but the increasing prevalence of voice spoofing, including logical, physical, and deepfake attacks, poses a significant threat to their effectiveness [1]. Existing methods often focus on mitigating individual attack types or a subset of these attacks, leaving systems vulnerable to the others. For example, a recent study [2] shows the limitations of existing systems, especially in detecting partial and full deepfake attacks. Notably, while existing systems successfully identify replayed and synthetically generated speech samples [3, 4, 5], they lack the ability to detect partial deepfake samples, as indicated in Table I. These results show that the best-performing countermeasure from the ASVspoof\(2019\) challenge suffers a substantial drop in performance when evaluated on partially spoofed samples. In another experiment, training the same system on the partial-spoof dataset and evaluating its performance on the ASVspoof\(2019\) dataset yielded an intriguing result: performance on the development dataset remained competitive, but on the evaluation subset, the performance deteriorated notably. As shown in Fig. 1, unlike the spectrogram of a replayed or synthetic speech sample, the spectrogram of a partially spoofed sample exhibits heterogeneous spectral artifacts. This may lead to significant performance deterioration of existing spoofing detection mechanisms, whether utterance-level [6, 7, 8], transformer-based [9, 10], or deep-learning-based [11, 12, 4, 13]. This holds even when these systems are trained on a specific dataset of partial spoofs, indicating the need for a solution adept at detecting frame-level disparities in partial deepfake scenarios.
Earlier anti-spoofing methods were developed to prevent either physical or logical attacks [1, 14, 5]. However, more recent methods are focused on creating a unified solution based on utterance-level features capable of detecting both physical attacks (PA) and logical attacks (LA) [11, 13, 4, 8]. These unified solutions tend to be biased in favor of detecting either logical or physical attacks. Thus, there is a need for an unbiased unified solution. Moreover, other work has explored the detection of partial and fully deepfake attacks in a unified solution based on segment-level features [14, 15, 16]; however, these methods often fail to identify physical attacks. Thus, full and partial deepfake attacks necessitate a comprehensive approach that accounts for both segment-level and utterance-level artifacts. To this end, we propose a spectra-temporal approach that involves extracting frame-oriented spectral deviated coefficients (SDC), along with utterance-oriented sequential temporal coefficients (STC), using a Bidirectional Long Short-Term Memory (Bi-LSTM) network. These components collectively capture intricate patterns at both the utterance and frame levels, forming a strong foundation for our unified approach. In particular, the presented methodology unifies the detection of physical, partial, and fully deepfake attacks, resulting in a robust voice
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{ASV} & \multicolumn{2}{|c|}{PSF} \\ \hline & Train & Dev. & Eval. & Dev. & Eval. \\ \hline EER(\%) & ASV & 0.21 & 2.65 & 9.59 \(\uparrow\) & 13.96 \(\uparrow\) \\ & PSF & 4.28 \(\uparrow\) & 5.38 \(\uparrow\) & 3.68 & 6.19 \(\uparrow\) \\ \hline min-tDCF & ASV & 0.006 & 0.064 & 0.185 & 0.300 \(\uparrow\) \\ & PSF & 0.115 & 0.171 & 0.100 & 0.164 \\ \hline \end{tabular}
\end{table} TABLE I: The best architecture from the ASVspoof challenge [2] is evaluated in terms of generalizability using the ASVspoof\(2019\)-LA and Partialspoof\(2021\) datasets (lower is better). The symbol \(\uparrow\) marks a performance degradation.
spoofing detection method.
The main contributions of this paper are as follows: 1) We introduce a spectra-temporal-based unified method for the detection of different voice spoofing categories. 2) We propose spectral deviated coefficients for segment-level artifact extraction and employ a bi-LSTM network to capture sequential temporal artifacts within speech signals. Through rigorous experimentation on diverse datasets, we demonstrate the effectiveness of the proposed method. To the best of our knowledge, this is the first attempt to tackle four different types of voice spoofing with a single system.
## II Proposed Method
The proposed method is divided into three sections, as shown in Fig. 2. It consists of Spectral Deviated Coefficients (SDC), Sequential Temporal Coefficients (STC), and Spectra-Temporal Deviation Coefficients (STDC). These sections collectively form a unified method for the reliable detection of voice spoofing.
#### II-B1 Spectral Deviated Coefficients (SDC)
We used the raw input speech signal \(s(t)\) to extract SDC, consisting of both higher and lower frequencies across various time frames:
\[s(t)=h\,\sin(2\pi f_{1}t)+l\,\sin(2\pi f_{2}t) \tag{1}\]
where \(h\) and \(l\) represent the amplitudes of the two frequency components, and \(f_{1}\) and \(f_{2}\) denote the higher and lower frequencies, respectively. Next, we apply Hamming windows, which minimize spectral leakage by tapering frame edges and preventing abrupt truncation:
\[w[n]=\alpha-\beta\cdot\cos\left(\frac{2\pi n}{N-1}\right) \tag{2}\]
\[y[n]=s[t]\cdot w[n] \tag{3}\]
where \(s[t]\) denotes the input signal, \(w[n]\) represents the Hamming window with a size of \(N\), and \(\alpha\) and \(\beta\) are the window center and edge coefficients, respectively. The resulting segmented signal, after applying windowing and framing, is denoted as \(y[n]\). Next, we transform the obtained \(y[n]\) to the frequency spectra using a log-Mel spectrogram and fast Fourier transform (FFT) with the following parameters (hop length = \(512\), mels = \(128\), fft = \(2048\)) as follows:
\[S[mk]=\log\left(1+\sum_{n=0}^{N-1}|X[n]|^{2}\cdot H_{m}[k,f_{n}]\right) \tag{4}\]
where \(S[mk]\) represents the log-Mel spectrogram at Mel frequency \(m\) and frame \(k\), \(X[n]\) stands for the Short-Time Fourier Transform (STFT) at time \(n\), and \(H_{m}[k,f_{n}]\) represents the Mel filterbank at frequency \(f_{n}\) corresponding to Mel frequency \(m\). The obtained log-transformed Mel spectrogram is then subjected to the Local Deviated Pattern (LDP) operator, which captures the local higher and lower frequency spectrum as follows:
\[LDP(S_{mk(c)},S_{mk(n)},\mu_{t})=\begin{cases}1&S_{mk(n)}\geq S_{mk(c)}+\mu_{t},\\ -1&S_{mk(n)}\leq S_{mk(c)}-\mu_{t},\\ 0&S_{mk(c)}-\mu_{t}<S_{mk(n)}<S_{mk(c)}+\mu_{t}\end{cases} \tag{5}\]
where \(LDP(S_{mk(c)},S_{mk(n)},\mu_{t})\) represents the Local Deviated Pattern at position \((c,n)\), with \(S_{mk(c)}\) and \(S_{mk(n)}\) representing the central and neighboring window values, and \(\mu_{t}\) referring to the central tendency average of the window. We determine the conditioning threshold by considering both \(S_{mk(c)}\) and \(\mu_{t}\), rather than relying solely on the central window value. This enhances the extraction of LDP features by capturing deviations from the central value, revealing patterns indicative of underlying acoustic traits.
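For reference, the front-end of Eqs. 2-4 that produces \(S[mk]\) can be sketched in a few lines; the use of librosa and the sampling rate are assumptions of this sketch, while the FFT size, hop length, Mel-band count and Hamming window follow the stated parameters.

```python
# Minimal sketch of the framing and log-Mel front-end (Eqs. 2-4).
# librosa and the 16 kHz sampling rate are assumptions of this sketch.
import librosa
import numpy as np

def log_mel_spectrogram(path, sr=16000):
    """Return S[mk] = log(1 + Mel energies) for a speech file."""
    y, _ = librosa.load(path, sr=sr)
    # The STFT inside melspectrogram applies the windowed framing of
    # Eqs. 2-3; window="hamming" selects the Hamming window of Eq. 2.
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128,
        window="hamming", power=2.0)
    return np.log1p(mel)  # log(1 + x), matching Eq. 4
```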
To efficiently handle spatial frequencies, we separately process the higher and lower frequencies of \(S[mk]\). The LDP employs triplicate conditions to extract both higher and lower patterns. These patterns are further categorized into two sets: local higher spectra (LHS) and local lower spectra (LLS). Before computing \(LHS\) and \(LLS\), we transform negative values into positive ones, as shown in Eqs. 6 and 7. For \(LHS\), we convert all '-1' values to '0' while leaving the other values unchanged, as described in Eq. 6. This results in a set of positive higher-order patterns in \(S[mk]\). Similarly, LLS patterns are derived by replacing '1' with '0' and '-1' with '1' in \(LDP(S_{mk(c)},S_{mk(n)},\mu_{t})\) as follows:
\[LHS=LDP(S_{mk(c)},S_{mk(n)},\mu_{t})=-1\to 0 \tag{6}\]
\[LLS=\begin{cases}LDP(S_{mk(c)},S_{mk(n)},\mu_{t})=1\to 0\\ LDP(S_{mk(c)},S_{mk(n)},\mu_{t})=-1\to 1\end{cases} \tag{7}\]
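A compact NumPy sketch of the LDP operator (Eq. 5) and the LHS/LLS split (Eqs. 6-7) is given below; the 3x3 neighbourhood, the border handling and the default choice of \(\mu_{t}\) are simplifying assumptions.

```python
# Schematic LDP (Eq. 5) over 3x3 neighbourhoods of a log-Mel spectrogram S,
# followed by the LHS/LLS split (Eqs. 6-7). Borders are ignored and the
# threshold default is an assumption of this sketch.
import numpy as np

def ldp_codes(S, mu_t=None):
    """Return {-1, 0, 1} codes for the 8 neighbours of each inner pixel."""
    H, W = S.shape
    if mu_t is None:
        mu_t = np.std(S)                     # assumed threshold choice
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((H - 2, W - 2, 8), dtype=np.int8)
    center = S[1:-1, 1:-1]
    for k, (di, dj) in enumerate(offsets):
        nbr = S[1 + di:H - 1 + di, 1 + dj:W - 1 + dj]
        codes[..., k] = np.where(nbr >= center + mu_t, 1,
                                 np.where(nbr <= center - mu_t, -1, 0))
    return codes

def lhs_lls(codes):
    lhs = (codes == 1).astype(np.uint8)      # Eq. 6: map -1 -> 0
    lls = (codes == -1).astype(np.uint8)     # Eq. 7: 1 -> 0, -1 -> 1
    return lhs, lls
```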
The binary bit streams, denoted as LHS and LLS, are converted into decimal values through a bit extraction process. We begin by extracting bits from the eastern direction and proceed in a counter-clockwise manner to obtain the decimal equivalents as shown in the equation below:
\[HL_{(int)}=\sum_{i=1}^{K}HL(C_{rn})\times 2^{i-1} \tag{8}\]
Fig. 1: Spectrogram comparison of bona fide (first, left), fully synthesized (second), partially deepfake (third), and replay (fourth) speech samples.
where \(HL\) denotes the higher and lower coefficients obtained from Eqs. 6 and 7, \(C_{rn}\) represents the right neighbour at each position, and \(K\) is the total number of bits. Next, we extract deviated tendency patterns from the obtained \(HL_{(int)}\) to ensure the presence of spectral artifacts in both lower and higher spectral coefficients. Later, we only extract coefficients that exist in both higher and lower coefficients and neglect the rest of the values. We perform this task in a two-step process. First, we compute the mean vector of both higher and lower integrals separately as follows:
\[MV_{(\delta)}=\frac{1}{n}\sum_{i=1}^{n}HL_{(int)} \tag{9}\]
where \(MV_{(\delta)}\) refers to the mean vector from the higher and lower integrals \(HL_{(int)}\). Next, we compute the central tendency vector from the obtained mean vectors \(MV_{(\delta)}\) as follows:
\[CTV_{(\delta)}=\frac{1}{n}\sum_{i=1}^{n}MV_{(\delta)} \tag{10}\]
where \(CTV_{(\delta)}\) denotes the central tendency mean value of the mean vectors obtained in Eq. 9. By calculating the mean of the mean vectors, we confirm the presence of higher frequencies in both higher and lower integrals, combining them into a single optimal SDC. We retain values that are higher than their mean values and combine them to derive the optimal robust spectral features, as demonstrated in Eq. 11.
\[SDC_{(coff)}=[HL_{(int)}>CTV_{(\delta)}] \tag{11}\]
where \(SDC_{(coff)}\) represents the spectral deviated coefficients. Finally, a discrete Fourier transform (DFT) is applied to the LDP-transformed \(SDC_{(coff)}\) coefficients to obtain robust 128D spectral features. The upper right side of Fig. 2 shows the extraction of SDC patterns.
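The remaining steps, the bit packing (Eq. 8) and the deviation-based selection (Eqs. 9-11), can be sketched as follows; the averaging axes and the final DFT length are assumptions based on the stated 128-D output.

```python
# Sketch of the bit-to-decimal packing (Eq. 8) and the mean-vector /
# central-tendency selection (Eqs. 9-11). Axis choices are assumptions.
import numpy as np

def pack_bits(bitmap):
    """Pack the 8 neighbour bits of each position into a decimal code."""
    weights = 2 ** np.arange(8)              # 2^(i-1) for i = 1..8
    return (bitmap * weights).sum(axis=-1)   # HL_(int)

def sdc_coefficients(lhs, lls):
    feats = []
    for hl in (pack_bits(lhs), pack_bits(lls)):
        mv = hl.mean(axis=1)                 # Eq. 9: mean vector
        ctv = mv.mean()                      # Eq. 10: central tendency
        feats.append(mv[mv > ctv])           # Eq. 11: keep deviated values
    fused = np.concatenate(feats)
    return np.abs(np.fft.fft(fused, n=128))  # DFT -> 128-D SDC features
```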
#### II-B2 Sequential Temporal Coefficients (STC)
We employed a bidirectional long short-term memory (Bi-LSTM) network to extract sequence-based utterance-level features. Unlike traditional LSTMs, a Bi-LSTM processes the input in both forward and backward directions, capturing complex temporal relationships. In this work, a two-layer Bi-LSTM configuration was employed to improve temporal feature extraction, yielding \(128\)-dimensional temporal features.
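A minimal PyTorch sketch of such an extractor is given below; the hidden size of 64 per direction (so that the bidirectional output is 128-dimensional) and the use of the last time step as the utterance-level summary are assumptions.

```python
# Two-layer bidirectional LSTM producing 128-D utterance-level STC features.
import torch
import torch.nn as nn

class STCExtractor(nn.Module):
    def __init__(self, n_mels=128, hidden=64):
        super().__init__()
        # Two stacked bidirectional layers; 2 * hidden = 128-D output.
        self.lstm = nn.LSTM(input_size=n_mels, hidden_size=hidden,
                            num_layers=2, bidirectional=True,
                            batch_first=True)

    def forward(self, x):                    # x: (batch, frames, n_mels)
        out, _ = self.lstm(x)
        return out[:, -1, :]                 # utterance-level summary

stc = STCExtractor()(torch.randn(4, 200, 128))  # -> torch.Size([4, 128])
```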
#### II-B3 Spectra-Temporal Deviation Coefficients (STDC)
In this section, we focus on converging SDC and STC to create the Spectra-Temporal Deviation Coefficients (STDC) feature set. Given the distinct natures of SDC and STC, we address the range disparity by applying a tailored normalization technique that ensures both sets of coefficients are within a compatible range. The normalized coefficients are then processed through an autoencoder-decoder network, which distils the robust representation of spectra-temporal cues. The reconstruction process of the STDC feature set also aids in alleviating the challenges posed by sparsity in STC features before normalization.
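The following sketch illustrates this convergence under simple assumptions (min-max normalization, a single-layer encoder and decoder, and 128-D SDC and STC inputs); the actual network dimensions are not specified in the text.

```python
# Normalization and autoencoder fusion producing the STDC features; the
# bottleneck code is the fused representation and the reconstruction is
# used for training. All dimensions are assumptions of this sketch.
import torch
import torch.nn as nn

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

class STDCAutoencoder(nn.Module):
    def __init__(self, in_dim=256, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, sdc, stc):
        fused = torch.cat([minmax(sdc), minmax(stc)], dim=-1)
        code = self.encoder(fused)           # STDC features
        recon = self.decoder(code)           # reconstruction target
        return code, recon
```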
## III Experimentation and Results
### _Dataset and Implementation Details_
We used several challenging datasets (ASVspoof\(2019\)[7], ASVspoof\(2021\)[17], VSDC [18], partial spoofs [2]
Fig. 2: Architectural diagram of the proposed solution (left). The upper right subset in the blue dotted line represents the extraction mechanism of the frame-level Spectral Deviated Coefficients. The lower right subset in green presents the extraction mechanism of the utterance-level Sequential Temporal Coefficients.
(Utterance-based), and in-the-wild audio deepfakes (IWA) [19]) to evaluate the proposed method. We used EER and accuracy to evaluate and compare the performance of the proposed method. We performed experiments on four NVIDIA Tesla V\(100\) GPUs with \(16\) GB of memory each, coupled with \(192\) GB of RAM and \(48\) CPU cores operating at a clock speed of \(2.10\) GHz. To address the data imbalance in the ASVspoof\(2019\) and partial-spoof datasets, we applied five augmentation techniques from [20, 21]: high-pass filtering, low-pass filtering, compression, time and pitch shift, and reverberation. For our backend classifiers, we used a batch size of \(32\) and the Adam optimizer with an initial learning rate of \(1e^{-4}\) and a weight decay of \(0.001\). Models were trained for \(50\) epochs using cross-entropy loss.
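The stated setup corresponds to a training loop of the following shape; the classifier and data loader are placeholders, not the actual implementation.

```python
# Training loop matching the stated configuration: Adam with lr 1e-4 and
# weight decay 1e-3, cross-entropy loss, 50 epochs, batches of size 32
# (set in the DataLoader). Model and loader are placeholders.
import torch
import torch.nn as nn

def train(model, loader, epochs=50):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:
            features, labels = features.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            opt.step()
```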
### _Experimental Results_
#### III-B1 Performance Analysis of the SDC with Different Classifiers
We have evaluated the performance of the proposed SDC features with different machine learning (ML) and residual-based classifiers, and the results are presented in Table II. It is observed from the results that the SDC features performed well with both ML and residual classifiers, with the best performance achieved by the Ensemble and SE-ResNeXt\(18\) classifiers. The low EERs show the efficiency of the presented coefficients and their potential for standalone use in voice spoofing attack detection.
#### III-B2 Performance Analysis of STDC with Different Voice Spoofing Datasets
We choose the best-performing back-end classifier (SE-ResNeXt18) from Table II and evaluate the performance of the proposed system on different datasets. Results are shown in Table III, indicating a performance improvement when the spectral coefficients converge with the temporal coefficients. Specifically, the EER improves from \(0.25\) to \(0.22\), \(0.60\) to \(0.52\), \(3.70\) to \(3.50\), and so on. These results show the significance of incorporating both spectral and temporal coefficients.
#### III-B3 Comparison with Existing Methods
We evaluate our proposed method against recent voice spoofing countermeasures, addressing four distinct attack types: LA, PA, and fully and partially deepfake attacks. To our knowledge, this is one of the first comprehensive approaches to tackle these four attack categories simultaneously. Moreover, we compared our solution to attack-specific methods on ASVspoof\(2019\) (LA+PA) in Table IV, ASVspoof\(2021\) in Table V, partial-spoof in Table VI, and IWA in Table VII. Our method outperforms existing state-of-the-art methods. Although the EER of the method on one specific dataset (IWA) and some attacks (PSF) is slightly higher, it exhibits superior generalizability across a wide range of attacks, providing a holistic defense mechanism with enhanced detection capabilities. |
2309.10631 | Battery-Electric Powertrain System Design for the HorizonUAM Multirotor
Air Taxi Concept | The work presented herein has been conducted within the DLR internal research
project HorizonUAM, which encompasses research within numerous areas related to
urban air mobility. One of the project goals was to develop a safe and
certifiable onboard system concept. This paper aims to present the conceptual
propulsion system architecture design for an all-electric battery-powered
multirotor electric Vertical Takeoff and Landing (eVTOL) vehicle. Therefore, a
conceptual design method was developed that provides a structured approach for
designing the safe multirotor propulsion architecture. Based on the concept of
operation the powertrain system was initially predefined, iteratively refined
based on the safety assessment and validated through component sizing and
simulations. The analysis was conducted within three system groups that were
developed in parallel: the drivetrain, the energy supply and the thermal
management system. The design process indicated that a pure quadcopter
propulsion system can merely be designed reasonably for meeting the European
Union Aviation Safety Agency (EASA) reliability specifications. By adding two
push propellers and implementing numerous safety as well as passivation
measures the reliability specifications defined by EASA could finally be
fulfilled. The subsequent system simulations also verified that the system
architecture is capable of meeting the requirements of the vehicle concept of
operations. However, further work is required to extend the safety analysis to
additional system components such as the thermal management system or the battery
management system and to reduce propulsion system weight. | Florian Jäger, Oliver Bertram, Sascha M. Lübbe, Alexander H. Bismark, Jan Rosenberg, Lukas Bartscht | 2023-09-19T14:15:02Z | http://arxiv.org/abs/2309.10631v2 | # Battery-Electric Powertrain System Design for the HorizonUAM Multirotor Air Taxi Concept
###### Abstract
The work presented herein has been conducted within the DLR internal research project HorizonUAM, which encompasses research within numerous areas related to urban air mobility. One of the project goals was to develop a safe and certifiable onboard system concept. This paper aims to present the conceptual propulsion system architecture design for an all-electric battery-powered multirotor electric Vertical Takeoff and Landing (eVTOL) vehicle. Therefore, a conceptual design method was developed that provides a structured approach for designing the safe multirotor propulsion architecture. Based on the concept of operation the powertrain system was initially predefined, iteratively refined based on the safety assessment and validated through component sizing and simulations. The analysis was conducted within three system groups that were developed in parallel: the drivetrain, the energy supply and the thermal management system. The design process indicated that a pure quadcopter propulsion system can hardly be designed in a reasonable way to meet the European Union Aviation Safety Agency (EASA) reliability specifications. By adding two push propellers and implementing numerous safety as well as passivation measures the reliability specifications defined by EASA could finally be fulfilled. The subsequent system simulations also verified that the system architecture is capable of meeting the requirements of the vehicle concept of operations. However, further work is required to extend the safety analysis to additional system components such as the thermal management system or the battery management system and to reduce propulsion system weight.
**Keywords: Urban Air Mobility, Conceptual Aircraft Design, Model-Based Safety Assessment, Propulsion System, Multirotor, eVTOL**
## Nomenclature
\begin{tabular}{l l} ARP & Aerospace Recommended Practice \\ eVTOL & electric Vertical Takeoff \& Landing \\ \end{tabular}
design to facilitate the development of the propulsion system architecture and its certification for similar eVTOL vehicles. The research presented herein has been conducted by the Safety-Critical Systems and Systems Engineering department of the DLR Institute of Flight Systems within the DLR internal project HorizonUAM.
### Research questions and methodological approach
In contribution to the aim of this work, the following research questions will be covered:
1. How should the conceptual design process of the propulsion system be carried out for an all-electric multirotor VTOL vehicle that is transporting passengers over congested areas so that the safety goals of EASA SC-VTOL can be met?
2. What is the impact of the EASA SC-VTOL reliability requirements on the conceptual design of a multirotor propulsion system?
3. Which implications does an all-electric battery-powered eVTOL have on the propulsion system architecture besides the safety requirements?
4. Which requirements must be met by a thermal management system of the developed all-electric multirotor propulsion system?
To address these research questions, chapter 2 describes the methodological approach for the conceptual design of a safe propulsion system for a quadcopter eVTOL vehicle. In chapter 3, the methodological approach is applied using an exemplary eVTOL use case of the HorizonUAM project. Up to section 3.2, an initial propulsion system concept is developed, which is further detailed and refined within section 3.3 based on the safety design process. For the derived propulsion system architecture, the power and drive system, the thermal management system (TMS) and the electrical system are then sized and simulated, and final architecture adjustments are deduced within sections 3.4.1, 3.4.2 and 3.4.3. Section 3.5 presents the final propulsion architecture. Within chapter 4 the main findings are summarized and the initial research questions answered. The paper is completed by deriving current limitations of the applied methodology and giving an outlook on further research within chapter 5.
### State of the Art
First, a literature review was conducted to identify the current state of the art regarding conceptual design methodologies for developing the propulsion system of multirotor vehicles that also take safety requirements into account. So far, only little literature could be found that addresses this research area.
**Conceptual design methods for the multirotor propulsion systems**
In 2021, Bertram et al. [16] developed a sizing loop which supports the initial multirotor vehicle sizing process based on flight mission requirements and the propulsion technology to be used. However, this method does not provide any detailed information about the propulsion system architecture design and its reliability or failure probabilities. In the work of Liscouet et al. [15] from 2022, a method for the conceptual design of multirotors is presented which includes a controllability analysis, a sizing optimization as well as a safety assessment. However, the controllability analysis does not take flight phase transitions and handling quality aspects into account and, so far, the applicability of the safety assessment was only shown based on the Unmanned Aerial System (UAS) regulations. The applicability of this approach to manned eVTOL vehicles therefore remains to be shown.
**Currently achieved failure rates of multirotor eVTOL vehicle architectures**
In 2019, Darmstadt et al. [13] conducted several safety analyses for the propulsion systems of four VTOL configurations in total, including a tilt-wing, quadcopter, lateral-twin and lift & cruise configuration. For all developed propulsion system architectures a failure rate in the magnitude of \(10^{-4}\) per flight hour was identified, with the quadcopter configuration having the highest failure rate of \(7.97\cdot 10^{-4}\) per hour. The major challenges for multirotors in meeting the EASA SC-VTOL are that "a single failure must not have a catastrophic effect upon the aircraft" (VTOL.2550) and that a catastrophic event must not happen more often than once every \(10^{9}\) flight hours [12]. The work of Liscouet et al. [15] also came to the conclusion that their unmanned quadcopter failure rate lies in the magnitude of \(1.44\cdot 10^{-4}\) per hour and can effectively be reduced by adding more rotors.
By using at least eight rotors - which implies a coaxial quadcopter or octocopter configuration - the EASA SC-VTOL could be fulfilled according to their studies.
In 2021, Darmstadt et al. [14] revised the propulsion architectures from 2019 focusing on the challenging multirotor configurations to improve their reliability and additionally expanded their safety assessment. The failure probability for the quadcopter configuration experiencing catastrophic events could be improved to \(1.78\cdot 10^{-9}\) per hour when using cross-shafts that connect all four main rotor drives. Without using a cross-shaft solution the highest failure probability increases up to \(1.75\cdot 10^{-5}\) per hour. Only by adding numerous redundancies could the failure probability be reduced to \(1.06\cdot 10^{-9}\) per hour, which may be a suitable solution. Therein, the thermal management system was identified as a critical supplementary system, which needed to be dual redundant. However, the consequences of adding these redundancies on vehicle mass, complexity and feasibility of the design have not been further analysed. The difficulty of meeting the EASA SC-VTOL reliability goals shows that a new approach is needed that integrates the safety and reliability assessment into the conceptual design. Additionally, the implications of a safe multirotor propulsion architecture on the vehicle design, mass, feasibility and complexity need to be readily analysed within the approach to show whether or not the system architecture should be pursued.
The rules and regulations that must be considered within the design process are the already mentioned EASA SC-VTOL, the SC E-19 that defines the special conditions for electric or hybrid propulsion systems intended for VTOL aircraft, and the Aerospace Recommended Practice (ARP) 4754A and ARP4761 that define certification considerations and safety assessment guidelines [12, 17, 18, 19].2
Footnote 2: For completeness it is noted that the CS-P define the certification specifications for the propellers and should also be taken into consideration during the propulsion architecture design [20]. However, they are out of scope of this work.
Therefore, this work will at first present a methodological approach which was applied for the conceptual design of an all-electric propulsion system for a quadcopter that aims at fulfilling EASA SC-VTOL, takes into consideration ARP4761 as well as the Special Condition for Electric / Hybrid Propulsion System (EASA SC E-19) [19] and further analyses its consequences on critical vehicle and flight mission parameters.
## 2 Method
The applied methodological approach for designing a propulsion system consists of the following steps as indicated in Fig. 1:
1. ConOps Definition
2. Vehicle Requirement Analysis
3. Vehicle System Concept Definition
4. Safety Analysis
5. Vehicle Sizing & Simulation
In the first step, the concept of operations (ConOps), the flight mission as well as the payload requirements are defined. Based on this information the most suitable type of vehicle is preselected. Thereafter, the propulsion technology for the selected vehicle is defined.
In the second step, further design requirements for the propulsion system and its components are derived. Several topics should be included herein, among them controllability, handling quality and noise aspects, as they may heavily influence the propulsion system design.
The ConOps definition and the requirement analysis are the basis for step three, the propulsion system design loop. Within this step, an initial system concept consisting of the propulsion system as well as the other vehicle systems is defined. Thereby, the propulsion system is specified in terms of the power system, electrical system and the thermal management system. Based on the initial vehicle system architecture concept a first vehicle sizing loop is performed to derive vehicle parameters such as the required vehicle propulsion power, the required energy, the vehicle empty weight and the rotor size.
In the fourth step, a safety and reliability analysis is conducted for the initial vehicle system architecture, in order to fulfil the EASA SC-VTOL safety goals. Any architecture changes are then passed back to the vehicle system concept.
Within the last step, the safe vehicle architecture is modelled and simulated to validate the suitability of the derived vehicle architecture based on the ConOps definition and vehicle requirement analysis.
As this paper focuses on the propulsion system architecture concept, emphasis is put on presenting steps three, four and five primarily for the propulsion system. Steps one and two are only briefly described in order to provide the context for the propulsion system design.
### ConOps Definition
When detailing the ConOps, the type of eVTOL vehicle needs to be selected based on the intended use case and payload requirements. As shown by Ratei [21] different vehicle concepts may be suitable for different operating areas. For example, a rotary-wing concept like a multirotor, or fixed-wing concepts like the lift & cruise, or vectored thrust configurations with tilted wing, tilted rotors or tilted ducts may be suitable.
As soon as the flight mission is defined and the eVTOL vehicle is chosen, the propulsion technology to be used should be evaluated. Propulsion systems like a full-electric battery-powered or hydrogen powered system or even serial or parallel hybrid electric solutions may be suitable.3
Footnote 3: A comparative overview of the characteristics of different propulsion technologies used for a multirotor and their impact on the application areas are presented within [16, 22].
### Vehicle Requirement Analysis
Within step two, several further design requirements, primarily for the propulsion system design, are collected. As the flight control functions are primarily taken over by the propulsion system within eVTOL vehicles, controllability and handling quality requirements have a significant impact on the propulsion system design and should therefore already be considered at an early design stage. The controllability analysis aims at ensuring that the vehicle is controllable around all axes. This requires that Newton's laws for translation and rotation are fulfilled. As described within Liscouet et al. [15], it is
Figure 1: Applied methodological approach for the conceptual propulsion system design
essential to analyse the controllability not only during normal operating conditions, but also for any failure cases. The difficulty during this step is that the failure cases are not known in the beginning. Therefore, the controllability for failure cases needs to be analysed again when having identified the failure cases within step four, the safety analysis. In general, the controllability depends on whether rotor speed control or pitch control is used and is influenced by the vehicle moment of inertia, the number of rotors, the rotor arrangement, additional rotor parameters (e.g. rotor inertia) and the thrust coefficient of the corresponding rotors [23]. In addition to the controllability aspects, the handling qualities should be considered. According to Pavel [23] the handling qualities of a multicopter are influenced not only by the aircraft response to a control input (controllability) but also by the coupled rotor-motor dynamics. The dynamic response of a coupled rotor-motor drive system is beneficially influenced by a low rotor inertia, a low inertia of the drive components (motor and gearbox), a drive system gear ratio which is optimized for high motor efficiency, balanced motor performance4, a low motor equivalent resistance and low friction losses within the drive system, which are influenced by the rotational speed and the gear ratio [23]. All influencing parameters combined result in good handling qualities when the dynamic response around all axes is characterised by a low rise-time, high bandwidth, low overshoot and high stability in terms of phase and gain margin [23]. According to the analyses of Bahr et al. [24] and Niemiec et al. [25] for rotor speed-controlled quadcopters, the weight of electric motors that are sized only based on the maximum power demand for performing the flight mission is insufficient for meeting handling quality requirements. As a high motor torque capability is required for achieving a low motor time constant and therefore good handling qualities, the motor becomes twice as heavy as initially sized. The sum of all electric motors might reach \(15-16\) % of the total vehicle weight in case a direct drive is used. Therefore, the factors influencing the handling qualities should already be considered within the conceptual design phase to minimize design adjustments at a later design phase.5
Footnote 4: Which is expressed in terms of the ratio \(\frac{K_{e}^{2}}{R_{a}}\) consisting of the motor back-electromotive force constant \(K_{e}\) and the equivalent resistance \(R_{a}\).
Footnote 5: For more information about analysing the handling qualities of multicopter eVTOL vehicles it is referred to [26], [27] and [28].
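As a minimal illustration of the controllability check described above, the control effectiveness (mixer) matrix of a quadcopter in cross-configuration can be rank-tested for the nominal and a rotor-failure case; the arm length, torque-to-thrust ratio and sign conventions below are illustrative assumptions, not design values.

```python
# Rank check of an assumed quadcopter control effectiveness matrix:
# rows map the four rotor thrusts to total lift and roll/pitch/yaw moments.
import numpy as np

l = 2.0    # assumed rotor arm length [m]
k = 0.05   # assumed rotor torque-to-thrust ratio [m]

B = np.array([
    [1.0, 1.0, 1.0, 1.0],    # lift
    [ -l,   l,   l,  -l],    # roll
    [  l,   l,  -l,  -l],    # pitch
    [  k,  -k,   k,  -k],    # yaw (alternating spin directions)
])

print(np.linalg.matrix_rank(B))        # 4: all four axes commandable

B_fail = B.copy()
B_fail[:, 0] = 0.0                     # rotor 1 inoperative
print(np.linalg.matrix_rank(B_fail))   # 3: one axis is lost, so the
                                       # failure cases must be re-analysed
```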
When designing the propulsion architecture, the noise level as well as the effect of noise annoyance should be taken into account in order to ensure public acceptance [29]. The eVTOL architecture parameters (type of rotor control, maximum rotor tip speed, number of rotors, rotor arrangement, disc loading and the propulsion system architecture) should be carefully chosen, as they mainly influence the emitted noise according to Brown and Harris [30], Smith et al. [31] and Smith et al. [32].
An overview of the different parameters and their optimum values is given within Fig. 2.
Figure 2: Overview of some selected parameters influencing controllability, handling quality and noise. The parameters may be dependent on each other and the list is not exhaustive.
### Propulsion System Concept Definition
Taking all the aspects of the ConOps definition and the vehicle requirements analysis into consideration, a first propulsion concept is established. In this step all systems are identified that are required within the eVTOL propulsion system. At this point it is important to differentiate between the different nomenclatures that are used to describe propulsion systems. Herein the nomenclature offered by Herrmann and Rothfuss [33] is followed, in which the propulsion system encompasses the group of systems that contribute to providing lift and powering the eVTOL. The propulsion concept was developed by conducting the following steps:
1. At first, the propulsion system context is defined, which identifies external elements interacting with the propulsion system. The type of interaction is defined by the interfaces.
2. Then the use cases for the propulsion system are established and the tasks of each system context element for each use case are defined. This allows identifying all tasks and functions that need to be fulfilled by the propulsion system.
3. For each derived function an activity diagram is developed to describe the activities that are taking place within the propulsion system itself.
4. With this information an initial system concept is derived by grouping the identified activities and allocating them to a specific system. In accordance with Darmstadt et al. [14] the propulsion system architecture is generally composed of the following system groups:
* Flight control system
* Power and drive system
* Electrical system
* Thermal management system (TMS)
The flight control system encompasses all sensors and systems that collect air data, receive and process control commands and calculate the corresponding motor control inputs for each motor controller for speed-controlled rotors, or inputs for the actuation system of collective pitch-controlled rotors. The power and drive system (also called powertrain) is responsible for converting the electrical power, which is supplied by the electrical system, into mechanical rotational power in accordance with the inputs received from the flight control system. The drivetrain is a system group within the powertrain and only includes the systems that transmit the mechanical power of the engine into thrust at the rotors. The electrical system takes over the function of storing and distributing the electrical energy. The thermal management system shall keep all system components within their operating temperature range. These system groups can provide initial guidance during the architecture development process.
5. In the last step the propulsion concept is integrated into the complete vehicle system architecture concept and an initial sizing loop is conducted to size the eVTOL vehicle as well as the propulsion system, estimate the required power and energy and calculate the estimated weight proportions using the method presented by Bertram et al. [16].
### Safety and Reliability Analysis
The basis for the safety analysis is formed by the safety and reliability requirements of EASA SC-VTOL [12] and the EASA SC E-19 [19]. With the initial understanding of the intended propulsion system components from the previous section, the safety analysis helps to identify weak points of the propulsion system and to define the type and amount of required safety measures. Thereby, system requirements for each propulsion system component can be derived, which may significantly impact the propulsion system design compared to the initial design. Consequently, a more precise prediction of the required system components and their specifications can be generated.
Generally, the safety analysis for the propulsion system design is conducted using the methods described in SAE ARP4754A [18] and ARP4761 [17]. In Fig. 3 an extract of the safety assessment process is shown. The green parts mark the steps of the safety assessment that are covered within this work. In order to conduct the aircraft level Functional Hazard Analysis (FHA) the system concept from the previous section as well as a functional breakdown analysis on the aircraft level are
required, which are assigned to the system development process within the ARP4754A. However, as this aircraft level functional analysis has not yet been addressed, it is herein conducted under the topic of the safety analysis. Thereafter, the aircraft level Fault Tree Analysis (FTA), the system level Functional Hazard Analysis (FHA) and FTA are conducted while iteratively gathering the information for the Preliminary System Safety Assessment (PSSA) and the Preliminary Aircraft Safety Assessment (PASA) during the system design adjustments. During each iteration of the design process, the granularity of the considered systems within the safety analysis can be increased. Initially, a functional breakdown is conducted to identify the main aircraft functions. In this context, especially those functions that are taken over by the propulsion system are of special interest. In the next step an FHA is conducted on the aircraft level which identifies the failure cases of the previous functions and their effects on, for example, the aircraft, the passengers, the vehicle and the environment. As proposed in Schafer et al. [34] the failure cases _total loss of function_, _partial loss of function_, _unannunciated loss of function_, _incorrect operation of function_, _inadvertent operation of function_ and _unable to stop the function_ should be analysed and their failure effects on the aircraft be described.
The failure effect of a functional failure is the basis for the following process of developing a safe system architecture as indicated in Fig. 4. Based on the failure effect and the required Function Development Assurance Level (FDAL) as defined within the EASA SC-VTOL an allowable failure probability is assigned to each functional failure case (label 1). For each identified functional failure case a subsequent fault tree is created within the FTA on aircraft level to identify the causes (base events) that contribute to each functional failure case, the so-called top event (label 2). Based on the allowable failure probability for each failure case an allowable failure probability can be assigned to each failure cause (label 3).
Up to this point the analysis has been conducted based on the aircraft functions. In the next step the system level is analysed by identifying the systems that contribute to fulfilling a specific
Figure 3: Extract of the ARP4754A [18] and illustration of the safety assessment steps covered within this work (marked in green)
aircraft function. The initial system architecture from section 2.3 is the basis for this analysis and needs to be cross-checked against all safety requirements that were derived from the aforementioned process (label 1-3). The crosscheck is done by developing a system level FTA, in which the top event of the system level FTA is the base event of the previously created aircraft level FTA. Within this system level FTA the component failure causes leading towards the top event are collected (label 4) and their failure probabilities are defined by using historical data as provided by, e.g., the Nonelectronic Parts Reliability Data (NPRD) dataset [35] (label 5). With this information the actual failure probability of a system function is calculated bottom-up (label 6) and gathered within the PASA. As long as the allowable system reliability cannot be assured by the designed propulsion system architecture, the architecture needs to be adjusted and the process starts again. Finally, the results are collected within the PSSA, which indicates if the requirements of the aircraft level FHA can be fulfilled.
To evaluate the sensitivity of the propulsion system architecture to critical system components, minimal cut sets are generated based on Reliability Block Diagrams (RBDs) and fed back into the system design.
As the propulsion system is a safety-critical system, whose failure may cause human injury or even loss of life, the system shall only have two states: operational or failed-safe [36]. A fail-unsafe condition shall be prevented by all means. Therefore, either the design principle of a safe life or a fault-tolerant design must be applied for developing the propulsion system. While a safe life design is characterized by oversizing and prematurely replacing components before failure, a fault-tolerant design requires incorporating hardware, information, time or software redundancy [36]. The following strategies are promoted herein to ensure a safe system design depending on the analysed aircraft functional failure case; a schematic sketch of the underlying redundancy patterns follows the list:
* **Total loss or partial loss of function**: Implement additional system components with the same functionality (for example: passive, active or hybrid hardware redundancy).
* **Unannunciated loss of function**: Make use of software redundancy by implementing fault detection and fault indication mechanisms.
* **Incorrect operation, inadvertent operation, unable to stop a function**: Implement options for masking a faulty system component.
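The sketch below schematically illustrates two of these patterns, triple modular redundancy with majority voting and a hot-standby pair that masks a faulty primary unit; it is purely illustrative and not flight software.

```python
# Schematic illustration of redundancy patterns: triple modular redundancy
# (TMR) with majority voting, and a hot-standby pair masking a faulty
# primary unit. Purely illustrative, not flight software.
from collections import Counter

def tmr_vote(outputs):
    """Return the majority value of three redundant channel outputs."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: redundant channels disagree")
    return value

def hot_standby(primary_ok, primary_out, standby_out):
    """Mask a faulty primary by switching to the monitoring standby."""
    return primary_out if primary_ok else standby_out

print(tmr_vote([0.72, 0.72, 0.69]))   # deviating third channel is out-voted
print(hot_standby(False, 0.0, 0.71))  # standby channel takes over
```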
Figure 4: Schematic depiction of the interconnection between Aircraft Level FTA and System Level FTA
### Propulsion System Component Sizing and Validation
The last step within the applied propulsion system design methodology aims at further specifying the propulsion system components and validating the system architecture by sizing and simulating each component based on off-the-shelf components. The sizing is conducted using common sizing methods.6 The simulation of the propulsion system helps to validate whether the derived vehicle architecture can suitably fulfil the initial ConOps definition and requirements. If necessary, additional architecture adjustments and requirements are derived based on the sizing and simulation results.
Footnote 6: For detailed information about the sizing process of a multicopter propulsion system it is referred to Bertram et al. [37].
## 3 Case Study
Within this section, the previously described conceptual design method is applied to a case study to derive a conceptual propulsion architecture for a battery-powered multirotor eVTOL vehicle that is operated by a pilot.
### ConOps Definition & Vehicle Requirement Analysis
The general requirements of the ConOps are summarized in Table 1 and define the flight mission, payload, type of eVTOL vehicle and the powertrain technology.
For this case study the vehicle under investigation shall be operating within the intracity use case. The total flight mission, as indicated in Fig. 5, consists of 50 km in total, which are separated into three flights to allow for passenger embarkation and disembarkation in between, plus an additional 20 minutes of reserve time at minimum drag speed. The eVTOL vehicle shall be able to transport a total of at least 4 passengers of 90 kg each, which results in a payload capability of 360 kg.
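As a rough plausibility check of these mission figures, the ideal hover power can be estimated with actuator-disc (momentum) theory; all vehicle parameters below, apart from staying under the MTOM limit of Table 1, are illustrative assumptions rather than HorizonUAM design values.

```python
# Back-of-the-envelope hover power estimate via momentum theory.
# All values are illustrative assumptions, not HorizonUAM design data.
import math

mtom = 2500.0          # assumed take-off mass [kg], below the 3175 kg limit
rho = 1.225            # sea-level air density [kg/m^3]
rotor_radius = 3.0     # assumed main rotor radius [m]
n_rotors = 4
g = 9.81               # gravitational acceleration [m/s^2]

thrust_per_rotor = mtom * g / n_rotors           # hover thrust [N]
disc_area = math.pi * rotor_radius ** 2          # rotor disc area [m^2]
# Ideal induced power per rotor: P = T^(3/2) / sqrt(2 * rho * A)
p_ideal = thrust_per_rotor ** 1.5 / math.sqrt(2 * rho * disc_area)
p_hover = n_rotors * p_ideal / 0.7               # assumed 70 % figure of merit
print(f"estimated hover power: {p_hover / 1e3:.0f} kW")
```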
As the quadcopter is, to date, the most critical vehicle configuration in terms of safety (as described within section 1.2) as well as energy consumption, the quadcopter vehicle configuration shall be selected for fulfilling this mission. Thereby, a cross-configuration of the rotors is selected, in which they are evenly and symmetrically distributed along the x- and y-axis to provide good controllability and handling quality as shown
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Flight Mission & Three Flights + 20 min Loiter \\ Design Range & 50 km \\ Payload & 360 kg (4 Passengers incl. Pilot) \\ MTOM & \(<\) 3175 kg \\ Vehicle configuration & Multirotor \\ Powertrain technology & All-electric \\ Energy source & Battery \\ \hline \hline \end{tabular}
\end{table}
Table 1: General requirements of the ConOps definition
Figure 5: Flight mission profile of the analysed use case consisting of three consecutive flights with the flight segments 1: Taxi out, 2: Vertical climb to 50ft AGL, 3: Transition to cruise, 4: Cruise climb to cruise altitude, 5: Cruise until destination, 6: Cruise descent, 7: Re-transition to hover, 8: Vertical descent to ground, 9: Diversion to alternate, 10: Taxi in. [16]
Figure 6: Quadcopter rotor cross-configuration
in Fig. 6. The rotors shall be fixed-pitch and speed-controlled as they promise less system complexity, even though pitch rotor control would be beneficial in terms of controllability, achieving quick vehicle response times and low noise. The powertrain shall be all-electric and powered by batteries, since it could be shown in Bertram et al. [16] that a battery-powered full-electric powertrain is competitive compared to other powertrain technologies for up to 50 km design range, based on the current state of the art.
The controllability analysis, handling quality and noise aspects presented within section 2.2 have been taken into consideration in parallel to any system design adjustments. As they are not the focus of this paper, they are not further elaborated on herein.
### Propulsion System Concept Definition
With the information about the ConOps and the vehicle requirements an initial propulsion system concept is defined within step three of the applied conceptual design method. To develop the propulsion system concept, the CAMEO Systems Modeler was used. As described in section 2.3, initially the system context for the system of interest is defined as shown in Fig. 7.
The system context indicates that the propulsion system of the multirotor receives control commands from the cockpit crew. These control commands are merged with data from the air data sensors in order to lift and control the aircraft for passenger transport. In order to provide a closed control loop for the unit controlling the vehicle, currently the cockpit crew, status information is fed back to the cockpit indication systems. During the transformation of the input signals into lift and thrust, air, thermal energy and noise are interchanged between the environment and the propulsion system. As the propulsion system components will suffer from degradation during daily operation, the means for maintenance actions are included within the system context.
By analysing the use cases of the propulsion system, numerous main functions could be identified that need to be fulfilled by the propulsion system and the adjacent aircraft systems. The activity diagrams for each identified main function enabled the identification of the sub-functions of the propulsion system. Fig. 24 gives an overview of all identified functions.
When grouping similar functions together, they can be allocated to respective system groups as shown in Fig. 25. It becomes apparent that the four generic system groups, namely the flight control, the power and drive, the electrical and the thermal management system group, as described in section 2.3, can be identified here as well. In addition to that, the functional analysis requires the integration of an information system group into the propulsion system, as it is essential to feed the information from all participating propulsion system groups back to the cockpit and to an optional in-service monitoring unit for health monitoring and improving maintenance schedules. Besides the identification of the main system groups, this graphical representation allows identifying the item flow between the different system groups. Within the next step of the architecture development each functional block is assigned to a specific system component as shown in Fig. 26 and, thereby, an initial logical propulsion system architecture is developed. Within this architecture the flight control system group is composed of at least a Flight Control Computer (FCC) and an air data computer gathering and distributing air sensor data including GPS data. Using the control inputs from the cockpit and the air data, the FCC calculates and controls the required power setting for the electric motor and, thereby, also regulates the corresponding setting of the thermal management system. The power and drive system group consists of motor controllers, motors, optionally gearboxes and the rotor. The energy for the power and drive system group is provided by the electrical system group, which consists of the battery system, a power distribution system and battery control units. The information system group consists of data concentrator units which gather and distribute status information. The thermal management system is not further specified, as its system requirements are unknown so far. However, based on the sizing and simulation of the power and electrical system group within section 3.4.1, some specifications for a thermal management system are collected in section 3.4.3. Merging this logical architecture with the other vehicle systems as described within section 2.3, an initial sizing loop for the whole vehicle system is conducted.
However, as this is not part of this paper, it is referred to [16].
Within the subsequent safety analysis, the propulsion architecture is further modified to fulfil the safety requirements of EASA SC-VTOL [12] of the enhanced7 vehicle category. The considered systems within the safety analysis are indicated in Fig. 26. However, for simplification reasons, the thermal management system and the information system group are initially excluded from the safety assessment and will be integrated in the future.
Footnote 7: As soon as the vehicle is expected to transport passengers over congested areas it falls into the certification category enhanced of EASA SC-VTOL with the highest required safety levels.
### Safety and Reliability Analysis
The safety and reliability analysis as presented in section 2.4 has been conducted using the SysML Modelling Language within the CAMEO Systems Modeler together with the SysML Profile Risk Analysis and Assessment Modelling Language (RAAML) [38] and the FHA profile [34] which facilitate conducting a model-based safety assessment. The safety analysis loop has been run through several times during the design process to account for the system changes that were required to reach the reliability guidelines and to account for the results of the sizing process. Therefore, the following section presents the main results of each step of the safety analysis based on the final propulsion system architecture as presented in section 3.5.
Functional Breakdown Analysis Within the functional breakdown analysis the main aircraft level functions were identified that are taken over by the propulsion system, as shown in Fig. 8:
1. provide lift for safe flight,
2. provide differential thrust for yaw,
3. provide differential thrust for pitch,
4. provide differential thrust for roll and
5. provide forward thrust.
Functional Hazard Analysis During the initial safety analysis loop it quickly became apparent
Figure 7: System context definition for the propulsion system
that a quadcopter with a non-redundant propulsion system as presented in the previous section8 will require so many additional redundancies that the design will most probably not be reasonable in terms of total weight and system complexity. The difficulty of the quadcopter configuration is that a partial loss of providing lift may be caused, for example, by a single rotor failure, which exhibits a failure probability of \(2.83\cdot 10^{-4}\) per hour9. The _partial loss_ of providing lift caused by a single rotor failure within a quadcopter configuration must be expected to be a catastrophic event [13], which shall not happen more often than \(1.0\cdot 10^{-9}\) per hour as it is categorized as an FDAL A event within EASA SC-VTOL [12]. Therefore, a main vehicle design adjustment was conducted by adding two push propellers to the rear of the vehicle as shown in Fig. 9. This measure assumes that the resulting yaw moment of one rotor loss can be counteracted and that the failure effect of a partial loss of providing lift caused by one single main rotor loss attenuates from a catastrophic event to a hazardous event with an allowable failure probability of \(1.0\cdot 10^{-7}\) per flight hour, so that fewer redundancies can be expected to be required.
Footnote 8: During the first iteration a quadcopter vehicle configuration is assumed in which each rotor is powered by a pure series connection of the system components FCC, battery, motor controller and electric motor as described in the previous section 3.2.
Footnote 9: This failure rate results when using the component failure rates presented later on in Table 3.
An aggregated overview of the FHA results based on this quadcopter configuration with two push propellers is given in Table 2. Especially the failure cases _total loss_ or _partial loss_ of a function as well as the _inadvertent_ or _incorrect operation_, including the _unable to stop_ functional failure, are critical for the system design since they exhibit catastrophic or hazardous events. For all functions that have at least a catastrophic or hazardous failure effect the corresponding aircraft FTA is conducted.
Aircraft Level FTA Results The aircraft FTA for the catastrophic event _incorrect operation_ of the function \(<\)provide lift\(>\) is presented exemplarily in Fig. 10. As the permanent incorrect
Figure 8: Functional Breakdown analysis with the aircraft functions that are connected to the propulsion system
Figure 10: Aircraft FTA for the functional hazard ”Incorrect operation of the function: provide lift”
Figure 9: Visualization of the quadcopter with two push propellers [27]
operation of the function to provide lift is classified as a catastrophic event with an allowable failure probability of \(1.0\cdot 10^{-9}\) per hour, it can be concluded that the permanent incorrect operation of one rotor is allowed to happen with a probability lower than \(2.5\cdot 10^{-10}\) per flight hour. With this information the system architecture that propels each rotor is designed in the next step using the system level FTA. By developing all FTAs for the catastrophic failure effects of the aircraft functions, it becomes apparent that, according to the definition used herein, the inadvertent operation or unable to stop functional failures are subsets of the incorrect operation. Therefore, three main basic events remain that need to be further analysed within the system level FTA:
* Total loss of one rotor lift
* Incorrect ops. of one rotor providing lift
* Incorrect or inadvertent ops. of one propeller providing thrust
System Level FTA Results On the system level, the rotor drive system architecture is revised until the _total loss_ and the _incorrect ops._ of one rotor fulfil the failure probability goals defined within the aircraft level FTAs of \(<2.5\cdot 10^{-8}\) and \(<2.5\cdot 10^{-10}\), respectively. Using a single drive unit without any redundancies and assuming the failure probabilities as listed in Table 3, the _total loss_ of one rotor must be expected to occur with a probability of \(2.8\cdot 10^{-4}\). Therefore, as a first countermeasure the rotor drive is designed as a dual active drive system composed of two drive units. As the dual active drive system would still exhibit a failure rate for a _total loss_ of one rotor of \(5.6\cdot 10^{-8}\), each motor controller unit shall be powered by at least two separate battery packs and a dual active / passive channel motor controller shall be used. The passive channel continuously monitors the main motor controller and shall be able to take over its function (hot standby redundant system).
For meeting the maximum allowable failure rate for the _incorrect operation_ of one rotor, it is essential to reduce the probability that an erroneous FCC signal, motor controller output or motor output propagates up to the rotor. Firstly, the probability of an erroneous motor controller output is already reduced by the usage of a dual channel motor controller. Secondly, it must be prevented that any malfunction within the electric motor is passed to the rotor. Therefore, two masking strategies must be in place: disconnecting the power from the electric motor using an emergency power disconnect relay, and separating the motor output from the rotor shaft using a mechanical disconnect clutch. Thirdly, the probability of an erroneous or missing valid FCC command is reduced by implementing a triple modular redundant FCC setup which enables determination of the correct output by majority voting. Each FCC is then required to exhibit a malfunction or failure probability of less than \(1.58\cdot 10^{-5}\) per hour.10 By implementing these strategies, both the probability of an erroneous power output of the driving units and that of the FCC group are reduced below \(10^{-10}\) per flight hour, which reduces the _incorrect operation_ of one rotor to \(2.46\cdot 10^{-10}\) and therefore fulfils
\begin{table}
\begin{tabular}{l l l l l l} \hline Function failure & Provide lift & Provide diff. thrust for pitch & Provide diff. thrust for roll & Provide diff. thrust for yaw & Forward thrust \\ \hline Total loss & catastrophic & catastrophic & catastrophic & hazardous & major \\ Partial loss & hazardous & hazardous & hazardous & n.a. & n.a. \\ Incorrect ops. & catastrophic & catastrophic & catastrophic & hazardous & catastrophic \\ Inadvertent ops. & catastrophic & catastrophic & catastrophic & hazardous & catastrophic \\ Unable to stop & catastrophic & catastrophic & catastrophic & hazardous & catastrophic \\ Unsym. partial loss & n.a. & n.a. & n.a. & minor & minor \\ Degradation & major & major & minor & minor & minor \\ \hline \end{tabular}
\end{table}
Table 2: FHA failure effect classification for each identified aircraft level function
The system FTA with the identified system components of the final propulsion system architecture contributing to a permanent _incorrect operation_ is shown in Fig. 27. The _incorrect_ or _inadvertent operation_ of the rear push propellers can be mitigated by the use of triple redundant FCC signals, a dual channel motor controller and a single disconnect option, such as a disconnect relay.
To also identify common causes of error, minimal cut sets were calculated and analysed within a reliability block diagram analysis (a small sketch of the cut-set computation follows the list below). All requirements derived from the system level FTA and the minimal cut sets are listed below. Implementing these requirements in the overall system architecture ensures that the maximum allowable failure rates for the four system level FTA top events, i.e. the basic events of the aircraft level FTA as shown in Table 4, can be complied with.
1. Each rotor requires a dual active redundant drive train. When a geared propulsion is chosen each drive train must also be equipped with a separate gearbox.
2. Each rotor unit must be able to produce \(\geq\) 50 % of the total vehicle thrust required for hover for a prolonged time.
3. For a short time interval, each rotor unit must be able to produce more than 50 % of the total hover thrust (ideal would be to provide \(\geq\) 50 % of the total vertical climb thrust) in order to break any vertical descent during landing.
4. Each motor unit must be able to be passivated and must therefore be equipped with at least two means of decoupling, preferably a mechanical and an electrical decoupling device.
5. Any internal fault of the motor control units must not lead to an unrecognized malfunction that propagates to the electric motor. Therefore, the motor control units should be designed as dual channel active passive units.
6. The passive channel of the motor control unit acts as a fail-safe-backup-mode that activates in case of any loss of input signal from the FCCs or in case of a motor control unit malfunction. In this state the motor control unit should command a constant motor rotational speed which corresponds to the hover state in normal flight.
7. Each motor control unit must be connected to at least two batteries or power supply busses to achieve a dual modular redundant power source. To prevent any common cause failures, in total at least 4 battery packs are required for the four main rotors and one additional separate battery pack is required for the rear push power train.
\begin{table}
\begin{tabular}{l l l} \hline \hline Functional Hazard & Max. allowable failure rate & Expected failure rate \\ \hline Loss of one rotor lift & \(<2.5\cdot 10^{-8}\) & \(1.06\cdot 10^{-8}\) \\ Loss of lift of more than one rotor & \(<1.0\cdot 10^{-9}\) & \(2.37\cdot 10^{-20}\) \\ Incorrect ops. of one rotor & \(<2.5\cdot 10^{-10}\) & \(2.46\cdot 10^{-10}\) \\ Inadvertent ops. of one propeller & \(<5.0\cdot 10^{-10}\) & \(2.76\cdot 10^{-15}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Extract of the PSSA results showing the expected system failure rates for the most limiting system level FTA top events
\begin{table}
\begin{tabular}{l l l} \hline \hline & Component & Failure condition & Applied Failure Rate1 \\ \hline BAT & Battery & Failure & \(9.31\cdot 10^{-5}\)[35] \\ MC & Motor Controller & Failure & \(4.75\cdot 10^{-5}\)[13] \\ M & Electric Motor & Failure & \(9.24\cdot 10^{-5}\)[13] \\ GB & Gearbox & Failure & \(5.00\cdot 10^{-6}\)[13] \\ FCC & Flight Control Computer & Failure, Malfunction & \(1.57\cdot 10^{-5}\)[35] \\ REL & Disconnect Power Relay & Unintended opening, Failure to operate & \(4.60\cdot 10^{-5}\)[35] \\ DISC & Disconnect Clutch & Unintended opening, Failure to operate & \(4.70\cdot 10^{-5}\)[35] \\ \hline \hline \end{tabular}
* Due to lack of data it is assumed that the applied failure rate is the same for the different cases of failure condition.
\end{table}
Table 3: Failure rate probability for each propulsion system component
8. One independent stand-alone battery source must be used to power both push-propeller units together.
9. Both rear propellers in combination must be able to create a vehicle yaw moment greater than the yaw moment resulting from two shut-down concordantly rotating main rotors.
10. The FCC setup must be triple modular redundant using majority voting. Each FCC must exhibit a failure rate of \(\leq 1.58\cdot 10^{-5}\) per hour.
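The minimal-cut-set computation referenced before the list can be sketched in a few lines of Python. The gate structure and event names below are illustrative stand-ins, not the paper's full fault tree; the point is only to show how cut sets are derived and minimized.

```python
# Minimal-cut-set enumeration for a small fault tree given as nested tuples
# ('OR'|'AND', [children]) with basic events as strings.
from itertools import product

def cut_sets(node):
    """Return all cut sets (sets of basic events whose joint failure
    triggers the top event) of the given fault tree."""
    if isinstance(node, str):
        return {frozenset([node])}
    kind, children = node
    child_sets = [cut_sets(c) for c in children]
    if kind == 'OR':                       # any child's cut set suffices
        return set().union(*child_sets)
    result = set()                         # AND: combine one cut set per child
    for combo in product(*child_sets):
        result.add(frozenset().union(*combo))
    return result

def minimal(sets):
    """Keep only cut sets that do not strictly contain another cut set."""
    return [s for s in sets if not any(t < s for t in sets)]

# Toy tree: an unmasked motor fault, or defeated FCC majority voting.
tree = ('OR', [
    ('AND', ['M1_fault', 'REL1_fail', 'DISC1_fail']),  # both passivation means fail
    ('AND', ['FCC1_err', 'FCC2_err']),                 # two matching FCC errors
    ('AND', ['FCC1_err', 'FCC2_err', 'FCC3_err']),     # subsumed by the pair above
])
for cs in sorted(minimal(cut_sets(tree)), key=len):
    print(sorted(cs))
```

Running this prints the two minimal cut sets; the three-FCC combination is correctly discarded as non-minimal.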
It becomes apparent that the loss of one rotor lift and the incorrect operation of one rotor providing lift are the most critical system design drivers for the propulsion system of the main rotors. As soon as the failure rate requirements are fulfilled for these events, the other functional hazards are fulfilled as well. The push-propeller architecture, however, is mainly driven by the functional hazard of an inadvertent operation of each propeller.
The resulting achievable failure rates for the top events of the aircraft level FTA, which were identified within the FHA, are summarized in Table 5. It is important to note that the architecture design and its failure probabilities are based on the following assumptions:
1. The loss of one main rotor does not lead to a catastrophic event as the opposite main rotor will be shut down to achieve equilibrium in pitch and roll. The resulting yaw moment is counteracted by the push propellers at the rear. The quadcopter therefore remains controllable and is able to continue safe flight and landing.
2. Each motor can be passivated by its own electric disconnect relay as well as a mechanical declutch mechanism. Thereby, it is assumed that for passivating the electric motor it is sufficient if either the mechanical declutch or the electric disconnect relay is activated. However, the control logic will only allow recovery of the electric motor drive to power the rotor if both the mechanical and electrical switches are closed. This is required in order to prevent a passivated malfunctioning electric motor drive from becoming operative due to an inadvertently closed electrical switch or mechanical clutch.
### Propulsion System Component Sizing and Validation
Within this section, the main system groups of the propulsion system are sized, compared with off-the-shelf components and additionally simulated. First, the power and drive system is specified. Based on these results, the electrical system group, with a primary focus on the batteries, and the thermal management system for the propulsion system are developed and analysed. The simulation of each system group was carried out using the open modelling language Modelica within the Dymola environment by Dassault Systèmes.
\begin{table}
\begin{tabular}{l l l l l l} \hline Function failure & Provide lift & Provide diff. thrust for pitch & Provide diff. thrust for roll & Provide diff. thrust for yaw & Forward thrust \\ \hline Total loss & \(4.49\cdot 10^{-16}\) & \(4.49\cdot 10^{-16}\) & \(4.49\cdot 10^{-16}\) & hazardous & major \\ Partial loss & \(4.24\cdot 10^{-8}\) & \(4.24\cdot 10^{-8}\) & \(4.24\cdot 10^{-8}\) & n.a. & n.a. \\ Incorrect ops. & \(9.86\cdot 10^{-10}\) & \(9.86\cdot 10^{-10}\) & \(9.86\cdot 10^{-10}\) & hazardous & \(5.52\cdot 10^{-15}\) \\ Inadvertent ops. & \(9.86\cdot 10^{-10}\) & \(9.86\cdot 10^{-10}\) & \(9.86\cdot 10^{-10}\) & hazardous & \(5.52\cdot 10^{-15}\) \\ Unable to stop & \(9.86\cdot 10^{-10}\) & \(9.86\cdot 10^{-10}\) & \(9.86\cdot 10^{-10}\) & hazardous & \(5.52\cdot 10^{-15}\) \\ Unsym. partial loss & n.a. & n.a. & n.a. & minor & minor \\ Degradation & major & major & minor & minor & minor \\ \hline \end{tabular}
\end{table}
Table 5: PASA results: expected failure rates for each identified catastrophic aircraft level function
#### 3.4.1 Sizing & Simulation Power System
As the power and drive system group consists of the motor controller, the electric motor, the gearbox and the rotor (see Fig. 26), this section presents the specifications of these components for the main rotor and the rear push propeller drive system. The results are based on the following assumptions:
1. The main rotor drive is designed for a maximum rotor tip speed of \(Ma_{tip}=0.45\) during normal operation and uses two rotor blades to ensure quiet operation. The disc loading of 200 \(N/m^{2}\) is chosen to aim for an energy-efficient flight and to minimise rotor losses.
2. The rear push propeller is designed for a cruise tip speed of \(Ma_{tip}=0.5\), using a two-blade propeller with a radius of 0.54 m to provide a cruise speed of 110 km/h at 3120 RPM (a quick sanity check of these tip-speed figures follows this list).
3. A system voltage of 600 V is defined for the power and drive system components.
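As referenced in assumption 2, the tip-speed figures can be sanity-checked with a few lines of Python. The speed of sound of 340 m/s is an assumed sea-level value, and the main-rotor radius of 2.9 m is an assumption chosen only for illustration (consistent with the 500-550 RPM hover range in Table 6); the propeller radius and RPM are taken from the list above.

```python
# Tip Mach number from rotational speed and radius, and the inverse.
import math

A = 340.0  # m/s, speed of sound (assumed sea-level value)

def tip_mach(radius_m, rpm):
    """Tip Mach number of a rotor or propeller."""
    omega = rpm * 2.0 * math.pi / 60.0   # angular speed in rad/s
    return radius_m * omega / A

def rpm_for_mach(radius_m, mach):
    """Rotational speed that yields a given tip Mach number."""
    return mach * A / radius_m * 60.0 / (2.0 * math.pi)

# Push propeller: R = 0.54 m at 3120 RPM (assumption 2) -> Ma ~ 0.52
print(f"push propeller tip Mach: {tip_mach(0.54, 3120):.2f}")

# Main rotor: RPM implied by Ma_tip = 0.45 for an assumed R = 2.9 m -> ~504 RPM
print(f"main rotor RPM for Ma_tip=0.45, R=2.9 m: {rpm_for_mach(2.9, 0.45):.0f}")
```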
Based on the safety assessment, the four main rotor drives need to provide a total power of at least 200 % of the highest continuously required flight power. For the quadcopter configuration this amounts to at least 450 kW, which needs to be provided by eight electric motors.
During the sizing process, a direct drive architecture was compared to a geared drive for the main rotor propulsion architecture. The comparison, assuming commercial off-the-shelf permanent magnet synchronous electric motors (PMSM) and planetary gearboxes11, clearly indicates the disadvantages of a direct drive. The currently available electric PMSMs cannot be operated in their optimal efficiency range due to the low rotating speeds and high torque values that are required for the quadcopter main rotor propulsion. The efficiency of the direct drive lies between 85 % and 92 %, as indicated in Fig. 11, whereas the geared drive is able to operate between 92 % and 96 %, as indicated in Fig. 12. Therefore, high thermal losses must be expected when using a direct drive.12
Footnote 11: Based on an internal market study, suitable commercial off-the-shelf products were selected for this comparison.
In terms of the main propulsion system weight (consisting of the electric motor, motor controller and gearbox), the geared propulsion architecture exhibits a weight of 374 kg compared to 400 kg for the direct drive. As can be seen in Fig. 13, the weight of the direct drive powertrain is mainly driven by the electric motor.
Figure 11: Representation of the electric motor operating points for a direct drive architecture (blue: hover, vertical climb, cruise climb, cruise, loiter, vertical descent; red: emergency hover and vertical climb) fitted within the efficiency map of a commercial off-the-shelf motor [39] that is providing up to 1000 Nm torque
Figure 12: Representation of the electric motor operating points for a geared drive architecture with a 5:1 gear reduction ratio (blue: hover, vertical climb, cruise climb, cruise, loiter, vertical descent; red: emergency hover and vertical climb) fitted within the efficiency map of a commercial off-the-shelf motor [40] that is providing up to 230 Nm torque
While the electric motor for the geared powertrain is approximately 70 % lighter, heavy gearboxes are required to provide sufficient torque capability. Still, a weight saving of almost 7 % is estimated for the geared drive.
This comparison of the motor efficiency and the expected propulsion system weight shows that the usage of a geared drive is advisable to save weight and thermal losses. The gearbox as an additional component of the powertrain therefore needs to be considered in the safety and reliability analysis.
The propulsion system for the rear push propulsors is sized to provide the highest efficiency during the cruise flight. This requires an electric motor that provides its highest efficiency at 3120 RPM and 99 Nm torque. In total, the electric motors for the rear push drive are expected to weigh 42 kg and amount to 58 % of the rear propulsion weight.
\begin{table}
\begin{tabular}{l l} \hline Component & Specification \\ \hline \hline \multicolumn{2}{l}{Specifications of the main rotor power and drive system:} \\ \hline Electric Motor & \(\bullet\) Highest efficiency at 500-550 RPM (without gearbox) or 2650-2800 RPM (gearbox 5:1) with 100 Nm torque (hover \& vertical climb operating point) \\ & \(\bullet\) Continuous torque capability of \(\geq\) 120 Nm and \(\geq\) 145 Nm maximum peak torque \\ & \(\bullet\) Continuous power capability of 29 kW and a maximum peak power of 58 kW \\ & \(\bullet\) Max RPM of \(\geq\) 780 RPM (without gearbox) or 3905 RPM (with gearbox 5:1) \\ \multirow{2}{*}{Gearbox} & \(\bullet\) Reduction gear ratio of 5:1 \\ & \(\bullet\) Input rotating speed range of 330-2770 RPM (normal ops), up to 3905 RPM in irregular operation \\ & \(\bullet\) Output rotating speed range 66-554 RPM (normal ops), up to 781 RPM in irregular operation \\ & \(\bullet\) Equivalent output torque of \(\geq\) 455 Nm \\ & \(\bullet\) Maximum peak output torque of \(\geq\) 700 Nm \\ Motor Controller & \(\bullet\) Continuous power of \(\geq\) 30 kW \\ & \(\bullet\) Maximum peak power of \(\geq\) 61 kW \\ \hline \multicolumn{2}{l}{Specifications of the push propeller power and drive system:} \\ \hline Electric Motor & \(\bullet\) Highest efficiency at 3120 RPM with 99 Nm torque (cruise operating point) \\ & \(\bullet\) Continuous torque capability of 99 Nm and a maximum peak torque of 125 Nm \\ & \(\bullet\) Continuous power capability of 32 kW and a maximum power of 60 kW \\ & \(\bullet\) Max RPM of \(\geq\) 4576 RPM \\ Motor Controller & \(\bullet\) Continuous power of \(\geq\) 34 kW and maximum peak power of \(\geq\) 63 kW \\ & \(\bullet\) Continuous motor current \(\geq\) 90 A and maximum motor current \(\geq\) 108 A \\ \hline \hline \end{tabular}
\end{table}
Table 6: Recommended specifications for the power system components of the main rotor and push propeller drive systems
Figure 13: Weight comparison of a direct drive and geared propulsion system for the main rotor
As the rear drive system does not require a gearbox, the motor controllers make up the remaining 42 %.
The specifications for each component of the main rotor and rear propeller power and drive system are listed in Table 6.
The simulation of the power and drive system has shown that powertrain components with the above-mentioned specifications are suitably sized for powering the main rotors as well as the push propellers. Also, in failure conditions the main rotors can still be accelerated to the required rotational speeds. The preliminary dynamic simulation of the rotor rotational speed following control signals shows rise times on the order of tenths of a second, depending on the step size. However, whether this rise time of the rotor is sufficient to achieve quick response times of the total vehicle, and therefore good handling qualities, is still under further analysis13.
Footnote 13: For further information concerning the controllability and handling quality of RPM controlled rotors within a quadcopter it is referred to [27].
Based on these analyses the following implications can be drawn for the propulsion system architecture:
* Using a gearbox is recommended for the main rotor drive system.
#### 3.4.2 Sizing & Simulation Electrical System
Based on the functional propulsion architecture of Fig. 25 and the correspondingly derived logical architecture of Fig. 26, a battery storage system is required to power the propulsion system. As described by requirement no. 7 of the safety and reliability analysis of Section 3.3, in total four identical batteries are required for powering the main rotors and one common battery pack is required for the rear push-propeller drive trains. Within this section, an initial sizing of the energy storage system is conducted. This includes an analysis of the required energy and of the battery pack size and weight, considering failure conditions of the propulsion system during flight.
Looking into the electrical power distribution of the main rotors, each motor controller needs to be able to receive power from an alternate battery source in case its main battery source is unavailable, in order to fulfil the requirement of Section 3.3. Therefore, an allocation as shown in Table 7 is chosen. The table indicates that, for rotor drive number 1, motor controller 1.1 mainly receives power from battery 1 but can be switched to battery 3, whereas motor controller 1.2 is powered by batteries 2 and 4. Based on this allocation, each battery continuously powers two motor controllers, with each motor controller consuming a maximum of 30 kW during normal operations.
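The allocation of Table 7 can be cross-checked mechanically. The following Python sketch transcribes the table and verifies the two properties stated above: each battery continuously powers exactly two motor controllers, and each motor controller has two distinct battery sources.

```python
# Cross-check of the battery-to-motor-controller allocation of Table 7.
from collections import Counter

allocation = {  # (rotor, controller): (primary battery, alternate battery)
    (1, 1): (1, 3), (1, 2): (2, 4),
    (2, 1): (1, 2), (2, 2): (3, 4),
    (3, 1): (3, 2), (3, 2): (4, 1),
    (4, 1): (2, 3), (4, 2): (4, 1),
}

# Each battery should continuously power exactly two controllers:
primary_load = Counter(primary for primary, _ in allocation.values())
assert all(count == 2 for count in primary_load.values()), primary_load

# Each controller should have two distinct battery sources:
assert all(p != a for p, a in allocation.values())

P_MC = 30  # kW, maximum controller power during normal operation
for battery, count in sorted(primary_load.items()):
    print(f"BAT {battery}: {count} controllers, {count * P_MC} kW continuous")
```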
Fig. 14 shows the power allocation of the main rotor battery packs during normal operation. Based on the allocation, the sizing of the battery packs can be conducted which is influenced by the following requirements:
1. The batteries must provide the total energy required for fulfilling the flight mission during normal ops.
2. The capacity of each battery pack must be sufficient to power the connected motor controllers during normal ops.
3. The battery packs must provide enough energy and power to enable a continued safe flight and landing during any failure condition.
Each requirement is now analysed separately. As a basis the Panasonic 18650 battery cells are used [41]. Since a system voltage \(U_{sys}\) of 600 V is chosen, each battery pack should provide 600 V which requires 167 cells connected in series.
**Battery sizing based on the required total energy and power for normal operation:**
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & \multicolumn{1}{c}{Rotor 1} & \multicolumn{1}{c}{Rotor 2} & \multicolumn{1}{c}{Rotor 3} & \multicolumn{1}{c}{Rotor 4} \\ \hline MC x.1 & BAT 1 & BAT 1 & BAT 3 & BAT 2 \\ MC x.1 ALT & BAT 3 & BAT 2 & BAT 2 & BAT 3 \\ MC x.2 & BAT 2 & BAT 3 & BAT 4 & BAT 4 \\ MC x.2 ALT & BAT 4 & BAT 4 & BAT 1 & BAT 1 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Allocation of the four main battery packs to the motor controllers of each main rotor drive system
Based on the energy requirement \(E_{BP}\) of 19.7 kWh14 for powering two drive units during normal operation over the total flight mission, the required battery pack capacity \(C_{BP,E}\) amounts to 32.8 Ah.
Footnote 14: The energy requirement was derived from the initial sizing process and validated with the developed simulation model
\[C_{BP,E}=\frac{E_{BP}}{U_{sys}}=\frac{19.7\;kWh}{600\;V}=32.8\;Ah\]
The battery pack capacity required to power the two supplied drive units, each with a maximum power \(P_{max}\) of 30 kW during normal operations, amounts to 29.3 Ah,
\[C_{BP,P}=\frac{P_{max}}{\xi\cdot U_{sys}}=\frac{2\cdot 30\;kW}{3.448\,\frac{1}{h}\cdot 600\;V}=29.3\;Ah\]
using the cell discharge rate \(\xi\), which is derived from the rated battery cell current \(i_{Batt}\) and the rated capacity \(c_{Nenn}\) of the Panasonic cell:
\[\xi=\frac{i_{Batt}}{c_{Nenn}}=\frac{10\;A}{2.9\;Ah}=3.448\frac{1}{h}.\]
Sizing the battery with the higher of both capacity requirements of 32.8 Ah results in a battery pack that is able to provide 68 kW discharge power.
\[P_{available}=C_{BP}\cdot\xi\cdot U_{sys}=32.8\;Ah\cdot 3.448\,\frac{1}{h}\cdot 600\;V=68\;kW.\]
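The sizing chain above is compact enough to recompute directly. The Python sketch below mirrors the equations for C_BP,E, C_BP,P and P_available; the nominal cell voltage of 3.6 V used to derive the series cell count is an assumption for the 18650 cell, everything else is taken from the text.

```python
# Battery-pack sizing for normal operation, following the equations above.
U_SYS  = 600.0   # V, chosen system voltage
U_CELL = 3.6     # V, nominal cell voltage (assumed for the 18650 cell)
I_CELL = 10.0    # A, rated cell discharge current
C_CELL = 2.9     # Ah, rated cell capacity

xi = I_CELL / C_CELL                    # cell discharge rate, 1/h (= 3.448)
n_series = round(U_SYS / U_CELL)        # cells in series -> 167

E_BP  = 19.7                            # kWh, energy per pack, normal ops
P_max = 2 * 30.0                        # kW, two supplied drive units

C_energy = E_BP * 1000 / U_SYS          # Ah, capacity from the energy demand
C_power  = P_max * 1000 / (xi * U_SYS)  # Ah, capacity from the power demand
C_pack   = max(C_energy, C_power)
P_avail  = C_pack * xi * U_SYS / 1000   # kW, resulting discharge power

print(f"series cells: {n_series}")                     # 167
print(f"C from energy: {C_energy:.1f} Ah")             # 32.8 Ah
print(f"C from power:  {C_power:.1f} Ah")              # ~29 Ah (text: 29.3)
print(f"available discharge power: {P_avail:.0f} kW")  # ~68 kW
```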
**Battery sizing based on the energy and power requirement during failure conditions:**
The three most restrictive failure conditions are analysed and their effects on the power distribution of the battery packs are shown in Fig. 15:
* Failure of one rotor drive unit due to motor or motor controller unit failure
* Failure of one rotor unit
* Failure of one battery pack
**Power requirement:** During a single motor unit loss or a failure of one battery pack, at least one of the remaining battery packs is required to supply at least 90 kW of power, as indicated in Fig. 15a and 15c. In order to provide 90 kW of battery pack power, a pack capacity of 43.5 Ah is required.
\[C_{BS,P}=\frac{P_{min}}{\xi\cdot U_{sys}}=\frac{90\;kW}{3.448\,\frac{1}{h}\cdot 600\;V}=43.5\;Ah\]
During a single rotor loss, however, the available 68 kW pack power are sufficient to cope with this failure.
**Energy requirement:** Whether or not the battery capacity is sufficient for reaching a suitable airfield even during failure conditions depends on the type of failure condition, the time at which the failure occurs and the intended emergency flight procedure. Herein the most unfavourable conditions are assumed:
* Based on the previous failure analysis the type of failure condition is assumed to draw 90 kW power of a single battery pack for the remainder of the flight.
* The failure condition occurs on the third flight of the total flight mission at the equal time point (ETP)15. The ETP for the defined flight is reached 5.7 min after the start of the third flight segment. Up to this point, approximately 12.2 kWh have already been consumed, which includes the energy for flights one and two and the energy up to the ETP.16
Figure 14: Power allocation of the main rotor battery packs during normal operation. Above or below each motor symbol the maximum required power of each electric motor is indicated for the shown operating state. The required motor controller power is shown next to the corresponding power lines of the motor symbol. On top of each battery pack the maximum available power of each battery pack is listed.
Footnote 15: The ETP defines the point within each flight where the time to reach the next suitable airfield equals the time to return to the last overflown or departure airfield.
* There is no other closer landing site available than the intended destination. Therefore, the flight is continued to the destination.
With these assumptions, another 8.5 kWh are required, which is the energy equivalent of reaching the destination airfield within 5.7 minutes. Therefore, a battery capacity of at least 34.3 Ah or 20.6 kWh is required. All battery pack capacity requirements of the normal and failure condition operation are summed up in Table 8.
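The resulting capacity figure follows from simple bookkeeping; the short sketch below reproduces it (small deviations from the quoted 20.6 kWh / 34.3 Ah are rounding in the text's intermediate values).

```python
# Energy bookkeeping for the most unfavourable failure case (failure at the
# ETP of the third flight), using the figures quoted in the text.
U_SYS = 600.0          # V, system voltage
consumed_kwh = 12.2    # flights 1 + 2 and the third flight up to the ETP
remaining_kwh = 8.5    # continuing to the destination under emergency draw

required_kwh = consumed_kwh + remaining_kwh
required_ah = required_kwh * 1000 / U_SYS
print(f"required: {required_kwh:.1f} kWh = {required_ah:.1f} Ah")  # ~20.7 kWh / ~34.5 Ah
```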
It becomes apparent that a main rotor battery pack capacity of 43.5 Ah is required, driven by the maximum power requirement during the failure condition caused by a single rotor drive unit loss or a battery pack loss. Under normal operating conditions, a pack with this capacity will be discharged down to 25 % after having completed the flight mission and used the 20 min reserve flight time, and will have 9.2 min of flight time available at the occurrence of a failure condition at the ETP during the third flight. A battery with these specifications can be composed of 15 Panasonic cells connected in parallel and 167 in series, so that in total 2505 cells17 are used. Each battery pack then weighs 120 kg.
\begin{table}
\begin{tabular}{l l l} \hline \hline Capacity Requirement & Battery capacity & Destination \\ based on & & reachable?1 \\ \hline Energy required & 32.8 Ah & no \\ (normal ops) & & \\ Energy required & 34.3 Ah & yes2 \\ (emergency ops) & & \\ Power required & 29.3 Ah & no \\ (normal ops) & & \\ Power required & 43.5 Ah & yes \\ (emergency ops) & & \\ \hline \hline \end{tabular}
\end{table}
Table 8: Summarized capacity requirements for each main rotor battery pack based on the energy and power requirements
Figure 15: Power distribution of the main rotor battery packs for different failure cases
Footnote 17: The effect of battery degradation has so far not been in the scope of this research.
Footnote 18: As the battery sizing based on the normal operation results in more than 20 % less required capacity than based on the highest failure case capacity requirement, no additional reserve of 20 % for the battery capacity is included
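The pack layout follows directly from the capacity requirement. In the sketch below, the cell mass of 48 g is an assumption for the 18650 cell, so the resulting ~120 kg covers cells only, excluding packaging and cooling; the same function applies to the push-propeller pack sized in the next paragraph.

```python
# Pack layout (parallel cell count, total cells, cell mass) from a
# required capacity, as used for the main-rotor battery packs.
import math

C_CELL_AH = 2.9      # Ah per cell
M_CELL_KG = 0.048    # kg per cell (assumed)
N_SERIES  = 167      # cells in series for 600 V

def pack_layout(required_ah):
    # small tolerance guards against float artefacts at exact multiples
    n_parallel = math.ceil(required_ah / C_CELL_AH - 1e-9)
    n_cells = n_parallel * N_SERIES
    return n_parallel, n_cells, n_cells * M_CELL_KG

p, n, m = pack_layout(43.5)
print(f"main rotor pack: {p} parallel x {N_SERIES} series = {n} cells, ~{m:.0f} kg")
# -> 15 parallel x 167 series = 2505 cells, ~120 kg
```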
**Battery sizing for the rear push propulsion system:**
The same assessment is conducted for the push propeller propulsion battery pack, which must be able to provide at least 68 kW of power during normal operation and 126 kW during an emergency condition in which a main rotor has failed. The results are summarized in Table 9.
The battery pack capacity in this case is mainly driven by the energy requirement during the failure condition at the most unfavourable point in time. Therefore, the battery is sized with a capacity of at least 88.8 Ah19, respectively 54.1 kWh, which results in 31 parallel cells and 167 cells connected in series. In total, 5177 cells are required, resulting in a capacity of 90 Ah with 186 kW of available power. During normal operation, the battery pack is discharged down to 37 % in case the 20 minutes of loiter time have been utilized during the flight. At the ETP, the battery provides enough energy to power the push propulsion system for another 6 minutes of emergency operation, which is sufficient to reach the landing site.
Footnote 19: As the battery sizing based on the normal operation results in more than 20 % less required capacity than based on the highest failure case capacity requirement, no additional reserve of 20 % for the battery capacity is included
Further optimization of the battery pack size could be achieved by an intelligent battery management system that allows an interconnection between all battery packs and thereby allocates energy and power requirements smartly between them.
The simulation of the battery packs in combination with the previous power and drive system has indicated that the battery is sufficiently sized to provide energy for the whole flight mission. The heat development within each battery pack is analysed in the next section, and the requirements for a cooling system are derived.
Summary of the derived specifications for the propulsion architecture:
* Each main rotor battery pack requires a battery capacity of at least 43.5 Ah and should be able to provide at least 90 kW power.
* The push-propeller battery pack requires a battery capacity of at least 88.8 Ah and should be able to provide at least 68 kW power.
#### 3.4.3 Sizing & Simulation Thermal Management System
Within this section, the results of the heat development simulation for the power and drive system components are presented and initial system requirements for a thermal management system are derived. Following the results of the previous section, this analysis is based on the propulsion design that incorporates a geared drive propulsion system. Whereas the gearbox, as an encapsulated system, can be expected to be self-cooling and self-lubricating, the electric motor and motor controller are expected to be combinable within one cooling system due to their similar requirements. For the battery packs, however, a separate thermal management system is expected to be required, which allows for heating at low temperatures and cooling at high operating temperatures, due to the narrow optimal battery operating temperature range of 20-40 \({}^{\circ}\)C. When assuming VTOL operation in the warmest areas of Europe, ambient temperatures of up to 42.7 \({}^{\circ}\)C19 are taken into account during the design process of the propulsion and cooling system.
\begin{table}
\begin{tabular}{l l l} \hline Capacity Requirement & Battery capacity & Destination \\ based on & & reachable?1 \\ \hline Energy required & 70.0 Ah & no \\ (normal ops) & & \\ Energy required & 88.8 Ah & yes 2 \\ (emergency ops) & & \\ \hline \end{tabular}
\end{table}
Table 9: Summarized capacity requirements for the push propulsion battery pack based on the energy and power requirements
Footnote 19: The highest temperature measured in the European city of Madrid between 2017 and 2022 [42].
Initially, the thermal management system requirements for the motor and motor controller of one rotor drive system are evaluated. Analysing the amount of heat generated within the electric motor and motor controller gives the following results: during the phase of the highest power requirement, the vertical climb phase, the motor controller and the electric motor are expected to produce up to 1.1 kW and 1.4 kW of heat, respectively, under normal operating conditions, as shown in Fig. 16 and Fig. 17. During emergency conditions, in which only one electric motor is left driving a main rotor, the heat output amounts to 1.8 kW and 2.9 kW, respectively.
As the intended electric motor provides means for airflow cooling, its effect on the heat development within the electric motor components is analysed first. With an ambient air flow based on the flight mission, the flight speeds, the vehicle rotor configuration and the cooling tube geometry, the temperature within the copper windings of the electric motor will still rise above 120 \({}^{\circ}\)C during the transition from hover to cruise climb of the third flight within the flight mission, after 2088 seconds, and reach a maximum of 136 \({}^{\circ}\)C, as shown in Fig. 18. Thus, the temperatures within the electric motor cannot be kept below 120 \({}^{\circ}\)C during normal operation using only the ambient air flow. Operating in the emergency rating, in which one electric motor must provide the full power for a single rotor, the temperatures within the electric motor would even rise above 200 \({}^{\circ}\)C during the third flight of the mission (see Fig. 18). Consequently, the electric motor cooling must be complemented by an additional liquid cooling system. As the analysed motor controller also requires liquid cooling according to the manufacturer's data sheet in order to keep its temperature below 85 \({}^{\circ}\)C, the motor controllers and corresponding electric motors of one rotor drive unit are combined within the same liquid cooling system. As the four rotor drive units will be located below each rotor and therefore distant from each other, a separate cooling system for each rotor drive unit is recommended.
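The qualitative difference between air-only and combined cooling can be reproduced with a toy lumped-parameter model, in the spirit of the Modelica simulation described here. In the Python sketch below, the heat capacity and the thermal conductances are illustrative assumptions, not parameters of the simulated off-the-shelf motor; only the heat input and the temperature limits are taken from the text.

```python
# Toy single-mass thermal model of the motor winding (explicit Euler).
Q_GEN_W = 1400.0    # W, motor heat in hover/vertical climb (from the text)
C_TH    = 20_000.0  # J/K, lumped winding heat capacity (assumed)
G_AIR   = 8.0       # W/K, conductance to the ambient air flow (assumed)
G_LIQ   = 30.0      # W/K, conductance to the liquid coolant (assumed)
T_AMB   = 42.7      # degC, hot-day ambient temperature
T_COOL  = 50.0      # degC, liquid coolant inlet temperature limit

def simulate(use_liquid, t_end_s=3600.0, dt=1.0):
    T = T_AMB
    for _ in range(int(t_end_s / dt)):
        q_out = G_AIR * (T - T_AMB)          # heat removed by the air flow
        if use_liquid:
            q_out += G_LIQ * (T - T_COOL)    # heat removed by the coolant
        T += (Q_GEN_W - q_out) * dt / C_TH   # energy balance on the winding
    return T

print(f"air cooling only : {simulate(False):.0f} degC after 1 h")  # well above 120
print(f"combined cooling : {simulate(True):.0f} degC after 1 h")   # below 120
```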
As a conclusion, the following requirements must be fulfilled by a combined ambient air flow and liquid cooling system for the analysed off-the-shelf electric motor and motor controllers:
Figure 16: Heat generation of the electric motor during normal and emergency operation, where the emergency is assumed to start at the end of the vertical flight phase of the third flight.
Figure 17: Heat generation of the motor controller during normal and emergency operation, where the emergency is assumed to start at the end of the vertical flight phase of the third flight.
1. The electric motor operating temperature must be kept below 120 \({}^{\circ}\)C also during emergency conditions while the motor is operating in emergency rating.
2. The motor controller operating temperature must be kept below 85 \({}^{\circ}\)C during all operating conditions (normal and emergency rating).
3. As the motor controller can be operated with a liquid cooling temperature of a maximum of 65 \({}^{\circ}\)C and the electric motor requires liquid cooling temperatures below 50 \({}^{\circ}\)C, the thermal management system needs to keep the cooling liquid temperature below 65 \({}^{\circ}\)C when passing the motor controller and 50 \({}^{\circ}\)C when passing the electric motors.
4. The maximum volume flow of cooling liquid for the motor controller is 6-12 l/min whereas the electric motor can only withstand 6-8 l/min.
5. The maximum input pressure of the liquid cooling shall not exceed 2 bar when entering each electric motor.
One exemplary cooling topology that is able to fulfil these requirements consists of the motor controllers and electric motors of each rotor being connected in series, as shown in Fig. 19.
Figure 18: Temperature development within the electric motor copper windings using only air flow cooling (AF) or a combined cooling (CC) compared to the temperature development of the motor controller using liquid cooling.
As the motor controllers can withstand higher liquid input pressures and emit less heat, they are placed at the beginning of the cooling flow. The electric motors can then be placed downstream.20
Footnote 20: A possible alternative would be to place the electric motors in parallel downstream.
The combined cooling system (CC) consisting of airflow cooling (AF) and liquid cooling (LC) is designed with the following specifications:
* Cooling fluid: Glysantin G40
* Cooling liquid flow: 0.14 kg/s
* maximum available 2.3 \(m^{3}/kg\)
* Heat exchanger size: 0.3 \(\cdot\) 0.3 \(\cdot\) 0.3 m
* Heat exchanger weight: 3.8 kg
The temperature development within the electric motor using this cooling topology (combined cooling for the electric motor and liquid cooling for the motor controller) is shown in Fig. 18. It can be seen that the temperature within the electric motor stays below 120 \({}^{\circ}\)C during all operating conditions, even in emergency conditions at the most unfavourable point of the flight mission, when using a combined liquid and air-cooling system. Fig. 21 indicates how the heat generated within the electric motor is absorbed by the ambient air flow and the liquid flow. As not all of the heat generated within the first flight can be dissipated, the temperature within the electric motor components rises over the course of the flight mission. The inlet liquid cooling temperature for the electric motor, as shown in Fig. 18, however, almost reaches the manufacturer's recommended maximum of 50 \({}^{\circ}\)C during normal operation. Operating in the emergency rating, the inlet temperature even rises up to 54 \({}^{\circ}\)C and therefore exceeds the manufacturer's limit of 50 \({}^{\circ}\)C.
The temperature within the motor controller stays well below its limit of 85 \({}^{\circ}\)C, as shown in Fig. 18. All of the heat is absorbed by the liquid cooling flow. Over the whole flight mission, the inlet temperature of the cooling liquid for the second downstream motor controller (MC2) reaches a maximum of 60 \({}^{\circ}\)C (during emergency ops) and therefore stays below the required 65 \({}^{\circ}\)C.
To prevent excessive heat build-up during the ground phases of the flight mission, in which the vehicle is not moving and therefore receives no cooling air flow, it is essential to keep up the liquid cooling flow as well as the cooling air flow. Therefore, the liquid cooling pump needs to remain operative. Additionally, a ground fan should be installed to facilitate the heat transfer within the heat exchanger. Without ground cooling, all components within the cooling system would heat up to over 65 \({}^{\circ}\)C during the five-minute ground phase. The same effect can be observed after termination of the flight mission, where a temperature of 77 \({}^{\circ}\)C is reached across all components after 30 minutes. By extending the ground cooling time to up to 30 minutes, the overall temperature can be kept below 50 \({}^{\circ}\)C.
In the following, the thermal management system requirements for the battery packs are evaluated. Fig. 20 compares the temperature development within each battery pack during normal operation with and without a cooling system. It becomes clearly visible that the batteries cannot be operated without a cooling system, as the battery pack temperature will exceed the 40 \({}^{\circ}\)C limit even at 20 \({}^{\circ}\)C ambient temperature. Using a liquid cooling circuit with a 20:80 glycol-water mixture that is channelled along each battery cell, the temperature of each battery pack can be kept below 40 \({}^{\circ}\)C during normal operation if the ambient temperature does not rise above 37 \({}^{\circ}\)C. However, under emergency operation, in which one battery pack has failed during the transition to cruise on the third flight segment, the temperature of two battery packs (refer to Fig. 15c) will even increase up to 40.6 \({}^{\circ}\)C at an ambient temperature of 37 \({}^{\circ}\)C.
Figure 19: Schematic view of the combined cooling system for the electric motor and motor controller consisting of a liquid cooling cycle and airflow cooling.
The ambient temperature has to stay below 36.2 \({}^{\circ}\)C in order to ensure a battery pack temperature below 40 \({}^{\circ}\)C during emergency conditions using a liquid cooling circuit. The liquid cooling circuit used within this simulation has the following properties (a rough heat-removal estimate based on these values follows the list):
* ambient air volume flow: 0.3 \(m^{3}/s\)
* liquid volume flow: 6.8\(\cdot\) 10\({}^{-5}\)\(m^{3}/s\)
* Inner cooling fluid: glycol-water mixture 20:80
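A rough plausibility check of the listed coolant flow: with textbook-level values for the density and specific heat of a 20:80 glycol-water mixture (both assumptions here), the loop's heat-removal capability per kelvin of coolant temperature rise follows directly.

```python
# Heat-removal estimate for the battery liquid-cooling loop.
RHO   = 1030.0   # kg/m^3, 20:80 glycol-water mixture (assumed)
CP    = 3900.0   # J/(kg*K), 20:80 glycol-water mixture (assumed)
V_DOT = 6.8e-5   # m^3/s, liquid volume flow from the list above

m_dot = RHO * V_DOT               # kg/s (~0.07, cf. 0.0695 quoted in Sec. 4)
q_per_kelvin = m_dot * CP         # W removed per kelvin of coolant rise

print(f"mass flow: {m_dot:.3f} kg/s")
print(f"heat removal: {q_per_kelvin:.0f} W per K of coolant temperature rise")
```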
For higher ambient temperatures, a refrigeration cycle is required, which is part of future research.
A summarizing overview of the expected behaviour of the electric motors, motor controllers and main rotor battery packs as well as their thermal management systems during normal and emergency operation is given in Fig. 21. Besides the expected heat flow, the amount of heat absorbed by each cooling method used is shown. Additionally, the temperature development of each component is indicated for a typical hot summer day with 30 \({}^{\circ}\)C ambient temperature.
Based on the analyses of this section the following implications can be drawn for the propulsion system architecture:
1. Each propulsor requires a separate cooling system
2. For each electric motor chosen and analysed herein, a combination of liquid cooling and air cooling is required, as air cooling alone is not sufficient.
3. Each motor controller requires a liquid cooling system
4. The motor controller and electric motor can be cooled using the same liquid cooling system.
5. Each liquid cooling system consists of the components: pump, cooling fluid reservoir, heat exchanger.
6. The cooling system should stay operative on ground for up to 30 minutes after each flight to prevent the cooling liquid from exceeding the motor inlet temperature of 50 \({}^{\circ}\)C and to absorb the thermal energy stored within the electric motor and motor controller.
7. The battery needs its own thermal management system that is capable of cooling and heating. When operating at ambient temperatures of 20-36.2 \({}^{\circ}\)C, each battery pack can be cooled using a liquid cooling cycle. For ambient temperatures outside this range, the thermal management system still needs to be analysed and designed.
Figure 20: Temperature development within each battery pack at different ambient temperatures with and without cooling during normal operation
### Final Architecture
This section presents how the requirements of the previous sections were implemented in the final propulsion system design. This architecture is expected to fulfil the safety and reliability requirements. A schematic representation of the propulsion system architecture for the quadcopter is shown in Fig. 22. On the left side, the main propulsion systems are shown, with the four main rotor drives supplemented by two push drives. Each drive unit is connected to the battery units. In total, at least three FCCs are provided. On the right, the propulsion architecture for each main rotor is depicted in more detail.
Figure 21: Heat power loss within the electric motor, motor controller and battery pack and its dissipation via ambient air flow and / or liquid cooling at an ambient temperature of 30 \({}^{\circ}\)C
It shows that the main rotor propulsion architecture is composed of two electric motors that drive the rotor through two separate gearboxes. Each motor is driven by its own motor controller, while each motor controller is backed up by a passive control board that takes over in case faulty signals are sent by the primary controller or the connection is lost entirely. Each motor controller receives power from one of the four main batteries and can be switched to an alternate battery source if necessary. Additionally, each motor controller receives inputs from all three FCCs and determines the valid FCC input by majority voting. The main criticality of the push-propeller drive propulsion system is the _incorrect operation_, including the _inadvertent_ or _unable to stop_ operation, which is classified as a potentially catastrophic event. To prevent inadvertent operation, the motor controller failsafe operation already introduced for the main rotor propulsion system is implemented, as well as an option for passivating a faulty motor output by means of a power disconnect switch. The integration of the thermal management system and the information management system has been excluded so far, but will be included in further research.
Based on the sizing of all components, the total propulsion system weight for the presented multirotor, excluding the thermal management systems, is expected to reach 1144 kg, as shown in Table 10. An overview of further vehicle design parameters can be found in Table 11 of the appendix.
\begin{table}
\begin{tabular}{l l l} \hline \hline & Weight per Unit [kg] & Total Weight [kg] \\ \hline Main rotor propulsion system 1 & 24.6 + 17 + 52 & 98.4 + 68 + 208 \\ Push propulsion system 2 & 12.3 + 8.5 & 24.6 + 17 \\ Battery packs & 120 / 248 & 480 + 248 \\ Total propulsion system & & 1144 \\ \hline \hline \end{tabular} \({}^{1}\)Weights indicated for motor weight, motor controller weight and gearbox weight
\({}^{2}\)Main battery pack and push drive battery pack
Power distribution system and cooling system currently excluded from weight analysis.
\end{table}
Table 10: Summary of the propulsion system weights
Figure 22: Schematic representation of the overall quadcopter propulsion architecture and its implemented safety measures, excluding the thermal management system.
## 4 Conclusion and final evaluation
Within this paper, a method was first presented for the conceptual design of an eVTOL propulsion system. This method was then applied to a multirotor vehicle for a specific intracity use case, with a special focus on developing a safe propulsion architecture, sizing each component and validating the architecture by simulation. In the following, the results are structured to answer the initial research questions:
**How should the conceptual design process of the propulsion system be carried out for an all-electric multirotor VTOL vehicle that is transporting passengers so that the safety goals of EASA SC-VTOL can be met?**
The conceptual design method presented in Section 2 is divided into five steps. In step one, the concept of operations is defined, which includes defining the flight mission and payload requirements. Based on these requirements, the vehicle configuration is preselected and the powertrain technology to be used is defined. In step two, several further requirements are developed, based on the required controllability, the handling qualities and the allowed noise emission. In the third step, the propulsion system is defined, which can be segregated into defining the flight control system, the power and drive system, the electrical system and the thermal management system, considering the previously established requirements. This propulsion system concept is then refined within the safety analysis, and sized as well as validated within the vehicle sizing and simulation step. The system architecture refinement is usually an iterative process between the concept definition, the safety analysis and the succeeding sizing step, and is conducted until the safety requirements of EASA SC-VTOL are met.
**What is the impact of the EASA SC-VTOL reliability requirements on the conceptual design of a multirotor propulsion system?**
Based on the safety & reliability analysis, it became apparent that the loss of one rotor lift and the incorrect operation of one rotor providing lift are the most critical system design drivers for the propulsion system of the main rotor in terms of the reliability requirements. Additionally, it was identified that, if the failure rate requirements can be fulfilled for those events, the other functional hazards will be fulfilled as well. Therefore, the focus of any system designer should be put on meeting the safety & reliability requirements for the total and partial loss of one main rotor and for the incorrect operation of one rotor. Within the analysed case study, the safe propulsion system for the multirotor requires numerous redundancies. Each main rotor requires two separate drive trains, each of which can be masked by two means in case of a malfunction: one option for passivating the drive train is a disconnect clutch, the other is to cut the power to the electric motor using a power disconnect relay. Additionally, each of the two drive trains driving the rotor must be designed to be capable of providing 200 % of normal power in case of a system malfunction. Each motor controller must be designed as a dual active/passive module that is additionally equipped with a backup mode for the case of a signal loss of the triple modular FCC system. This backup mode shall set a constant rotor speed slightly below hover power for normal flight. The use of four battery packs is advisable for the four main rotor propulsion systems, with each motor controller connected to at least two batteries. Additionally, two push propellers are required to counteract the torque moment in case of the loss of one main rotor. The push-propeller architecture is mainly driven by preventing the inadvertent operation of each propulsor. Consequently, the corresponding motor controllers should be connected to the fifth, stand-alone battery pack and also incorporate a fail-safe backup mode; in this case one disconnect relay is sufficient for passivating faulty drive outputs. In terms of the cooling system for each main rotor drive system, it must be ensured that no more than one cooling system fails simultaneously, as the failure of a cooling system results in the loss of the corresponding main rotor. For the cooling system design of the battery packs, it must be ensured that no more than one battery pack is affected in case of a cooling system failure. A complete loss of the cooling function may become a catastrophic event.
**Which implications does an all-electric battery-powered eVTOL have on the propulsion system architecture besides the safety requirements?**
Besides the safety analysis, the sizing and simulation process revealed the following: the sizing of the power and drive system identified that a gearbox is required for driving each main rotor of the quadcopter. Only by increasing the number of rotors could the gearbox become obsolete.
During the sizing of the battery it became apparent that it is essential to check the energy and power requirements not only during normal operation, but also during emergency operation. The case study of this paper has shown that the sizing of the main rotor battery packs for the presented architecture is driven not by the energy amount or power requirement during normal operation, but rather by the power requirement during emergency operation, in which two of the four packs need to deliver 1.5 times the normal-operation power. The sizing of the push drive system battery pack is likewise driven by the emergency operation energy requirement, which is 1.27 times the energy requirement of normal operation.
**Which requirements must be met by a thermal management system of the developed all-electric multirotor propulsion system?**
A battery-electric multirotor propulsion system requires at least two cooling circuits. The sizing of the thermal management system for the power and drive components revealed that the electric motors and the motor controllers of each drive unit can be cooled within the same liquid cooling system, whereas the battery requires a separate cooling system due to the different operating temperatures. For cooling the electric motor and motor controller, it is not sufficient to rely on the airstream alone; a combination of airflow cooling and liquid cooling should be preferred. The liquid cooling circuit can cool the electric motor as well as the corresponding motor controllers of one drive unit simultaneously, by connecting them, for example, in series. This exemplary liquid cooling circuit requires a flow rate of at least 0.14 kg/s, using Glysantin G40. This makes it possible to keep the electric motor below 90 \({}^{\circ}\)C during normal operation and just below 120 \({}^{\circ}\)C during emergency operation, even at ambient temperatures of up to 42.7 \({}^{\circ}\)C. Additionally, the heat exchanger and the electric motors must be placed within the airstream to allow for additional air cooling. The heat exchanger can be expected to weigh around 4 kg with a size of \(0.3\cdot 0.3\cdot 0.3\) m. In order to cool down the heated components after each flight, it is necessary to keep the cooling system operative on the ground as well; this cool-down can take up to 30 minutes depending on the outside temperature. A secondary liquid cooling circuit is required in order to keep the battery packs below 40 \({}^{\circ}\)C operating temperature, as without any cooling circuit the batteries would heat up above 40 \({}^{\circ}\)C even at ambient temperatures of 20 \({}^{\circ}\)C. A cooling circuit using a 20:80 glycol-water mixture with a mass flow of 0.0695 kg/s is expected to keep the operating temperature of the battery packs within \(T_{amb}+5\) \({}^{\circ}C\) during normal operation and within \(T_{amb}+7\) \({}^{\circ}C\) in case of an emergency procedure. The liquid cooling system, however, can only be used up to ambient temperatures of 36.2 \({}^{\circ}\)C; higher ambient temperatures require a refrigeration circuit. In order to ensure a minimum battery operating temperature of 20 \({}^{\circ}\)C, a heating circuit is advisable as soon as the ambient temperature falls well below 20 \({}^{\circ}\)C.
Comparing the results to the literature identified within Section 1.2, specifically the research of Darmstadt et al. [14], this work presents an alternative solution for a safe propulsion system design for a quadcopter. In addition, the implications of such a propulsion system on the total propulsion system mass are presented, and a validation of the system architecture based on current technology is provided through simulation models.
## 5 Future perspective
Within this research, a conceptual design process for achieving a safe propulsion system for eVTOL multirotors was presented. As the focus was set primarily on designing a reliable drive system, other system groups that are linked to the propulsion system, such as the information management system, the electrical system, the thermal management system as well as the safety systems, need further investigation. Firstly, the thermal management system as well as the information system group need to be included in future safety analyses. Secondly, the electrical system architecture, especially the power distribution, should be analysed in further detail. As battery degradation has so far not been considered, its effect on the sizing of the battery packs should be analysed as well. Thirdly, the safety system requirements defined within the EASA SC E-19 [19] have to be included within the propulsion architecture design, which includes e.g. means to prevent and cope with uncontrolled fire within the battery system. Fourthly, the rotor, the rotor shaft connection as well as the junction between the two gearboxes and the rotor shaft need to be designed from a mechanical perspective and investigated to prevent single points of failure. Fifthly, the thermal management system of the battery needs to be extended, as the currently assumed liquid cooling circuit is only able to provide sufficient cooling below ambient temperatures of 36.2 \({}^{\circ}\)C. Therefore, a lightweight and safe refrigeration circuit should be assessed as an alternative. In order to ensure the correct battery operating temperature even at low ambient temperatures, adding a heating capability to the battery pack thermal management system, possibly using the waste heat of the motors and motor controllers, should be considered. Additionally, a comparative study for the liquid cooling circuit of the electric motors and motor controllers should be conducted to assess the implications of a two-step cooling compared to the currently evaluated one-step cooling system. Sixthly, the heat development within the power and drive system of the rear propulsion needs to be investigated and the thermal management system adapted accordingly. As the presented propulsion architecture is only valid if the vehicle can continue safe flight and landing even during a single rotor loss, further investigation is required to establish corresponding means of controlling such a flight state.
### Contribution of this work towards minimizing costs and maximizing benefits of a UAM system
In accordance with the leitmotif "Opportunities and Challenges of Urban Air Mobility", this section evaluates how this work contributes to advancements within the Urban Air Mobility (UAM) system (see Fig. 23). The contributions towards making UAM become reality can be grouped into minimizing UAM costs and maximizing UAM benefits. As this work presented a method for the conceptual safe design of the propulsion system, as well as its implications on the propulsion system architecture for an exemplary UAM concept of operations, this paper primarily adds value towards increasing the reliability of a vehicle design. By providing means for a model-based systems engineering approach to safe vehicle design, the chance of a fast and successful certification process may also be increased. Additionally, by taking safety aspects into account already during the conceptual design phase, subsequent high vehicle development costs due to late design adjustments can be prevented. On the other hand, the safe propulsion design as presented for the multicopter may positively influence passengers' acceptance of these vehicles.
## Statements and Declarations
**Author Contributions.**
Conceptualization: F.J.; Methodology: F.J., O.B.; Writing - original draft preparation: F.J., O.B., S.M.L., A.H.B., J.R., L.B.; Writing - review and editing: F.J.; Supervision: O.B.; Project administration: O.B. All authors have read and agreed to the published version of the manuscript.
Figure 23: Mapping the areas of the UAM overall system that are positively impacted by the presented work.
**Acknowledgments.**
The paper presents results from the internal DLR project HorizonUAM.
**Competing Interests.**
The authors have no competing interests to declare that are relevant to the content of this article.
|
2302.14725 | Parameterized Complexity of Vertex Splitting to Pathwidth at most 1 | Motivated by the planarization of 2-layered straight-line drawings, we
consider the problem of modifying a graph such that the resulting graph has
pathwidth at most 1. The problem Pathwidth-One Vertex Explosion (POVE) asks
whether such a graph can be obtained using at most $k$ vertex explosions, where
a vertex explosion replaces a vertex $v$ by deg$(v)$ degree-1 vertices, each
incident to exactly one edge that was originally incident to $v$. For POVE, we
give an FPT algorithm with running time $O(4^k \cdot m)$ and an $O(k^2)$
kernel, thereby improving over the $O(k^6)$-kernel by Ahmed et al. [GD 22] in a
more general setting. Similarly, a vertex split replaces a vertex $v$ by two
distinct vertices $v_1$ and $v_2$ and distributes the edges originally incident
to $v$ arbitrarily to $v_1$ and $v_2$. Analogously to POVE, we define the
problem variant Pathwidth-One Vertex Splitting (POVS) that uses the split
operation instead of vertex explosions. Here we obtain a linear kernel and an
algorithm with running time $O((6k+12)^k \cdot m)$. This answers an open
question by Ahmed et al. [GD22].
Finally, we consider the problem $\Pi$ Vertex Splitting ($\Pi$-VS), which
generalizes the problem POVS and asks whether a given graph can be turned into
a graph of a specific graph class $\Pi$ using at most $k$ vertex splits. For
graph classes $\Pi$ that can be tested in monadic second-order graph logic
(MSO$_2$), we show that the problem $\Pi$-VS can be expressed as an MSO$_2$
formula, resulting in an FPT algorithm for $\Pi$-VS parameterized by $k$ if
$\Pi$ additionally has bounded treewidth. We obtain the same result for the
problem variant using vertex explosions. | Jakob Baumann, Matthias Pfretzschner, Ignaz Rutter | 2023-02-28T16:33:18Z | http://arxiv.org/abs/2302.14725v2 | # Parameterized Complexity of Vertex Splitting to Pathwidth at most 1
###### Abstract
Motivated by the planarization of 2-layered straight-line drawings, we consider the problem of modifying a graph such that the resulting graph has pathwidth at most 1. The problem Pathwidth-One Vertex Explosion (POVE) asks whether such a graph can be obtained using at most \(k\) vertex explosions, where a _vertex explosion_ replaces a vertex \(v\) by \(\deg(v)\) degree-1 vertices, each incident to exactly one edge that was originally incident to \(v\). For POVE, we give an FPT algorithm with running time \(O(4^{k}\cdot m)\) and a quadratic kernel, thereby improving over the \(O(k^{6})\)-kernel by Ahmed et al. [2] in a more general setting. Similarly, a _vertex split_ replaces a vertex \(v\) by two distinct vertices \(v_{1}\) and \(v_{2}\) and distributes the edges originally incident to \(v\) arbitrarily to \(v_{1}\) and \(v_{2}\). Analogously to POVE, we define the problem variant Pathwidth-One Vertex Splitting (POVS) that uses the split operation instead of vertex explosions. Here we obtain a linear kernel and an algorithm with running time \(O((6k+12)^{k}\cdot m)\). This answers an open question by Ahmed et al. [2].
Finally, we consider the problem \(\Pi\) Vertex Splitting (\(\Pi\)-VS), which generalizes the problem POVS and asks whether a given graph can be turned into a graph of a specific graph class \(\Pi\) using at most \(k\) vertex splits. For graph classes \(\Pi\) that can be tested in monadic second-order graph logic (\(\mathrm{MSO}_{2}\)), we show that the problem \(\Pi\)-VS can be expressed as an \(\mathrm{MSO}_{2}\) formula, resulting in an FPT algorithm for \(\Pi\)-VS parameterized by \(k\) if \(\Pi\) additionally has bounded treewidth. We obtain the same result for the problem variant using vertex explosions.
**Keywords:** Vertex Splitting, Vertex Explosion, Pathwidth 1, FPT, Courcelle's Theorem
## 1 Introduction
Crossings are one of the main aspects that negatively affect the readability of drawings [20]. It is therefore natural to try and modify a given graph in such a way that it can be drawn without crossings while preserving as much of the information as possible. We consider three different operations.
A _deletion operation_ simply removes a vertex from the graph. A _vertex explosion_ replaces a vertex \(v\) by \(\deg(v)\) degree-1 vertices, each incident to exactly one edge that was originally incident to \(v\). Finally, a _vertex split_ replaces a vertex \(v\) by two distinct vertices \(v_{1}\) and \(v_{2}\) and distributes the edges originally incident to \(v\) arbitrarily to \(v_{1}\) and \(v_{2}\).
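To make the two non-trivial operations concrete, the following is a minimal sketch (our illustration, not taken from the paper) of vertex explosion and vertex splitting on a simple undirected graph stored as a dictionary mapping each vertex to the set of its neighbors; the fresh vertex names are an arbitrary implementation choice.

```python
def explode(adj, v):
    """Vertex explosion: replace v by deg(v) degree-1 vertices,
    each incident to exactly one edge originally incident to v."""
    for i, u in enumerate(list(adj[v])):
        w = (v, "exploded", i)       # fresh vertex name (assumed unused)
        adj[w] = {u}
        adj[u].discard(v)
        adj[u].add(w)
    del adj[v]

def split(adj, v, part):
    """Vertex split: replace v by two vertices; the edges to the
    neighbours in part (a subset of adj[v]) move to the new copy."""
    v2 = (v, "split", len(adj))      # fresh vertex name (assumed unused)
    adj[v2] = set(part)
    for u in part:
        adj[u].discard(v)
        adj[u].add(v2)
        adj[v].discard(u)
    return v2
```

In this representation, exploding \(v\) has the same effect as splitting \(v\) a total of \(\deg(v)-1\) times into single-edge parts, which is why minimizing explosions amounts to choosing a set of vertices rather than a sequence of operations.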
Nollenburg et al. [17] have recently studied the vertex splitting problem, which is known to be NP-complete [11]. In particular, they gave a non-uniform FPT-algorithm for deciding whether a given graph can be planarized with at most \(k\) splits.
We observe that, since degree-1 vertices can always be inserted into a planar drawing, the vertex explosion model and the vertex deletion model are equivalent for planar graphs. The latter problem, also known as Vertex Planarization, has been studied extensively in the literature. While the problem is NP-complete [15], it follows from results of Robertson
and Seymour [21] that the problem can be decided in cubic time for any fixed \(k\). Subsequent algorithms gradually improved upon this result [16, 14], culminating in an \(O(2^{O(k\log k)}\cdot n)\)-time algorithm introduced by Jansen et al. [12].
Ahmed et al. [2] investigated the problem of splitting the vertices of a bipartite graph so that it admits a 2-layered drawing without crossings. They assume that the input graph is bipartite and only the vertices of one of the two sets in the bipartition may be split. Under this condition, they give an \(O(k^{6})\)-kernel for the vertex explosion model, which results in an \(O(2^{O(k^{6})}m)\)-time algorithm. They ask whether similar results can be obtained in the vertex splitting model. Figure 1 illustrates the three operations in the context of 2-layered drawings.\({}^{1}\)
Footnote 1: In this context, minimizing the number of vertex explosions is equivalent to minimizing the number of vertices that are split, since it is always best to split a vertex as often as possible.
We note that a graph admits a 2-layer drawing without crossings if and only if it has pathwidth at most 1, i.e., it is a disjoint union of caterpillars [4, 9]. Motivated by this, we more generally consider the problem of turning a graph \(G=(V,E)\) into a graph of pathwidth at most 1 by the above operations. In order to model the restriction of Ahmed et al. [2] that only one side of their bipartite input graph may be split, we further assume that we are given a subset \(S\subseteq V\), to which we may apply modification operations as part of the input.
More formally, we consider the following problems, all of which have been shown to be NP-hard [1, 18].
Pathwidth-One Vertex Explosion (POVE)

**Input:**: An undirected graph \(G=(V,E)\), a set \(S\subseteq V\), and a positive integer \(k\).

**Question:**: Is there a set \(W\subseteq S\) with \(|W|\leq k\) such that the graph resulting from exploding all vertices in \(W\) has pathwidth at most 1?

Pathwidth-One Vertex Splitting (POVS)

**Input:**: An undirected graph \(G=(V,E)\), a set \(S\subseteq V\), and a positive integer \(k\).

**Question:**: Is there a sequence of at most \(k\) splits on vertices in \(S\) such that the resulting graph has pathwidth at most 1?
We refer to the analogous problem with the deletion operation as Pathwidth-One Vertex Deletion (POVD). Here an algorithm with running time \(O(7^{k}\cdot n^{O(1)})\) is known [18], which was later improved to \(O(4.65^{k}\cdot n^{O(1)})\)[8], and very recently to \(O(3.888^{k}\cdot n^{O(1)})\)[23].
Figure 1: Given the shown bipartite graph, a crossing-free 2-layered drawing can be obtained using one vertex deletion (a), two vertex explosions (b), or three vertex splits (c).
Philip et al. [18] also gave a quartic kernel for POVD, which Cygan et al. [8] later improved to quadratic. Our results are as follows.
First, in Section 3, we show that POVE admits a kernel of size \(O(k^{2})\) and an algorithm with running time \(O(4^{k}m)\), thereby improving over the results of Ahmed et al. [2] in a more general setting.
Second, in Section 4, we show that POVS has a kernel of size \(16k\) and it admits an algorithm with running time \(O((6k+12)^{k}\cdot m)\). This answers the open question of Ahmed et al. [2].
In Section 5, we consider analogous problem variants where the target is to obtain a graph of treewidth at most 1, rather than pathwidth at most 1. Here we show that the deletion model and the explosion model are both equivalent to the problem Feedback Vertex Set, and that the split model is equivalent to Feedback Edge Set and can thus be solved in linear time.
Finally, in Section 6, we consider the problem \(\Pi\) Vertex Splitting (\(\Pi\)-VS), the generalized version of the splitting problem where the goal is to obtain a graph of a specific graph class \(\Pi\) using at most \(k\) split operations. Eppstein et al. [10] recently studied the similar problem of deciding whether a given graph \(G\) is _\(k\)-splittable_, i.e., whether it can be turned into a graph of \(\Pi\) by splitting every vertex of \(G\) at most \(k\) times. For graph classes \(\Pi\) that can be expressed in monadic second-order graph logic (\(\text{MSO}_{2}\), see [7]), they gave an FPT algorithm parameterized by the solution size \(k\) and the treewidth of the input graph. We adapt their algorithm for the problem \(\Pi\)-VS, resulting in an FPT algorithm parameterized by the solution size \(k\) for \(\text{MSO}_{2}\)-definable graph classes \(\Pi\) of bounded treewidth. Using a similar algorithm, we obtain the same result for the problem variant using vertex explosions.
## 2 Preliminaries
A parameterized problem \(L\) with parameter \(k\) is _non-uniformly fixed-parameter tractable_ if, for every value of \(k\), there exists an algorithm that decides \(L\) in time \(f(k)\cdot n^{O(1)}\) for some computable function \(f\). If there is a single algorithm that satisfies this property for all values of \(k\), then \(L\) is _(uniformly) fixed-parameter tractable_.
Note that isolated vertices do not increase the pathwidth or treewidth of a graph. Since we can determine the subgraph of \(G\) that contains no isolated vertices in \(O(m)\) time, we assume, without loss of generality, that \(n\in O(m)\). For a vertex \(v\in V(G)\), we let \(N(v)\) and \(N[v]\) denote the open and closed neighborhood of \(v\) in \(G\), respectively.
We refer to vertices of degree 1 as _pendant_ vertices. For a vertex \(v\) of \(G\), we let \(\deg^{*}(v)\coloneqq|\{u\in N(v)\mid\deg(u)>1\}|\) denote the degree of \(v\) ignoring its pendant neighbors. If \(\deg^{*}(v)=d\), we refer to \(v\) as a vertex of _degree* d_. A graph is a _caterpillar_ (respectively a _pseudo-caterpillar_) if it consists of a path (a simple cycle) with an arbitrary number of adjacent pendant vertices. The path (the cycle) is called the _spine_ of the (pseudo-)caterpillar.
Philip et al. [18] characterized the graphs of pathwidth at most 1 as the graphs containing no cycles and no \(T_{2}\) (see Figure 2(a)) as a subgraph. We additionally use slightly different sets of forbidden substructures. An \(N_{2}\)_substructure_ consists of a _root_ vertex \(r\) adjacent to three distinct vertices of degree at least 2. Note that every \(T_{2}\) contains an \(N_{2}\) substructure; however, the existence of an \(N_{2}\) substructure does not generally imply the existence of a \(T_{2}\) subgraph; see Figure 2(b). In the following proposition, we state the different characterizations for graphs of pathwidth at most 1 that we use in this work.
**Proposition 1**.: _For a graph \(G\), the following statements are equivalent._
1. _\(G\) has pathwidth at most 1_
2. _every connected component of_ \(G\) _is a caterpillar_
3. \(G\) _is acyclic and contains no_ \(T_{2}\) _subgraph_
4. \(G\) _is acyclic and contains no_ \(N_{2}\) _substructure_
5. \(G\) _contains no_ \(N_{2}\) _substructure and no connected component that is a pseudo-caterpillar._
Proof.: For the equivalences (a) \(\Longleftrightarrow\) (b) \(\Longleftrightarrow\) (c), we refer to the paper by Philip et al. [18].
We now show the equivalence (c) \(\Longleftrightarrow\) (d). Since any \(T_{2}\) subgraph also contains an \(N_{2}\) substructure, the implication (d) \(\Rightarrow\) (c) is clear. Consider a graph \(G\) that does not contain a cycle or a \(T_{2}\) subgraph. Assume that \(G\) contains an \(N_{2}\) substructure, i.e., a vertex \(r\) with three neighbors \(x\), \(y\), and \(z\) of degree at least 2. Let \(a,b\in\{x,y,z\}\) be distinct. Note that \(r\in N[a]\cap N[b]\). If \((N[a]\cap N[b])\setminus\{r\}\neq\emptyset\), then either \(a\) and \(b\) are adjacent or they have a common neighbor other than \(r\); in both cases \(G\) contains a cycle, a contradiction. Thus \(N[a]\cap N[b]=\{r\}\), i.e., \(x\), \(y\), and \(z\) are each adjacent to distinct vertices of \(V(G)\setminus\{r,x,y,z\}\). But then these vertices form a \(T_{2}\) subgraph, a contradiction. Thus \(G\) contains no \(N_{2}\) substructure.
Finally, we show the equivalence (d) \(\Longleftrightarrow\) (e). Since a pseudo-caterpillar contains a cycle as its spine, the direction (d) \(\Rightarrow\) (e) is clear. Let \(G\) be a graph containing no \(N_{2}\) substructures or connected components that are a pseudo-caterpillar. Assume that \(G\) contains a cycle \(C\) and let \(H\) denote the connected component containing \(C\). Since \(H\) contains no \(N_{2}\) substructure and since every vertex of \(C\) has two other neighbors contained in \(C\), all other vertices of \(H\) must have degree 1 and are thus pendant vertices. Therefore, \(H\) is a pseudo-caterpillar, a contradiction. Thus \(G\) contains no cycles and the implication (e) \(\Rightarrow\) (d) follows.
We define the _potential_ of \(v\in V(G)\) as \(\mu(v)\coloneqq\max(\deg^{*}(v)-2,0)\). The _global potential_\(\mu(G)\coloneqq\sum_{v\in V(G)}\mu(v)\) is defined as the sum of the potentials of all vertices in \(G\). Observe that \(\mu(G)=0\) if and only if \(G\) contains no \(N_{2}\) substructure. The global potential thus indicates how far away we are from eliminating all \(N_{2}\) substructures from the instance.
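These definitions translate directly into code. The following sketch (our illustration, in the adjacency-set representation used above) computes \(\deg^{*}\) and the potentials, finds the root of an \(N_{2}\) substructure, and tests characterization (d) of Proposition 1.

```python
def deg_star(adj, v):
    """Degree of v ignoring its pendant (degree-1) neighbours."""
    return sum(1 for u in adj[v] if len(adj[u]) > 1)

def potential(adj, v):
    return max(deg_star(adj, v) - 2, 0)

def global_potential(adj):
    return sum(potential(adj, v) for v in adj)

def find_n2_root(adj):
    """Root of some N2 substructure, i.e. a vertex with at least three
    neighbours of degree at least 2; None iff the global potential is 0."""
    return next((v for v in adj if deg_star(adj, v) >= 3), None)

def is_acyclic(adj):
    """DFS cycle test for a simple undirected graph."""
    seen = set()
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, None)]
        while stack:
            v, parent = stack.pop()
            for u in adj[v]:
                if u == parent:
                    continue
                if u in seen:       # non-tree edge closes a cycle
                    return False
                seen.add(u)
                stack.append((u, v))
    return True

def has_pathwidth_at_most_1(adj):
    """Characterization (d) of Proposition 1."""
    return is_acyclic(adj) and find_n2_root(adj) is None
```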
Recall that, for the problems POVE and POVS, the set \(S\subseteq V(G)\) marks the vertices of \(G\) that may be chosen for the respective operations. We say that a set \(W\subseteq S\) is a _pathwidth-one explosion set_ (POES) of \(G\), if the graph resulting from exploding all vertices in \(W\) has pathwidth at most 1. Analogously, a sequence of vertex splits on \(S\) is a _pathwidth-one split sequence_ (POS-sequence), if the resulting graph has pathwidth at most 1. We can alternatively describe a sequence of split operations as a _split partition_, a function \(\tau\) that maps every vertex \(v\in S\) to a partition of the edges incident to \(v\). The number of splits corresponding to \(\tau\) is then defined by \(|\tau|\coloneqq\sum_{v\in S}(|\tau(v)|-1)\). We say that \(|\tau|\) is the _size_ of \(\tau\). If \(\tau\) corresponds to a POS-sequence, we refer to \(\tau\) as a _pathwidth-one split partition_ (POS-partition).
A graph class \(\Pi\) is _minor-closed_ if, for every graph \(G\in\Pi\) and for every minor \(H\) of \(G\), \(H\) is also contained in \(\Pi\). We say that a graph class \(\Pi\) is _MSO\({}_{2}\)-definable_, if there exists an MSO\({}_{2}\) (monadic second-order graph logic, see [7]) formula \(\varphi\) such that \(G\models\varphi\) if and only
Figure 2: (a) The graph \(T_{2}\). (b) Two graphs that do not contain \(T_{2}\) as a subgraph, but both contain \(N_{2}\) (marked in orange) as a substructure.
if \(G\in\Pi\). A graph class \(\Pi\) has _bounded treewidth_ if there exists a constant \(c\in\mathbb{N}\) such that every graph contained in \(\Pi\) has treewidth at most \(c\). We let \(\operatorname{tw}(\Pi)\) denote the minimum constant \(c\) where this is the case.
## 3 FPT Algorithms for Pathwidth-One Vertex Explosion
In this section, we first show that POVE can be solved in time \(O(4^{k}\cdot m)\) using bounded search trees. Subsequently, we develop a kernelization algorithm for POVE that yields a quadratic kernel in linear time.
### 3.1 Branching Algorithm
We start by giving a simple branching algorithm for POVE, similar to the algorithm by Philip et al. [18] for the deletion variant of the problem. For an \(N_{2}\) substructure \(X\), observe that exploding vertices not contained in \(X\) cannot eliminate \(X\), because the degrees of the vertices in \(X\) remain the same due to the new degree-1 vertices resulting from the explosion. To obtain a graph of pathwidth at most 1, it is therefore always necessary to explode one of the four vertices of every \(N_{2}\) substructure by Proposition 1. Our branching rule thus first picks an arbitrary \(N_{2}\) substructure from the instance and then branches on which of the four vertices of the \(N_{2}\) substructure belongs to the POES.
**Branching Rule 1**.: _Let \(r\) be the root of an \(N_{2}\) substructure contained in \(G\) and let \(x\), \(y\), and \(z\) denote the three neighbors of \(r\) in \(N_{2}\). For every vertex \(v\in\{r,x,y,z\}\cap S\), create a branch for the instance \((G^{\prime},S\setminus\{v\},k-1)\), where \(G^{\prime}\) is obtained from \(G\) by exploding \(v\). If \(\{r,x,y,z\}\cap S=\emptyset\), reduce to a trivial no-instance instead._
Note that an \(N_{2}\) substructure can be found in \(O(m)\) time by checking, for every vertex \(v\) in \(G\), whether \(v\) has at least three neighbors of degree at least 2. Also note that vertex explosions do not increase the number of edges of the graph. Since Branching Rule 1 creates at most four new branches, each of which reduces the parameter \(k\) by 1, exhaustively applying the rule takes \(O(4^{k}\cdot m)\) time. By Proposition 1, it subsequently only remains to eliminate connected components that are a pseudo-caterpillar. Since a pseudo-caterpillar can (only) be turned into a caterpillar by exploding a vertex of its spine, the remaining instance can be solved in linear time.
**Theorem**.: _The problem Pathwidth-One Vertex Explosion can be solved in time \(O(4^{k}\cdot m)\)._
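A direct implementation of this search could look as follows; this is a sketch of ours, not the authors' code. It reuses `explode` and `find_n2_root` from the sketches above and copies the graph in every branch, which keeps the branches independent at the cost of some bookkeeping that a careful \(O(4^{k}\cdot m)\) implementation would avoid.

```python
import copy

def components(adj):
    """Vertex sets of the connected components."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    comp.add(u)
                    stack.append(u)
        comps.append(comp)
    return comps

def solve_pove(adj, S, k):
    """Bounded search tree for POVE following Branching Rule 1."""
    r = find_n2_root(adj)
    if r is None:
        # no N2 substructure left: every cyclic component is a
        # pseudo-caterpillar and needs one explosion on its spine
        needed = 0
        for comp in components(adj):
            m = sum(len(adj[v]) for v in comp) // 2
            if m >= len(comp):                      # component has a cycle
                if not any(len(adj[v]) > 1 and v in S for v in comp):
                    return False                    # spine misses S
                needed += 1
        return needed <= k
    if k == 0:
        return False
    x, y, z = [u for u in adj[r] if len(adj[u]) > 1][:3]
    for v in {r, x, y, z} & set(S):
        branch = copy.deepcopy(adj)
        explode(branch, v)
        if solve_pove(branch, set(S) - {v}, k - 1):
            return True
    return False
```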
### 3.2 Quadratic Kernel
We now turn to our kernelization algorithm for POVE. In this section, we develop a kernel of quadratic size, which can be computed in linear time.
We adopt our first two reduction rules from the kernelization of the deletion variant by Philip et al. [18] and show that these rules are also safe for the explosion variant. The first rule reduces the number of pendant neighbors of each vertex to at most one; see Figure 3(a).
**Reduction Rule 1**.: _If \(G\) contains a vertex \(v\) with at least two pendant neighbors, remove all pendant neighbors of \(v\) except one to obtain the graph \(G^{\prime}\) and reduce the instance to \((G^{\prime},\ S\cap V(G^{\prime}),\ k)\)._
Proof of Safeness.: Observe that exploding a vertex of degree \(1\) has no effect, thus no minimum POES contains a vertex of degree \(1\). It is therefore clear that any minimum POES of \(G\) is also a POES of \(G^{\prime}\).
Let \(W\) denote a minimum POES of \(G^{\prime}\). It remains to show that \(W\) is a POES of \(G\). Let \(l\) denote the remaining pendant neighbor of \(v\) in \(G^{\prime}\) and let \(P\coloneqq V(G)\setminus V(G^{\prime})\) denote the set of pendant neighbors of \(v\) the reduction rule removed from \(G\). Let \(\hat{G}\) and \(\hat{G}^{\prime}\) denote the graphs obtained by exploding the vertices of \(W\) in \(G\) and \(G^{\prime}\), respectively. If \(v\in W\), then \(\hat{G}\) only contains \(|P|\) additional connected components compared to \(\hat{G}^{\prime}\), each of which consists of two adjacent degree-\(1\) vertices. Since \(\hat{G}^{\prime}\) has pathwidth at most \(1\) and a connected component consisting of two adjacent degree-\(1\) vertices also has pathwidth \(1\), \(W\) is a POES of \(G\).
Now consider the case where \(v\notin W\), i.e., \(v\in V(\hat{G}^{\prime})\). Recall that no minimum POES contains a vertex of degree \(1\), thus \(l\notin W\) and \(v\) is still adjacent to \(l\) in \(\hat{G}^{\prime}\). Since \(\hat{G}^{\prime}\) has pathwidth at most \(1\), \(\hat{G}^{\prime}\) contains no cycles or \(T_{2}\) subgraphs by Proposition 1. Note that the graph \(\hat{G}\) can be obtained from \(\hat{G}^{\prime}\) by adding the vertices of \(P\) as pendant neighbors to \(v\), thus \(\hat{G}\) also contains no cycles. Since \(v\) already has a neighbor \(l\) of degree \(1\) in \(\hat{G}^{\prime}\), adding additional pendant neighbors to \(v\) does not introduce \(T_{2}\) subgraphs [19, Lemma 9]. Hence \(\hat{G}\) contains no \(T_{2}\) subgraphs or cycles and thus \(\hat{G}\) has pathwidth at most one by Proposition 1. Therefore, \(W\) is a POES of \(G\).
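In code, the rule is a single pass over the graph (a sketch in the representation used above; `S` is maintained as a Python set):

```python
def reduce_pendant_neighbours(adj, S):
    """Reduction Rule 1 (sketch): keep at most one pendant
    neighbour per vertex; excess pendants leave the instance."""
    for v in list(adj):
        if v not in adj or len(adj[v]) <= 1:
            continue                  # already removed, or pendant itself
        pendants = [u for u in adj[v] if len(adj[u]) == 1]
        for u in pendants[1:]:        # keep the first pendant neighbour
            del adj[u]
            adj[v].discard(u)
            S.discard(u)
```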
Since a caterpillar has pathwidth at most \(1\) by Proposition 1, we can safely remove any connected component of \(G\) that forms a caterpillar; see Figure 3(b) for an example.
**Reduction Rule 2**.: _If \(G\) contains a connected component \(X\) that is a caterpillar, remove \(X\) from \(G\) and reduce the instance to \((G-X,\ S\setminus V(X),\ k)\)._
If \(G\) contains a connected component that is a pseudo-caterpillar, then exploding an arbitrary vertex of its spine yields a caterpillar. If the spine contains no vertex of \(S\), the spine is a cycle that cannot be broken by a vertex explosion. However, by Proposition 1, acyclicity is a necessary condition for a graph of pathwidth at most \(1\). Hence we get the following reduction rule; see Figure 3(c) for an illustration.
**Reduction Rule 3**.: _Let \(X\) denote a connected component of \(G\) that is a pseudo-caterpillar. If the spine of \(X\) contains a vertex of \(S\), remove \(X\) from \(G\) and reduce the instance to \((G-X,\ S\setminus V(X),\ k-1)\). Otherwise reduce to a trivial no-instance._
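Both component tests used by Reduction Rules 2 and 3 can be read off Proposition 1: a connected component without an \(N_{2}\) substructure is a caterpillar if it is acyclic, and a pseudo-caterpillar if it contains a cycle, i.e., if it has as many edges as vertices. A sketch (our illustration, reusing `deg_star` and `is_acyclic` from above; `comp` is the vertex set of one connected component):

```python
def subgraph(adj, keep):
    """Induced subgraph on the vertex set keep."""
    return {v: adj[v] & keep for v in keep}

def is_caterpillar(adj, comp):
    sub = subgraph(adj, comp)
    return all(deg_star(sub, v) <= 2 for v in sub) and is_acyclic(sub)

def is_pseudo_caterpillar(adj, comp):
    sub = subgraph(adj, comp)
    m = sum(len(nb) for nb in sub.values()) // 2
    # no N2 substructure and exactly one cycle (|E| = |V| in the component)
    return all(deg_star(sub, v) <= 2 for v in sub) and m == len(sub)
```

For Reduction Rule 3, the spine vertices are exactly the non-pendant vertices of the component, so whether the spine contains a vertex of \(S\) can be tested with `any(len(sub[v]) > 1 and v in S for v in sub)`.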
Recall that the degree* of a vertex is the number of its non-pendant neighbors. Our next goal is to shorten paths of degree*-\(2\) vertices to at most two vertices. If we have a path \(x,y,z\)
Figure 3: Examples for Reduction Rules 1 (a), 2 (b), 3 (c), and 4 (d). The vertices of \(S\) are marked in green.
of degree*-2 vertices, we refer to \(y\) as a _2-enclosed_ vertex. Note that exploding a 2-enclosed vertex \(y\) cannot eliminate any \(N_{2}\) substructures from the instance. By Proposition 1, vertex \(y\) can thus only be part of an optimal solution if exploding \(y\) breaks cycles. If we want to shorten the chain \(x,y,z\) by contracting \(y\) into one of its neighbors, we therefore need to ensure that the shortened chain contains a vertex of \(S\) if and only if the original chain contained a vertex of \(S\). If \(y\in S\), we cannot simply add one of its neighbors, say \(x\), to \(S\) in the reduced instance, because exploding \(x\) may additionally remove an \(N_{2}\) substructure; see Figure 4 for an example. While shortening paths of degree*-2 vertices to at most three vertices is simple, shortening them to length at most 2 (i.e., eliminating all 2-enclosed vertices) is therefore more involved. To solve this problem, we will show that we can greedily decide whether a 2-enclosed vertex \(y\) is part of an optimal solution or not. This means that we can either immediately explode \(y\), or we can safely contract it into one of its degree*-2 neighbors. We start with the following auxiliary lemma.
**Lemma 1**.: _Let \(y\in S\) be a 2-enclosed vertex of \(G\) and let \(\mathcal{C}_{y}\) denote the set of simple cycles of \(G\) that contain \(y\). If \(|C\cap S|\geq 2\) holds for every cycle \(C\in\mathcal{C}_{y}\), then there exists a minimum POES of \(G\) that does not contain \(y\)._
Proof.: Let \(W\) be a minimum POES of size \(k\) for \(G\). As argued above, the 2-enclosed vertex \(y\) can only be part of an optimal solution if exploding \(y\) breaks a cycle. Assume without loss of generality that \(y\in W\). For two cycles \(C_{1},C_{2}\in\mathcal{C}_{y}\), define \(C_{1}\oplus C_{2}\) as the symmetric difference of the edges in \(C_{1}\) and \(C_{2}\), i.e., an edge is present in \(C_{1}\oplus C_{2}\) if and only if it is present in exactly one of the cycles \(C_{1}\) and \(C_{2}\). Since \(y\) is contained in a chain of degree*-2 vertices, \(C_{1}\oplus C_{2}\) is a collection of cycles that do not contain \(y\). Now consider the set \(\hat{W}\coloneqq W\setminus\{y\}\). Since \(W\) is a minimum POES for \(G\) and since \(C_{1}\oplus C_{2}\) does not contain \(y\), exploding \(\hat{W}\) still breaks all cycles of \(C_{1}\oplus C_{2}\).
Assume that there exist two distinct cycles \(C_{1}\) and \(C_{2}\) in \(\mathcal{C}_{y}\) that both remain intact after exploding \(\hat{W}\). This means that all cycles of \(C_{1}\oplus C_{2}\) also remain intact, a contradiction to the assumption that \(W\) is a POES of \(G\). We can therefore have at most one cycle \(C\in\mathcal{C}_{y}\) with \(C\cap\hat{W}=\emptyset\). Since \(|C\cap S|\geq 2\) holds by prerequisite of this lemma, we can pick an arbitrary vertex \(v\in C\cap S\) with \(v\neq y\) and we find that \(\hat{W}\cup\{v\}\) is a POES of size \(k\) for \(G\) that does not contain vertex \(y\).
If a degree*-2 neighbor \(y\) of a 2-enclosed vertex \(v\) is contained in \(S\), Lemma 1 guarantees that there exists a minimum POES that does not contain \(v\), because every cycle that contains \(v\) also contains \(y\). We can therefore define the following simple auxiliary reduction rule that ensures that no 2-enclosed vertex of \(S\) is adjacent to another degree*-2 vertex of \(S\); see Figure 3(d). This will be helpful for our next reduction rule, because it reduces the number of cases we have to consider.
Figure 4: A graph \(G\) that has no POES, because the highlighted \(N_{2}\) substructure contains no vertex of \(S\). For the graph \(G^{\prime}\) resulting from contracting \(y\) into \(x\), the set \(\{x\}\) is a POES. The two instances are therefore not equivalent.
**Reduction Rule 4**.: _Let \(v\) be a 2-enclosed vertex of \(G\) adjacent to a degree*-2 vertex \(y\in S\). Reduce the instance to \((G,S\setminus\{v\},k)\)._
Let \(S_{2}\subseteq S\) denote the set of 2-enclosed vertices contained in \(S\). We now use Lemma 1 to greedily determine for each vertex in \(S_{2}\) whether it should be contained in a minimum POES or not. Let \(G^{\prime}\coloneqq G[V(G)\setminus(S\setminus S_{2})]\) denote the graph obtained by removing all vertices of \(S\) that are not contained in \(S_{2}\) from \(G\). Let further \(\hat{G}\) denote the graph obtained from \(G^{\prime}\) by first removing all pendant vertices and subsequently contracting every connected component of \(G^{\prime}[V(G^{\prime})\setminus S_{2}]\) into a single vertex. Compute an arbitrary spanning forest \(T\) of \(\hat{G}\). Let \(S_{\text{explode}}\subseteq S_{2}\) denote the vertices of \(S_{2}\) that are a leaf of \(T\) and let \(S_{\text{keep}}\coloneqq S_{2}\setminus S_{\text{explode}}\) denote the remaining vertices of \(S_{2}\) that are thus inner nodes of \(T\); see Figure 5 for an illustration.
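Before stating the key lemma, the construction can be sketched in code (our illustration, assuming that Reduction Rules 1-4 have already been applied exhaustively; `deg_star`, `components`, and `subgraph` are as in the earlier sketches, and all names are ours).

```python
def classify_s2(adj, S):
    """Compute (S_explode, S_keep) for the 2-enclosed vertices of S."""
    def two_enclosed(v):
        nb = [u for u in adj[v] if len(adj[u]) > 1]
        return len(nb) == 2 and all(deg_star(adj, u) == 2 for u in nb)

    S2 = {v for v in S if two_enclosed(v)}
    # G': remove the vertices of S \ S2, then all pendant vertices
    gp = subgraph(adj, set(adj) - (set(S) - S2))
    gp = subgraph(gp, {v for v in gp if len(gp[v]) > 1})
    # contract each component of G'[V' \ S2] into a single vertex
    comp_of = {}
    for i, comp in enumerate(components(subgraph(gp, set(gp) - S2))):
        for v in comp:
            comp_of[v] = ("C", i)
    name = lambda v: comp_of.get(v, v)
    ghat = {name(v): set() for v in gp}
    for v in gp:
        for u in gp[v]:
            if name(u) != name(v):
                ghat[name(v)].add(name(u))
    # spanning forest of G-hat via DFS; count tree edges per vertex
    tree_deg, seen = {v: 0 for v in ghat}, set()
    for s in ghat:
        if s in seen:
            continue
        seen.add(s)
        stack = [s]
        while stack:
            v = stack.pop()
            for u in ghat[v]:
                if u not in seen:
                    seen.add(u)
                    tree_deg[v] += 1
                    tree_deg[u] += 1
                    stack.append(u)
    # the vertices of S2 that are leaves of the forest get exploded
    s_explode = {v for v in S2 if tree_deg.get(v, 0) <= 1}
    return s_explode, S2 - s_explode
```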
**Lemma 2**.: _There exists a minimum POES \(W\) of \(G\) such that \(S_{\text{keep}}\cap W=\emptyset\) and \(S_{\text{explode}}\subseteq W\)._
Proof.: Let \(v\in S_{\text{keep}}\) and let \(C\) denote an arbitrary simple cycle of \(G\) that contains \(v\). We want to show that \(C\) always contains a vertex of \(S\setminus S_{\text{keep}}\), which will allow us to use Lemma 1 to find a minimum POES of \(G\) that does not contain \(v\). First consider the case where \(C\) is not completely contained in a connected component of \(G^{\prime}\). This means that removing the vertices of \(S\setminus S_{2}\) from \(G\) splits cycle \(C\), thus \(C\) contains a vertex of \(S\setminus S_{2}\subseteq S\setminus S_{\text{keep}}\). Now consider the case where \(C\) is completely contained in the graph \(G^{\prime}\). If a vertex \(x\in C\cap S_{2}\) has degree 1 in \(\hat{G}\), then \(x\) is a leaf of \(T\) and therefore contained in \(S_{\text{explode}}\), thus \(C\) contains a vertex of \(S\setminus S_{\text{keep}}\). Otherwise every vertex of \(C\cap S_{2}\) has degree 2 in \(\hat{G}\) and the construction of \(\hat{G}\) ensures that cycle \(C\) of \(G^{\prime}\) also induces a cycle \(\hat{C}\) in \(\hat{G}\). Because \(T\) is a spanning forest of \(\hat{G}\), cycle \(\hat{C}\) must contain an edge \(e\) that is not contained in \(T\). Due to the construction of \(\hat{G}\), one endpoint \(y\) of \(e\) must be a vertex of \(S_{2}\). But because \(y\) has degree 2 in \(\hat{G}\) and its incident edge \(e\) is not part of \(T\), \(y\) must be a leaf of \(T\). Thus \(y\in S_{\text{explode}}\), which again yields a vertex of \(S\setminus S_{\text{keep}}\) contained in cycle \(C\).
We have shown that, for every \(v\in S_{\text{keep}}\) and for every simple cycle \(C\) of \(G\) containing \(v\), \(C\) also contains a vertex of \(S\setminus S_{\text{keep}}\). Consequently, by Lemma 1, there exists a minimum POES of \(G\) that does not contain \(v\). Given the initial instance \(\mathcal{I}=(G,S,k)\) of POVE, the instance \(\mathcal{I}^{\prime}=(G,S\setminus\{v\},k)\) is therefore equivalent. Because we have shown that every cycle of \(G\) that contains a vertex of \(S_{\text{keep}}\) also contains a vertex of \(S\setminus S_{\text{keep}}\), we can repeatedly
Figure 5: (a) A graph \(G\) with the vertices of \(S\) marked in green. (b) The corresponding graph \(G^{\prime}\) obtained after removing all vertices of \(S\) that are not 2-enclosed, i.e., the remaining green vertices form the set \(S_{2}\). (c) The graph \(\hat{G}\) obtained after contracting all connected components of \(G^{\prime}[V(G^{\prime})\setminus S_{2}]\) into a single vertex in \(G^{\prime}\), together with a spanning tree \(T\) highlighted in orange. The vertices of \(S_{2}\) that are leaves of \(T\) compose the set \(S_{\text{explode}}\) (marked with red crosses), the remaining vertices of \(S_{2}\) compose the set \(S_{\text{keep}}\).
apply this step to obtain the equivalent instance \(\mathcal{I}^{*}=(G,S\setminus S_{\text{keep}},k)\). Note that we do not actually alter the initial instance \(\mathcal{I}\), but the existence of the equivalent instance \(\mathcal{I}^{*}\) shows that there exists a minimum POES of \(G\) that contains no vertices of \(S_{\text{keep}}\).
Let \(W\) denote a minimum POES of \(G\) with \(S_{\text{keep}}\cap W=\emptyset\). We now want to show that \(S_{\text{explode}}\subseteq W\) always holds. First consider a vertex \(v\in S_{\text{explode}}\) that has degree 1 in \(\hat{G}\). Note that Reduction Rule 4 ensures that \(v\) is not adjacent to a degree*-2 vertex of \(S\) in \(G\), thus \(v\) cannot have degree 1 in \(G^{\prime}\). Vertex \(v\) can therefore only have degree 1 in \(\hat{G}\) if both neighbors of \(v\) in \(G^{\prime}\) lie in the same connected component \(H\) of \(G^{\prime}[V(G^{\prime})\setminus S_{2}]\). Thus \(v\) and vertices of \(H\) form a cycle \(C\) with \(C\cap S=\{v\}\). In order to break cycle \(C\), it is therefore necessary to explode \(v\) and thus \(v\in W\).
Now consider a vertex \(v\in S_{\text{explode}}\) that has degree 2 in \(\hat{G}\). Let \(x\) and \(y\) denote the neighbors of \(v\) in \(\hat{G}\). Let \(\hat{C}\) denote the cycle of \(\hat{G}\) consisting of the path \(x,v,y\) and the unique path of spanning forest \(T\) connecting \(x\) and \(y\). Since \(v\) is a leaf of \(T\), one of the edges \(vx\) or \(vy\) is not contained in \(T\), thus \(\hat{C}\) is indeed a cycle. Reduction Rule 4 ensures that no two vertices of \(S_{2}\) are adjacent in \(G\), and thus the construction of \(\hat{G}\) guarantees that \(x,y\notin S\). Because \(x,y\notin S\), \(v\) is the only vertex of \(S\) contained in \(\hat{C}\) that is also a leaf of \(T\), thus \(\hat{C}\cap S_{\text{explode}}=\{v\}\). We therefore only have a single vertex \(v\) of \(S_{\text{explode}}\) contained in \(\hat{C}\), all other vertices of \(S\) contained in \(\hat{C}\) must be a subset of \(S_{\text{keep}}\). Note that the construction of \(\hat{G}\) guarantees that we find a cycle \(C\) in \(G\) with the same properties. But because the POES \(W\) does not contain any vertices of \(S_{\text{keep}}\), \(W\) must contain \(v\) in order to break the cycle \(\hat{C}\) in \(\hat{G}\) (and consequently the cycle \(C\) in \(G\)).
Lemma 2 now allows us to eliminate all 2-enclosed vertices and thus lets us shorten chains of degree*-2 vertices to length at most 2. We state this in the following reduction rule; see Figure 6(a) for an illustration.
**Reduction Rule 5**.: _Let \(v\) denote a 2-enclosed vertex of \(G\) with degree*-2 neighbors \(x\) and \(y\)._
* _If_ \(v\in S_{\text{explode}}\)_, let_ \(G^{\prime}\) _denote the graph obtained from_ \(G\) _by exploding_ \(v\)_. Reduce the instance to_ \((G^{\prime},S\setminus\{v\},k-1)\)_._
* _Otherwise, remove_ \(v\) _from_ \(G\)_, add the new edge_ \(xy\)_, and reduce the instance to_ \((G-v+xy,S\setminus\{v\},k)\)_._
Proof of Safeness.: If \(v\in S_{\text{explode}}\), then Lemma 2 immediately tells us that it is safe to explode \(v\), thus the first case is safe.
If \(v\notin S_{\text{explode}}\), then Lemma 2 lets us assume, without loss of generality, that \(v\notin S\). Note that \(x\) and \(y\) cannot be adjacent, because Reduction Rule 2 removes all connected components that form a caterpillar from \(G\). The reduction therefore does not introduce multi-edges. Because \(v\) is 2-enclosed, the reduction retains all \(N_{2}\) substructures and all cycles of \(G\). Because \(v\notin S\), any solution for the original instance is also a solution for the reduced instance and vice versa, thus the second case is also safe.
Figure 6: Examples illustrating the two cases of Reduction Rule 5 (a) and Reduction Rule 6 (b).
To simplify the instance even further, the following reduction rule removes all degree*-2 vertices that are adjacent to a vertex of degree* 1; see Figure 6(b) for an illustration.
**Reduction Rule 6**.: _Let \(v\) be a degree*-2 vertex of \(G\) with non-pendant neighbors \(x\) and \(y\), such that \(x\) has degree* 1. Remove \(v\) from \(G\) and add a new edge \(xy\). If \(v\in S\), reduce to \((G-v+xy,(S\setminus\{v\})\cup\{x\},k)\). Otherwise reduce to \((G-v+xy,S\setminus\{x\},k)\)._
Proof of Safeness.: Since \(x\) has degree* 1, \(x\) and \(v\) cannot be contained in a cycle of \(G\), and \(x\) cannot be contained in a cycle of \(G-v+xy\). Hence we only have to consider \(N_{2}\) substructures. Because \(x\) itself has degree* 1 and is not adjacent to a vertex of degree* at least 3, \(x\) cannot be contained in an \(N_{2}\) substructure of \(G\). Since \(x\) is not contained in a cycle or \(N_{2}\) substructure, \(x\) is therefore also not contained in a minimum POES of \(G\). Note that \(x\) must have a pendant neighbor, because otherwise, \(x\) itself would be a pendant neighbor of \(v\). This means that any \(N_{2}\) substructure \(H\) of \(G\) containing \(v\) is also present in \(G-v+xy\), with vertex \(x\) replacing \(v\) in \(H\). Observe that the reduction modifies the set \(S\) to ensure that \(x\) can be exploded in the reduced instance \(\mathcal{I}^{\prime}\) if and only if \(v\) can be exploded in the original instance \(\mathcal{I}\). Therefore, a minimum POES \(W^{\prime}\) of \(\mathcal{I}^{\prime}\) can be obtained from a minimum POES \(W\) of \(\mathcal{I}\) by replacing \(v\) with \(x\) in \(W\) and vice versa.
Recall that the global potential \(\mu(G)\) indicates how far away we are from our goal of eliminating all \(N_{2}\) substructures from \(G\). With the following lemma, we show that our reduction rules ensure that the number of vertices in the graph \(G\) is bounded linearly in the global potential of \(G\).
**Lemma 3**.: _After exhaustively applying Reduction Rules 1-6, we have \(|V(G)|\leq 8\cdot\mu(G)\)._
Proof.: Reduction Rule 2 ensures that \(G\) contains no vertices of degree* 0. For \(i\in\{1,2\}\), let \(V_{i}\) denote the set of non-pendant degree*-\(i\) vertices of \(G\) and let \(V_{3}\) denote the set of vertices with degree* at least 3. Recall that we defined the global potential as
\[\mu(G)=\sum_{v\in V(G)}\mu(v)=\sum_{v\in V(G)}\max(0,\deg^{*}(v)-2).\]
Since all vertices of \(V_{1}\) and \(V_{2}\) have degree* at most 2, their potential is 0 and we get
\[\mu(G)=\sum_{v\in V_{3}}(\deg^{*}(v)-2)=\sum_{v\in V_{3}}\deg^{*}(v)-2\cdot|V _{3}|.\]
Note that \(|V_{3}|\leq\mu(G)\), because each vertex of degree* at least 3 contributes at least 1 to the global potential. We therefore get
\[\sum_{v\in V_{3}}\deg^{*}(v)\leq 3\cdot\mu(G). \tag{1}\]
By Reduction Rule 5, every vertex in \(v\in V_{2}\) is adjacent to a vertex of \(V_{1}\cup V_{3}\), since otherwise, \(v\) would be 2-enclosed. However, Reduction Rule 6 additionally ensures that vertices of \(V_{2}\) cannot be adjacent to vertices of \(V_{1}\), thus every vertex of \(V_{2}\) must be adjacent to a vertex of \(V_{3}\). Note that two adjacent vertices of \(V_{1}\) would form a caterpillar, which is prohibited by Reduction Rule 2. Therefore, every vertex of \(V_{1}\) is also adjacent to a vertex of \(V_{3}\).
Overall, every vertex of \(V_{1}\) and \(V_{2}\) is thus adjacent to a vertex of \(V_{3}\). Note that every vertex \(v\in V_{1}\) must additionally have a pendant neighbor, because otherwise, \(v\) itself would be a pendant vertex. Hence every vertex of \(V_{1}\) and \(V_{2}\) has degree at least 2 and thus contributes to the degree* of its neighbor in \(V_{3}\). We therefore have \(|V_{1}|+|V_{2}|\leq\sum_{v\in V_{3}}\deg^{*}(v)\), hence
\(|V_{1}|+|V_{2}|\leq 3\cdot\mu(G)\) by Equation 1. Recall that \(|V_{3}|\leq\mu(G)\), thus \(|V_{1}|+|V_{2}|+|V_{3}|\leq 4\cdot\mu(G)\). By Reduction Rule 1, each of these vertices can have at most one pendant neighbor and thus \(|V(G)|\leq 8\cdot\mu(G)\).
With Lemma 3, it now only remains to find an upper bound for the global potential \(\mu(G)\). We do this using the following two reduction rules.
**Reduction Rule 7**.: _Let \(v\) be a vertex of \(G\) with potential \(\mu(v)>k\). If \(v\in S\), explode \(v\) to obtain the graph \(G^{\prime}\) and reduce the instance to \((G^{\prime},\ S\setminus\{v\},\ k-1)\). Otherwise reduce to a trivial no-instance._
Proof of Safeness.: Since exploding a vertex \(u\in V(G)\setminus\{v\}\) decreases \(\mu(v)\) by at most one, after exploding at most \(k\) vertices in \(V(G)\setminus\{v\}\) we still have \(\mu(v)>0\). Because \(\mu(v)>0\) implies that \(G\) contains an \(N_{2}\) substructure, it is therefore always necessary to explode vertex \(v\) by Proposition 1.
**Reduction Rule 8**.: _If \(\mu(G)>2k^{2}+2k\), reduce to a trivial no-instance._
Proof of Safeness.: By Reduction Rule 7 we have \(\mu(v)\leq k\) and therefore \(\deg^{*}(v)\leq k+2\) for all \(v\in V(G)\). Hence exploding a vertex \(v\) decreases the potential of \(v\) by at most \(k\) and the potential of each of its non-pendant neighbors by at most \(1\). Overall, \(k\) vertex explosions can therefore only decrease the global potential \(\mu(G)\) by at most \(k\cdot(2k+2)\).
Because Reduction Rule 8 gives us an upper bound for the global potential \(\mu(G)\), we can use Lemma 3 to obtain the kernel.

**Theorem 3**.: _The problem Pathwidth-One Vertex Explosion admits a kernel of size \(16k^{2}+16k\). It can be computed in time \(O(m)\)._
Proof.: By Reduction Rule 8, using Lemma 3 yields a kernel of size \(16k^{2}+16k\) for POVE. It remains to show that we can compute the kernel in linear time.
First observe that, while some reduction rules may increase the number of vertices in the instance, the number of edges never increases. Also note that no reduction rule increases the global potential or the potential of a single vertex. We can therefore apply Reduction Rules 7 and 8 exhaustively in the beginning in \(O(m)\) time. Subsequently, we use Reduction Rules 2 and 3 to eliminate all connected components that are caterpillars and pseudo-caterpillars, respectively. To test for the latter, it suffices to check whether every vertex in the component has at most two neighbors of degree* 2 or higher, for the former it suffices to additionally test for acyclicity. Both reduction rules can thus be exhaustively applied in linear time. We then exhaustively apply Reduction Rules 4, 5, and 6 in linear time to eliminate most degree*-2 vertices. Since these three rules only affect degree*-2 vertices, each of them can be implemented using a single pass through the graph. For Reduction Rule 5, it is not hard to see that the auxiliary graphs \(G^{\prime}\) and \(\hat{G}\), as well as the spanning tree \(T\) used to determine the sets \(S_{\text{keep}}\) and \(S_{\text{explode}}\), can be computed in \(O(m)\) time. Note that Reduction Rules 2 and 3 ensure that every connected component of \(G\) contains an \(N_{2}\) substructure. Since Reduction Rules 4, 5, and 6 cannot eliminate \(N_{2}\) substructures, no connected component is a (pseudo-)caterpillar after applying Reduction Rules 4, 5, and 6 and thus we do not have to apply Reduction Rules 2 and 3 again. Finally, we use Reduction Rule 1 to remove excess pendant neighbors at all vertices in linear time. We therefore obtain the kernel in \(O(m)\) time.
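The order of rule applications used in this proof can be summarized as a small driver routine. The following skeleton is our illustration and ignores the bookkeeping needed for the claimed linear running time; the applications of Rules 1-6 are left abstract, and `potential`, `global_potential`, and `explode` are as sketched above.

```python
def kernelize_pove(adj, S, k):
    """Skeleton of the kernelization for POVE (a sketch)."""
    # Reduction Rule 7: explode vertices whose potential exceeds k
    while True:
        v = next((v for v in adj if potential(adj, v) > k), None)
        if v is None:
            break
        if v not in S:
            return None                  # trivial no-instance
        explode(adj, v)
        S.discard(v)
        k -= 1
        if k < 0:
            return None                  # budget exhausted
    # Reduction Rule 8: bound the global potential
    if global_potential(adj) > 2 * k * k + 2 * k:
        return None                      # trivial no-instance
    # ... then Rules 2 and 3 (remove (pseudo-)caterpillar components),
    # Rules 4-6 (eliminate degree*-2 chains), and finally Rule 1
    # (excess pendant neighbours), in the order argued above.
    return adj, S, k
```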
## 4 FPT Algorithms for Pathwidth-One Vertex Splitting
In this section, we first adapt the kernelization algorithm from Section 3.2 to obtain a linear kernel for POVS in linear time. Subsequently, we show that POVS can also be solved in time \(O((6k+12)^{k}\cdot m)\) using bounded search trees.
### 4.1 Linear Kernel
In order to obtain a kernel for POVS, we reuse Reduction Rules 1 - 6 from Section 3.2. We first show that these reduction rules are also safe in the context of POVS.
**Lemma 4**.: _Reduction Rules 1 - 6 are safe for the problem Pathwidth-One Vertex Splitting._
Proof.: We first show that Reduction Rule 1 is safe for POVS. Since \(G^{\prime}\) is an induced subgraph of \(G\), it is clear that any POS-partition of size \(k\) for \(G\) yields a POS-partition of size at most \(k\) for \(G^{\prime}\).
Conversely, let \(\tau^{\prime}\) be a POS-partition of size \(k\) for \(G^{\prime}\). Let \(l\) denote the remaining pendant neighbor of \(v\) in \(G^{\prime}\) and let \(P\coloneqq V(G)\setminus V(G^{\prime})\) denote the set of pendant neighbors of \(v\) the reduction rule removed from \(G\). Let \(\tau\) denote the split partition of \(G\) obtained from \(\tau^{\prime}\) by adding all edges incident to the vertices in \(P\) to the cell \(c\in\tau^{\prime}(v)\) containing the edge \(vl\). Note that \(\tau\) also has size \(k\). Let \(\hat{G}\) (respectively \(\hat{G}^{\prime}\)) be the graph obtained from \(G\) (\(G^{\prime}\)) after applying the splits defined by \(\tau\) (\(\tau^{\prime}\)). We want to show that \(\hat{G}\) also has pathwidth at most \(1\). Let \(v^{\prime}_{c}\) denote the vertex of \(\hat{G}^{\prime}\) corresponding to \(c\) (i.e., \(v^{\prime}_{c}\) is adjacent to \(l\)) and let \(v_{c}\) denote the corresponding vertex of \(\hat{G}\). Note that the only difference between \(\hat{G}^{\prime}\) and \(\hat{G}\) is that \(v_{c}\) additionally has the vertices of \(P\) as pendant neighbors. Since \(\hat{G}^{\prime}\) has pathwidth at most \(1\), \(\hat{G}^{\prime}\) contains no \(T_{2}\) subgraphs or cycles (Proposition 1). Because \(\hat{G}\) only contains additional degree-\(1\) vertices, \(\hat{G}\) also contains no cycles. Since \(v^{\prime}_{c}\) already has a pendant neighbor \(l\) in \(\hat{G}^{\prime}\), adding additional pendant neighbors to \(v^{\prime}_{c}\) does not introduce any \(T_{2}\) subgraphs [19, Lemma 9]. Hence \(\hat{G}\) contains no cycles and no \(T_{2}\) subgraphs and thus has pathwidth at most \(1\) by Proposition 1. Therefore, \(\tau\) is a POS-partition of size \(k\) for \(G\) and we can conclude that Reduction Rule 1 is safe for POVS.
Removing connected components that are caterpillars (Reduction Rule 2) is clearly also safe for POVS. If a connected component \(X\) of \(G\) is a pseudo-caterpillar, any split that separates two edges belonging to the spine of \(X\) yields a caterpillar. Reduction Rule 3 is therefore also safe.
As the next step, we show that Lemma 1 also holds for minimum split sequences of POVS. Let \(y\in S\) be a \(2\)-enclosed vertex of \(G\) such that \(|C\cap S|\geq 2\) holds for every simple cycle \(C\) containing \(y\). Let \(\phi\) denote a minimum POS-sequence for \(G\) that splits \(y\). Because \(y\) is \(2\)-enclosed, \(y\) is not contained in any \(N_{2}\) substructures of \(G\) and since a single split of \(y\) can break all cycles containing \(y\), \(\phi\) splits \(y\) exactly once. Let \(\phi\setminus y\) denote the split sequence obtained from \(\phi\) by removing the split involving \(y\). Using the same argument as the proof of Lemma 1, there is at most one cycle \(C\) in \(G\) that is not broken by \(\phi\setminus y\). Because \(C\) contains a vertex \(v\in S\) with \(v\neq y\), we can add an arbitrary split of \(v\) that breaks \(C\) to the sequence \(\phi\setminus y\) and we obtain a POS-sequence of size \(k\) that does not split vertex \(y\). The safeness of Reduction Rule 4 again immediately follows from Lemma 1.
We now show that Lemma 2 is also correct in the context of POVS. Specifically, we show that there exists a minimum POS-sequence of \(G\) that splits all vertices in \(S_{\text{explode}}\) but no vertices of \(S_{\text{keep}}\). The first part of the proof of Lemma 2 uses Lemma 1 to find a minimum POES that contains no vertices of \(S_{\text{keep}}\). Since we have shown above that Lemma 1 also holds for split operations, the same strategy can be used to find a minimum POS-sequence that splits no vertices of \(S_{\text{keep}}\). The second part of the proof shows that, for every vertex \(v\in S_{\text{explode}}\), there exists a cycle \(C\) in \(G\) that contains no other vertices of \(S\setminus S_{\text{keep}}\). Given a POS-sequence \(\phi\) that splits no vertices of \(S_{\text{keep}}\), it is therefore necessary that \(\phi\) splits \(v\) in order to break cycle \(C\), thus Lemma 2 also holds for POVS. Since Lemma 2 is correct, the proof of safeness for Reduction Rule 5 can also be applied to POVS.
Finally, consider Reduction Rule 6. Since \(v\) is not contained in any cycles of \(G\), \(v\) is only contained in a minimum POS-sequence \(\phi\) of \(G\) if \(v\) is contained in an \(N_{2}\) substructure of \(G\). If \(\phi\) splits \(v\), \(\phi\) must therefore split off edge \(yv\) alone at \(v\), because otherwise, the resulting vertex still has degree at least \(2\) and thus the \(N_{2}\) substructure remains intact. Observe that, after splitting off \(yv\) at \(v\), the other half of the split subsequently lies in a connected component that is a caterpillar. Since \(\phi\) has minimum size, \(\phi\) therefore only splits \(v\) once. Similarly, any minimum POS-sequence of \(G^{\prime}\) that splits \(x\) must isolate the edge \(yx\) and only splits \(x\) once. Analogously to the proof of Reduction Rule 6 in Section 3.2, we can therefore obtain a minimum POS-sequence \(\phi^{\prime}\) of \(G^{\prime}\) from a minimum POS-sequence \(\phi\) of \(G\) by replacing the uniquely defined split of \(v\) in \(\phi\) with the uniquely defined split of \(x\) and vice versa.
As in Section 3.2, we now define a reduction rule that gives an upper bound for the global potential \(\mu(G)\). To obtain this upper bound, it suffices to show that a single split operation can decrease the global potential by at most \(2\).
**Reduction Rule 9**.: _If \(\mu(G)>2k\), reduce to a trivial no-instance._
Proof of Safeness.: Consider a vertex \(v\) of \(G\) being split into two new vertices \(v_{1}\) and \(v_{2}\). We show that the global potential decreases by at most \(2\).
If \(\deg^{*}(v_{1})=0\), then \(\mu(v_{1})=0\) and thus \(\mu(v_{1})+\mu(v_{2})=\mu(v_{2})=\mu(v)\). Additionally, all neighbors of \(v_{1}\) are pendant vertices whose potential remains unchanged. Note that the potential of neighbors of \(v_{2}\) can only have decreased (by at most \(1\)) if \(\deg(v_{2})=1\). Overall, the global potential thus decreases by at most \(1\) if \(\deg^{*}(v_{1})=0\).
If \(\deg^{*}(v_{1})=1\) and \(\deg^{*}(v_{2})=1\) then \(\mu(v)=0\) and the potential of the two non-pendant neighbors decreases by at most \(1\) each.
If \(\deg^{*}(v_{1})=1\) and \(\deg^{*}(v_{2})\geq 2\), then the potential of the non-pendant neighbor of \(v_{1}\) decreases by at most \(1\) and \(\mu(v_{1})+\mu(v_{2})=\mu(v_{2})=\mu(v)-1\), hence the global potential decreases by at most \(2\).
Finally, if \(\deg^{*}(v_{1})\geq 2\) and \(\deg^{*}(v_{2})\geq 2\), then the potential of the neighbors of \(v\) does not change and \(\mu(v_{1})+\mu(v_{2})=\deg^{*}(v_{1})-2+\deg^{*}(v_{2})-2=\deg^{*}(v)-4=\mu(v)-2\), hence the global potential decreases by \(2\).
Note that all remaining cases are symmetric. Therefore, a single split can decrease the global potential by at most \(2\). Since \(\mu(G)>0\) implies that \(G\) contains an \(N_{2}\) substructure, any instance with \(\mu(G)>2k\) is therefore a no-instance by Proposition 1.
Because Reduction Rule 9 gives a linear upper bound for the global potential of \(G\), we can use Lemma 3 from Section 3.2 to obtain a linear kernel for POVS.
**Theorem**.: _The problem Pathwidth-One Vertex Splitting admits a kernel of size \(16k\). It can be computed in time \(O(m)\)._
Proof.: After exhaustively applying Reduction Rules 1 - 6, using Lemma 3 with the upper bound \(\mu(G)\leq 2k\) provided by Reduction Rule 9 yields a kernel of size \(16k\) for POVS.
To obtain this kernel in linear time, we first apply Reduction Rule 9 once in the beginning. The proof of Theorem 3 shows that the remaining Reduction Rules 1 - 6 can be applied exhaustively in time \(O(m)\).
### 4.2 Branching Algorithm
We now propose an alternative FPT algorithm for POVS using bounded search trees. We reuse Reduction Rule 3 to eliminate connected components that are pseudo-caterpillars. Similar to Section 3.1, our branching rule will remove all \(N_{2}\) substructures from the instance. For the vertex split operation, however, we need to additionally consider the possible ways to split a single vertex. The following lemma helps us limit the number of suitable splits and thus decreases the size of our branching vector.
**Lemma 5**.: _For every instance of POVS, there exists a minimum POS-sequence \(\phi\) such that every split operation in \(\phi\) splits off at most two edges._
Proof.: Consider a minimum POS-partition \(\tau\) of \(G\). In order to prove the statement of the lemma, we want to show that we can alter \(\tau\) such that, for every \(v\in S\), \(\tau(v)\) contains at most one cell with more than two elements. Let \(G^{\prime}\) denote the graph obtained from \(G\) by applying the splits corresponding to \(\tau\) to \(G\), thus \(G^{\prime}\) has pathwidth at most 1 and contains no cycles or \(N_{2}\) substructures by Proposition 1. For a cell \(c\in\tau(v)\), let \(v_{c}\) denote the corresponding vertex of \(G^{\prime}\). Note that \(v_{c}\) can have at most two neighbors of degree 2 or higher, because otherwise, we find an \(N_{2}\) substructure in \(G^{\prime}\), a contradiction. All other neighbors of \(v_{c}\) must therefore be degree-1 vertices. Now fix an arbitrary cell \(c\in\tau(v)\) with \(|c|\geq 2\) (if no such cell exists, we are already done). For every cell \(c^{\prime}\in\tau(v)\) with \(c^{\prime}\neq c\), we move \(\max(0,|c^{\prime}|-2)\) edges corresponding to degree-1 vertices in \(G^{\prime}\) from cell \(c^{\prime}\) to cell \(c\) in \(\tau(v)\) (and therefore from vertex \(v_{c^{\prime}}\) to vertex \(v_{c}\) in \(G^{\prime}\)). Since the vertex \(v_{c}\) has degree 2 or higher, adding additional degree-1 neighbors to it does not increase the pathwidth of \(G^{\prime}\) ([19, Lemma 10]). Similarly, removing the degree-1 vertices from the other vertices also does not increase the pathwidth. This leads to a partition of size \(|\tau(v)|\) for the edges incident to \(v\) such that at most one cell contains more than two edges. Since this partition can be realized by a sequence splitting off at most two edges per operation, this concludes the proof.
In addition to Reduction Rule 3, we also reuse Reduction Rule 1 to limit the number of pendant neighbors for each vertex, and Reduction Rule 9 to bound the global potential \(\mu(G)\). These two rules together bound the degree of all vertices in \(G\), which lets us state the following branching rule; see Figure 7 for an illustration.
**Branching Rule 2**.: _Let \(r\) be the root of an \(N_{2}\) substructure contained in \(G\) and let \(x\), \(y\), and \(z\) denote the three neighbors of \(r\) in \(N_{2}\). If \(\{r,x,y,z\}\cap S=\emptyset\), reduce to a trivial no-instance. Otherwise branch on the following instances:_
* _For every_ \(v\in\{x,y,z\}\cap S\)_, create a separate branch for the instance_ \((G^{\prime},S^{\prime},k-1)\)_, where_ \(G^{\prime}\) _is the graph obtained from_ \(G\) _by splitting off the edge_ \(rv\) _at_ \(v\)_._
* _If_ \(r\in S\)_: For every subset_ \(X\subseteq N(r)\) _with_ \(|X|\leq 2\) _and_ \(|X\cap\{x,y,z\}|\geq 1\)_, create a separate branch for the instance_ \((G^{\prime},S^{\prime},k-1)\)_, where_ \(G^{\prime}\) _is obtained from_ \(G\) _by splitting off the edges corresponding to_ \(X\) _at_ \(r\)_._
_We set \(S^{\prime}\coloneqq S\cup K\) in all branches, where \(K\) is the set of vertices created by the split operation._

**Lemma 6**.: _Exhaustively applying Reduction Rules 1 and 9 and Branching Rule 2 yields an equivalent instance without \(N_{2}\) substructures in time \(O((6k+12)^{k}\cdot m)\)._
Proof.: Let \(T\) denote the \(N_{2}\) substructure induced by \(r\) and its neighbors \(\{x,y,z\}\). Observe that \(T\) can only be removed from the graph by splitting one of the vertices in \(\{r,x,y,z\}\). If \(T\) contains no vertex of \(S\), we consequently have a no-instance.
If we split a vertex \(v\in\{x,y,z\}\) in order to eliminate \(T\), then we can only split off the edge \(rv\) alone, because otherwise, the resulting vertex adjacent to \(r\) still has degree \(2\). We thus only need three branches to enumerate all suitable splits of the vertices in \(\{x,y,z\}\).
If we split vertex \(r\), then splitting off any subset of \(N(r)\) that contains one or two vertices of \(\{x,y,z\}\) is suitable to eliminate \(T\). However, by Lemma 5, it suffices to consider subsets of \(N(r)\) with size at most \(2\). By Reduction Rule 9, the global potential \(\mu(G)\) is at most \(2k\), thus \(r\) can have at most \(2k+2\) neighbors of degree \(2\) or higher. Since Reduction Rule 1 limits the number of degree-\(1\) neighbors of \(r\) to at most one, \(r\) has degree at most \(2k+3\). We therefore need at most \(3\cdot(2k+3)\) branches to enumerate all subsets of \(N(r)\) of size at most \(2\) containing at least one vertex of \(\{x,y,z\}\). Together with the three branches from earlier, this yields \(6k+12\) branches, each of which reduces the parameter by \(1\).
We have shown earlier that all of our reduction rules can be applied exhaustively in linear time. Finding an \(N_{2}\) substructure in Branching Rule 2 can also be achieved in \(O(m)\) time by checking, for each vertex \(v\) in \(G\), whether \(v\) has at least three neighbors of degree \(2\) or higher. We thus find an equivalent instance without \(N_{2}\) substructures in time \(O((6k+12)^{k}\cdot m)\).
By Lemma 6, Branching Rule 2 eliminates all \(N_{2}\) substructures from the graph. Reduction Rule 3 additionally removes all pseudo-caterpillars from the graph; we therefore obtain a graph of pathwidth at most \(1\) by Proposition 1.
**Theorem**.: _The problem Pathwidth-One Vertex Splitting can be solved in time \(O((6k+12)^{k}\cdot m)\)._
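For illustration, the branch generation of Branching Rule 2 can be sketched as follows (our code, not the authors'). Each yielded pair names a vertex and the set of neighbours whose edges are split off there, so a branch is executed by calling `split` from the earlier sketch on a copy of the graph.

```python
from itertools import combinations

def branches_for_n2(adj, S, r):
    """Enumerate the candidate splits of Branching Rule 2 for an N2
    substructure with root r; yields (vertex, neighbours_to_split_off)."""
    x, y, z = [u for u in adj[r] if len(adj[u]) > 1][:3]
    for v in {x, y, z} & set(S):
        yield v, {r}                     # split off the single edge rv at v
    if r in S:
        for size in (1, 2):
            for X in combinations(sorted(adj[r], key=str), size):
                if {x, y, z} & set(X):   # must separate one of x, y, z
                    yield r, set(X)
```

With Reduction Rules 1 and 9 applied, \(\deg(r)\leq 2k+3\), so the generator yields at most \(6k+12\) pairs, matching the branching number in Lemma 6.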
## 5 Treewidth-One Vertex Splitting
In this section, we consider the variant of POVS where the goal is to obtain a graph of treewidth at most \(1\), rather than pathwidth at most \(1\). We remark that a graph \(G\) has treewidth at most \(1\) if and only if \(G\) is a forest.
Treewidth-One Vertex Splitting (TOVS)
**Input:**: An undirected graph \(G=(V,E)\), a set \(S\subseteq V\), and a positive integer \(k\).
**Question:**: Is there a sequence of at most \(k\) splits on vertices in \(S\) such that the resulting graph has treewidth at most \(1\)?
Note that the variant of TOVS with the deletion operation is exactly the problem Feedback Vertex Set, which is a well-studied NP-complete [13] problem that admits
Figure 7: (a) An \(N_{2}\) substructure consisting of the vertices \(\{r,x,y,z\}\). (b)-(c) Two of the branches of Branching Rule 2 eliminating the \(N_{2}\) substructure. The former splits off edge \(rx\) at \(x\), the latter splits off the edges \(rz\) and \(ra\) at \(r\).
a quadratic kernel [22]. Also note that, in this setting, removing degree-1 vertices from the graph yields an equivalent instance. For this reason, the variant with the explosion operation is also equivalent to Feedback Vertex Set. We thus only focus on the problem TOVS, for which we give a simple linear-time algorithm. Analogously to POS-sequences and POS-partitions, we define TOS-sequences and TOS-partitions as split sequences and split partitions, respectively, that result in a graph of treewidth at most 1.
**Lemma 7**.: _Every minimum TOS-sequence of a graph \(G\) has size \(|E(G)|-|V(G)|+1\)._
Proof.: We assume without loss of generality that \(G\) is connected. Consider a minimum TOS-partition \(\tau\) of size \(k\) for \(G\) and let \(G^{\prime}\) be the graph resulting from \(\tau\). Assume that \(G^{\prime}\) is disconnected. Then there exists a vertex \(v\) and two distinct cells \(c_{1},c_{2}\in\tau(v)\), such that the vertices \(v_{c_{1}}\) and \(v_{c_{2}}\) are not connected in \(G^{\prime}\). Since \(v_{c_{1}}\) and \(v_{c_{2}}\) are not connected, merging them into a single vertex does not introduce any cycles in \(G^{\prime}\). We can thus merge \(c_{1}\) and \(c_{2}\) into a single cell in \(\tau(v)\) and we obtain a TOS-partition of size \(k-1\) for \(G\), a contradiction to the minimality of \(\tau\). Therefore, for any minimum TOS-sequence of \(G\), the resulting graph \(G^{\prime}\) must be connected and is thus a tree with \(|E(G^{\prime})|=|V(G^{\prime})|-1\). Since a single split operation increases the number of vertices by exactly 1 and does not alter the number of edges, we have \(|E(G^{\prime})|=|E(G)|\) and \(|V(G^{\prime})|=|V(G)|+k\), and thus \(k=|E(G)|-|V(G)|+1\).
Note that a graph \(G\) with a set \(S\) defining its splittable vertices has a TOS-sequence if and only if \(G[V(G)\setminus S]\) is acyclic. Together with Lemma 7, it thus follows that an instance \((G,S,k)\) of TOVS is a yes-instance if and only if \(G[V(G)\setminus S]\) is acyclic and \(k\geq|E(G)|-|V(G)|+1\). Since the acyclicity of a graph can be tested in linear time using a simple depth-first search, we obtain the following result.
**Theorem**.: _The problem TOVS can be solved in time \(O(n+m)\)._
In fact, Lemma 7 implies that the problem of determining whether a graph can be turned into a forest using at most \(k\) splits is equivalent to the problem Feedback Edge Set, which asks whether a given graph can be turned into a forest using at most \(k\) edge deletions.
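Putting the section together yields an essentially one-pass decision procedure. The sketch below is our illustration, reusing `subgraph`, `is_acyclic`, and `components` from above; Lemma 7 is stated for connected graphs, and summing \(|E|-|V|+1\) over all connected components gives the term \(m-n+c\).

```python
def solve_tovs(adj, S, k):
    """Linear-time test for TOVS (sketch)."""
    # every cycle must contain a splittable vertex
    if not is_acyclic(subgraph(adj, set(adj) - set(S))):
        return False
    n = len(adj)
    m = sum(len(nb) for nb in adj.values()) // 2
    c = len(components(adj))
    return k >= m - n + c      # minimum number of splits needed
```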
## 6 FPT Algorithms for Splitting and Exploding to \(\text{MSO}_{2}\)-Definable Graph Classes of Bounded Treewidth
While the previous sections focused on the problems of obtaining graphs of pathwidth and treewidth at most 1, respectively, using at most \(k\) vertex splits or explosions on the input graph, we now consider the problem of obtaining other graph classes using these operations. With the following problems, we generalize the problems from the previous sections.
**\(\Pi\) Vertex Splitting (\(\Pi\)-VS)**

**Input:** An undirected graph \(G=(V,E)\), a set \(S\subseteq V\), and a positive integer \(k\).

**Question:** Is there a sequence of at most \(k\) splits on vertices in \(S\) such that the resulting graph is contained in \(\Pi\)?

**\(\Pi\) Vertex Explosion (\(\Pi\)-VE)**

**Input:** An undirected graph \(G=(V,E)\), a set \(S\subseteq V\), and a positive integer \(k\).

**Question:** Is there a set \(W\subseteq S\) with \(|W|\leq k\) such that the graph resulting from exploding all vertices in \(W\) is contained in \(\Pi\)?
Table 1: Parameterized Complexity of Vertex Splitting to Pathwidth at most 1
In the following, we show that \(\Pi\)-VS and \(\Pi\)-VE are both FPT parameterized by the solution size \(k\), if the graph class \(\Pi\) is \(\text{MSO}_{2}\)-definable and has bounded treewidth. We first consider the split operation because here we can use results from related problems.
### Vertex Splitting
Nollenburg et al. [17] showed that, for any minor-closed graph class \(\Pi\), the graph class \(\Pi_{k}\) containing all graphs that can be modified to a graph in \(\Pi\) using at most \(k\) vertex splits is also minor-closed. Robertson and Seymour [21] showed that every minor-closed graph class has a constant-size set of forbidden minors and that it can be tested in cubic time whether a graph contains a given fixed graph as a minor. Since \(\Pi_{k}\) is minor-closed, this implies the existence of a non-uniform FPT-algorithm for the problem \(\Pi\)-VS.
[[17]] For every minor-closed graph class \(\Pi\), the problem \(\Pi\)-VS is non-uniformly FPT parameterized by the solution size \(k\).
In the following, we show that the problem \(\Pi\)-VS is uniformly FPT parameterized by \(k\) if \(\Pi\) is \(\text{MSO}_{2}\)-definable and has bounded treewidth. Since every minor-closed graph class is \(\text{MSO}_{2}\)-definable [21], this improves the result from Proposition 2 for graph classes of bounded treewidth.
Eppstein et al. [10] showed that the problem of deciding whether a given graph \(G\) can be turned into a graph of class \(\Pi\) by splitting each vertex of \(G\) at most \(k\) times can be expressed as an \(\text{MSO}_{2}\) formula on \(G\), if \(\Pi\) itself is \(\text{MSO}_{2}\)-definable. Using Courcelle's Theorem [6], this yields an FPT-algorithm parameterized by \(k\) and the treewidth of the input graph. Their algorithm exploits the fact that the split operations create at most \(k\) copies of each vertex in the graph. Since the same also applies for the problem \(\Pi\)-VS, where we may apply at most \(k\) splits overall, their algorithm can be straightforwardly adapted for \(\Pi\)-VS, thereby implying the following result.
For every \(\text{MSO}_{2}\)-definable graph class \(\Pi\), the problem \(\Pi\)-VS is FPT parameterized by the solution size \(k\) and the treewidth of the input graph.
For a graph class \(\Pi\) of bounded treewidth, recall that \(\text{tw}(\Pi)\) denotes the maximum treewidth among all graphs in \(\Pi\). With the following lemma, we show that, if the target graph class \(\Pi\) has bounded treewidth, then every yes-instance of \(\Pi\)-VS must also have bounded treewidth.
For a graph class \(\Pi\) of bounded treewidth, let \(\mathcal{I}=(G,S,k)\) be an instance of \(\Pi\)-VS. If \(\text{tw}(G)>k+\text{tw}(\Pi)\), then \(\mathcal{I}\) is a no-instance.
Proof.: We first show that a single split operation can reduce the treewidth of \(G\) by at most \(1\). Assume, for the sake of contradiction, that we can obtain a graph \(G^{\prime}\) of treewidth less than \(\text{tw}(G)-1\) by splitting a single vertex \(v\) of \(G\) into vertices \(v_{1}\) and \(v_{2}\) of \(G^{\prime}\). Let \(\mathcal{T}\) denote a minimum tree decomposition of \(G^{\prime}\). Remove all occurrences of \(v_{1}\) and \(v_{2}\) in \(\mathcal{T}\) and add \(v\) to every bag of \(\mathcal{T}\). Observe that the result is a tree decomposition of width less than \(\text{tw}(G)\) for \(G\), a contradiction. A single split operation thus decreases the treewidth of the graph by at most \(1\). Since every graph \(G^{\prime}\in\Pi\) has \(\text{tw}(G^{\prime})\leq\text{tw}(\Pi)\), it is thus impossible to obtain a graph of \(\Pi\) with at most \(k\) vertex splits if \(\text{tw}(G)>k+\text{tw}(\Pi)\).
Given a graph class \(\Pi\) of bounded treewidth, we first determine in time \(f(k+\text{tw}(\Pi))\cdot n\) whether the treewidth of \(G\) is greater than \(k+\text{tw}(\Pi)\) [5]. If this is the case, then we can immediately report a no-instance by Proposition 3. Otherwise, we know that \(\mathrm{tw}(G)\leq k+\mathrm{tw}(\Pi)\). Since \(\mathrm{tw}(\Pi)\) is a constant, we have \(\mathrm{tw}(G)\in O(k)\), and thus Corollary 1 yields the following result.
For every \(\text{MSO}_{2}\)-definable graph class \(\Pi\) of bounded treewidth, the problem \(\Pi\)-VS is FPT parameterized by the solution size \(k\).
### Vertex Explosion
We now turn to the problem variant \(\Pi\)-VE that uses vertex explosions instead of vertex splits. Analogously to Section 6.1, we let \(\Pi_{k}^{\times}\) denote the graph class containing all graphs that can be modified to a graph in \(\Pi\) using at most \(k\) vertex explosions. For arbitrary minor-closed graph classes \(\Pi\), the class \(\Pi_{k}^{\times}\) is not necessarily minor-closed, as the counterexample in Figure 8 shows. It is therefore not clear whether Proposition 2 also holds for \(\Pi\)-VE. Note that, in Figure 8, splitting off a single edge in \(H_{1}\) yields a graph of \(\Pi\). The question whether a graph of \(\Pi\) can be obtained by applying arbitrarily many vertex splits to at most \(k\) vertices in the input graph is therefore not equivalent to \(\Pi\)-VE for arbitrary graph classes \(\Pi\).
Additionally, the FPT-algorithm for \(\Pi\)-VS derived from Eppstein et al. [10] cannot be straightforwardly adapted for \(\Pi\)-VE, since the number of new vertices resulting from explosions is not bounded by a function in \(k\). However, we use a similar approach for \(\Pi\)-VE by defining an \(\text{MSO}_{2}\) formula on an auxiliary graph, again yielding an FPT algorithm parameterized by the solution size \(k\) for \(\text{MSO}_{2}\)-definable graph classes \(\Pi\) of bounded treewidth.
Given an instance \((G,S,k)\) of \(\Pi\)-VE, we first construct the auxiliary graph \(G^{\times}=(V(G)\cup D,E^{\prime})\) by subdividing each edge of \(G\) twice; see Figure 9. The vertices of \(D\) denote the new subdivision vertices. The subdivision vertices adjacent to a vertex \(v\in V(G)\) in \(G^{\times}\) represent the vertices that result from exploding \(v\); see Figure 8(c).
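The construction of \(G^{\times}\) is purely mechanical; a minimal sketch follows, where a subdivision vertex is named by an ordered pair of edge endpoints (the naming scheme is ours, chosen only so that the subdivision vertex adjacent to \(u\) is easy to identify).

```python
def build_auxiliary_graph(vertices, edges):
    """Subdivide every edge of G twice to obtain the auxiliary graph G^x.

    Each edge {u, v} becomes the path u - (u,v) - (v,u) - v, where the
    pair (u, v) names the subdivision vertex adjacent to u.  Returns the
    vertex set of G^x, its edge list, and the set D of subdivision vertices.
    """
    d, aux_edges = set(), []
    for u, v in edges:
        du, dv = (u, v), (v, u)  # two fresh subdivision vertices
        d.update([du, dv])
        aux_edges += [(u, du), (du, dv), (dv, v)]
    return set(vertices) | d, aux_edges, d
```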
Given a set \(W\subseteq S\subseteq V(G)\) representing the vertices of \(G\) that are chosen to be exploded,
Figure 8: (a) Two forbidden minors \(H_{1}\) and \(H_{2}\) characterizing a minor-closed graph class \(\Pi\). (b) A graph \(G\notin\Pi\) that can be modified into the graph \(G^{\prime}\in\Pi\) by exploding vertex \(x\), thus \(G\in\Pi_{1}^{\times}\). However, exploding at most one vertex in the graph \(H_{1}\) yields either \(H_{1}\) or \(H_{2}\), thus \(H_{1}\notin\Pi_{1}^{\times}\). Since \(H_{1}\) is a minor of \(G\), \(\Pi_{1}^{\times}\) is therefore not minor-closed.
Figure 9: (a) An instance \((G,S,2)\) of \(\Pi\)-VE. (b) The corresponding auxiliary graph \(G^{\times}\) obtained by subdividing each edge in \(G\) twice. (c) The graph obtained by exploding \(\{x_{1},x_{2}\}\) in \(G\) is the highlighted minor of \(G^{\times}\). Since \(\Pi\) is \(\text{MSO}_{2}\)-definable, one can express \(\Pi\)-VE using an \(\text{MSO}_{2}\) formula on \(G^{\times}\).
our \(\textsc{MSO}_{2}\) formula on \(G^{\times}\) works as follows. The graph that is obtained from \(G\) by exploding the vertices of \(W\) is exactly the graph \(G_{W}^{\times}\) obtained from \(G^{\times}\) by removing all vertices of \(W\) and by contracting all subdivision vertices of \(D\) adjacent to a vertex \(v\in V(G)\setminus W\) into \(v\); see Figure 8(c) for an example. We thus simply need to test whether the minor \(G_{W}^{\times}\) of \(G^{\times}\) is contained in \(\Pi\).
Let \(\Pi\) be an \(\textsc{MSO}_{2}\)-definable graph class and let \(\varphi\) denote the corresponding \(\textsc{MSO}_{2}\)-formula such that \(G^{\times}\models\varphi\) if and only if \(G^{\times}\) is contained in \(\Pi\). We let \(V^{\times}\coloneqq V(G^{\times})\) and \(E^{\times}\coloneqq E(G^{\times})\) denote the free variables of \(\varphi\) that correspond to the vertices and edges of \(G^{\times}\), respectively. In order to test whether the minor \(G_{W}^{\times}\) of \(G^{\times}\) is contained in \(\Pi\) for a given set \(W\), we now modify \(\varphi\) to a formula \(\varphi^{\prime}\), such that \(G^{\times}\models\varphi^{\prime}(W)\) if and only if \(G_{W}^{\times}\models\varphi\). In addition to \(V^{\times}\) and \(E^{\times}\), we also use the free variables \(V\coloneqq V(G)\), \(D\), and \(S\) to identify the vertices of \(G\) in \(G^{\times}\), the subdivision vertices of \(G^{\times}\), and the splittable vertices of \(G\), respectively.
For every predicate of the form "\(v\in V^{\times}\)" in \(\varphi\), we need to ensure that no vertices of \(W\) and no subdivision vertices adjacent to vertices of \(V\setminus W\) are allowed. We thus replace every predicate "\(v\in V^{\times}\)" with the following predicate:
\[v\in V_{W}^{\times}\coloneqq v\in V^{\times}\setminus W\land\neg(\exists u \in V\setminus W:\textsc{adj}^{\times}(u,v)).\]
We note that the predicate \(\textsc{adj}^{\times}(u,v)\) is true if and only if \(u\) and \(v\) are adjacent in \(G^{\times}\).
Furthermore, we let the edges of \(G^{\times}\) connecting the adjacent subdivision vertices represent the edges of \(G_{W}^{\times}\) by replacing the predicate "\(e\in E^{\times}\)" as follows:
\[e\in E_{W}^{\times}\coloneqq\exists v_{1},v_{2}\in D:v_{1}\neq v_{2}\land \textsc{inc}^{\times}(e,v_{1})\land\textsc{inc}^{\times}(e,v_{2}).\]
The formula \(\textsc{inc}^{\times}(e,v)\) is true if and only if edge \(e\) is incident to vertex \(v\) in \(G^{\times}\).
Finally, we need to redefine the edge-vertex incidence predicate of \(\varphi\) to be consistent with our new edge and vertex predicates from above. Since the edges of \(G_{W}^{\times}\) are represented by edges connecting adjacent subdivision vertices in \(G^{\times}\), we simply need to additionally account for the case where the given vertex is adjacent to one of the endpoints of the specified edge. This corresponds to a vertex of \(D\) being contracted into an adjacent vertex of \(V\setminus W\) as described earlier.
\[\textsc{inc}_{W}^{\times}(e,v)\coloneqq v\in V_{W}^{\times}\wedge e\in E_{W} ^{\times}\land(\textsc{inc}^{\times}(e,v)\lor\exists v^{\prime}\in D: \textsc{adj}^{\times}(v,v^{\prime})\land\textsc{inc}^{\times}(e,v^{\prime}))\]
We remark that the formulas described above can be straightforwardly translated to pure \(\textsc{MSO}_{2}\). Using the following formula on the graph \(G^{\times}\), we can now model whether exploding a set \(W\subseteq V(G)\) in the original graph \(G\) yields a graph of \(\Pi\).
\[\Pi\textsc{-Explodable}(W)=W\subseteq S\land\varphi^{\prime}(W)\]
Since, for any fixed \(\textsc{MSO}_{2}\)-definable graph class \(\Pi\), the corresponding formula \(\varphi\) (and thus also \(\varphi^{\prime}\)) has constant size, so does the formula \(\Pi\textsc{-Explodable}\). We can thus determine whether \(\Pi\textsc{-Explodable}\) is satisfiable for \(G^{\times}\) in \(f(\mathrm{tw}(G^{\times}))\cdot n\) time using Courcelle's Theorem [6]. Using the optimization version of Courcelle's Theorem [3], we can determine in the same time whether there exists a set \(W\) with \(|W|\leq k\) that satisfies this formula. Note that subdividing edges does not change the treewidth of a graph, thus \(\mathrm{tw}(G^{\times})=\mathrm{tw}(G)\). We therefore obtain the following result.
For every \(\textsc{MSO}_{2}\)-definable graph class \(\Pi\), the problem \(\Pi\textsc{-VE}\) is FPT parameterized by the treewidth of the input graph.
We now again consider the case where the graph class \(\Pi\) has bounded treewidth. Note that Proposition 3 also holds for \(\Pi\)-VE, as the proof can be applied almost verbatim to vertex explosions. For any yes-instance \((G,S,k)\) of \(\Pi\)-VE, we thus have \(\mathrm{tw}(G)\leq\mathrm{tw}(\Pi)+k\) and we can report any input graph of higher treewidth as a no-instance. Since \(\mathrm{tw}(\Pi)\) is a constant, we obtain the following result using Lemma 8.
For every \(\text{MSO}_{2}\)-definable graph class \(\Pi\) of bounded treewidth, the problem \(\Pi\)-VE is FPT parameterized by the solution size \(k\).
## 7 Conclusion
In this work, we studied the problems Pathwidth-One Vertex Explosion (POVE) and Pathwidth-One Vertex Splitting (POVS), parameterized by the solution size \(k\).
For POVE, we gave an \(O(4^{k}\cdot m)\)-time branching algorithm and showed that POVE admits a quadratic kernel that can be computed in linear time. This improves on a recent result by Ahmed et al. [2], who developed a kernel of size \(O(k^{6})\) for a more restricted version of the problem.
For POVS, we developed an \(O((6k+12)^{k}\cdot m)\)-time branching algorithm and gave a kernelization algorithm that computes a kernel of size \(16k\) in linear time, thus showing that POVS is FPT with respect to the solution size \(k\). Interestingly, the branching algorithm for POVS performs significantly worse than its counterpart for POVE, but the kernelization algorithm yields a smaller kernel. This is because, for the POVS problem, the branching algorithm has to additionally consider multiple ways a single vertex can be split. At the same time, however, a single vertex split eliminates only a few forbidden substructures, an observation that helped us bound the number of vertices in yes-instances for our kernelization.
Finally, we more generally considered the problem of obtaining a graph of a specific graph class \(\Pi\) using at most \(k\) vertex splits (respectively explosions). For \(\text{MSO}_{2}\)-definable graph classes \(\Pi\) of bounded treewidth, we obtained an FPT algorithm parameterized by the solution size \(k\). These graph classes include, for example, the outerplanar graphs, the pseudoforests, and the graphs of treewidth (respectively pathwidth) at most \(c\) for some constant \(c\).
|
2308.01415 | An Effective Data Creation Pipeline to Generate High-quality Financial
Instruction Data for Large Language Model | At the beginning era of large language model, it is quite critical to
generate a high-quality financial dataset to fine-tune a large language model
for financial related tasks. Thus, this paper presents a carefully designed
data creation pipeline for this purpose. Particularly, we initiate a dialogue
between an AI investor and financial expert using ChatGPT and incorporate the
feedback of human financial experts, leading to the refinement of the dataset.
This pipeline yielded a robust instruction tuning dataset comprised of 103k
multi-turn chats. Extensive experiments have been conducted on this dataset to
evaluate the model's performance by adopting an external GPT-4 as the judge.
The promising experimental results verify that our approach led to significant
advancements in generating accurate, relevant, and financial-style responses
from AI models, and thus providing a powerful tool for applications within the
financial sector. | Ziao Wang, Jianning Wang, Junda Wu, Xiaofeng Zhang | 2023-07-31T07:23:11Z | http://arxiv.org/abs/2308.01415v1 | An Effective Data Creation Pipeline to Generate High-quality Financial Instruction Data for Large Language Model
###### Abstract
In the current era of large language models, it is critical to generate high-quality financial datasets for fine-tuning large language models on finance-related tasks. Thus, this paper presents a carefully designed data creation pipeline for this purpose. In particular, we initiate a dialogue between an AI investor and a financial expert using ChatGPT and incorporate the feedback of human financial experts, leading to the refinement of the dataset. This pipeline yielded a robust instruction-tuning dataset comprising 103k multi-turn chats. Extensive experiments have been conducted on this dataset to evaluate model performance by adopting an external GPT-4 as the judge. The promising experimental results verify that our approach leads to significant advancements in generating accurate, relevant, and financial-style responses from AI models, thus providing a powerful tool for applications within the financial sector.
## 1 Introduction
In recent years, the abilities of pre-trained language models have improved significantly, with instruction tuning playing a pivotal role in these advancements. Despite these improvements, a critical gap remains in the application of these models within the financial sector: inaccurate or irrelevant responses from these models could lead to significant financial implications, underscoring the importance of enhancing their performance on financial tasks.
In the literature, several approaches have been proposed to generate instruction data for fine-tuning a large language model on downstream tasks (Taori et al. [10]; Xu et al. [14]). Although these approaches employ state-of-the-art LLMs like GPT-4 [7], the quality of the generated data is not suitable for finance-related tasks. To cope with these challenges, this paper introduces a novel data creation pipeline designed for these high-stakes domains. The cornerstone of our approach is to leverage the in-context learning ability of contemporary large language models [1, 13] and supplement it with a high-quality corpus sourced directly from the financial domain, such as financial reports. This allows us to provide both real-world information and processed knowledge, thereby improving the model's ability to generate accurate and relevant responses.
As illustrated in Figure 1, our approach begins with a high-quality corpus such as a financial report. Using ChatGPT's strong in-context learning ability, we simulate a dialogue between an investor and a financial expert. The generated questions are clustered and then shown to human financial experts for refinement and feedback. This process prunes low-quality or non-financial-style questions; the refined questions are then sampled to continue the data collection pipeline. Our method resulted in a dataset consisting of 103k multi-turn chats, which was used for instruction tuning on open-sourced
Figure 1: An illustrative example of the data collection pipeline.
language models such as LLama [13]. We further conducted comprehensive tests on the models' performance on financial tasks, including a GPT-4 evaluation with specifically created questions. The contributions of this paper are summarized as follows:
* We propose a novel data creation pipeline specifically designed for high-stakes domains, such as finance, and generate a high-quality instruction-tuning dataset consisting of 103k multi-turn chats.
* We automatically generate a set of high-quality financial questions which can be used to comprehensively evaluate model performance.
* We trained a large language model using this dataset and extensively evaluated the model's performance. Both the dataset and the questions will be released later for public use.
## 2 Related Work
Instruction tuning has emerged as a fundamental component in the advancement of pre-trained language models. Large language models such as GPT-3 and ChatGPT have achieved substantial enhancements using this technique. The underlying idea is to align the model's behavior with human values in a specific domain by utilizing an instructional dataset.
**Instruction tuning on LLMs.** Ouyang et al. [8] introduced InstructGPT, designed to comply with instructions presented in natural language and produce useful responses. The model's performance exceeded that of GPT-3 after instruction tuning, substantiating the beneficial impact of the technique. Databricks [4] presented Dolly, an instruction-tuned large language model. Dolly was trained on a dataset amalgamated from various sources, including the InstructGPT dataset by OpenAI and a fresh dataset crafted by Databricks. This new dataset encompasses instructions and outputs from real-world tasks undertaken by data scientists and engineers. Dolly exhibited outstanding capabilities in tasks that necessitated understanding and generating intricate text, alongside following instructions. Kopf et al. [5] proposed a large-scale dataset, OpenAssistant Conversations, intended to expedite the advancement of aligned language models. This dataset is the product of a crowdsourcing endeavor and includes over 1.5 million messages from more than 10,000 contributors. The authors also showcased the results of a user preference study, which established that Pythia-12B, a model fine-tuned on this dataset, represents a formidable rival to OpenAI's gpt-3.5-turbo model. Vicuna [2] is an open-source chatbot that was fine-tuned from the LLaMA model [13] on user-shared dialogues gathered from ShareGPT and has demonstrated promising performance.
**Automatic collection of instruction-tuning data.** The automated collection of instruction-tuning data is of crucial significance, considering the high costs and potential human bias associated with crowdsourcing. Wang et al. [12] proposed an innovative method termed 'self-instruct', leveraging a large language model to generate a diverse range of instructions and associated inputs and outputs, thereby constructing a novel training dataset. Taori et al. [10] introduced Alpaca, a model fine-tuned from Meta's LLaMA [13] 7B model, which was trained on 52K instruction-following demonstrations created using text-davinci-003. The authors postulate that Alpaca exhibits many behaviors akin to OpenAI's text-davinci-003, yet remains remarkably compact and is both cost-effective and simple to reproduce. Xu et al. [14] proposed to employ ChatGPT to engage in dialogues with itself, consequently creating a novel dataset for training.
Existing methodologies for automatically collecting instructional datasets often employ a question seed as an initiation point. However, the answers are purely generated based on the capabilities of the large language models in use. This approach has a significant limitation as these models cannot generate accurate factual knowledge autonomously. Our method proposes an advancement in this regard, leveraging the powerful in-context learning capabilities of Large Language Models (LLMs). We further enhance the quality of generated data by enforcing adherence to knowledge derived from high-quality corpora. This proposed approach facilitates a more nuanced and accurate generation of responses, representing a significant step forward in the instruction-tuning of pre-trained language models.
## 3 The Proposed Data Collection Pipeline
In this section, we detail the proposed data collection pipeline. Our data collection process consists of four main steps: (1) selecting a high-quality corpus, (2) simulating dialogues, (3) expert revision of questions, and (4) sampling and augmenting the dataset. Through these steps, we were able to construct a rich dataset that effectively bridges the gap between LLMs and the specific requirements of the financial domain.
### Data Source
Our collection pipeline begins with the integration of a high-quality corpus. As our objective is to create a dataset pertinent to the Chinese financial domain, we decided to utilize Brokerage Research Reports, which we hereafter refer to as 'financial reports'. These reports are authored by financial professionals and embody a high standard of accuracy and expertise. Furthermore, these reports are publicly accessible, offering a robust and readily available resource for our data collection efforts 1.
Footnote 1: [http://data.eastmoney.com/report/](http://data.eastmoney.com/report/)
### Simulating Dialogues
Once the corpus was selected, we employed the in-context learning ability of ChatGPT to simulate an investor-financial expert conversation. We used the content of the financial reports as the context for the dialogue. The model was prompted to emulate an investor's perspective, asking insightful questions based on the information presented in the financial reports. Subsequently, it responded to these queries from a financial expert's viewpoint, utilizing the facts and figures given in the report. An illustrative example of the designed prompt and the resulting conversation is shown in the upper and bottom parts of Figure 1, respectively.
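A minimal sketch of this simulation loop is shown below; the prompt wording, model name, and fixed turn count are illustrative assumptions on our part rather than the authors' exact configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simulate_dialogue(report_text, n_turns=4, model="gpt-3.5-turbo"):
    """Simulate an investor / financial-expert dialogue grounded in a report."""
    system_msg = (
        "You are given a brokerage research report. Alternate between two "
        "roles: an investor asking insightful questions about the report, "
        "and a financial expert answering strictly from its facts and figures."
    )
    messages = [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": report_text},
    ]
    turns = []
    for _ in range(n_turns):
        reply = client.chat.completions.create(model=model, messages=messages)
        content = reply.choices[0].message.content
        turns.append(content)
        messages.append({"role": "assistant", "content": content})
        messages.append({"role": "user", "content": "Continue the dialogue."})
    return turns
```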
### Expert Revision
After conducting the dialogue simulation, we compiled a list of questions generated by the model. To ensure the diversity and quality of the data collected, we implemented a two-stage expert review process.
In the first stage, the questions were grouped based on their thematic similarity using a text clustering algorithm. Representative questions from each cluster were sampled and passed to a panel of five financial experts, who evaluated whether the questions adequately covered a wide range of financial topics. If any categories of financial topics were absent from the clusters, the experts were instructed to contribute questions in those areas. This process allowed us to identify any prevalent themes or concerns, ensuring that our final dataset encompassed a comprehensive range of financial discussions.
In the second stage, questions were randomly sampled and presented to a team of financial experts for refinement. Their task was to identify and eliminate any questions that were irrelevant, misleading, or inconsistent with typical financial discourse. Any questions identified as such, as well as those with a high degree of similarity (greater than 99%), were removed. This step was crucial in ensuring the relevance and quality of our final dataset. As shown in Table 1, we have, for the sake of illustration, randomly selected three themes and typical questions within these themes. To enhance readability, we have translated these examples into English.
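The two mechanical pieces of this review, thematic clustering and near-duplicate removal, can be sketched as follows. The TF-IDF features and the cluster count are our own illustrative choices (the paper does not specify the embedding or clustering algorithm), and for Chinese questions one would substitute a suitable tokenizer or sentence embeddings.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def cluster_and_dedup(questions, n_clusters=20, sim_threshold=0.99):
    """Group questions by theme and drop near-duplicates (similarity > 99%)."""
    features = TfidfVectorizer().fit_transform(questions)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

    sim = cosine_similarity(features)  # O(n^2), acceptable for a sketch
    kept, kept_idx = [], []
    for i, question in enumerate(questions):
        if all(sim[i, j] <= sim_threshold for j in kept_idx):
            kept.append((labels[i], question))
            kept_idx.append(i)
    return kept  # (cluster label, question) pairs for expert review
```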
### Sampling and Data Augmentation
After the expert revision process, a random sample of the refined questions was selected to be re-entered into the data collection pipeline. These questions were used to stimulate further dialogues with the model, effectively expanding the size and diversity of our dataset. This process of sampling and augmentation was repeated multiple times.
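As a rough sketch of this loop (reusing the hypothetical `simulate_dialogue` helper from Section 3.2; the round and sample-size parameters are placeholders, not values reported by the paper):

```python
import random


def augment(reports, seed_questions, rounds=3, sample_size=500):
    """Iteratively grow the dataset by re-entering sampled refined questions."""
    dataset, questions = [], list(seed_questions)
    for _ in range(rounds):
        batch = random.sample(questions, min(sample_size, len(questions)))
        for q in batch:
            report = random.choice(reports)
            # seed a new simulated dialogue with a refined question
            dataset.append(simulate_dialogue(report + "\nFirst question: " + q))
        # questions extracted from the new dialogues would be expert-reviewed
        # and appended to `questions` before the next round
    return dataset
```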
The final result of our data collection process was a robust dataset of 103k multi-turn chats and the statistics of the collected dataset are reported in Table 2. The topic distribution of the collected dataset is shown in Figure 2. This dataset served as a resource for instruction tuning, enabling the model to generate precise and relevant responses when faced with financial queries.
## 4 Experiments
To systematically measure the impact of our data creation pipeline, we fine-tuned state-of-the-art language models using our constructed dataset. The following sub-sections shows the details of our experiments.
### Model Setup and Tuning
We performed experiments on both foundational language models, such as LLama [13], and instruction-tuned models
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Mean & Q-5\% & Q-95\% \\ \hline \# dialog turns & 4.0 & 3.0 & 6.0 \\ \# words per question & 13.9 & 7.6 & 23.5 \\ \# words per answer & 78.3 & 46.9 & 116.9 \\ \# words per dialog & 714.5 & 426.0 & 1067.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of our collected dataset, ‘Q’ refers to quantile
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Theme: Risk and Investment Opportunities** & _What are the risk factors and future development prospects of [Company Name]?_ \\ _What are the operational pressures and risks that the [Company Name] is facing in the short term?_ \\ _Which sectors will experience the most stable growth in the future business development of the [Company Name]?_ \\ \hline
**Theme: Company Financial Status** & _Will the [Company Name] raise funds for expansion? What will be the scale of the expansion?_ \\ _What is [Company Name]’s production capacity globally?_ \\ _In Q4, what is the expected growth in shipments for [Company Name]?_ \\ \hline
**Theme: New Energy Industry** & _What do you think is the impact of new energy vehicle production and sales data on the lithium sector?_ \\ _What role do you think the control of domestic lithium resources will play in ensuring supply chain security for the future new energy industry?_ \\ _The rising price of lithium resources, what kind of impact will it have on China’s lithium battery industry and electric vehicle industry?_ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Illustration of themes and questions through clustering process.
Figure 2: Topic distribution of the collected dataset
like Vicuna [2]. The key difference between these two models lies in the additional tuning that Vicuna incorporates. Vicuna is tuned on top of LLama using instructional dialogs, which already equips it with the ability to follow instructions in common scenarios. In our experiments, we utilized models of varying sizes--7B, 13B, and 30B. The experiments were designed to ascertain: (1) whether tuning our collected dataset can enhance model performance on financial tasks; (2) whether the tuned model can maintain performance on common tasks; and (3) which model, once tuned, demonstrates superior performance.
To ensure a comprehensive examination, we applied both fine-tuning and delta-tuning methods in our study. For delta-tuning, we utilized the LoRa technique, with parameters 'r' and 'alpha' both set to 8. We configured the dropout rate to be 0.05 and applied the LoRa module to the Q and V matrices in the attention layer. The learning rate was uniformly set at 2e-5 for both tuning methods, and we employed AdamW as our optimizer. The maximum tokens were configured at 2048 for the model.
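For reference, the delta-tuning configuration described above corresponds roughly to the following setup with the `peft` library. The base-model path is a placeholder, and mapping the 'Q and V matrices' to the `q_proj`/`v_proj` modules is our reading of the standard LLaMA layer naming.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder

lora_config = LoraConfig(
    r=8,                                  # LoRA rank 'r'
    lora_alpha=8,                         # scaling 'alpha'
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # Q and V attention matrices
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# training loop (maximum sequence length 2048) omitted
```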
### Evaluation
We employed automatic evaluation methods to evaluate model performance on general tasks. Additionally, we specifically designed a GPT-4 evaluation on standard financial questions to assess the model's proficiency in financial tasks. The automatic evaluation includes the following tasks:
* XStoryCloze [6] is a multilingual dataset used to assess commonsense reasoning in the areas of story comprehension, story generation, and script learning. In this test, a system is tasked with selecting the appropriate ending for a four-sentence story.
* PAWS-X [15] (pawsk-zh in Table 3) is a cross-lingual dataset used to evaluate models' ability to identify paraphrases. Each pair of sentences in the dataset has varying levels of paraphrasing, making it challenging for models to distinguish between them accurately.
* xnli [3] is a benchmark dataset and evaluation task for cross-lingual sentence understanding, designed to overcome the limitations of language-specific models by extending the MultiNLI corpus to 15 languages, facilitating research in low-resource cross-language transfer and multilingual sentence understanding.
* xcopa [9] and xwinograd [11] are both multilingual datasets used to evaluate the causal commonsense reasoning ability of the models.
### Automatic Evaluation Results
The results in Table 3 showcase the performance of the Large Language Models LLAMA-7b, LLAMA-13b, and LLAMA-30b, with and without LORA tuning. Key takeaways include:
* LORA tuning consistently boosts performance across all tasks, emphasizing its effectiveness in enhancing Chinese language comprehension and generation.
* Generally, larger models perform better, but the relationship isn't linear. Model performance also relies on factors like tuning technique and dataset specifics.
* LORA-tuned models significantly excel in the reasoning tasks of XCOPA and XWinograd, indicating their robust reasoning capabilities in Chinese after being fine-tuned on our dataset.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline llama-7b & llama-13b & llama-30b & llama-7b-lora & llama-13b-lora & llama-30b-lora & gpt-3.5 \\ \hline
1.73 & 2.09 & 3.18 & 6.59 & 6.82 & 7.36 & 8.09 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average GPT-4 evaluation scores.
\begin{table}
\begin{tabular}{l|l c c c c c} \hline \hline & & xstory-cloze-zh & xnli-zh & pawsk-zh & xcopa-zh & xwinograd-zh \\ \hline \multirow{8}{*}{llama} & llama-7b & 0.5493 & 0.3622 & 0.4910 & 0.5620 & 0.6369 \\ & llama-7b-lora & 0.5612 & 0.3491 & 0.4965\(\uparrow\) & 0.584 & 0.6429 \\ & llama-7b-finetune & 0.5608 & 0.3633\(\uparrow\) & 0.4872 & 0.5902 & 0.6348 \\ & llama-13b & 0.5652 & 0.3445 & 0.4520 & 0.5840 & 0.7003 \\ & llama-13b-lora & 0.5917 & 0.3403 & 0.4540 & 0.6021 & 0.6838 \\ & llama-13b-finetune & 0.593 & 0.3421 & 0.4605 & 0.6120 & 0.6846 \\ & llama-30b & 0.5857 & 0.3351 & 0.4590 & 0.6220 & 0.7123 \\ & llama-30b-lora & **0.6267\(\uparrow\)** & 0.3463 & 0.4606 & **0.6358\(\uparrow\)** & **0.7149\(\uparrow\)** \\ \hline \multirow{8}{*}{vicuna} & vicuna-7b & 0.6029 & **0.3796\(\uparrow\)** & 0.5205 & 0.594 & 0.5675 \\ & vicuna-7b-lora & 0.5996 & 0.3477 & **0.5285\(\uparrow\)** & 0.5900 & 0.5992 \\ \cline{1-1} & vicuna-7b-finetune & 0.6014 & 0.3500 & 0.4700 & 0.5860 & 0.5933 \\ \cline{1-1} & vicuna-13b & 0.6208\(\uparrow\) & 0.3445 & 0.4485 & 0.6180 & 0.6131 \\ \cline{1-1} & vicuna-13b-lora & 0.6016 & 0.3548 & 0.4561 & 0.6180 & 0.6448 \\ \cline{1-1} & vicuna-13b-finetune & 0.6028 & 0.3448 & 0.4606 & 0.6199\(\uparrow\) & 0.6468\(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic evaluation results before and after instruction-tuning. Within each group (either llama or vicuna), the highest scores are indicated by the \(\uparrow\) symbol. The highest score across all methods, irrespective of the group, is highlighted in bold.
Though our dataset concentrates on Chinese financial data, it could be effectively generalized to a variety of Chinese language understanding tasks. It's pertinent to mention that our tuning approach potentially endows the model with more general language skills beneficial across various domains, not just finance. We anticipate these models to particularly excel at finance-specific tasks in Chinese, so we further assess this expectation with a GPT-4 evaluation on finance-related questions curated by experts, which will be discussed in the subsequent section.
### GPT-4 Evaluation Results
Our assessment of model performance on finance-specific tasks employed 100 expert-curated questions, with GPT-4 serving as the judge (with scores ranging from 1 to 10). The results, presented in Table 4, shed light on how our models react to these finance-related queries. The observations from the table show:
* Even without LORA tuning, an increase in the size of the model (from LLAMA-7b to LLAMA-30b) results in better scores, demonstrating the advantage of larger model sizes.
* With the application of LORA tuning, the scores experience a substantial uplift. This boost underlines the effectiveness of the LORA tuning approach, even in a finance-specific context.
* Though the LORA-tuned LLAMA models perform significantly better than their non-tuned counterparts, they still trail behind the GPT-3.5. This suggests there remains potential for further improvement and optimization in our methods to bridge the gap.
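As a rough illustration of the judging protocol described above, a single judging call might look as follows; the judge prompt and the score parsing are our own assumptions rather than the authors' exact setup.

```python
from openai import OpenAI

client = OpenAI()


def judge(question, answer):
    """Ask GPT-4 to grade an answer to a financial question from 1 to 10."""
    prompt = (
        f"Question: {question}\nAnswer: {answer}\n"
        "On a scale of 1 to 10, rate the accuracy, relevance, and "
        "financial style of the answer. Reply with a single integer."
    )
    reply = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    # a production version would guard against non-integer replies
    return int(reply.choices[0].message.content.strip())
```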
\begin{table}
\begin{tabular}{p{343.4pt}} \hline
**Instruction: Currently, Shangji CNC’s main business is the manufacturing and sales of monocrystalline silicon. However, the company has recently invested in silicon material business. Does this mean that the company is expanding its business scope?** \\ \hline _LLaMA-7b: Hello, I am an artificial intelligence assistant._ \\ _LLaMA-13b: We have started researching the silicon industry. We will release more information about this project._ \\ _LLaMA-30b: I know you want to ask: “Currently, the main business of Shangji CNC is the manufacturing and sales of monocrystalline silicon. However, the company has recently invested in silicon material business. Does this mean that the company is expanding its business scope?"_ \\ _LLaMA-7b-lora: Of course, the company is expected to gain more market space by expanding its business areas. According to our analysis, the company has started accelerating the promotion of silicon material business and making it one of the new key businesses. In addition, the company can also expand its business areas through cooperation, acquisition and other means to enhance its strength and market position._ \\ _LLaMA-13b-lora: UpTech CNC has started to expand into the silicon material industry and has entered into a partnership. From a future development perspective, the silicon material industry is expected to become an important support point, which will also play an important role in the company’s long-term development._ \\ _LLaMA-30b-lora: Yes, Shangji CNC has begun to expand its business scope. The company’s latest investment project is a silicon material production line, which is expected to become one of the high-growth markets in the future. In addition, the company holds multiple subsidiaries including Shanghai Huake Optoelectronics and Shanghai Huake Microsystems, which can also provide more business options._ \\ \hline
**Instruction: The maturity level of the industrial chain for photovoltaic curtain walls and photovoltaic roofs, how are they?** \\ _LLaMA-7b: Hello, I am an artificial intelligence assistant. May I ask how can I assist you?_ \\ _LLaMA-13b: We can report that in 2017, the industrial chain maturity of China’s photovoltaic curtain walls and photovoltaic roofs was high._ \\ _LLaMA-30b: I know you want to learn about the price changes in the automotive industry._ \\ _LLaMA-7b-lora: Since 2019, the industrial chain of photovoltaic curtain walls and photovoltaic roofs has been widely recognized in the global market. Currently, some large domestic companies are conducting research and development and will begin production in the future. In addition, as both photovoltaic curtain walls and photovoltaic roofs are emerging technologies, it is necessary to continuously improve research and development capabilities and innovation while accelerating product transformation._ \\ _LLaMA-13b-lora: Currently, the industrial chains of both photovoltaic curtain walls and photovoltaic roofs are expanding. The industrial chain of photovoltaic curtain walls has formed a certain degree of complete automation, while the photovoltaic roof is still under construction, but many companies have started research and development as well as production._ \\ _LLaMA-30b-lora: Currently, the industrial chains of photovoltaic curtain walls and photovoltaic roofs are both in an incomplete state of maturity. In terms of photovoltaic curtain walls, parts such as panels, cells, modules have formed a relatively complete industrial chain while other parts still need further improvement. For example, products like photovoltaic press machines and packaging equipment still have room for efficiency improvement. At the same time, more research and experimentation is needed for the installation engineering of photovoltaic curtain walls to ensure controllability and safety during the installation process. As for photovoltaic roofs, building components and installation engineering also need further improvement._ \\ \hline \end{tabular}
\end{table}
Table 5: Case study, generation results of different models, the language has been translated into English.
In summary, these results reaffirm the beneficial impact of our collected dataset in the financial domain.
### Case Study
Table 5 provides a sample of cases chosen randomly from the GPT-4 evaluation process, offering an intuitive comparison of response generation before and after tuning on our curated dataset. It becomes evident that the untuned LLaMA models struggle to produce content relevant to these queries. In contrast, the LORA-tuned variants, benefitting from our dataset, deliver pertinent and increasingly detailed responses. This comparison underscores the effectiveness of our instruction-tuning process.
## 5 Discussion
Our findings demonstrate the significance of domain-specific data collection and subsequent model tuning. Fine-tuning and delta-tuning methods, particularly with LORA, can effectively improve the model's comprehension and generation capabilities within the financial domain. It's important to note that our dataset, even while focusing on Chinese financial data, effectively generalizes to a variety of Chinese language understanding tasks. Our study further suggests that not only the quantity but also the quality of data matters. Our approach of subjecting the data to rigorous examination by financial experts ensured its relevance and quality, which in turn enhanced the effectiveness of the model tuning.
However, even though our dataset significantly improved the performance of LLAMA models, there remains a gap when compared to GPT-3.5, as evidenced by the GPT-4 evaluation results. It suggests that further optimization and improvement strategies should be explored. One direction could be to increase the diversity of the data collection by incorporating more complex financial dialogues or by incorporating feedback from the financial domain users during the model tuning phase. Another direction could be to refine the tuning techniques to better accommodate the specifics of the financial domain.
## 6 Conclusion
In this study, we presented an approach for improving the financial knowledge of large language models, focusing on Chinese financial discourse. We collected a dataset of financial questions, refined them with expert input, and then used them for model tuning. The experimental results confirm the effectiveness of our approach in enhancing the model's ability to handle financial queries. It also highlights the potential of the delta-tuning technique, such as LORA, in model performance enhancement. This work contributes to the ongoing discussion on the ways to enhance the capability of AI models in domain-specific tasks, particularly in the financial sector. Further research should focus on identifying other potential strategies for improving the performance of AI in financial services, such as refining tuning techniques or incorporating real-time user feedback. Despite the remaining challenges, this study paves the way for large language models to play an increasingly valuable role in financial services and applications.
|
2309.06053 | Confounder selection via iterative graph expansion | Confounder selection, namely choosing a set of covariates to control for
confounding between a treatment and an outcome, is arguably the most important
step in the design of observational studies. Previous methods, such as Pearl's
celebrated back-door criterion, typically require pre-specifying a causal
graph, which can often be difficult in practice. We propose an interactive
procedure for confounder selection that does not require pre-specifying the
graph or the set of observed variables. This procedure iteratively expands the
causal graph by finding what we call "primary adjustment sets" for a pair of
possibly confounded variables. This can be viewed as inverting a sequence of
latent projections of the underlying causal graph. Structural information in
the form of primary adjustment sets is elicited from the user, bit by bit,
until either a set of covariates are found to control for confounding or it can
be determined that no such set exists. Other information, such as the causal
relations between confounders, is not required by the procedure. We show that
if the user correctly specifies the primary adjustment sets in every step, our
procedure is both sound and complete. | F. Richard Guo, Qingyuan Zhao | 2023-09-12T08:42:22Z | http://arxiv.org/abs/2309.06053v2 | # Confounder Selection via Iterative Graph Expansion
###### Abstract
Confounder selection, namely choosing a set of covariates to control for confounding between a treatment and an outcome, is arguably the most important step in the design of observational studies. Previous methods, such as Pearl's celebrated back-door criterion, typically require pre-specifying a causal graph, which can often be difficult in practice. We propose an interactive procedure for confounder selection that does not require pre-specifying the graph or the set of observed variables. This procedure iteratively expands the causal graph by finding what we call "primary adjustment sets" for a pair of possibly confounded variables. This can be viewed as inverting a sequence of latent projections of the underlying causal graph. Structural information in the form of primary adjustment sets is elicited from the user, bit by bit, until either a set of covariates are found to control for confounding or it can be determined that no such set exists. Other information, such as the causal relations between confounders, is not required by the procedure. We show that if the user correctly specifies the primary adjustment sets in every step, our procedure is both sound and complete.
## 1 Introduction
Consider an observational study where the causal effect of a treatment variable \(X\) on an outcome variable \(Y\) is of interest. Arguably, the single most widely used strategy for identifying the causal effect is through _confounder adjustment_, which employs a set of observed covariates that are carefully chosen to control for confounding. Let \(Y(x)\) be the potential outcome of \(Y\) had the treatment \(X\) been intervened on and set to level \(x\). A set of covariates \(S\) controls for confounding if, for every observed level \(x\) of \(X\), \(X\) and \(Y(x)\) are independent within each stratum defined by \(S\), a condition known as conditional ignorability or conditional exchangeability (Rosenbaum and Rubin, 1983; Greenland and Robins, 2009; Hernan and Robins, 2020). The task of choosing such a set of covariates is called _confounder selection_.
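In symbols, the conditional ignorability condition just described reads
\[Y(x)\perp\!\!\!\perp X\mid S\quad\text{for every observed level }x\text{ of }X,\]
where \(\perp\!\!\!\perp\) denotes conditional independence.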
While there exist a variety of approaches and criteria for confounder selection (see Guo et al., 2022 for a recent survey), it is clear that this cannot be answered by data alone and hence is fundamentally different from statistical variable selection (e.g. stepwise algorithms in linear regression). That is, domain knowledge about the underlying causal mechanism or structure is indispensable. Causal graphical model provides an intuitive framework for formalizing such knowledge. Specifically, if we have available a causal directed acyclic graph (DAG) model for relevant variables in an observational study, the celebrated back-door criterion (Pearl, 1993) answers whether or not a set of covariates \(S\) controls for confounding, and this criterion is complete in a sense that will be described in Section 2.3. The set \(S\) is called a _sufficient adjustment set_ if it satisfies the back-door criterion. When there are more than one sufficient adjustment
sets, one may wish to further choose a set based on size or statistical efficiency (see, e.g., Henckel et al., 2022; Rotnitzky and Smucler, 2020; Smucler and Rotnitzky, 2022). Yet, these are secondary objectives that we will set aside for the rest of this paper. As far as validity is concerned, confounder selection is sometimes considered a "solved problem" in light of the back-door criterion, provided that a causal DAG (or a latent projection of the DAG) can be pre-specified to represent our domain knowledge and assumptions.
However, the back-door criterion is often difficult to apply in practice. As the back-door criterion is a global condition about the candidate set of covariates \(S\), the treatment \(X\), the outcome \(Y\) and other variables in the system (see Section 2.3 for its statement), a practitioner must be able to (1) conceive all the variables, observed or unobserved, that are relevant, (2) posit all causal relations among these variables, (3) understand how a DAG encodes causal assumptions, and (4) draw the DAG accordingly, or at least a large portion of it, to transcribe the posited relations graphically. While tools and protocols for drawing DAGs have been developed (Shrier and Platt, 2008; Haber et al., 2022), this is still a formidable process in practice. It is often difficult to conceive all the relevant variables, let alone posit all causal relations among them.
### Overview of the iterative graph expansion procedure
In this paper, we take an interactive, bottom-up approach to confounder selection that does not require pre-specifying the causal graph. Our method, called _iterative graph expansion_, is based on a symmetric reformulation of Pearl's back-door criterion and can be viewed as the inverse of the well-known _latent projection_(Verma and Pearl, 1990). The knowledge about the underlying causal graph is elicited from the user, bit by bit, until one or more sets of covariates that meet the symmetric back-door criterion are found, or it is determined that so such set exists. For this procedure, a key new concept is a _primary adjustment set_: an adjustment set for a pair of variables is called primary if for every common ancestor of the two variables, at least one of the two causal paths from the ancestor to these two variables are blocked by the adjustment set. Intuitively, a primary adjustment set removes all "immediate" confounding between the two variables.
More specifically, the process of graph expansion starts with a working graph consisting of two vertices -- the treatment \(X\) and the outcome \(Y\) -- and a dashed bidirected edge, representing possible uncontrolled confounding, between them. In each step, the user is asked to provide candidates of _primary adjustment sets_ to expand a dashed bidirected edge selected from the current working graph; if no such set exists, the dashed bidirected edge is changed to a solid edge. If primary adjustment sets do exist, then every such set leads to an expanded graph: the selected dashed edge is removed and those vertices in the primary adjustment set are introduced to the graph, with a dashed bidirected edge drawn between every new vertex and every old vertex, as well as between every pair of new vertices. This process is repeated until the treatment and the outcome are no longer connected by solid or dashed bidirected edges. When this occurs, variables other than \(X\) and \(Y\) in the working graph form a sufficient adjustment set.
Figure 1 illustrates the iterative graph expansion when the underlying causal graph is a "butterfly" (the
Figure 1: An illustration of the iterative graph expansion using the “butterfly bias” example. The edge chosen for expansion is highlighted in red.
leftmost graph). In the first iteration, the algorithm expands the potential bidirected edge \(X\leftrightarrow Y\) with primary adjustment set \(\{B\}\). This creates two more potential bidirected edges, \(B\leftrightarrow X\) and \(B\leftrightarrow Y\). The second iteration of the algorithm expands \(B\leftrightarrow X\) by further adding \(\{C\}\) to the graph, which creates three more potential bidirected edges, \(C\leftrightarrow X\), \(C\leftrightarrow Y\), and \(C\leftrightarrow B\). The next iteration simply removes \(C\leftrightarrow X\), as there is no confounding between \(C\) and \(X\). This leads to the rightmost graph, where \(X\) and \(Y\) are not connected by bidirected edges. Hence, the algorithm returns \(\{B,C\}\) as a sufficient adjustment set. Similarly, the set \(\{B,D\}\) can also be identified as a sufficient adjustment set through another sequence of expansions. In fact, \(\{B,C\}\) and \(\{B,D\}\) are the only two minimal sufficient adjustment sets in this example. More examples are given in Section 6 and Appendix C.
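To make the bookkeeping of this example explicit, here is a minimal sketch of the expansion step and the termination check described above. The data structures and names are ours, and the separate step that turns a dashed edge solid (when no primary adjustment set exists) is omitted.

```python
def expand(dashed, solid, vertices, edge, primary_set):
    """Expand a dashed bidirected edge with a primary adjustment set.

    Edges are frozensets of endpoint names.  The chosen dashed edge is
    removed; a dashed edge is added between every new vertex and every
    old vertex, and between every pair of new vertices.
    """
    dashed = set(dashed) - {frozenset(edge)}
    new = [v for v in primary_set if v not in vertices]
    for i, u in enumerate(new):
        for w in vertices:
            dashed.add(frozenset({u, w}))
        for w in new[i + 1:]:
            dashed.add(frozenset({u, w}))
    return dashed, solid, vertices | set(new)


def still_confounded(dashed, solid, x="X", y="Y"):
    """Is X still joined to Y by solid or dashed bidirected edges?"""
    adj, seen, stack = dashed | solid, {x}, [x]
    while stack:
        u = stack.pop()
        for e in adj:
            if u in e:
                (w,) = e - {u}
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return y in seen


# Replay the butterfly example above: {B, C} is found to be sufficient.
V, dashed, solid = {"X", "Y"}, {frozenset({"X", "Y"})}, set()
dashed, solid, V = expand(dashed, solid, V, {"X", "Y"}, {"B"})
dashed, solid, V = expand(dashed, solid, V, {"B", "X"}, {"C"})
dashed -= {frozenset({"C", "X"})}       # no confounding between C and X
print(still_confounded(dashed, solid))  # False -> adjust for {B, C}
```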
Compared to existing methods for confounder selection, such as directly applying the back-door criterion, the iterative graph expansion procedure has several advantages. First of all, it makes structural queries "economical" in the sense that only so much information needed for confounder selection is elicited from the user. All other information, including the presence or absence of directed edges among the confounders, is never requested. Second, familiarity with causal graphs (such as how to apply d-separation) is not required to deploy the procedure; the user is only expected to identify common causes and mediators. To facilitate the use of our procedure, we provide a Shiny web application accessible from [https://ricguo.shinyapps.io/InteractiveConfSel/](https://ricguo.shinyapps.io/InteractiveConfSel/). Finally, we show that this procedure is sound and complete in the following sense: if all the primary adjustment sets specified by the user are indeed primary, then every adjustment set identified by the procedure is sufficient; further, if the user correctly specifies all the minimal primary adjustment sets in each step, then every minimal sufficient adjustment set will be identified by the procedure.
It is worth mentioning that there are other proposals in the literature for confounder selection that only require partial knowledge about the causal graph, including, notably, the _disjunctive cause criterion_ due to VanderWeele and Shpitser (2011); see also VanderWeele (2019) for its variants. The disjunctive cause criterion selects all the observed pre-treatment covariates that are ancestors (i.e. direct or indirect causes) of the treatment, the outcome or both. VanderWeele and Shpitser (2011) showed that this adjustment set is sufficient whenever the set of observed pre-treatment covariates contains any sufficient adjustment set as a subset; see also Richardson et al. (2018); Guo et al. (2022). While this criterion can be useful when data has already been collected and domain knowledge is scarce, verifying the assumption that the set of collected covariates contains a sufficient adjustment set can be as difficult as the task of confounder selection itself. Moreover, the disjunctive cause criterion and many other existing proposals require pre-specifying the set of observed covariates. As such, they are not best suited for designing an observational study, when a primary goal is to determine how data should be collected. Finally, to the best of our knowledge, all the existing methods for confounder selection are "static" -- they cannot learn from the user in an interactive way.
### Other contributions and organization of this paper
The iterative graph expansion procedure is built on several conceptual and technical novelties, summarized as follows.
First, we provide a definition of _confounding paths_, the symmetric structure that induces confounding between two variables. Specifically, a path between \(A\) and \(B\) is called _confounding_ if it has two endpoint arrowheads. This allows us to symmetrize Pearl's back-door criterion and reduce the task of confounder selection to blocking all confounding paths between the treatment and the outcome. The notion of a confounding _path_ complements earlier efforts in the literature towards properly defining a _confounder_, or more precisely, a confounding _variable_. VanderWeele and Shpitser (2013) reviewed several popular notions of confounders and showed that none of them is satisfactory. They further provided an alternative definition: a confounder is a pre-exposure covariate \(C\) for which there exists a set of other covariates
such that \((S,C)\) is a sufficient adjustment set but no proper subset of \((S,C)\) is sufficient. However, this definition does not lead to a practical procedure for selecting confounders.
Second, we adopt the notion of _confounding arcs_ from Pearl (2009, Section 3.5) and provide a concrete definition: a confounding arc is a confounding path with no colliders. It is easy to see that every confounding path can be decomposed into one or more confounding arcs. Importantly, when a set of covariates blocks a confounding arc, every superset of it also blocks the same arc; the same property does not hold for confounding paths consisting of more than one confounding arc. Nevertheless, we show that, by iteratively blocking confounding arcs, we can eventually block all the confounding paths between the treatment and the outcome.
Third, we introduce a _district criterion_ for confounder selection, which posits that a set \(S\) is a sufficient adjustment set if \(X\) and \(Y\) are in different "districts" in the latent projection graph over \(S\cup\{X,Y\}\). This is equivalent to the symmetric back-door criterion but can be easier to check in practice.
Finally, to develop our procedure and prove its soundness and completeness, we introduce a set of refined m-connection/separation relations, along with a set of notation, to reason about confounding paths and arcs. As with the usual m-connection, we show that these relations are preserved by latent projection.
The rest of this paper is organized as follows. In Section 2, we introduce the preliminaries on causal graphical models, including some basic graphical terms, m-separation and latent projection. In Section 3, we introduce confounding paths, confounding arcs and refined m-connection, with which we formulate a symmetric version of the back-door criterion. We study the properties of refined m-connection in Section 4. We show that latent projection preserves refined m-connection (Theorem 2), which then leads to the district criterion (Section 4.2). Graphoid-like properties of refined m-connection are also studied in Section 4.3. We present our iterative graph expansion procedure in Section 5. When the user acts like an oracle and answers all queries about primary adjustment sets correctly, our procedure is shown to be both sound and complete (Theorem 5). Practical considerations and subroutines needed for the procedure are studied in Sections 5.3 and 5.4. Worked examples are given in Section 6, and we conclude with some discussion in Section 7. Technical proofs and other supplementary materials can be found in the appendix.
## 2 Preliminaries of causal graphical models
### Basic graphical concepts
A directed mixed graph \(\mathcal{G}=(\mathrm{V},\mathcal{D},\mathcal{B})\) is a graph over a finite vertex set \(\mathrm{V}\) that consists of two types of edges, directed edges \(\mathcal{D}\subseteq\mathrm{V}\times\mathrm{V}\) and bidirected edges \(\mathcal{B}\subseteq\mathrm{V}\times\mathrm{V}\). Each vertex in \(\mathrm{V}\) represents a random variable. We write \(A\to B\) for a directed edge \((A,B)\in\mathcal{D}\), which signifies the presence of a direct causal effect of \(A\) on \(B\); by "direct", we mean this causal effect is not mediated by other variables in the graph. We write \(A\leftrightarrow B\) for a bidirected edge \((A,B)\in\mathcal{B}\) (and hence also \((B,A)\in\mathcal{B}\)), which signifies the presence of _endogeneity_ -- a form of unmeasured _confounding_ -- between \(A\) and \(B\); endogeneity introduces non-causal association that cannot be explained away by other variables in the graph. A maximal set of vertices connected by bidirected edges is called a _district_. Between two vertices \(A,B\in\mathrm{V}\), we allow the existence of both a directed edge and a bidirected edge. A directed cycle is a sequence of directed edges \(v_{1}\rightarrow\cdots\to v_{l}\to v_{1}\) with \(v_{l}=v_{1}\) and \(l>1\). If \(\mathcal{G}\) contains no directed cycle, we call \(\mathcal{G}\) an _acyclic directed mixed graph_ (ADMG). Further, if \(\mathcal{G}\) contains no bidirected edge, we call it a _directed acyclic graph_ (DAG).
A _walk_ is a sequence of adjacent edges of any type or orientation. If all vertices on a walk are distinct, we say it is a _path_. For ADMGs, it is necessary to specify a walk or path as a sequence of edges instead of vertices, as there may exist both directed and bidirected edges between two vertices. A walk or path is directed if it is formed of directed edges pointing in the same direction.
For any walk \(\pi\), a non-endpoint vertex \(A\) is said to be a _collider_ on \(\pi\) if both the edges before and after \(A\) have an arrowhead pointing to \(A\), or in other words, if \(\pi\) contains \(\to A\leftarrow\), \(\to A\leftrightarrow\), \(\leftrightarrow A\leftarrow\) or \(\leftrightarrow A\leftrightarrow\). Note that a vertex can be a collider on one path and a non-collider on another path. We call a walk or path without colliders an _arc_ and denote it by a squiggly line ('\(\rightsquigarrow\)'); the same concept (with some slight variation) is also referred to as a _trek_ by some authors (Spirtes et al., 1993; Sullivant et al., 2010). An arc can be further specified by the arrowheads or tails attached to the squiggly line. A directed path from \(A\) to \(B\) is written as \(A\rightsquigarrow B\) (equivalently, \(B\leftsquigarrow A\)). A path from \(A\) to \(B\) with no colliders and two endpoint arrowheads is called a _confounding arc_ and written as \(A\leftrightsquigarrow B\). We use a half-arrowhead to indicate that an endpoint can be either an arrowhead or a tail; for example, a half-arrowhead at the \(A\)-end of a squiggle pointing into \(B\) means the path is either \(A\rightsquigarrow B\) or \(A\leftrightsquigarrow B\).
We adopt the common familial terminologies for graphical models. A vertex \(A\) is said to be a _parent_ of another vertex \(B\) and \(B\) a _child_ of \(A\) in the ADMG \(\mathcal{G}\) if \(\mathcal{G}\) contains \(A\to B\); and \(A\) is said to be an _ancestor_ of \(B\) and \(B\) a _descendant_ of \(A\) if \(\mathcal{G}\) contains \(A\xrightsquigarrow B\). (This differs from the convention that \(A\) is considered both an ancestor and a descendant of itself used by many authors.) With these relations, we define sets \(\operatorname{pa}_{\mathcal{G}}(v),\operatorname{ch}_{\mathcal{G}}(v), \operatorname{an}_{\mathcal{G}}(v),\operatorname{de}_{\mathcal{G}}(v)\) for a vertex \(v\in\operatorname{V}\) and extend these definitions to a vertex set \(A\subseteq\operatorname{V}\) disjunctively, e.g., \(\operatorname{pa}_{\mathcal{G}}(A):=\cup_{v\in A}\operatorname{pa}_{\mathcal{G }}(v)\), \(\operatorname{ch}_{\mathcal{G}}(A):=\cup_{v\in A}\operatorname{ch}_{\mathcal{G }}(v)\), etc. Additionally, we let \(\operatorname{\overline{an}}_{\mathcal{G}}(A):=\operatorname{an}(A)\cup A\). The subscript \(\mathcal{G}\) is often omitted when it is clear from the context.
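For concreteness, the following is a minimal Python sketch (our own illustration, not part of the paper) of an ADMG data structure implementing these familial operations; the representation by a set of directed pairs and a set of unordered bidirected pairs is an assumption of the sketch.

```python
class ADMG:
    """A directed mixed graph: directed edges D and bidirected edges B over V."""

    def __init__(self, vertices, directed=(), bidirected=()):
        self.V = set(vertices)
        self.D = set(directed)                       # pairs (a, b) meaning a -> b
        self.B = {frozenset(e) for e in bidirected}  # pairs {a, b} meaning a <-> b

    def pa(self, v):
        return {a for (a, b) in self.D if b == v}

    def ch(self, v):
        return {b for (a, b) in self.D if a == v}

    def an(self, v):
        # ancestors via reverse reachability over directed edges (v excluded)
        out, frontier = set(), self.pa(v)
        while frontier:
            u = frontier.pop()
            if u not in out:
                out.add(u)
                frontier |= self.pa(u)
        return out

    def de(self, v):
        # descendants via forward reachability over directed edges (v excluded)
        out, frontier = set(), self.ch(v)
        while frontier:
            u = frontier.pop()
            if u not in out:
                out.add(u)
                frontier |= self.ch(u)
        return out
```

For instance, `ADMG("ABC", directed=[("A", "B"), ("B", "C")]).an("C")` returns `{"A", "B"}`, matching the convention above that a vertex is not its own ancestor.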
### m-separation
Introduced by Richardson (2003), m-separation extends the d-separation criterion for conditional independence in DAGs (Pearl, 1988) to ADMGs. Given an ADMG \(\mathcal{G}\), a path is said to be _m-connected_ given \(C\subseteq\mathrm{V}\) if every non-collider on the path is not in \(C\), and every collider on the path is in \(C\) or has a descendant in \(C\). For disjoint \(A,B,C\subseteq\mathrm{V}\), if there exists an m-connected path in \(\mathcal{G}\) from any vertex in \(A\) to any vertex in \(B\) given \(C\), we say \(A\) and \(B\) are _m-connected_ in \(\mathcal{G}\) given \(C\) and denote this by \(A\sim\!*\!\sim B\mid C\ [\mathcal{G}]\); otherwise, we say they are _m-separated_ by \(C\) in \(\mathcal{G}\) and denote this by \(A\not\sim\!*\!\sim B\mid C\ [\mathcal{G}]\). Instead of the common notation \(A\perp\!\!\!\perp B\mid C\ [\mathcal{G}]\) for m-separation, in this paper we opt for a set of notation that more clearly describes the _type_ of the underlying path: (1) the unmarked endpoints of the squiggle indicate that both endpoints are unrestricted in terms of arrowhead or tail; (2) the wildcard character '\(*\)' means the path can have zero, one or several colliders. Thus, \(A\sim\!*\!\sim B\) refers to all paths from \(A\) to \(B\). The advantages of this set of notation will become clearer in Sections 3 and 4.
### Latent projection
In many cases, a graph can contain variables that are not observed (or observable1) and it is convenient to contemplate the _latent projection_ of the graph onto the set of observed variables (Pearl and Verma, 1991). Let \(\mathcal{G}=(\operatorname{V},\mathcal{D},\mathcal{B})\) be an ADMG and let \(\operatorname{V}=\operatorname{\widetilde{V}}\cup\operatorname{U}\) be a partition (so \(\operatorname{\widetilde{V}}\cap\operatorname{U}=\emptyset\)). The latent projection of \(\mathcal{G}\) onto \(\operatorname{\widetilde{V}}\), written as \(\operatorname{\widetilde{\mathcal{G}}}=\mathcal{G}(\operatorname{\widetilde{V}})\), is defined as the ADMG \(\operatorname{\widetilde{\mathcal{G}}}=(\operatorname{\widetilde{V}}, \operatorname{\widetilde{\mathcal{D}}},\operatorname{\widetilde{\mathcal{B}}})\) with directed and bidirected edges given by
Footnote 1: We say a variable is _observed_ if the variable has been measured and recorded, and we say a variable is _observable_ if the variable can be measured. Typically, the former notion is used when analyzing a study and the latter when designing a study. For convenience, we do not differentiate the two and use “observed” throughout.
* \(A\to B\) in \(\widetilde{\mathcal{G}}\) if there exists a directed path from \(A\) to \(B\) in \(\mathcal{G}\) on which all the non-endpoint vertices are in \(\mathrm{U}\) (such paths are denoted as \(A\overset{\mathrm{U}}{\rightsquigarrow}B\)), and
* \(A\leftrightarrow B\) in \(\widetilde{\mathcal{G}}\) if there exists a path from \(A\) to \(B\) in \(\mathcal{G}\) such that the path has two endpoint arrowheads and no colliders and has all the non-endpoint vertices in \(\mathrm{U}\) (such paths are denoted as \(A\overset{\mathrm{U}}{\leftrightsquigarrow}B\)).
Trivially, \(A\to B\) in \(\mathcal{G}\) implies \(A\to B\) in \(\widetilde{\mathcal{G}}\), and \(A\leftrightarrow B\) in \(\mathcal{G}\) implies \(A\leftrightarrow B\) in \(\widetilde{\mathcal{G}}\). Using the squiggly line notation, latent projection can be compactly stated as
\[A\left\{\begin{array}{c}\to\\ \leftrightarrow\end{array}\right\}B\ [\mathcal{G}(\widetilde{\mathrm{V}})]\quad\iff\quad A\left\{\begin{array}{c}\overset{\mathrm{U}}{\rightsquigarrow}\\ \overset{\mathrm{U}}{\leftrightsquigarrow}\end{array}\right\}B\ [\mathcal{G}],\]
where the two rows on each side are read in parallel.
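The definition translates directly into code. Below is a sketch of latent projection (our own, building on the `ADMG` class from the earlier sketch); the helper names are ours, and the enumeration of fork- and bidirected-edge sources follows the two bullet points above.

```python
def _dreach_via(g, src, hidden):
    """Vertices w reachable from src by a directed path whose non-endpoint
    vertices all lie in `hidden`."""
    out, frontier = set(), set(g.ch(src))
    while frontier:
        u = frontier.pop()
        if u in out:
            continue
        out.add(u)
        if u in hidden:                  # may only pass *through* hidden vertices
            frontier |= g.ch(u)
    return out

def latent_project(g, keep):
    """Latent projection of the ADMG g onto the vertex set `keep` (a sketch)."""
    keep = set(keep)
    hidden = g.V - keep
    down = {u: _dreach_via(g, u, hidden) for u in g.V}
    proj = ADMG(keep)
    for a in keep:
        for b in keep - {a}:
            # a -> b: a directed path via hidden vertices only
            if b in down[a]:
                proj.D.add((a, b))
            # a <-> b: a collider-free two-arrowhead path via hidden vertices;
            # its source is either a hidden fork u (a <~ u ~> b) or a
            # bidirected edge {u1, u2} with u1 ~> a and u2 ~> b.
            fork = any(a in down[u] and b in down[u] for u in hidden)
            bidi = any(
                u1 in (hidden | {a}) and u2 in (hidden | {b})
                and (u1 == a or a in down[u1]) and (u2 == b or b in down[u2])
                for e in g.B if len(e) == 2
                for u1, u2 in (tuple(e), tuple(e)[::-1])
            )
            if fork or bidi:
                proj.B.add(frozenset((a, b)))
    return proj
```

A useful sanity check is that `latent_project(g, g.V)` reproduces `g` itself, since nothing is hidden.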
## 3 Confounding paths, confounding arcs and refined m-connection

**Definition 2**.: Consider an ADMG \(\mathcal{G}\) over a vertex set \(\mathrm{V}\). For distinct \(A,B\in\mathrm{V}\), let \(\mathcal{P}_{\mathcal{G}}(A\sim\!*\!\sim B)\) denote the set of all paths from \(A\) to \(B\) in \(\mathcal{G}\), and \(\mathcal{P}_{\mathcal{G}}(A\rightsquigarrow B)\) denote the subset of all directed paths. Define the set of _confounding paths_ from \(A\) to \(B\) as
\[\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B):=\{\pi\in\mathcal{P}_{\mathcal{G}}(A\sim\!*\!\sim B):\pi\text{ has two endpoint arrowheads}\}.\]
Those confounding paths without colliders are called _confounding arcs_:
\[\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B):=\{\pi\in\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B):\pi\text{ has no collider}\}.\]
A confounding arc immediately induces _non-causal_ association between its endpoints when none of its non-endpoint vertices is conditioned on. A confounding arc between \(A\) and \(B\) is essentially a pair of directed paths emanating from a common source, of the form
\[A\leftsquigarrow U_{1}\rightsquigarrow B\quad\text{or}\quad A\leftsquigarrow U_{1}\leftrightarrow U_{2}\rightsquigarrow B.\]
As our notation '\(\leftrightsquigarrow\!*\!\leftrightsquigarrow\)' suggests, a confounding path is either a confounding arc or a concatenation of multiple confounding arcs. In the latter case, the confounding path may induce non-causal association when all the colliders (or their descendants) are conditioned on. To further describe paths that are _m-connected_ given a set \(C\), we adopt the following notation.
**Definition 3**.: For distinct \(A,B\in\mathrm{V}\) and \(C\subseteq\mathrm{V}\setminus\{A,B\}\), define
\[\begin{aligned}
\mathcal{P}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C)&:=\{\pi\in\mathcal{P}_{\mathcal{G}}(A\sim\!*\!\sim B):\pi\text{ is m-connected given }C\},\\
\mathcal{P}_{\mathcal{G}}(A\rightsquigarrow B\mid C)&:=\{\pi\in\mathcal{P}_{\mathcal{G}}(A\rightsquigarrow B):\pi\text{ is m-connected given }C\},\\
\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C)&:=\{\pi\in\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B):\pi\text{ is m-connected given }C\},\\
\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B\mid C)&:=\{\pi\in\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B):\pi\text{ is m-connected given }C\}.
\end{aligned}\]
By definition, we have
\[\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B\mid C)\subseteq\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C).\]
Also, observe that when no variable is conditioned on, an m-connected confounding path must be a confounding arc, i.e.
\[\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid\emptyset)=\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B\mid\emptyset)=\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B).\]
Thus, in general, we have
\[\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid\emptyset)\neq\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B),\]
so '\(\emptyset\)' on the left hand side cannot be omitted. Furthermore, we have
\[\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B)=\bigcup_{C\subseteq\mathrm{V}\setminus\{A,B\}}\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C).\]
Recall that in Section 2 we write
\[A\sim\!*\!\sim B\mid C\quad\iff\quad A\text{ and }B\text{ are m-connected given }C\quad\iff\quad\mathcal{P}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C)\neq\emptyset,\]
and
\[A\not\sim\!*\!\sim B\mid C\quad\iff\quad A\text{ and }B\text{ are m-separated given }C\quad\iff\quad\mathcal{P}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C)=\emptyset.\]
Similarly, we define the following refined relations.
**Definition 4** (Refined m-connection/separation).: Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\). For any distinct \(A,B\in\mathrm{V}\) and \(C\subseteq\mathrm{V}\setminus\{A,B\}\), we write
\[A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\end{array}\right\}B\mid C\ [\mathcal{G}]\quad\iff\quad\mathcal{P}_{\mathcal{G}}\left(A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\end{array}\right\}B\mid C\right)\neq\emptyset\]
and
\[A\left\{\begin{array}{c}\not\rightsquigarrow\\ \not\leftrightsquigarrow\\ \not\leftrightsquigarrow\!*\!\leftrightsquigarrow\end{array}\right\}B\mid C\ [\mathcal{G}]\quad\iff\quad\mathcal{P}_{\mathcal{G}}\left(A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\end{array}\right\}B\mid C\right)=\emptyset.\]
Note that the relations above pertaining to '\(\leftrightsquigarrow\)' and '\(\leftrightsquigarrow\!*\!\leftrightsquigarrow\)' are symmetric in \(A\) and \(B\). The result below directly follows from the definition.
**Lemma 1** (Monotonicity of m-connected arcs).: _For \(\widetilde{C}\subseteq C\subseteq\mathrm{V}\setminus\{A,B\}\), we have_
\[A\rightsquigarrow B\mid C\implies A\rightsquigarrow B\mid\widetilde{C}\qquad\text{and}\qquad A\leftrightsquigarrow B\mid C\implies A\leftrightsquigarrow B\mid\widetilde{C}.\]
Next, we offer our definition of a sufficient adjustment set, which is symmetric in \(X\) and \(Y\), and prove its equivalence to Pearl's back-door criterion; the proof can be found in Appendix A.
**Definition 5** (Adjustment set).: Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\). For distinct \(X,Y\in\mathrm{V}\), we say \(S\subseteq\mathrm{V}\setminus\{X,Y\}\) is an _adjustment set_ if
\[S\cap(\mathrm{de}(X)\cup\mathrm{de}(Y))=\emptyset.\]
An adjustment set \(S\) is _sufficient_ if
\[X\not\leftrightsquigarrow\!*\!\leftrightsquigarrow Y\mid S.\]
Moreover, a sufficient adjustment set \(S\) is _minimal_ if none of its proper subsets is also sufficient.
**Theorem 1** (Symmetric back-door criterion).: _Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\) and consider distinct \(X,Y\in\mathrm{V}\)._
1. _If_ \(X\notin\mathrm{de}(Y)\) _and_ \(S\) _is a sufficient adjustment set, then_ \(S\) _controls for confounding between_ \((X,Y)\)_._
2. _If_ \(X\in\mathrm{an}(Y)\)_, then_ \(S\) _is a sufficient adjustment set if and only if_ \(S\) _satisfies the back-door criterion._
In view of Theorem 1, confounder selection boils down to finding an adjustment set \(S\) that blocks all the confounding paths '\(\leftrightsquigarrow\!*\!\leftrightsquigarrow\)' between \(X\) and \(Y\); such an \(S\) necessarily blocks all confounding arcs '\(X\leftrightsquigarrow Y\)'. However, by monotonicity (Lemma 1), blocking confounding arcs is considerably _easier_ than blocking confounding paths -- expanding \(S\) does not introduce new m-connected confounding arcs but can introduce new m-connected confounding paths due to colliders. This observation motivates our procedure in Section 5, which blocks confounding paths by blocking confounding arcs.
## 4 Properties of refined m-connection
Our approach to confounder selection is based on iterative graph expansion, a process that inverts a sequence of latent projections. Thus, it is important to understand how confounding paths behave under latent projection. In fact, similar to the ordinary m-connection/separation, their refined forms in Definition 3 are also preserved by latent projection.
**Theorem 2** (Latent projection preserves m-connection and refined m-connection).: _Let \(\mathcal{G}\) be an ADMG over vertex set \(V\). For any distinct \(A,B\in\mathrm{V}\), \(C\subseteq V\setminus\{A,B\}\), and vertex set \(\widetilde{V}\) such that \(V\supseteq\widetilde{V}\supseteq\{A,B\}\cup C\), we have_
\[A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\\ \sim\!*\!\sim\end{array}\right\}B\mid C\ [\mathcal{G}]\quad\iff\quad A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\\ \sim\!*\!\sim\end{array}\right\}B\mid C\ [\mathcal{G}(\widetilde{V})].\]
The last statement is the preservation of m-connection and the preceding statements are about its refinements. Theorem 2 immediately implies the next result.
**Corollary 1**.: _Let \(\mathcal{G},\mathcal{G}^{\prime}\) be two latent projections of the same ADMG. For vertices \(A,B\) and vertex set \(C\) on both graphs, it holds that_
\[A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\\ \sim\!*\!\sim\end{array}\right\}B\mid C\ [\mathcal{G}]\quad\iff\quad A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\\ \sim\!*\!\sim\end{array}\right\}B\mid C\ [\mathcal{G}^{\prime}].\]
_Remark 1_.: Theorem 2 can be further strengthened as follows: m-connections by arcs (i.e., without colliders) are preserved by latent projection with matching endpoint arrowheads and tails; m-connections by general paths are also preserved by latent projection, but only endpoint _arrowheads_ are preserved. Writing \(A\rightsquigarrow\!*\!\sim B\) for an m-connected path with a tail at the \(A\)-end, an unrestricted \(B\)-end and any number of colliders, this means that, for example, the following implication with only matching arrowheads is true (this can be seen from our proof of Theorem 2):
\[A\rightsquigarrow\!*\!\sim B\mid C\ [\mathcal{G}]\quad\implies\quad A\sim\!*\!\sim B\mid C\ [\mathcal{G}(\widetilde{V})],\]
but, in general,
\[A\rightsquigarrow\!*\!\sim B\mid C\ [\mathcal{G}]\quad\not\implies\quad A\rightsquigarrow\!*\!\sim B\mid C\ [\mathcal{G}(\widetilde{V})].\]
To illustrate the last point, consider Fig. 2, where \(\mathcal{G}^{\prime},\mathcal{G}^{\prime\prime}\) are successive latent projections of \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) is a latent projection of \(\mathcal{F}\). We have the following observations:
1. The relation \(A\rightsquigarrow\!*\!\sim B\mid C\) holds in \(\mathcal{G}\) (path \(A\to E\leftrightarrow F\gets B\)) and \(\mathcal{G}^{\prime}\) (path \(A\to C\leftrightarrow F\gets B\)) but not in \(\mathcal{G}^{\prime\prime}\).
2. The relation \(A\rightsquigarrow\!*\!\sim D\mid C\) holds in \(\mathcal{G}\) (path \(A\to E\leftrightarrow F\gets B\to D\)) and \(\mathcal{G}^{\prime}\) (path \(A\to C\leftrightarrow F\gets B\to D\)), but not in \(\mathcal{G}^{\prime\prime}\). Instead, \(A\sim\!*\!\sim D\mid C\) holds in \(\mathcal{G}^{\prime\prime}\) (path \(A\gets B\to D\)).
3. With the wildcard '+' denoting one or more colliders, the relation \(A\sim\!+\!\sim B\mid C\) holds in \(\mathcal{F}\) (path \(A\gets D\leftrightarrow E\leftrightarrow B\)) but not in \(\mathcal{G}^{\prime}\).
### Proof of Theorem 2
We prove Theorem 2 by re-characterizing (refined) m-connection using _m\({}^{*}\)-connected simple walks_, denoted as \(\mathcal{W}^{s}(\cdot)\), as a proxy for m-connected paths. In particular, the first equivalence in Theorem 2 follows from Lemmas 2 and 3 below via the following equivalence diagram:
\[A\rightsquigarrow B\mid C\ [\mathcal{G}]\ \overset{\text{Lem.~2}}{\iff}\ \mathcal{W}^{s}_{\mathcal{G}}(A\rightsquigarrow B\mid C)\neq\emptyset\ \overset{\text{Lem.~3}}{\iff}\ \mathcal{W}^{s}_{\mathcal{G}(\widetilde{V})}(A\rightsquigarrow B\mid C)\neq\emptyset\ \overset{\text{Lem.~2}}{\iff}\ A\rightsquigarrow B\mid C\ [\mathcal{G}(\widetilde{V})].\]
Here, \(\mathcal{W}^{s}_{\mathcal{G}}(A\rightsquigarrow B\mid C)\) is the collection of all simple, m\({}^{*}\)-connected directed walks from \(A\) to \(B\) given \(C\); the notions of m\({}^{*}\)-connectedness and simple walks are introduced shortly. All other equivalences in Theorem 2 can be proved in exactly the same way.
Recall that a walk is a sequence of adjacent edges of any type or orientation. Next we briefly explain why reasoning with walks instead of paths simplifies the proof. Given a path, one may have to look beyond the path to determine its m-connectedness. That is, the notion of a m-connected path is _relative_ to the graph: the same path may be m-connected in one graph but not m-connected in another. For example, in Figure 3, the path \(A\to D\gets B\) is m-connected given \(C\) in \(\mathcal{G}_{1}\) but not m-connected given \(C\) in \(\mathcal{G}_{2}\). This issue can be avoided by defining a notion of m-connection for walks, which we call _m\({}^{*}\)-connection_, that does not involve the descendants of any collider; this notion has been used in the literature, e.g., van der Zander et al. (2019).
**Definition 6** (m\({}^{*}\)-connected walk).: A walk is said to be _m\({}^{*}\)-connected_ given a vertex set \(C\) if every non-collider on the walk is not contained by \(C\) and every collider on the walk is contained by \(C\).
In Figure 3, the walk \(A\to D\gets B\) is not m\({}^{*}\)-connected given \(C\) in both \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\). Yet, the walk \(A\to D\to C\gets D\gets B\) (present in \(\mathcal{G}_{1}\) but not in \(\mathcal{G}_{2}\)) is always m\({}^{*}\)-connected given \(C\), irrespective of the graph where the walk resides.
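Definition 6 suggests a simple search procedure: explore states of the form (vertex, arrival mark) and enforce the collider/non-collider condition at every interior step. The sketch below (our own, reusing the `ADMG` class from the earlier sketch) checks whether an m\({}^{*}\)-connected walk exists.

```python
def m_star_connected(g, a, b, C):
    """Is there an m*-connected walk from a to b given C?

    State = (vertex, head_in): head_in is True iff the edge we arrived by has
    an arrowhead at the current vertex. A vertex is a collider on the walk iff
    it has arrowheads on both sides; per Definition 6, colliders must lie in C
    and non-colliders outside C.
    """
    C = set(C)

    def steps(v):
        # yields (next_vertex, head_out at v, head_in at next_vertex)
        for w in g.ch(v):
            yield w, False, True          # v -> w
        for w in g.pa(v):
            yield w, True, False          # v <- w
        for e in g.B:
            if v in e and len(e) == 2:
                (w,) = tuple(e - {v})
                yield w, True, True       # v <-> w

    seen = set()
    # Endpoints carry no constraint; allowing free passage through `a` is
    # sound because the suffix of any walk revisiting `a` is itself a walk.
    stack = [(a, True), (a, False)]
    while stack:
        v, head_in = stack.pop()
        if (v, head_in) in seen:
            continue
        seen.add((v, head_in))
        for w, head_out, head_in_next in steps(v):
            collider = head_in and head_out
            if v == a or (v in C if collider else v not in C):
                if w == b:
                    return True
                stack.append((w, head_in_next))
    return False
```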
Let \(\mathcal{W}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C)\) denote all the m\({}^{*}\)-connected walks from \(A\) to \(B\) given \(C\) in an ADMG \(\mathcal{G}\). Other notations are introduced in a similar way. Specifically, we write
\[\begin{aligned}
\mathcal{W}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C)&:=\{\pi\in\mathcal{W}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C):\pi\text{ has two endpoint arrowheads}\},\\
\mathcal{W}_{\mathcal{G}}(A\leftrightsquigarrow B\mid C)&:=\{\pi\in\mathcal{W}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C):\pi\text{ has no collider}\}.
\end{aligned}\]
Note that \(\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow B\mid C)\subseteq\mathcal{W}_{\mathcal{G}}(A\leftrightsquigarrow B\mid C)\) is true because confounding arcs have no colliders, but \(\mathcal{P}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C)\subseteq\mathcal{W}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C)\) is generally not true because an m-connected path may not be m\({}^{*}\)-connected. Nevertheless, it is shown below that the existence of m-connected paths, as one may expect, is equivalent to the existence of m\({}^{*}\)-connected walks.

Figure 3: Two simple ADMGs illustrating the difference between m-connection and m\({}^{*}\)-connectedness.
Since we are interested in the preservation of endpoint arrowheads by latent projection, we will focus on _simple walks_; see Remark 2 below.
**Definition 7** (simple walk).: A walk is _simple_ if its endpoints are visited only once by the walk.
We add a superscript '\(s\)' to \(\mathcal{W}(\cdot)\) to indicate that only simple walks are concerned. For example, \(\mathcal{W}^{s}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C)\) denotes all the m\({}^{*}\)-connected confounding simple walks from \(A\) to \(B\) given \(C\), i.e.,
\[\mathcal{W}^{s}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C):=\{\pi\in\mathcal{W}_{\mathcal{G}}(A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C):\pi\text{ is simple}\}.\]
It is easy to see that there exists an m\({}^{*}\)-connected walk between \(A\) and \(B\) given \(C\) if and only if there exists an m\({}^{*}\)-connected simple walk between \(A\) and \(B\) given \(C\), that is,
\[\mathcal{W}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C)=\emptyset\quad\iff\quad\mathcal{W}^{s}_{\mathcal{G}}(A\sim\!*\!\sim B\mid C)=\emptyset.\]
As mentioned earlier, Theorem 2 is proved by applying the next two key lemmas. Lemma 2 shows that refined m-connection can be re-characterized in terms of m\({}^{*}\)-connected simple walks. Further, Lemma 3 shows that m\({}^{*}\)-connected simple walks are preserved by latent projection. The proofs of these lemmas are deferred to Appendix A.
**Lemma 2**.: _Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\). For any distinct \(A,B\in\mathrm{V}\) and \(C\subseteq\mathrm{V}\setminus\{A,B\}\), we have_
\[\mathcal{P}_{\mathcal{G}}\left(A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\\ \sim\!*\!\sim\end{array}\right\}B\mid C\right)\neq\emptyset\quad\iff\quad\mathcal{W}^{s}_{\mathcal{G}}\left(A\left\{\begin{array}{c}\rightsquigarrow\\ \leftrightsquigarrow\\ \leftrightsquigarrow\!*\!\leftrightsquigarrow\\ \sim\!*\!\sim\end{array}\right\}B\mid C\right)\neq\emptyset.\]
**Lemma 3**.: _Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\), and let \(\widetilde{V}\) satisfy \(\mathrm{V}\supseteq\widetilde{V}\supseteq\{A,B\}\cup C\). Then each of the four sets of m\({}^{*}\)-connected simple walks above is non-empty in \(\mathcal{G}\) if and only if the corresponding set is non-empty in \(\mathcal{G}(\widetilde{V})\)._
### The district criterion

Applying Theorem 2 with the smallest admissible projection re-characterizes the refined relations in terms of single edges and districts.

**Corollary 2**.: _Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\). For any distinct \(A,B\in\mathrm{V}\) and \(C\subseteq\mathrm{V}\setminus\{A,B\}\), writing \(\widetilde{\mathcal{G}}=\mathcal{G}(\{A,B\}\cup C)\), we have_
\[\begin{aligned}
A\rightsquigarrow B\mid C\ [\mathcal{G}]&\iff A\to B\ [\widetilde{\mathcal{G}}],\\
A\leftrightsquigarrow B\mid C\ [\mathcal{G}]&\iff A\leftrightarrow B\ [\widetilde{\mathcal{G}}],\\
A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C\ [\mathcal{G}]&\iff A\leftrightarrow\!*\!\leftrightarrow B\ [\widetilde{\mathcal{G}}],\\
A\sim\!*\!\sim B\mid C\ [\mathcal{G}]&\iff A\text{ and }B\text{ are connected by a path of colliders in }\widetilde{\mathcal{G}}.
\end{aligned}\]

Proof.: Apply Theorem 2 with \(\widetilde{V}=\{A,B\}\cup C\). Because \(C\) contains all vertices other than \(A,B\) in \(\mathcal{G}(\widetilde{V})\), an m-connected arc must be a single directed or bidirected edge. Further, any non-endpoint vertex on any m-connected path in \(\mathcal{G}(\widetilde{V})\) must be a collider.
We make two further remarks. First, recall that in the literature of graphical models, a _district_ in an ADMG is defined as a maximal set of vertices connected by bidirected edges (see, e.g., Evans and Richardson, 2014). That is, \(A\) and \(B\) are in the same district if and only if \(A\!\leftrightarrow\!*\!\leftrightarrow\!B\). By the third equivalence relation in Corollary 2, an adjustment set \(S\) is sufficient for \(A\) and \(B\) if and only if \(A\) and \(B\) are in different districts of \(\mathcal{G}(\{A,B\}\cup S)\). We call this the _district criterion_ for confounder selection. It provides a simple and useful way to check whether a set of selected confounders is indeed sufficient. Second, the last equivalence relation in Corollary 2 says that \(A\) and \(B\) are m-connected given \(C\) if and only if \(A\) and \(B\) are "collider-connected" in \(\mathcal{G}(\{A,B\}\cup C)\), which is related to the notion of _Markov blanket_ for ADMGs in Richardson et al. (2023).
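The district criterion is easy to operationalize. The following sketch (ours, building on `ADMG` and `latent_project` above) computes districts and checks sufficiency; it assumes the candidate \(S\) has separately been verified to be an adjustment set, i.e. \(S\cap(\mathrm{de}(X)\cup\mathrm{de}(Y))=\emptyset\).

```python
def districts(g):
    """Map each vertex of g to a district id: districts are the maximal sets
    of vertices connected by bidirected edges."""
    comp, cid = {}, 0
    for v in g.V:
        if v in comp:
            continue
        stack = [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp[u] = cid
            for e in g.B:
                if u in e:
                    stack.extend(e - {u})
        cid += 1
    return comp

def is_sufficient(g, X, Y, S):
    """District criterion: S is a sufficient adjustment set for (X, Y) iff
    X and Y lie in different districts of the projection onto S ∪ {X, Y}."""
    d = districts(latent_project(g, set(S) | {X, Y}))
    return d[X] != d[Y]
```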
### Graphoid-like properties
In this subsection, we consider extending the refined m-connection/separation relations in Definition 4 to disjoint sets \(A,B,C\subset\mathrm{V}\). For example, we write
\[A\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid C\ [\mathcal{G}]\quad\iff\quad a\leftrightsquigarrow\!*\!\leftrightsquigarrow b\mid C\ [\mathcal{G}]\ \text{ for some }a\in A\text{ and }b\in B.\]
## 5 Iterative graph expansion
### Primary adjustment set
A _primary adjustment set_ is any adjustment set that blocks all the _confounding arcs_ '\(\leftrightsquigarrow\)', i.e., confounding paths with no colliders, between two vertices. Primary adjustment sets are the building blocks when we try to find sufficient adjustment sets by iteratively expanding the graph, a procedure that is introduced in the next subsection.
**Definition 9** (Primary adjustment set).: Given two distinct vertices \(A,B\) and a set \(S\) such that \(A,B\notin S\), an adjustment set \(C\) for \(A,B\) is called _primary relative to \(S\)_ if \(A\not\leftrightsquigarrow B\mid S\cup C\). When \(S=\emptyset\), we simply say that \(C\) is a primary adjustment set for \(A\) and \(B\). Further, \(C\) is called _minimal primary_ (relative to \(S\)) if none of its proper subsets is primary (relative to \(S\)).
By definition, if \(C\) is a (minimal) primary adjustment set for \(A\) and \(B\), it is also a (minimal) primary adjustment set for \(B\) and \(A\). Further, because the relation \(A\not\leftrightsquigarrow\!*\!\leftrightsquigarrow B\mid S\) implies the relation \(A\not\leftrightsquigarrow B\mid S\), any sufficient adjustment set must be primary, but a primary adjustment set need not be sufficient. For example, in the graph \(A\leftrightarrow D\leftrightarrow B\), the adjustment set \(\{D\}\) for \((A,B)\) is primary but not sufficient.
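This example can be verified mechanically with the sketches from the earlier sections (`ADMG`, `m_star_connected`, `is_sufficient`); the snippet below is ours and only illustrates the point.

```python
# The graph A <-> D <-> B: D is a collider on the only path between A and B.
g = ADMG("ADB", bidirected=[("A", "D"), ("D", "B")])

print(m_star_connected(g, "A", "B", set()))   # False: the empty set blocks the path
print(is_sufficient(g, "A", "B", set()))      # True: the empty set is sufficient
print(m_star_connected(g, "A", "B", {"D"}))   # True: conditioning on D opens the path
print(is_sufficient(g, "A", "B", {"D"}))      # False: {D} is primary but not sufficient
```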
**Lemma 4**.: _If \(C\) is a minimal primary adjustment set for \(A\) and \(B\) relative to \(S\), then \(C\cap S=\emptyset\) and \(C\subseteq\operatorname{an}(A)\cup\operatorname{an}(B)\)._
Proof.: Suppose \(A\not\leftrightsquigarrow B\mid S\cup\widetilde{C}\) for some \(\widetilde{C}\). Let \(C=\widetilde{C}\cap(\operatorname{an}(A)\cup\operatorname{an}(B))\setminus S\). It is easy to show that any confounding arc between \(A\) and \(B\) that is not m-connected given \(S\cup\widetilde{C}\) remains not m-connected given \(S\cup C\). Thus, \(A\not\leftrightsquigarrow B\mid S\cup C\). The result then follows from the definition of minimal primary adjustment set.
**Theorem 4**.: _Let \(\mathcal{G}\) be an ADMG over vertex set \(V\). Suppose \(S\subset V\) is a minimal sufficient adjustment set for distinct \(X,Y\in V\). Then, for any \(\widetilde{S}\subset S\), there exist distinct \(Z_{1},Z_{2}\in\widetilde{S}\cup\{X,Y\}\) such that_
\[Z_{1}\leftrightarrow Z_{2}\;[\mathcal{G}(\widetilde{S}\cup\{X,Y\})]\quad \text{and}\quad Z_{1}\not\leftrightarrow Z_{2}\;[\mathcal{G}(S\cup\{X,Y\})]. \tag{1}\]
_Moreover, there exists a non-empty minimal primary adjustment set \(C\subseteq S\setminus\widetilde{S}\) for \((Z_{1},Z_{2})\) relative to \(\widetilde{S}\setminus\{Z_{1},Z_{2}\}\)._
Before we prove Theorem 4, note that by Corollary 2, equation (1) is equivalent to
\[Z_{1}\leftrightsquigarrow Z_{2}\mid\widetilde{S}\cup\{X,Y\}\setminus\{Z_{1},Z_{2}\}\ [\mathcal{G}]\quad\text{and}\quad Z_{1}\not\leftrightsquigarrow Z_{2}\mid S\cup\{X,Y\}\setminus\{Z_{1},Z_{2}\}\ [\mathcal{G}].\]
Because \(Z_{1},Z_{2}\in\widetilde{S}\cup\{X,Y\}\) and \(\widetilde{S}\) is an adjustment set, we know \(Z_{1}\) and \(Z_{2}\) are not descendants of \(X\) or \(Y\). Therefore, we can safely remove \(\{X,Y\}\) from the conditioning sets and (1) is also equivalent to
\[Z_{1}\leftrightsquigarrow Z_{2}\mid\widetilde{S}\setminus\{Z_{1},Z_{2}\}\ [\mathcal{G}]\quad\text{and}\quad Z_{1}\not\leftrightsquigarrow Z_{2}\mid S\setminus\{Z_{1},Z_{2}\}\ [\mathcal{G}]. \tag{2}\]
To interpret the conclusions of this theorem, imagine trying to find a minimal sufficient adjustment set \(S\) by iteratively adding vertices. Suppose \(\widetilde{S}\subseteq S\) is our current estimate. As long as \(\widetilde{S}\) is not yet sufficient, we can find \(Z_{1},Z_{2}\in\widetilde{S}\cup\{X,Y\}\) such that there is a confounding arc between them that is not blocked by \(\widetilde{S}\). To block it, we can try to find a primary adjustment set \(C\) for \(Z_{1},Z_{2}\) relative to \(\widetilde{S}\) and add \(C\) to \(\widetilde{S}\) and make \(\widetilde{S}\cup C\) our next estimate. This theorem states that for at least one choice of \(C\), the new estimate \(\widetilde{S}\cup C\) will remain a subset of (and possibly the same as) \(S\). If \(\widetilde{S}\cup C\) is still not sufficient, we may iterate the process until we eventually obtain the set \(S\). In other words, Theorem 4 essentially shows that all minimal sufficient adjustment sets for \((X,Y)\) can be found by recursively expanding a bidirected edge with its minimal primary adjustment sets; this is the basis of Algorithm 1 below.
Proof of Theorem 4.: Because \(\widetilde{S}\) is a proper subset of a minimal sufficient adjustment set \(S\), \(\widetilde{S}\) is still an adjustment set but is not sufficient. By Corollary 2, we have
\[X\leftrightsquigarrow\!*\!\leftrightsquigarrow Y\mid\widetilde{S}\ [\mathcal{G}]\quad\implies\quad X\leftrightarrow\!*\!\leftrightarrow Y\ [\mathcal{G}(\widetilde{S}\cup\{X,Y\})].\]
Therefore, there exists a bidirected path
\[X=D_{0}\leftrightarrow\cdots\leftrightarrow D_{k}=Y,\quad k\geq 1\quad\text{in } \;\mathcal{G}(\widetilde{S}\cup\{X,Y\}).\]
We claim that there exists \(j\in\{0,\ldots,k-1\}\) such that \(D_{j}\not\leftrightarrow D_{j+1}\) in \(\mathcal{G}(S\cup\{X,Y\})\). Otherwise, the same bidirected path also appears in \(\mathcal{G}(S\cup\{X,Y\})\) (because \(\widetilde{S}\subset S\)), so by Corollary 2,
\[X\leftrightarrow\!*\!\leftrightarrow Y\ [\mathcal{G}(S\cup\{X,Y\})]\quad\implies\quad X\leftrightsquigarrow\!*\!\leftrightsquigarrow Y\mid S\ [\mathcal{G}],\]
contradicting the assumption that \(S\) is sufficient. By our choice, we have \(D_{j},D_{j+1}\in\widetilde{S}\cup\{X,Y\}\) and \(D_{j}\leftrightarrow D_{j+1}\) in \(\mathcal{G}(\widetilde{S}\cup\{X,Y\})\). Hence, we have shown the existence of \((D_{j},D_{j+1})\), rewritten as \((Z_{1},Z_{2})\), as desired.
For the second conclusion, observe that Eq. (2) holds, that is, \(S\setminus\{Z_{1},Z_{2}\}\) is primary for \((Z_{1},Z_{2})\) but \(\widetilde{S}\setminus\{Z_{1},Z_{2}\}\) is not. The existence of such a minimal primary adjustment set \(C\) then follows from the definition.
### Iterative graph expansion algorithm
We now introduce our procedure ConfounderSelect that is based on two sub-routines: SelectEdge and FindPrimary. See Algorithm 1 for the pseudo-code of an implementation using a priority queue \(\mathcal{Q}\) that admits a Pop and a Push operation; see also Appendix B for a recursive version of the algorithm that is shorter but less flexible. We leave the choices of the priority index and of the SelectEdge subroutine to the next subsection, as they do not affect the validity of the algorithm. In practice, it is through the FindPrimary subroutine that the information about the underlying graph is elicited from the user; we discuss its implementation in Section 5.4.
In this algorithm, all possible bidirected edges in the current graph \(\mathcal{G}(\bar{S})\), where \(\bar{S}:=S\cup\{X,Y\}\) and \(S\) is our working adjustment set, are classified into three kinds:
1. the set \(\mathcal{B}_{n}\) that contains all the bidirected edges that are absent or already blocked,
2. the set \(\mathcal{B}_{y}\) that contains all the bidirected edges that exist or are assumed to exist, which the algorithm will not attempt to block, and
3. the set \(\mathcal{B}_{u}:=(\bar{S}\times\bar{S})\setminus(\mathcal{B}_{n}\cup\mathcal{B }_{y})\) that contains all the uncertain bidirected edges that the algorithm will attempt to block.
In other words, the algorithm maintains \(\mathcal{B}_{u}\cup\mathcal{B}_{y}\) as a superset of the bidirected edges in \(\mathcal{G}(\bar{S})\). In each iteration, the subroutine SelectEdge\((X,Y,S,\mathcal{B}_{y},\mathcal{B}_{n})\) selects an uncertain bidirected edge \(\pi\) from \(\mathcal{B}_{u}\). The subroutine FindPrimary is then called to find the primary adjustment sets for the two end points of \(\pi\) relative to the current adjustment set. If \(\pi\) is already blocked by the current adjustment set (i.e., the empty set is returned as a primary adjustment set), then \(\pi\) is moved to \(\mathcal{B}_{n}\). Otherwise, the algorithm then attempts to move \(\pi\) to \(\mathcal{B}_{n}\) by expanding \(\pi\) with every primary adjustment set that is returned by the subroutine FindPrimary. Finally, it attempts to make no expansion by moving \(\pi\) to \(\mathcal{B}_{y}\). Because the vertex set \(\mathrm{V}\) is assumed to be finite, this algorithm will eventually terminate.
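Since the pseudo-code listing of Algorithm 1 is not reproduced here, the following Python sketch is our own reconstruction of the main loop from the description above (the paper's actual listing may differ): `find_primary` stands for the user-facing oracle and returns a collection of frozensets, a plain FIFO queue stands in for the priority queue of Section 5.3, and no deduplication of states is attempted.

```python
from collections import deque

def bidirected_connected(x, y, edges):
    """Is there a path from x to y using only the given bidirected edges?"""
    adj = {}
    for e in edges:
        a, b = tuple(e)
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        if u == y:
            return True
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def confounder_select(X, Y, find_primary, select_edge):
    results = []
    queue = deque([(frozenset(), frozenset(), frozenset())])  # (S, B_y, B_n)
    while queue:
        S, B_y, B_n = queue.popleft()
        Sbar = set(S) | {X, Y}
        pairs = {frozenset((a, b)) for a in Sbar for b in Sbar if a != b}
        B_u = pairs - B_y - B_n
        if not bidirected_connected(X, Y, B_y | B_u):
            results.append(set(S))             # district criterion satisfied
            continue
        if bidirected_connected(X, Y, B_y):
            continue                           # edges never leave B_y: prune
        edge = select_edge(X, Y, S, B_y, B_n, B_u)
        a, b = tuple(edge)
        primaries = find_primary(a, b, frozenset(S) - {a, b})
        if frozenset() in primaries:           # edge already blocked: move to B_n
            queue.append((S, B_y, B_n | {edge}))
            continue
        for P in primaries:                    # expand with every primary set
            queue.append((S | frozenset(P), B_y, B_n | {edge}))
        queue.append((S, B_y | {edge}, B_n))   # or concede the edge to B_y
    return results
```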
**Theorem 5** (Soundness and completeness of iterative graph expansion).: _Let \(\mathcal{G}\) be an ADMG over vertex set \(\mathrm{V}\). The following two statements hold for any distinct \(X,Y\in\mathrm{V}\) such that \(X\notin\mathrm{de}(Y)\)._
1. _Suppose for any_ \(A,B\in\mathrm{V}\) _and_ \(S^{\prime}\subseteq\mathrm{V}\setminus\{A,B\}\)_, when_ FindPrimary\((A,B;S^{\prime})\neq\emptyset\)_, every_ \(C\in\textsc{FindPrimary}(A,B;S^{\prime})\) _is a primary adjustment set for_ \(A,B\) _relative to_ \(S^{\prime}\) _in_ \(\mathcal{G}\)_. Then every element in the output of_ ConfounderSelect\((X,Y)\) _is a sufficient adjustment set for_ \(X,Y\)_._
2. _Suppose further that_ FindPrimary\((A,B;S^{\prime})\) _contains every minimal primary adjustment set for_ \(A,B\) _relative to_ \(S^{\prime}\) _in_ \(\mathcal{G}\)_. Then the output of_ ConfounderSelect\((X,Y)\) _contains all minimal sufficient adjustment sets for_ \(X,Y\)_._
Proof.: Statement 1 says that the graph expansion algorithm is sound. It follows from the district criterion (Corollary 2), as an adjustment set \(S\) is only added to the output \(\mathcal{R}\) in lines 8-11 of Algorithm 1 when \(X\) and \(Y\) are not connected by bidirected edges in \(\mathcal{B}_{y}\cup\mathcal{B}_{u}\) (and hence in \(\mathcal{G}(\bar{S})\)). Statement 2 says that the graph expansion algorithm is complete for identifying minimal sufficient adjustment sets. It directly follows from Theorem 4.
### Practical considerations
Algorithm 1 provides a sound and complete template for confounder selection, which relies on subroutines FindPrimary, SelectEdge and a priority index specified for the queue \(\mathcal{Q}\). In practice, instead of finding all the (minimal) sufficient adjustment sets, often the goal is to find _one_ such set _quickly_, i.e., with only a few attempts of graph expansion. This goal can be facilitated by choosing SelectEdge and the
priority index properly, as discussed below. We leave the implementation of FindPrimary to the next subsection.
To find a (minimal) sufficient adjustment set quickly, we recommend the following _min-cut_ strategy. Suppose the priority queue \(\mathcal{Q}\) is implemented such that (1) \(\textsc{Pop}(\mathcal{Q})\) returns an element with the lowest index, and (2) in case of a tie, returns the element that is last pushed to \(\mathcal{Q}\). We choose the priority index to be
\[\min\text{-cut}(S,\mathcal{B}_{y},\mathcal{B}_{n}):=\text{minimal number of edges removed from }(\bar{S}\times\bar{S})\setminus(\mathcal{B}_{y}\cup\mathcal{B}_{n})\text{ to disconnect }X\text{ and }Y\text{.}\]
In case of \(X\leftrightarrow*\leftrightarrow Y\) by edges in \(\mathcal{B}_{y}\), let \(\min\text{-cut}(S,\mathcal{B}_{y},\mathcal{B}_{n}):=\infty\). Because \(\min\text{-cut}=0\) whenever the district criterion is satisfied (line 9, Algorithm 1), this choice prioritizes those candidates needing the fewest number of expansions. Accordingly, we recommend a subroutine SelectEdge(\(X\), \(Y\), \(S\), \(\mathcal{B}_{y}\), \(\mathcal{B}_{n}\)) that returns an edge that lies _on_ the min-cut, with ties broken in a way that best suits the problem. This strategy is adopted by the examples in Section 6 and Appendix C, where the min-cut of the popped graph is marked in red.
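As a concrete illustration, the sketch below (ours) computes the min-cut index with `networkx`, modelling each candidate bidirected edge as a pair of opposite capacitated arcs; `bidirected_connected` is from the sketch in Section 5.2.

```python
import networkx as nx

def min_cut_index(X, Y, Sbar, B_y, B_n):
    """Fewest edges outside B_y ∪ B_n whose removal disconnects X and Y;
    infinite when edges in B_y alone already connect X and Y."""
    if bidirected_connected(X, Y, B_y):
        return float("inf")
    pairs = {frozenset((a, b)) for a in Sbar for b in Sbar if a != b}
    g = nx.DiGraph()
    g.add_nodes_from(Sbar)
    for e in pairs - B_n:
        a, b = tuple(e)
        # B_y edges cannot be cut; give them a capacity exceeding any all-unit
        # cut so that a minimum cut never uses them.
        cap = len(pairs) + 1 if e in B_y else 1
        g.add_edge(a, b, capacity=cap)
        g.add_edge(b, a, capacity=cap)
    return nx.minimum_cut_value(g, X, Y)
```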
### Finding primary adjustment sets
Given two vertices \(A,B\) and a set \(S\) such that \(A,B\notin S\), recall that an adjustment set \(S^{\prime}\) for \(A,B\) is primary relative to \(S\) if \(A\not\leftrightsquigarrow B\mid S\cup S^{\prime}\). By Corollary 2, this condition is equivalent to the absence of a bidirected edge between \(A\) and \(B\) in the latent projection \(\mathcal{G}(\{A,B\}\cup S\cup S^{\prime})\). In practice, the subroutine FindPrimary therefore elicits from the user (minimal) sets of covariates -- common causes of \(A\) and \(B\), possibly together with mediators along the corresponding arcs -- whose inclusion blocks all the confounding arcs between \(A\) and \(B\) that are not already blocked by \(S\).
## 6 Examples

We trace a run of ConfounderSelect\((X,Y)\) on an example graph; the queue snapshots are graphical and not reproduced here, with the min-cut of the popped candidate marked in red in the original figures.

1. We start with \(\mathcal{Q}\) holding the initial candidate, in which \(X\) and \(Y\) are joined by a single uncertain bidirected edge, and \(\mathcal{R}=\{\}\).
2. We have
\[\textsc{FindPrimary}(X,Y;\emptyset,\mathtt{True})=\{\{F\},\{T,G\},\{N,G\},\{O,G\},\{N,D\},\{N,W\},\{N,E\}\}\]
and the queue is expanded with one candidate per primary set, \(\mathcal{R}=\{\}\).
3. We have \(\textsc{FindPrimary}((X,F);\emptyset,\mathtt{True})=\{\{O\},\{T\}\}\) and the queue is expanded accordingly, \(\mathcal{R}=\{\}\).
4. Note that \(O\not\leftrightsquigarrow\cdots\) (the remainder of this step consists of queue snapshots, not reproduced here).
6. Noting that \(X\not\leftrightsquigarrow\cdots\) (queue snapshots not reproduced here).
## 7 Discussion

We conclude with several remarks. First, the procedure never asks the user about the presence or absence of directed edges between the variables in the working graph. Second, by construction our method guards against the so-called collider bias or M-bias (Greenland et al., 1999). More broadly, the district criterion can be employed to identify such bias for any given adjustment set. Finally, we would like to stress again that confounder selection is crucial for both the design and the analysis of an observational study. As pointed out by Rubin (2008), "for objective causal inference, design trumps analysis". Building on the success of causal graphs to represent structural assumptions, more work on using causal graphs to aid the design of observational studies is much needed.
|
2309.15749 | Phases of Pseudo-Nambu-Goldstone Bosons | We study the vacuum dynamics of pseudo-Nambu-Goldstone bosons (pNGBs) for $SO(N+1) \rightarrow SO(N)$ spontaneous and explicit symmetry breaking. We determine the magnitude of explicit symmetry breaking consistent with an EFT description of the effective potential at zero and finite temperatures. We expose and clarify novel additional vacuum transitions that can arise for generic pNGBs below the initial scale of $SO(N+1) \rightarrow SO(N)$ spontaneous symmetry breaking, which may have phenomenological relevance. In this respect, two phenomenological scenarios are analyzed: thermal and supercooled dark sector pNGBs. In the thermal scenario the vacuum transition is first-order but very weak. For a supercooled dark sector we find that, depending on the sign of the explicit symmetry breaking, one can have a symmetry-restoring vacuum transition $SO(N-1) \rightarrow SO(N)$ which can be strongly first-order, with a detectable stochastic gravitational wave background signal. | Fotis Koutroulis, Matthew McCullough, Marco Merchand, Stefan Pokorski, Kazuki Sakurai | 2023-09-27T16:06:12Z | http://arxiv.org/abs/2309.15749v1 |

# Phases of Pseudo-Nambu-Goldstone Bosons
###### Abstract
We study the vacuum dynamics of pseudo-Nambu-Goldstone bosons (pNGBs) for \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) spontaneous and explicit symmetry breaking. We determine the magnitude of explicit symmetry breaking consistent with an EFT description of the effective potential at zero and finite temperatures. We expose and clarify novel additional vacuum transitions that can arise for generic pNGBs below the initial scale of \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) spontaneous symmetry breaking, which may have phenomenological relevance. In this respect, two phenomenological scenarios are analyzed: thermal and supercooled dark sector pNGBs. In the thermal scenario the vacuum transition is first-order but very weak. For a supercooled dark sector we find that, depending on the sign of the explicit symmetry breaking, one can have a symmetry-restoring vacuum transition \(\mathrm{SO}(N-1)\to\mathrm{SO}(N)\) which can be strongly first-order, with a detectable stochastic gravitational wave background signal.
## 1 Introduction
pNGBs [1; 2] arise throughout nature, as phonons, magnons and pions, and in a broad range of theoretical scenarios. Their abundance is no surprise: it is a theorem that NGBs arise whenever a continuous global symmetry is spontaneously broken [2]. Furthermore, it is widely believed that there can be no exact continuous global symmetries in nature (more precisely, in gravitational theories [3; 4; 5; 6; 7]), in which case any NGB will, in reality, be a pNGB. Thus, while the effective field theory (EFT) description of the low-energy behaviour of exact NGBs is an interesting object for theoretical study, it is likely that in nature the physics below the scale of spontaneous symmetry breaking is dominated by the scalar potential generated for pNGBs, since it contains the most relevant operators.
Since the structure of the pNGB potential determines the vacuum dynamics it is well-motivated to map the connections between explicit symmetry breaking sources in a UV theory and the vacuum structure and dynamics in the IR, since this aspect is physically relevant for pNGBs that are realised in nature. Once this map is firmly established one can then determine and/or classify the plausible phases of pNGB vacua and their dynamics.
Ref. [8] established the first part of this programme for an \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) spontaneous and explicit symmetry breaking pattern. The fundamental building blocks of explicit symmetry breaking were found to be the irrep spurions of \(\mathrm{SO}(N+1)\) which preserve an \(\mathrm{SO}(N)\) subgroup. Each such spurion gives rise, in the IR, to a unique Gegenbauer scalar potential which is an eigenfunction of the Laplacian on the \(N\)-sphere. Any general pNGB potential for \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) can thus be decomposed as a sum of Gegenbauer polynomials. Note that this is strongly analogous to the solution of the Hydrogen wavefunction
in quantum mechanics. The angular momentum \(|j,0\rangle\) eigenstates correspond to a non-zero expectation value for the spin-\(j\) irrep of \(\mathrm{SO}(3)\) which gives rise to the \(j^{\mathrm{th}}\) Legendre polynomial, which is simply an \(\mathrm{SO}(3)\to\mathrm{SO}(2)\) Gegenbauer polynomial. Any wavefunction which is a superposition of angular momentum eigenstates may be written as a sum of Legendre polynomials. Thus what we are familiar with for angular momentum in Hydrogen maps to the pNGBs of \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) breaking, where the spatial rotation global symmetry becomes an internal global symmetry.
With this organisation of pNGB potentials complete the next logical step, which is to understand the vacuum dynamics, is the focus of this work. Throughout we are concerned with the same \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) spontaneous and explicit symmetry breaking pattern. We focus for the most part, as a benchmark, on a single Gegenbauer pNGB potential, in the understanding that the lessons learned will map, in a straightforward way, into a sum of Gegenbauer potentials for any form of pNGB potential.
We begin by ascertaining the conditions under which the EFT description of the potential is valid, both at zero and finite temperature (specifically in the region of an interesting vacuum transition). This effectively places a quantitative constraint on the magnitude of the explicit symmetry breaking tolerable. Violation of this constraint implies a potential for which one does not have a controlled series expansion in the explicit symmetry breaking, whether at tree-level or at higher loop orders.
Subject to this constraint we then explore the vacuum dynamics for pNGBs, which we find to be rich and varied. It should be noted that throughout there is explicit \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) breaking thus, in terms of exact global symmetries, there is no formal phase transition, since only \(\mathrm{SO}(N)\) is an exact symmetry of the Lagrangian. However, since this explicit symmetry breaking is small, one does have a sense in which the fields, which play the role of order parameters, undergo vacuum transitions.
In this work we find that below the scale of spontaneous \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) breaking, which is driven by the development of a non-zero value for the \(\mathrm{SO}(N+1)\) radial mode, there are generically additional pNGB vacuum transitions. There is an additional critical temperature at which the pNGBs themselves develop a vacuum expectation value, triggering a further stage of spontaneous \(\mathrm{SO}(N)\to\mathrm{SO}(N-1)\) breaking. This breaking is due to the explicit symmetry breaking, but the change in order parameter is independent of the magnitude of the explicit symmetry breaking. The reverse can also occur, with a pattern of \(\mathrm{SO}(N+1)\to\mathrm{SO}(N-1)\) breaking followed by a further stage of \(\mathrm{SO}(N-1)\to\mathrm{SO}(N)\) symmetry restoration at lower temperatures.
It remains to determine the nature of these pNGB vacuum transitions. There are two classes to consider, namely thermal and supercooled. In the thermal case we find that the transition is generically weakly first-order. On the other hand, when the pNGB sector is supercooled we find that the vacuum transition, leading to symmetry restoration, can be strong enough to generate detectable GW signatures. We finish with conclusions and future speculations.
## 2 pNGB Potential Regime of Validity
We consider an EFT containing the pNGBs \(\psi\) arising from the spontaneous breaking of an approximate global symmetry at the scale \(f\). We define the action at zero temperature as
\[\mathcal{L}=\tfrac{1}{2}g_{ij}(\psi)\partial_{\mu}\psi^{i}\partial^{\mu}\psi^{j}+ \mathcal{O}(\partial^{4})-\varepsilon V_{\varepsilon}(\psi)-\varepsilon^{2}V_ {\varepsilon^{2}}(\psi)-\mathcal{O}(\varepsilon^{3})+...+\mathcal{L}^{\rm CT} \ \, \tag{1}\]
where we have Taylor expanded in derivatives and in \(\varepsilon\), which is, by assumption for pNGBs, a small parameter associated with a source of explicit symmetry breaking. \(\mathcal{L}^{\rm CT}\) represents the counterterms required for renormalisation.
Before commencing with any concrete calculations some considerations are in order concerning the validity of this EFT. To be effective, it must be valid for some range of energies and field scales. For the former, scattering amplitudes involving derivatives will scale as \((p^{2}/M^{2})^{j}\), where \(j\) is some integer and \(M\) is the cutoff energy of the EFT, often associated with the mass of the radial mode of spontaneous symmetry breaking or some other UV scale such as the mass scale of intermediate vector resonances. In any case, the EFT description breaks down, by assumption, whenever \(|p^{2}|\sim M^{2}\).
Equally important is the parameter \(\varepsilon\). In order to be considered pNGBs there must be some range of field values over which there is some sensible notion of perturbative calculability within the EFT and of a scale separation with the UV. For pNGBs the field range is periodic in the spontaneous symmetry-breaking scale \(\sim 2\pi f\). Due to this periodicity we will require that the EFT description is valid and affords a degree of perturbative calculability over all pNGB field values.
To determine the potential limits on the magnitude of \(\varepsilon\) it is helpful to consider the case of pions. Were the quark masses to be comparable to the QCD scale, or the QED gauge coupling to be \(e\sim 4\pi\) in the vicinity of the QCD scale, there would be no sense in which one would have had light pions at all, as they would naturally have mass at the QCD scale. Following this, it is tempting to diagnose EFT validity using the pNGB masses. However, mass-scale separation alone seems insufficient. For instance, in a scenario with two large sources of explicit symmetry breaking \(\varepsilon_{1},\varepsilon_{2}\sim 1\) one could in principle fine-tune their independent contributions to a pNGB potential to give a small mass-squared in the global vacuum, generating a scale separation \(m_{\psi}^{2}\ll M^{2}\). However, one would have no control over perturbative corrections to the form of the pNGB potential, either at tree-level at the matching scale or in the IR at higher loops, due to the underlying magnitude of explicit symmetry breaking. We must therefore be more pragmatic in determining the requirement on \(\varepsilon\) for the EFT description to be valid. The condition cannot simply be that \(m_{\psi}^{2}\ll M^{2}\), which is seemingly necessary but not sufficient. Therefore we opt for the imprecise, but practical, condition that the pNGB potential at \(\mathcal{O}(\varepsilon)\) must be a good approximation to the full potential with all quantum corrections included. In other words, while \(\mathcal{O}(\varepsilon^{2})\) and higher terms will exist, they must not qualitatively alter the form of the pNGB potential.
The one-loop Coleman-Weinberg potential provides a useful diagnostic in this respect. For pNGBs this is given by [8; 9; 10]
\[V^{\rm CW}=\frac{1}{2}{\rm Tr}\int\frac{d^{4}p}{(2\pi)^{4}}\log\left[p^{2}+ \varepsilon g^{-1}\left(\frac{\delta^{2}V_{\varepsilon}}{\delta\psi^{2}}- \frac{\delta V_{\varepsilon}}{\delta\psi}\,\Gamma\right)\right]\ \, \tag{2}\]
where \(\Gamma\) are the Christoffel symbols. The field-dependent curvature (or mass-squared) entering this expression is
\[\mathcal{M}_{\varepsilon}^{2}(\psi)=\varepsilon g^{-1}\left(\frac{\delta^{2}V_{ \varepsilon}}{\delta\psi^{2}}-\frac{\delta V_{\varepsilon}}{\delta\psi}\, \Gamma\right)\ \, \tag{3}\]
whose trace is simply the Laplace-Beltrami operator acting on the space spanned by the pNGBs. Notably, this depends on the geometry of the manifold on which the pNGBs live. In all of our applications we will be interested in the scenarios in which the spontaneous symmetry breaking pattern is
\[\frac{\text{SO}(N+1)}{\text{SO}(N)}\cong\mathcal{S}^{N}\ \, \tag{4}\]
which we recall consists of the set of points a fixed distance from the origin in \(\mathbb{R}^{N+1}\). For the sake of illustration, we focus on scenarios in which the explicit symmetry breaking follows the same pattern, preserving the \(\text{SO}(N)\) subgroup. As a result, we may parameterise the \(N\) Goldstone bosons on this manifold through the unit vector living in \(\mathbb{R}^{N+1}\) as
\[\boldsymbol{\phi}=f\sin\frac{\Pi}{f}\begin{pmatrix}n_{1}\\ n_{2}\\ \vdots\\ n_{N}\\ \cot\frac{\Pi}{f}\end{pmatrix}\, \tag{5}\]
where \(\mathbf{n}\cdot\mathbf{n}=1\). Thus, in this picture, \(\Pi/f\) essentially corresponds to the angle between the Goldstone boson direction and a given arbitrarily chosen axis in \(\mathbb{R}^{N+1}\).
In these coordinates we have that the relevant mass-squared matrix is
\[\mathcal{M}_{\varepsilon}^{2}(\Pi)=\varepsilon\begin{pmatrix}\frac{\cot(\frac {\Pi}{f})}{f}V_{\varepsilon}^{\prime}\mathbb{1}_{N-1}&\mathbb{0}\\ \mathbb{0}&V_{\varepsilon}^{\prime\prime}\end{pmatrix}. \tag{6}\]
where
\[V_{\varepsilon}^{\prime}\equiv\frac{\partial V_{\varepsilon}}{\partial\Pi} \qquad\text{and}\qquad V_{\varepsilon}^{\prime\prime}\equiv\frac{\partial^{2 }V_{\varepsilon}}{\partial\Pi^{2}}\ . \tag{7}\]
Thus, considering the traces of products of this matrix which will arise in perturbative calculations, it suffices to consider the Laplace-Beltrami operator
\[\Delta_{\mathcal{S}^{N}}V_{\varepsilon}=V_{\varepsilon}^{\prime\prime}+(N-1) \cot\frac{\Pi}{f}\frac{V_{\varepsilon}^{\prime}}{f}\ . \tag{8}\]
As a result, truncating the momentum integral at the UV-cutoff, the zero-temperature effective potential at one-loop is
\[V = V^{(0)}+V^{\rm CW}+V^{\rm CT}\] \[= \varepsilon\left[V_{\varepsilon}+\frac{M^{2}}{32\pi^{2}}\Delta_{{\cal S}^{N}}V_{\varepsilon}+V_{\varepsilon}^{\rm CT}\right]+\] \[\varepsilon^{2}\bigg{[}V_{\varepsilon^{2}}+\frac{1}{64\pi^{2}}\bigg{\{}\left(V_{\varepsilon}^{\prime\prime}\right)^{2}\bigg{(}\log\left(\frac{\varepsilon}{M^{2}}V_{\varepsilon}^{\prime\prime}\right)-\frac{1}{2}\bigg{)}\] \[+(N-1)\left(\frac{\cot\frac{\Pi}{f}}{f}V_{\varepsilon}^{\prime}\right)^{2}\bigg{(}\log\left(\frac{\varepsilon}{M^{2}}\frac{\cot\frac{\Pi}{f}}{f}V_{\varepsilon}^{\prime}\right)-\frac{1}{2}\bigg{)}\bigg{\}}\] \[+\frac{M^{2}}{32\pi^{2}}\Delta_{{\cal S}^{N}}V_{\varepsilon^{2}}+V_{\varepsilon^{2}}^{\rm CT}\bigg{]}+{\cal O}(\varepsilon^{3})+... \tag{9}\]
Here the terms denoted \(V^{\rm CT}\) represent the counterterms required to renormalise the pNGB potential and \(V^{(0)}\) is the tree-level scalar potential. Thus we see that if \(\Delta_{{\cal S}^{N}}V_{\varepsilon}\) has a very different functional form to \(V_{\varepsilon}\), the counterterm potential cannot be similar in form to \(V_{\varepsilon}\), implying some level of fine-tuning between UV/threshold corrections, which must exist, and the bare potential in order to realise the form of \(V_{\varepsilon}\). If, however, they are of a similar functional form then the \({\cal O}(\varepsilon)\) corrections will not destabilise the pNGB potential at that order. We will return to this possibility in due course.
More immediately relevant is that the \({\cal O}(\varepsilon^{2})\) effective potential corrections may significantly modify the qualitative nature of the potential. This would signify the breakdown of the effective description of the pNGB potential. Thus we will only work with EFTs for the pNGBs in which \(\varepsilon\) is sufficiently small that the physics of the zero-temperature potential is well described at leading order in \(\varepsilon\), hence
\[V\approx\varepsilon\left(V_{\varepsilon}+\frac{M^{2}}{32\pi^{2}}\Delta_{{\cal S }^{N}}V_{\varepsilon}+V_{\varepsilon}^{\rm CT}\right)\ \, \tag{10}\]
is a reasonable approximation to the pNGB potential at zero temperature. This can only be diagnosed on a case-by-case basis, and so we leave further discussion of this aspect until a specific model has been chosen.
Now moving to finite temperature and following by analogy with the Coleman-Weinberg potential, under the same set of assumptions, the full finite-temperature potential at one-loop is, to a leading approximation,
\[V(T)=V^{(0)}+V^{\rm CW}+V^{\rm CT}+V^{\rm T}\ \, \tag{11}\]
where [11]
\[V^{\rm T} = \frac{T^{4}}{2\pi^{2}}{\rm Tr}J_{B}\left(\frac{{\cal M}^{2}(\Pi)} {T^{2}}\right)\ \, \tag{12}\] \[= \frac{T^{4}}{2\pi^{2}}\left(J_{B}\left(\frac{\varepsilon V_{ \varepsilon}^{\prime\prime}}{T^{2}}\right)+(N-1)J_{B}\left(\frac{\varepsilon \cot\left(\frac{\Pi}{f}\right)V_{\varepsilon}^{\prime}}{fT^{2}}\right)\right), \tag{13}\]
and the function \(J_{B}\) is
\[J_{B}(x)=\int_{0}^{\infty}dy\,y^{2}\log\left(1-e^{-\sqrt{y^{2}+x}}\right)\ . \tag{14}\]
Since we now have a new energy scale in the theory, \(T\), we ought to reconsider the conditions under which one has an appropriate description of the physics. For \(T\to 0\) we have that \(V^{\rm T}\to 0\), as expected, thus at very low temperatures we may simply use the zero-temperature effective potential already described.
At high temperatures we may also perform an expansion, in which case
\[V^{\rm T}\approx-N\frac{\pi^{2}}{90}T^{4}+\varepsilon\frac{T^{2}}{24}\Delta_{ \mathcal{S}^{N}}V_{\varepsilon}-\frac{T}{12\pi}\left(\varepsilon V_{ \varepsilon}^{\prime\prime}\right)^{3/2}-(N-1)\frac{T}{12\pi}\left( \varepsilon\cot\frac{\Pi}{f}\frac{V_{\varepsilon}^{\prime}}{f}\right)^{3/2}+...\ . \tag{15}\]
The validity of this expansion rests on two separate aspects. The first is that the high-temperature expansion should be convergent, hence when the system lies at high enough temperatures we require that the physics is, to a good approximation, described by the second term alone, with the third remaining a subleading correction. The second aspect concerns the non-analyticity of the \(J_{B}\) function, and hence of the third term of eq. (15). This non-analyticity generates imaginary terms in the effective potential in regions where \(\partial^{2}V^{(0)}(\Pi)/\partial\Pi^{2}<0\). Since the effective potential is, by definition, a real scalar quantity this signals a breakdown in the effective description of the physics.
Without committing to a specific model in which one can calculate the magnitude of the various terms this is as far as we may proceed, thus we now commit to a specific class of scenarios.
## 3 Gegenbauer Goldstones
Experience with many physical systems, including electrostatics and thermodynamics, suggests that when one encounters the Laplacian the natural functions to work with are the eigenfunctions, satisfying an equation of the form \(\Delta_{\mathcal{S}^{N}}V_{\varepsilon}(\Pi)\propto V_{\varepsilon}(\Pi)\). This is an eigenfunction problem and the solutions which are analytic in \(\Pi\) are the well-known Gegenbauer polynomials [8]
\[\Delta_{\mathcal{S}^{N}}G_{n}^{\frac{N-1}{2}}(\cos\Pi/f)=-\frac{n(n+N-1)}{f^{2}}G_{n}^{\frac{N-1}{2}}(\cos\Pi/f)\ \, \tag{3.1}\]
where the eigenvalues and eigenfunctions are characterised by the two integers, \(N\geq 1\) and \(n\geq 0\). In the application to the pNGB potential, these integers are related to the explicit symmetry breaking pattern \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) realised by a symmetry-breaking spurion in the \(n\)-index symmetric irrep of \(\mathrm{SO}(N+1)\)[8].
Motivated by this we will thus consider a zero-temperature pNGB potential of the form
\[V(\Pi,0) \approx \varepsilon_{n}V_{\varepsilon_{n}}+\mathcal{O}(\varepsilon^{2}) \tag{3.2}\] \[\approx \varepsilon_{n}f^{2}M^{2}G_{n}^{\frac{N-1}{2}}(\cos\Pi/f)+\mathcal{O}(\varepsilon^{2})+...\ \.\]
where we note that from now on \(\varepsilon\) carries the subscript \(n\) to distinguish the above choice from the general pNGB case of eq. (10). No summation over the index \(n\) is implied. The typical shape of the Gegenbauer potential at zero temperature (\(T=0\)) is shown in the left (\(\varepsilon_{n}<0\)) and right (\(\varepsilon_{n}>0\)) panels of Fig. 1. Note that for positive \(\varepsilon_{n}\) the global minimum is at a scale \(\langle\Pi\rangle\sim 5.1f/n\)[8], whereas for negative \(\varepsilon_{n}\) the global minimum is at the origin. Importantly, this potential is radiatively stable, since at leading order in this spurion only this term can arise irrespective of the UV physics. Since any general potential may be constructed from a linear sum of Gegenbauer polynomials, the lessons learnt from studying the single polynomial case will, in generic cases, extend to more general pNGB potentials that can arise for the \(\mathrm{SO}(N+1)\to\mathrm{SO}(N)\) case.
### 3.1 pNGB Potentials at Zero Temperature
With this model we may now return to our general requirement of eq. (10). We consider the zero-temperature potential at one-loop
\[V^{(1)}(\Pi,0) \approx \varepsilon_{n}\left[\left(1-\frac{n(n+(N-1))M^{2}}{32\pi^{2}f^{2}}\right)V_{\varepsilon_{n}}+V_{\varepsilon_{n}}^{\mathrm{CT}}\right]\] \[+\varepsilon_{n}^{2}\left[V_{\varepsilon_{n}^{2}}+V_{\varepsilon_{n}^{2}}^{\mathrm{CT}}-\frac{1}{128\pi^{2}}\left(\left(V_{\varepsilon_{n}}^{\prime\prime}(\Pi)\right)^{2}+(N-1)\frac{\cot^{2}\frac{\Pi}{f}}{f^{2}}\Big{(}V_{\varepsilon_{n}}^{\prime}(\Pi)\Big{)}^{2}\right)+...\right]\, \tag{3.3}\]
where the ellipses denote the logarithmic terms. We see that at \(\mathcal{O}(\varepsilon_{n})\) the quadratic divergence may be absorbed into a counterterm of the same functional form as the initial potential, reflecting the radiative stability of this potential. However, we also see that, regardless of the form of the potential at \(\mathcal{O}(\varepsilon_{n}^{2})\), there are calculable terms proportional to \(\varepsilon_{n}^{2}\). In order for the EFT to be valid it is necessary that these terms are subdominant to the leading one.
Figure 1: _A cartoon picture showing the functional form of the Gegenbauer thermal effective potential given by eq. (3.7), for the temperature asymptotics \(T=0\) and \(T\gg 0\), when the symmetry-breaking parameter, \(\varepsilon_{n}\), is either positive or negative. The high-temperature limit terminates below the radial mode mass \(M\), otherwise the original, approximate, symmetry is restored and the effective description of the model in terms of pNGBs is lost. Cooling down will lead to \(SO(N)\) symmetry restoration or breaking depending on the sign of \(\varepsilon_{n}\)._
To establish the maximal permitted value of \(\varepsilon_{n}\) we now focus our discussion around the origin of field space, since this is the point at which the second derivative of the potential is maximal in magnitude. The Gegenbauer potential and its derivatives scale there as
\[V_{\varepsilon_{n}}(0) = f^{2}M^{2}\frac{(n+N-2)!}{n!(N-2)!}\,\] \[V_{\varepsilon_{n}}^{\prime\prime}(\Pi)\Big{|}_{\Pi=0} = \cot\frac{\Pi}{f}\left.\frac{V_{\varepsilon_{n}}^{\prime}(\Pi)}{f}\right|_{\Pi=0}=-M^{2}(N-1)\frac{(n+N-1)!}{(n-1)!N!}\ . \tag{3.4}\]
Thus we find the condition
\[\frac{\varepsilon_{n}^{2}}{128\pi^{2}}N\Big{(}V_{\varepsilon_{n}}^{\prime\prime}(0)\Big{)}^{2}\ll\varepsilon_{n}V_{\varepsilon_{n}}(0)\, \tag{3.5}\]
which, under eq. (3.4), is reduced to
\[|\varepsilon_{n}|\ll 128\pi^{2}\,\frac{f^{2}}{M^{2}}\frac{N!(n-1)!}{(n+N-1)!n(N-1)(n+N-1)}\equiv\varepsilon_{n,\max}^{0}\, \tag{3.6}\]
as a necessary condition for the EFT expansion to be valid at zero temperature; the superscript \(0\) in \(\varepsilon_{n,\max}^{0}\) refers to the zero-temperature case.
### 3.2 pNGB Potentials at Finite Temperature
After renormalisation, for this class of potentials the high (enough) temperature form is approximately
\[V(\Pi,T)\approx\varepsilon_{n}f^{2}M^{2}\left(1-\frac{n(n+N-1)}{24}\frac{T^{2}}{f^{2}}\right)G_{n}^{\frac{N-1}{2}}(\cos\Pi/f)+\mathcal{O}(\varepsilon^{2})+...\ . \tag{3.7}\]
Thus, for temperatures satisfying
\[T^{2}\gtrsim T_{F}^{2}=\frac{24}{n(n+N-1)}f^{2}\ \, \tag{3.8}\]
where we refer to \(T_{F}\) as the "Flipping Temperature", the overall sign of the scalar potential has changed, indicating a transition in the position of the global minimum relative to the zero-temperature potential, see Fig. 1. The functional form of the scalar potential remains unchanged up to the overall factor. We must, however, determine whether we may trust the EFT expansion at this temperature by checking the magnitude of the next term in the finite-temperature expansion.
We proceed as for the zero-temperature case, but now using the thermal potential in eq. (15). The effective potential becomes
\[V(\Pi,T) \approx -N\frac{\pi^{2}T^{4}}{90}+\varepsilon_{n}\Big{[}1-\frac{n(n+N-1)T^{2}}{24f^{2}}\Big{]}V_{\varepsilon_{n}}(\Pi) \tag{3.9}\] \[- \frac{T\left(\varepsilon_{n}V_{\varepsilon_{n}}^{\prime\prime}(\Pi)\right)^{\frac{3}{2}}}{12\pi}-(N-1)\frac{T\left(\cot\frac{\Pi}{f}\,\varepsilon_{n}V_{\varepsilon_{n}}^{\prime}(\Pi)\right)^{\frac{3}{2}}}{12\pi f^{3/2}}+\mathcal{O}(\varepsilon_{n}{}^{2})\ \.\]
Focusing around the origin of the field space and noting that the second derivative of the Gegenbauer polynomial is negative there, the relevant constraint reads
\[\Big{|}\frac{T^{2}}{24}\varepsilon_{n}\Delta_{\mathcal{S}^{N}}V_{\varepsilon_{n} }(0)\Big{|}\gg\Big{|}N\frac{T}{12\pi}\left(\varepsilon_{n}V^{\prime\prime}_{ \varepsilon_{n}}(0)\right)^{3/2}\Big{|}. \tag{3.10}\]
This is a necessary condition for the validity of the EFT expansion at a given temperature. For \(T\approx T_{F}\) we get
\[|\varepsilon_{n}|\ll 6\pi^{2}\frac{f^{2}}{M^{2}}\frac{N!(n-1)!}{(n+N-1)!n(N-1) (n+N-1)}\equiv\varepsilon_{n,\max}^{T_{F}}\ . \tag{3.11}\]
This is a stronger bound than at zero temperature, since
\[\varepsilon_{n,\max}^{T_{F}}=\frac{3}{64}\varepsilon_{n,\max}^{0}\ . \tag{3.12}\]
The condition eq. (3.10) is necessary for validity at any temperature but not sufficient. A stronger bound is obtained for \(T=T_{\rm Crit}\), the 'Critical Temperature', at which the vacuum transition is initiated. In general \(T_{\rm Crit}>T_{F}\), with the former defined as the temperature where the potential energy of the two relevant phases becomes degenerate (or the two phases have equal free energy density)
\[V(0,T_{\rm Crit})=V(\left\langle\Pi\right\rangle,T_{\rm Crit})\, \tag{3.13}\]
where \(\left\langle\Pi\right\rangle\) is the pNGB value at the degenerate vacuum. From Fig. 1 note that no matter which cooling-down picture we consider, the potential admits one global minimum around the field-space origin justifying our choice of \(V(0,T_{\rm Crit})\) as the free energy of one of the degenerate phases.
Using the effective potential of eq. (3.9), assuming for now \(\varepsilon_{n}>0\), the above equality gives
\[T_{\rm Crit}^{2}+[B_{\varepsilon}T_{F}^{2}]\frac{T_{\rm Crit}}{f}-T_{F}^{2}= 0\ \, \tag{3.14}\]
with the solution
\[T_{\rm Crit}=\frac{1}{2}\Bigg{[}-B_{\varepsilon}+\sqrt{\frac{4f^{2}}{T_{F}^{ 2}}+B_{\varepsilon}^{2}}\Bigg{]}\frac{T_{F}^{2}}{f}\ . \tag{3.15}\]
\(B_{\varepsilon}\) is a dimensionless parameter defined as
\[B_{\varepsilon} = \frac{f\Delta V_{\varepsilon,3/2}}{12\pi\Delta V_{\varepsilon}} \approx\frac{f}{T_{F}}\Bigg{\{}\frac{T_{F}N\big{(}\varepsilon_{n}V^{\prime \prime}_{\varepsilon_{n}}(0)\big{)}^{\frac{3}{2}}}{12\pi\,\varepsilon_{n}V_{ \varepsilon_{n}}(0)}\Bigg{\}}-\frac{f\big{(}\varepsilon_{n}V^{\prime\prime}_{ \varepsilon_{n}}(\left\langle\Pi\right\rangle)\big{)}^{\frac{3}{2}}}{12\pi \varepsilon_{n}V_{\varepsilon_{n}}(0)} \tag{3.16}\]
where we have defined
\[\Delta V_{\varepsilon,3/2}=N\big{(}\varepsilon_{n}V^{\prime\prime}_{ \varepsilon_{n}}(0)\big{)}^{\frac{3}{2}}-\big{(}\varepsilon_{n}V^{\prime \prime}_{\varepsilon_{n}}(\left\langle\Pi\right\rangle)\big{)}^{\frac{3}{2}}\ \, \tag{3.17}\]
and
\[\Delta V_{\varepsilon}=\varepsilon_{n}V_{\varepsilon_{n}}(0)\Bigg{(}1-\frac{ V_{\varepsilon_{n}}(\left\langle\Pi\right\rangle)}{V_{\varepsilon_{n}}(0)} \Bigg{)}\approx\varepsilon_{n}V_{\varepsilon_{n}}(0)>0\ . \tag{3.18}\]
The notion of \(T_{\rm Crit}\) and the validity of the EFT break down if \(B_{\varepsilon}\) has a large imaginary part. Note that the term included in \(\{\cdots\}\) above, which is purely imaginary, has been used in eq. (3.10) to derive the \(\varepsilon\) bound of eq. (3.11). However, that bound is not sufficient to render the left hand side of eq. (3.13) (and as a consequence \(B_{\varepsilon}\)) real to a good approximation. It is found that only for an \(|\varepsilon_{n}|\) which is at least \({\cal O}(10^{-2})\) smaller than \(\varepsilon_{n,\rm max}^{T_{F}}\) can the \(\frac{f}{T_{F}}\{\cdots\}\) term safely be neglected from \(B_{\varepsilon}\), which then becomes
\[B_{\varepsilon}\approx-\frac{f\big{(}\varepsilon_{n}V_{\varepsilon}^{\prime \prime}(\langle\Pi\rangle)\big{)}^{\frac{3}{2}}}{12\pi\varepsilon_{n}V_{ \varepsilon_{n}}(0)}<0\quad\text{and}\quad|B_{\varepsilon}|\ll 1\ \, \tag{19}\]
and is real, so we can safely evaluate the critical temperature. This stronger bound is used in this paper as the sufficient condition for the validity of the EFT over the whole relevant range of temperatures. Under that condition we obtain that \(T_{\rm Crit}\gtrsim T_{F}\) within a few percent. The two temperatures are sometimes identified in our qualitative discussion but kept distinct in the numerical calculations.
To summarise, we see that for this class of pNGB potentials there are hierarchies of vacuum transitions. Starting from zero temperature, as the temperature is raised there will be a vacuum transition in the vicinity of the flipping temperature. Depending on the sign of the spurion this will be from zero pNGB vev to a non-vanishing one, with \(\langle\Pi\rangle\propto f/n\), or vice-versa. The nature of this transition is not yet apparent from this analysis, yet its existence is. Going to even higher temperatures, above the mass scale of the radial mode in the UV completion, the standard symmetry-restoring transition occurs. These scenarios are illustrated in Fig. 2.
It is surprising and rather non-trivial that for a single spontaneous symmetry breaking scenario, with a single explicit symmetry-breaking spurion in a symmetric irrep one has a hierarchy of vacuum transitions at hierarchical scales. It remains to determine the nature of this new vacuum transition.
Figure 2: _Schematic phase diagram for radiatively and thermally stable pNGB potentials, for \(\varepsilon_{n}>0\) (left) and \(\varepsilon_{n}<0\) (right). Throughout there is explicit breaking \(SO(N+1)\to SO(N)\). At high temperatures, above the mass of the radial mode, an approximate \(SO(N+1)\) is restored. For \(\varepsilon_{n}>0\) at lower temperatures, \(SO(N+1)\) is spontaneously broken and at some lower temperature the exact \(SO(N)\) is also spontaneously broken. Whereas for \(\varepsilon_{n}<0\) at lower temperatures, \(SO(N+1)\) is spontaneously broken to \(SO(N-1)\) and at some lower temperature the exact \(SO(N)\) is restored._
## 4 Cosmological Gegenbauer Phases
Having outlined the general phase structure of pNGB potentials it remains to determine any potential observable consequences of the additional pNGB vacuum transitions. We consider a dark sector (DS) containing pNGBs with two possible initial conditions after the end of inflation, thermal and supercooled, in both cases colder than the visible sector. Given the natural origins and ubiquity of light pNGBs in quantum field theories, and given the clear evidence for the existence of dark matter, a DS scenario is well motivated and plausible. In both cases we also investigate potential stochastic GW background signatures arising from the vacuum transitions.
### 4.1 Hot Dark Sector
We assume that the early universe dynamics is governed by the inflaton which, at the end of inflation, starts to oscillate about the minimum of its potential thus, due to its coupling to the Standard Model fields, the universe enters the reheating period. At the same time we consider a DS of pNGBs which is completely decoupled from (or may have an extremely small coupling to) the SM, such that it will not thermalize with the SM fields. The DS temperature, \(T_{h}\), could be above or below the visible one, \(T_{v}\), depending on how strongly each sector couples to the inflaton. The ratio of temperatures after reheating, \(\xi_{\rm DS}=T_{h}/T_{v}\), is heavily constrained by Big Bang Nucleosynthesis (BBN) and Cosmic Microwave Background (CMB) measurements [12; 13].
As noted, we assume \(\xi_{\rm DS}<1\). This type of scenario has been investigated in [14; 15]. The case of \(\xi_{\rm DS}>1\) is more delicate since it requires an out-of-equilibrium mechanism to inject entropy back into the SM before BBN, see e.g. [16]. For model-independent studies regarding the constraints on DS vacuum transition parameters see also [17; 18].1
Footnote 1: Here we will not deal with the case where \(\xi_{\rm DS}=1\), which could happen either by thermalization of the DS with the SM thermal bath or due to specific initial conditions where the inflaton couples democratically to both sectors. We escape the former by assuming the DS has a negligible interaction or never comes into contact with SM and the latter by considering a different evolution of the two sectors during reheating.
A general investigation of the nature of the transition is challenging and essentially beyond the reach of standard computations. However, subject to the requirement of small enough \(\varepsilon_{n}\), discussed in the previous section, we may have some control in the vicinity of the flipping temperature.
To proceed let us recall that the scalar potential in the DS is a Gegenbauer polynomial. The vacuum structure of such a potential is non-trivial given that different local minima coexist for a wide range of temperatures (see Fig. 1). Analysing its thermal history in the following, we expect a vacuum transition to occur. In particular, for \(\varepsilon_{n}>0\), \(\Pi\) obtains a non-zero vacuum expectation value and spontaneously breaks the SO(N) symmetry.
Before getting into a description of the phase transition details let us present an analytic estimate for the transition strength \(\alpha\), assuming it takes place around \(T\approx T_{F}\). To quantify \(\alpha\) we use the latent heat released normalized to the radiation energy density, which can be written as
\[\alpha(T)\equiv\frac{1}{\rho_{R}}\left(\Delta V(\Pi,T)-\frac{T}{4}\Delta\frac{\partial V(\Pi,T)}{\partial T}\right)\, \tag{4.1}\]
where the difference between the false and true vacuum is taken. The radiation energy density is
\[\rho_{R} = \frac{\pi^{2}\,g_{\Pi}^{*}T_{h}^{4}}{30}+\frac{\pi^{2}\,g_{\rm SM}^{*}(T_{v})T_{v}^{4}}{30} \tag{4.2}\] \[= \frac{\pi^{2}\,T_{h}^{4}}{30}\Big{(}N+\frac{g_{\rm SM}^{*}(T_{v})}{\xi_{DS}^{4}}\Big{)}\]
Since we consider a phase transition within the DS, the Hubble rate and the other relevant parameters are functions of \(T_{h}\). We keep \(T_{v}\) as a fixed initial parameter and the number of degrees of freedom in the DS corresponds to the number of pNGBs, i.e., \(g_{\Pi}^{*}=N\). We evaluate the radiation degrees of freedom of the SM, \(g_{\rm SM}^{*}\), from tabulated data in [19] and we keep them constant for temperatures in the vicinity of the phase transition.
Making use of the high-temperature expansion we have that the potential energy difference between false and true vacua is
\[\Delta V(\Pi,T) \approx V(0,T)-V(\left<\Pi\right>,T)=\left[1-\frac{T^{2}}{T_{F}^{2}}\right]\Delta V_{\varepsilon}-\frac{T\,\Delta V_{\varepsilon,3/2}}{12\pi}\, \tag{4.3}\]
while the partial derivative with respect to temperature becomes
\[\frac{T}{4}\Delta\frac{\partial V(\Pi,T)}{\partial T} = \frac{T}{4}\left[\frac{\partial V(\Pi,T)}{\partial T}\Big{|}_{\Pi=0}-\frac{\partial V(\Pi,T)}{\partial T}\Big{|}_{\Pi=\langle\Pi\rangle}\right] \tag{4.4}\] \[\approx -\frac{1}{2}\frac{T^{2}}{T_{F}^{2}}\Delta V_{\varepsilon}-\frac{1}{4}\frac{T\,\Delta V_{\varepsilon,3/2}}{12\pi}\,\]
thus
\[\alpha(T) \approx \frac{\Delta V_{\varepsilon}}{\rho_{R}}\Bigg{(}\bigg{[}1-\frac{1}{2}\frac{T^{2}}{T_{F}^{2}}\bigg{]}-\frac{3}{4}\frac{T\,\Delta V_{\varepsilon,3/2}}{12\pi\Delta V_{\varepsilon}}\Bigg{)}. \tag{4.5}\]
Focusing around \(T_{F}\), which is used as a proxy for the nucleation temperature \(T_{n}\) since we have verified they are very close numerically, the second term in the above equation reduces to the \(\{\cdots\}\) term of eq. (3.16) which, as follows from the discussion above eq. (3.19), has to be very small for the validity of the EFT. Thus, the transition strength becomes
\[\alpha(T_{F})\approx\frac{\Delta V_{\varepsilon}}{\rho_{R}}\left[1-\frac{1}{2}\frac{T_{F}^{2}}{T_{F}^{2}}\right]=\frac{\Delta V_{\varepsilon}}{2\rho_{R}}\ . \tag{4.6}\]
By setting \(\varepsilon_{n}=10^{-2}\varepsilon_{n,\rm max}^{T_{F}}\) we obtain
\[\alpha(T_{F})\lesssim\frac{0.002}{\left(1+\frac{g_{\rm SM}^{*}(T_{v})}{\xi_{\rm DS}^{4}N}\right)}\ . \tag{4.7}\]
The phase transition is weak because of the strong upper bound on \(\varepsilon_{n}\), which also controls the magnitude of the explicit breaking of the original symmetry. This value of \(\alpha\) corresponds to, at most, a very weakly first-order transition and a suppressed gravitational wave spectrum.
Since the phase transition occurs at finite temperature in the presence of a non-negligible thermal plasma formed out of a system of pNGBs, the expanding bubble walls transmit a substantial energy density and pressure to the surrounding plasma. Hence, the dominant source of GW production is the motion of the plasma itself, expressed in the form of sound waves. As described in greater detail in app. A, under the assumption of small \(\alpha\) the peak of the GW spectrum is [20; 21; 22; 23]
\[\Omega_{\rm sw}({\rm Peak})h^{2}\approx 4\times 10^{-7}\,\left(R_{*}H_{*}\right)^ {2}(\kappa_{\rm sw}\,\alpha)^{\frac{3}{2}}\,, \tag{4.8}\]
where \(\kappa_{\rm sw}\) encodes kinetic energy normalized to vacuum energy. We evaluate the efficiency factor \(\kappa_{\rm sw}\) using the numerical fits of [24]. \(R_{*}\) is the average bubble size at collision. As described in app. A, we find that numerically, at the time of the transition, one has \(R_{*}H_{*}\sim 10^{-6}\). Hence we expect at most to have a spectral peak of magnitude
\[\Omega_{\rm sw}({\rm Peak})h^{2}\lesssim 4\times 10^{-23}\,, \tag{4.9}\]
well below the expected reach of future gravitational wave detectors.
### 4.2 Supercooled Dark Sector
Let us now explore the extreme possibility that our pNGB DS is supercooled, parameterized as \(\xi_{\rm DS}\approx 0\). This may occur if, for instance, the DS is very weakly coupled to the inflaton. We also discuss the role of the sign of \(\varepsilon_{n}\). In the previous section we have assumed that \(\varepsilon_{n}>0\). However, in principle, \(\varepsilon_{n}\) can be either positive or negative and, as we explain in the following, the choice of sign impacts the cosmology of the DS.
It is possible that the expansion rate of the universe is initially much faster than the bubble nucleation rate in a supercooled DS.2 As a consequence the DS can enter a period of supercooling, remaining in a local minimum until quantum tunneling towards another local or a global minimum takes place.
Footnote 2: Since the DS is almost decoupled from the SM it will evolve independently, so we consider that the visible sector is “frozen” to a given temperature \(T_{v}\).
For \(\varepsilon_{n}\gtrsim 0\) the vacuum dynamics of a supercooled DS is governed by the zero-temperature potential of eq. (3.2). Such a scenario has interesting phenomenology as the associated potential possesses various local minima and as a consequence the supercooled DS could in principle exhibit successive vacuum transitions, depicted on Fig. 3, via tunneling. For an indicative example we consider the case when the DS is initially in the minimum depicted by the purple dot in Fig. 3 with associated vev \(\langle\Pi_{\rm purple}\rangle\). We calculate the probability of tunneling towards its nearest neighbor blue dot with associated vev \(\langle\Pi_{\rm blue}\rangle\). For this transition it is clear that the barrier between the vacua is large compared to the energy difference between them, therefore the thin wall approximation [25] is a well motivated analytic approach. According to this approximation and following [15], the probability of nucleating a critical bubble via quantum tunneling is
\[\Gamma_{4}=A_{4}\ e^{-S_{4}}\equiv\frac{1}{R_{0}^{4}}\left(\frac{S_{4}}{2\pi }\right)^{2}e^{-S_{4}} \tag{4.10}\]
where \(S_{4}\) is the action of the \(O(4)\)-symmetric bounce solution and \(R_{0}\) is the size of the nucleating bubble.
Moreover, following the cosine-like approximation to the Gegenbauer potential provided in Eq. (12) of [8], and employing the triangle approximation to the cosine potential, for which an analytic expression was derived in [26], in the thin wall approximation the bounce action \(S_{4}\) scales as
\[S_{4}\approx\frac{32\pi^{2}}{3}\frac{(\Delta V_{\rm Max}(\Pi))^{2}(\Delta\Pi)^{4}}{(\Delta V(\Pi))^{3}}\ \, \tag{4.11}\]
where \(\Delta\Pi\) is the leading order change in vev between vacua, \(\Delta V(\Pi)\) is the change in vacuum energy between the two vacua and \(\Delta V_{\rm Max}(\Pi)\) is the change in vacuum energy between the vacuum and the top of the barrier between them.
The resulting expression for the bounce, in the large \(n\) limit, is
\[S_{4}\sim\frac{2^{3-n-N}n^{2}\pi^{5}\Gamma(n+N)}{3(N-1)^{4}\Gamma\left(\frac{n+1}{2}\right)\Gamma\left(\frac{N}{2}\right)\Gamma\left(\frac{n+N-1}{2}\right)}\times\frac{\varepsilon_{n,\rm max}^{0}}{\varepsilon_{n}}\ \, \tag{4.12}\]
which ultimately scales proportional to \(n!/((n/2)!)^{2}\), quickly becoming very large for large \(n\). We also have that
\[R_{0}^{4}\approx\frac{S_{4}}{\pi^{2}\Delta V(\Pi)} \tag{4.13}\]
so substituting the above relations back into eq. (4.10) it becomes clear that for \(\varepsilon_{n,\rm max}^{0}/\varepsilon_{n}\) satisfying the criteria for a controlled EFT expansion the exponential becomes extremely small. The condition for a successful completion of the vacuum transition is
\[\Gamma_{4}\gtrsim H^{4}\ \, \tag{4.14}\]
Figure 3: _Successive tunneling towards the true vacuum for the benchmark scenario \(n=15,N=4\). The colouring shows that we move from a higher \(\langle\Pi\rangle\) (purple dot) down to smaller values until the DS reaches the deepest minimum (red dot)._
which is difficult to fulfil. In conclusion, if the DS is for some reason localized at the purple dot then it will face an extremely slow decay rate, compared with the expansion of the universe, such that it will never tunnel to the blue dot on any relevant time scale, leading to an eternally-inflating DS.
Naturally one is led to consider the other tunneling possibilities. Naively, for transitions closer to the true global minimum, such as the green to red dot vacuum transition (see Fig. 3), one does not expect a dramatic change since the difference in vacuum energy and the height of the barrier grow in a correlated manner; however, eq. (4.11) suggests that the change in vacuum energy may ultimately dominate such that faster tunnelling may be possible. In such transitions the energy difference is comparable to the barrier height, hence the thin wall approximation cannot be trusted and a numerical analysis of the bounce action is required. To this end we rely again on a modified version of the CosmoTransitions code [27]. The numerical analysis of the bounce solution as a function of \(\varepsilon_{n}\), for the benchmark scenario studied here, is shown in Fig. 4, demonstrating that only a large \(\varepsilon_{n}\), well above the upper value for an effective description of the pNGB potential, admits values of \(S_{4}\) which could allow the vacuum transition to complete.
To conclude, we find that a supercooled vacuum transition in a DS with a single Gegenbauer potential and \(\varepsilon_{n}>0\), is highly unlikely to successfully complete unless \(\varepsilon_{n}\) violates the EFT bound, in which case calculability is called into question.
#### Phase transition from a flipped potential
Now consider the case with \(\varepsilon_{n}<0\), as displayed in Fig. 5. We focus on the transition from the second minimum to the origin. Notice that this process corresponds to a symmetry-restoring phase transition since the pNGB order parameter \(\Pi\) has a zero vev in the true vacuum.
Figure 4: _The bounce solution \(S_{4}\) evaluated numerically as a function of \(\varepsilon\) for the green dot \(\rightarrow\) red dot transition as they are represented in Fig. 3._
This transition is outside the validity of the thin-wall approximation, thus we compute the constant decay rate, eq. (4.10), numerically. To estimate the bubble radius at nucleation, \(R_{0}\), we use the value at which the field profile function is halfway between the two minima.
The Hubble rate is written as
\[H^{2}\equiv\frac{\pi^{2}g_{SM}^{*}(T_{v})T_{v}^{4}}{90M_{\rm Pl}^{2}}+\frac{ \Delta V(\Pi,0)}{3M_{\rm Pl}^{2}}\, \tag{4.15}\]
where the first term comes from the standard radiation degrees of freedom. The second term above is the vacuum contribution and we have assumed that the DS temperature remains negligibly small. For simplicity, we fix the value of \(\varepsilon_{n}=10^{-2}\varepsilon_{n,\rm max}^{0}\) and the resonance mass scale to \(M=4\pi f\). Thus only \(N\), \(n\) and the symmetry breaking scale \(f\) are free parameters.
The tunnelling rate \(\Gamma_{4}\) is independent of the visible sector temperature and instead all the temperature dependence is encoded in eq. (4.15). We also find that the polynomial order \(n\) has a negligible impact on the decay rate. Once one fixes \(N\), \(n\) and \(f\), one has that \(\Gamma_{4}/H^{4}\propto 1/T_{v}^{8}\) for large \(T_{v}\). As the temperature drops the vacuum contribution starts dominating the Hubble rate and \(\Gamma_{4}/H^{4}\approx\rm const\). This behavior is displayed in Fig. 6 for \(N=10\) and \(n=20\) and several values of symmetry breaking scale \(f\). One can observe from this figure that the nucleation temperature is directly proportional to the compositeness scale \(f\), as expected on dimensional grounds. Notice that if a transition is too slow to occur at \(T_{v}=0\) then it cannot start for any \(T_{v}\). In addition, since the potential is effectively temperature-independent, the strength parameter of the phase transition is approximately
\[\alpha(T_{v})\approx\frac{\Delta V(\Pi,0)}{\rho_{R}}. \tag{4.16}\]
Figure 5: _Inverted tree-level Gegenbauer potential. With the transition from the green dot to the red dot considered._
In Fig. 7 we show this transition strength (colorbar) alongside the behavior of the nucleation temperature as a function of symmetry breaking scale for two benchmark values of \(N\). The number of pNGBs, \(N\), significantly impacts the possible range of nucleation temperature due to the fact that, in our chosen parametrization, \(N\) affects the barrier height and thus, through the bounce action, impacts the tunneling rate exponentially. The lines terminate at the symmetry breaking scale \(f\) for which the nucleation rate matches the minimum value \(\Gamma_{4}\approx H^{4}\), as can be inferred from Fig. 6. Close to this point, the nucleation condition becomes numerically ambiguous. For smaller values of \(f\) the lines are truncated at values with extremely weak vacuum transitions. It can be observed that the strongest phase transitions are associated with the largest possible symmetry breaking scale and can attain values \(\alpha\approx\mathcal{O}(1)\).
For very strong phase transitions the latent heat released accelerates the wall to relativistic velocities and the effects of the thermal plasma are suppressed. Thus the DS plasma of pNGBs exerts negligible friction on the wall and one has \(v_{w}\approx 1\). In this case the GW signal is sourced by the collision of the walls and not by the sound waves, thus the treatment differs from sect. 4.1. To estimate the time scale of the transition we consider the bubble number density, which for a constant decay rate reads [28]3
Footnote 3: In this expression, the gamma function \(\Gamma(x)\) should not be confused with the decay rate \(\Gamma_{4}\).
\[\frac{1}{R_{*}^{3}}=\frac{1}{4}\left(\frac{\Gamma_{4}}{v_{w}}\right)^{3/4} \Gamma\left(\frac{1}{4}\right)\left(\frac{3}{\pi}\right)^{1/4}=\frac{1}{8\pi} \frac{\beta^{3}}{v_{w}^{3}}. \tag{4.17}\]
Figure 6: _Ratio of nucleation rate to Hubble volume as a function of visible sector temperature for different values of the compositeness scale. The horizontal line marks the nucleation condition while the vertical lines help visualize the intersection point. At high temperatures \(\Gamma_{4}/H^{4}\propto 1/T_{v}^{8}\) while as the temperature drops the vacuum contribution begins dominating the Hubble rate and \(\Gamma_{4}/H^{4}\approx\text{const}\)._
The GW spectrum from bubble collisions is estimated as [18]
\[\Omega_{\rm GW}(f)h^{2}=\tilde{\Omega}\times S\left(\frac{f_{g}}{f_{\rm col}} \right), \tag{4.18}\]
where we write the amplitude in terms of mean bubble separation as
\[\tilde{\Omega}\approx 1.7\times 10^{-5}\ \tilde{\Omega}_{\rm bw}(H_{\rm min}R_{ \ast})^{2}(8\pi)^{-2/3}\left(\frac{\kappa_{\phi}\alpha(T_{p})}{1+\alpha(T_{p} )}\right)^{2}\left(\frac{g_{\ast}(T_{p})}{100}\right)^{-1/3}, \tag{4.19}\]
Figure 8: _GW spectrum from bubble collisions for the strongest signals found. The red contours are the violin curves for the NANOGrav \(15\) yr data obtained from [29] using the public tool [30]. The integrated sensitivity curves for LISA and BBO were obtained using [31]._
Figure 7: _Nucleation temperature as a function of symmetry breaking scale \(f\) with the colorbar displaying the strength parameter \(\alpha\) at the percolation temperature._
where \(H_{\rm min}^{2}=\Delta V/3M_{\rm Pl}^{2}\), the coefficient \(\kappa_{\phi}\) is obtained from the detonation approximation from [24] and the spectral function is given by
\[S(x)=\frac{19x^{14/5}}{5+14x^{19/5}}. \tag{4.20}\]
After red-shifting, the frequency at the peak of the spectrum is given by
\[f_{\rm col}=1.7\times 10^{-5}(R_{*}H_{\rm min})^{-1}(8\pi)^{1/3}\left(\frac{T_{p}}{100\ {\rm GeV}}\right)\left(\frac{g_{*}(T_{p})}{100}\right)^{1/6}\left(\frac{f_{\rm peak}}{\beta}\right)\ \ {\rm Hz}, \tag{4.21}\]
with \(f_{\rm peak}/\beta\approx 0.2\), \(\tilde{\Omega}_{\rm bw}\approx 0.08\). In the expressions above we have used a slightly more precise percolation temperature, at which the probability to find a region of space-time still in the false vacuum has decreased to about \(P(T_{p})\sim e^{-1}\).
We show, in Fig. 8, the predicted GW spectrum from bubble collisions for three benchmark values of \(N\) where in each case we select the value of \(f\) which maximises the strength of the phase transition. We display the sensitivities of the future detectors LISA [32; 33] and BBO [34]. As we can observe from this figure, the case \(N=9\) could potentially explain the recently observed common-red spectrum from the NANOGrav 15 yr data [29] which is shown as the gray curves.
## 5 Summary and conclusions
The vacuum structure and dynamics of theories possessing pNGB fields in the IR is of theoretical interest and physical importance. Indeed, the vacuum structure of QCD itself is a rich subject rendered tractable by studying the vacuum structure of the pNGBs [35; 36; 37; 38]. In this work we have explored a complementary facet of pNGB vacua which arises if explicit symmetry breaking occurs due to a spurion in a non-minimal representation. Here, again, there are metastable vacua, however they exist for different field values, as described in [8]. In this work, we have focused on the same \({\rm SO}(N+1)\to{\rm SO}(N)\) symmetry breaking pattern and investigated the resulting vacuum dynamics, which are found to be much richer than one might naively expect.
Our main result is that the 'primary' phase transition associated with spontaneous \({\rm SO}(N+1)\to{\rm SO}(N)\) breaking when the radial mode obtains a vacuum expectation value is not the end of the story. Below this scale the pNGBs will typically undergo additional vacuum transitions unless the sources of explicit symmetry breaking take the most minimal form.
These vacuum transitions may occur in two ways. Thermally, there is a second critical temperature scale, the 'Flipping Temperature', which scales as \(T_{F}\propto f/n\) and can thus naturally be well below the spontaneous symmetry breaking scale \(f\). Crucially, at this temperature the functional form of the pNGB potential remains the same, to leading order in the spurion. However, the overall sign flips, such that the higher temperature minimum becomes the lower temperature maximum, and vice-versa for the higher temperature maximum. As a result, in the vicinity of the flipping temperature an additional vacuum transition occurs. We find this is likely weakly first-order, at least for parameters consistent with a controlled EFT.
The second possibility arises non-thermally, if the pNGB sector becomes supercooled in a metastable state, which is not implausible given the existence of \(\sim n\) different metastable vacua. In this case multiple vacuum transitions can occur, with the most likely being to a nearest neighbour. As the field approaches the global minimum the final vacuum transition can be strong enough to generate observable GWs.
The vacuum structure of our universe is of prime importance and interest in physics. It determines the ultimate fate of the observable universe and may carry lessons about the deep UV and quantum gravity itself [39]. Spontaneous symmetry breaking is ubiquitous in nature, for which Nambu-Goldstone bosons are the physical manifestation of the vacuum structure. Similarly, pNGBs manifest, through their vacuum structure, patterns of explicit symmetry breaking. As a result, physically relevant lessons concerning the vacuum structure and cosmological dynamics of nature may be learned by studying pNGBs, perhaps even the case in which the Higgs boson is a pNGB; a case we leave to further study.
## Acknowledgements
The authors would like to thank Marek Lewicki and Andreas Mantziris for useful discussions. The research of F.K., M.M., S.P. and K.S. leading to these results has received funding from the Norwegian Financial Mechanism for years 2014-2021, grant nr DEC-2019/34/H/ST2/00707. M.M. also acknowledges support from the Polish National Science Center grant 2018/31/D/ST2/02048. K.S. is partially supported by the National Science Centre, Poland, under research grant 2017/26/E/ST2/00135.
## Appendix A Hot Sector Calculations
We now detail a numerical investigation of the phase transition for a hot DS. The theory of the vacuum decay from a local false minimum to the true global minimum at zero and finite temperature has been studied extensively [40; 41; 42; 43; 44]. When the temperature is non-negligible the transition proceeds through thermal fluctuations by the nucleation of true vacuum bubbles within the space filled with false vacuum energy. The probability of decay per unit time and volume is
\[\Gamma_{3}(T)=\left(\frac{S_{3}(T)}{2\pi T}\right)^{3/2}T^{4}e^{-S_{3}(T)/T}\, \tag{A.1}\]
where \(S_{3}(T)/T\) is the finite temperature Euclidean action of our pNGB model and is less than the zero-temperature one, \(S_{4}\), around \(T_{F}\).
The true vacuum bubble nucleates when the decay rate becomes comparable to the expansion rate of the universe. Namely, we define the bubble nucleation temperature by
\[\Gamma_{3}\approx H^{4}\big{|}_{T\equiv T_{n}}\, \tag{A.2}\]
where the Hubble rate is given by
\[H^{2}=\frac{\rho_{R}}{3M_{\rm Pl}^{2}}+\frac{\Delta V(\Pi,T)}{3M_{\rm Pl}^{2}}\, \tag{A.3}\]
which includes the contribution from the potential energy difference between false and true minima and \(M_{\rm Pl}=2.4\times 10^{18}\) GeV is the reduced Planck mass.
As mentioned earlier, the hidden and visible sectors have independent temperatures and cool at different rates. From eq. (4.2) above we can read off the total effective number of degrees of freedom as
\[g_{*}=\left(N+\frac{g_{\rm SM}^{*}(T_{v})}{\xi_{DS}^{4}}\right)\.\] (A.4)
The time scale of the transition is given by
\[\frac{\beta}{H}\equiv T\frac{d}{dT}\left(\frac{S_{3}(T)}{T}\right)\bigg{|}_{T \to T_{n}}\,.\] (A.5)
To compute the action we solve the equation of motion for the system, also known as the bounce solution. This can be considerably simplified by considering the parametrization of eq. (5) and allowing for a vev only in the \(\Pi\) direction, such that
\[\Box\Pi-\frac{\partial V(\Pi,T)}{\partial\Pi}=0\ \.\] (A.6)
We use a modified version of the publicly available code CosmoTransitions [27] to compute the Euclidean action.
Finally, it is necessary to have an estimate for the bubble wall velocity. This requires an out-of-equilibrium computation of the deviation from equilibrium of all the particle distribution functions. While this is still a very active area of research [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61], here we will adopt the analytic estimate of [57; 58]
\[v_{w}=\begin{cases}\sqrt{\frac{\Delta V}{\alpha\rho_{R}}}&\text{for}\quad \sqrt{\frac{\Delta V}{\alpha\rho_{R}}}<v_{J}(\alpha)\,\\ 1&\text{for}\quad\sqrt{\frac{\Delta V}{\alpha\rho_{R}}}\geq v_{J}(\alpha)\,, \end{cases}\] (A.7)
where \(\alpha\) is the transition strength given in eq. (4.1) and \(v_{J}=\frac{1}{\sqrt{3}}\frac{1+\sqrt{3\alpha^{2}+2\alpha}}{1+\alpha}\) the Chapman-Jouguet velocity which defines the upper limit for which hydrodynamic solutions can be found. Although this result is valid for simple extensions of the SM, in our case, we expect it to give us a realistic estimate. The reason is that we expect the friction force on the bubble wall to become significant due to the mass of the pNGBs at the metastable vacuum.
As a function of the frequency \(f_{g}\), the sound wave source template reads4 [20; 21; 22; 23]
Footnote 4: We notice that there are several templates for the GW spectrum which derive from fits to different numerical simulations. In particular the template we use does not match that of, e.g., [18], but we nevertheless expect that our conclusions remain qualitatively the same regardless of which template is used.
\[\Omega_{\rm sw}(f_{g})h^{2}=4.13\times 10^{-7}\ (R_{*}H_{*})\left(1-\frac{1}{ \sqrt{1+2\tau_{\rm sw}H_{*}}}\right)\left(\frac{\kappa_{\rm sw}\,\alpha}{1+ \alpha}\right)^{2}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}}S_{\rm sw}(f_{g })\,,\] (A.8)
where \(R_{*}\) is the average bubble size at collision and the spectral function is
\[S_{\rm sw}(f_{g})=\left(\frac{f_{g}}{f_{\rm sw}}\right)^{3}\left[\frac{4}{7}+ \frac{3}{7}\left(\frac{f_{g}}{f_{\rm sw}}\right)^{2}\right]^{-\frac{7}{2}}\,,\] (A.9)
\(g_{*}\) is given in eq. (A.4) and all the quantities of the GW spectrum are evaluated at the nucleation temperature \(T_{*}=T_{n}\approx T_{F}\). The frequency at the peak of the spectrum is given by
\[f_{\rm sw}\,=2.6\times 10^{-5}\,{\rm Hz}\,(R_{*}H_{*})^{-1}\left(\frac{T_{*}}{100{\rm GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}}\,, \tag{A.10}\]
while the duration of the sound wave source reads [62; 63; 64; 22]
\[\tau_{\rm sw}H_{*}=\frac{H_{*}R_{*}}{U_{f}}\,,\quad U_{f}\approx\sqrt{\frac{3}{4}\frac{\alpha}{1+\alpha}\kappa_{\rm sw}}\,. \tag{A.11}\]
For the mean bubble separation we use
\[H_{*}R_{*}\approx(8\pi)^{\frac{1}{3}}\left(\frac{\beta}{H}\right)^{-1}. \tag{A.12}\]
For all the computations that follow in this appendix we have fixed the explicit symmetry breaking parameter to \(\varepsilon_{n}=10^{-2}\times\varepsilon_{n,{\rm max}}^{T_{F}}\). For the UV scale at which we expect new resonances to appear we have fixed \(M=4\pi f\). Lastly, since the details of the dynamics of the finite temperature phase transition are to a good approximation controlled by the DS flipping temperature \(T_{F}\), our only free parameters for this analysis are \(n\), \(N\), \(f\), \(\xi_{\rm DS}\) and \(T_{v}\), where the visible sector temperature is fixed above \(T_{F}\).
We present the predictions of the GW spectrum in Fig. 9 and Fig. 10 below. In Fig. 9 we display the variation of the signal as a function of the Gegenbauer polynomial order, \(n\), while fixing \(N=4\), \(f=1\) TeV and \(T_{v}=2T_{F}\).
Figure 9: _The GW spectrum for sound waves with \(n=10-58\) (orange curves). The explicit symmetry breaking parameter has been set to \(\varepsilon=10^{-2}\times\varepsilon_{n,max}^{T_{F}}\). The symmetry breaking scale was fixed to \(f=1\) TeV and the number of pNGBs to \(N=4\). The temperature of the visible sector was fixed to \(T_{v}=2T_{F}\) for each benchmark._
We notice that the amplitude of the signal is very small compared with the expected experimental sensitivities; in particular we find \(\alpha\approx 0.002\) (in agreement with our analytic prediction for the transition strength given in eq. (4.7)), \(\beta/H\approx 10^{6}\) and \(v_{w}\approx 0.06\). We do not observe strong dependence on the polynomial order \(n\). Recall that \(T_{n}\approx T_{F}\sim f/n\), hence the flipping temperature is numerically very close to the critical and the nucleation temperatures.
In Fig. 10, we instead vary the ratio of hidden to visible temperatures by choosing different values of \(T_{v}/T_{F}\) while setting \(N=4\), \(n=20\) and \(f=1\) TeV. In this case we notice a substantial reduction in the amplitude as we increase the temperature hierarchy. This is expected as the amplitude formula eq. (A.8) is inversely proportional to the total number of degrees of freedom, in agreement with the results of [14]. Furthermore we have verified numerically that varying other parameters of the potential does not substantially change the amplitude of the signal and, irrespective of the adopted benchmark, we obtain a strength parameter of about \(\alpha\approx 0.002\) while for the inverse timescale \(\beta/H\approx 10^{6}\) and \(v_{w}\approx 0.06\). These numerical values are indicative of a very weak and quick transition, if not a crossover, motivating our initial choice of using \(T_{n}\) in the GW template formula rather than the percolation temperature.
|
2309.13338 | Liminf approximation sets for abstract rationals | The Jarn\'ik-Besicovitch theorem is a fundamental result in metric number
theory which concerns the Hausdorff dimension for certain limsup sets. We
discuss the analogous problem for liminf sets. Consider an infinite sequence of
positive integers, $S=\{q_{n}\}_{n\in\mathbb{N}}$, exhibiting exponential
growth. For a given $n$-tuple of functions denoted as $\Psi:=~(\psi_1,
\ldots,\psi_n)$, each of the form $\psi_{i}(q)=q^{-\tau_{i}}$ for
$(\tau_{1},\dots,\tau_{n})\in\mathbb{R}^{n}_{+}$, we calculate the Hausdorff
dimension of the set of points that can be $\Psi$-approximated for all
sufficiently large $q\in S$. We prove this result in the generalised setting of
approximation by abstract rationals as recently introduced by Koivusalo,
Fraser, and Ramirez (LMS, 2023). Some of the examples of this setting include
the real weighted inhomogeneous approximation, $p$-adic weighted approximation,
Diophantine approximation over complex numbers, and approximation on missing
digit sets. | Mumtaz Hussain, Ben Ward | 2023-09-23T11:21:03Z | http://arxiv.org/abs/2309.13338v1 | # Liminf approximation sets for abstract rationals
# Liminf approximation sets for abstract rationals
Mumtaz Hussain
Department of Mathematics, University of California, Berkeley, CA 94720, USA [email protected]
Benjamin Ward
Department of Mathematics, University of California, Berkeley, CA 94720, USA [email protected]
###### Abstract.
The Jarnik-Besicovitch theorem is a fundamental result in metric number theory which concerns the Hausdorff dimension for certain limsup sets. We discuss the analogous problem for liminf sets. Consider an infinite sequence of positive integers, \(S=\{q_{n}\}_{n\in\mathbb{N}}\), exhibiting exponential growth. For a given \(n\)-tuple of functions denoted as \(\Psi:=\ (\psi_{1},\ldots,\psi_{n})\), each of the form \(\psi_{i}(q)=q^{-\tau_{i}}\) for \((\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\), we calculate the Hausdorff dimension of the set of points that can be \(\Psi\)-approximated for all sufficiently large \(q\in S\). We prove this result in the generalised setting of approximation by abstract rationals as recently introduced by Koivusalo, Fraser, and Ramirez (LMS, 2023). Some of the examples of this setting include the real weighted inhomogeneous approximation, \(p\)-adic weighted approximation, Diophantine approximation over complex numbers, and approximation on missing digit sets.
## 1. Introduction
Dirichlet's Theorem in Diophantine approximation asserts that for any \(n\)-tuple of positive real numbers \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\) with \(\tau_{1}+\cdots+\tau_{n}=1\), every \(\mathbf{x}=(x_{1},\ldots,x_{n})\in[0,1]^{n}\), and any \(N\in\mathbb{N}\) there exists \(1\leq q\leq N\) such that
\[\|qx_{i}\|<N^{-\tau_{i}}\quad(1\leq i\leq n)\,,\]
where \(\|\cdot\|\) denotes the minimum distance to the nearest integer. A corollary one can deduce from Dirichlet's theorem is that for every irrational \(\mathbf{x}\in[0,1]^{n}\) there exists an infinite sequence of integers \(\{q_{j}(\mathbf{x})\}_{j\in\mathbb{N}}\) such that
\[\|q_{j}(\mathbf{x})x_{i}\|<q_{j}(\mathbf{x})^{-\tau_{i}}\quad(1\leq i\leq n)\,,\]
for every \(j\in\mathbb{N}\). In order to keep the statement true for all \(\mathbf{x}\in[0,1]^{n}\) one cannot improve on the size of the summation of the components of \(\boldsymbol{\tau}\). Indeed, if
\[\tau_{1}+\cdots+\tau_{n}>1\,,\]
then an easy application of the Borel-Cantelli lemma implies that Lebesgue-almost no points in \([0,1]^{n}\) can be \(\boldsymbol{\tau}\)-approximated by rationals infinitely often. However, the following was proven by Rynne [27].
**Theorem 1.1** (Rynne, 1998).: _For an \(n\)-tuple \(\boldsymbol{\tau}\) with \(\tau_{1}+\cdots+\tau_{n}>1\),_
\[\dim_{\mathcal{H}}\left\{x\in[0,1]^{n}:\begin{array}{c}\|qx_{i}\|<q^{-\tau_ {i}}\quad(1\leq i\leq n)\\ \text{for infinitely many }q\in\mathbb{N}\end{array}\right\}=\min_{1\leq j \leq n}\left\{\frac{n+1+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i} )}{\tau_{j}+1}\right\}\,,\]
_where \(\dim_{\mathcal{H}}\) denotes the Hausdorff dimension._
For the definition of Hausdorff measure and dimension see § 3.1. This result is the weighted analogue of the classical Jarník-Besicovitch theorem in Diophantine approximation [6, 18].
In this article, we consider the reverse of the setup given above. That is, given an infinite sequence of positive integers \(\{q_{j}\}_{j\in\mathbb{N}}\) and an \(n\)-tuple \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) with \(\tau_{1}+\cdots+\tau_{n}\geq 1\), for how many \(\mathbf{x}\in[0,1]^{n}\) does it hold that
\[\|q_{j}x_{i}\|<q_{j}^{-\tau_{i}}\quad(1\leq i\leq n)\,,\]
for all sufficiently large \(j\in\mathbb{N}\). This set can trivially be seen to be a null set in terms of Lebesgue measure (even when \(\tau_{1}+\cdots+\tau_{n}=1\)) by considering the measure of each individual layer and taking the limit as \(j\to\infty\). For the Hausdorff dimension, a corollary of our main result (stated in subsection 1.3) gives us the following.
**Theorem 1.2**.: _Let \(\{q_{j}\}_{j\in\mathbb{N}}\) be an infinite sequence of positive integers and \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\ \mathbb{R}_{+}^{n}\) an \(n\)-tuple of positive real numbers. Suppose that_
\[\lim_{j\to\infty}\frac{\log q_{j}}{\log q_{j-1}}=k>1 \tag{1}\]
_exists and \(\max_{1\leq i\leq n}\tau_{i}<k-1<\infty\). Then_
\[\dim_{\mathcal{H}}\left\{\boldsymbol{x}\in[0,1]^{n}:\begin{array}{c}\|q_{j }x_{i}\|<q_{j}^{-\tau_{i}}\quad(1\leq i\leq n)\\ \text{for all sufficiently large }j\in\mathbb{N}\end{array}\right\}=\min_{1 \leq j\leq n}\left\{\frac{n-\frac{1}{k-1}\sum\limits_{i=1}^{n}\tau_{i}+\sum \limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\}\,.\]
_Remark 1.3_.: As an indication of the sorts of sequences \(S\) satisfying (1) that one can choose, note that for any \((a,b)\in\mathbb{N}_{>1}^{2}\) the sequence \(\{a^{b^{c}}\}_{c\in\mathbb{N}}\) has
\[\lim_{j\to\infty}\frac{\log q_{j}}{\log q_{j-1}}=b\,.\]
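Indeed, writing \(q_{c}=a^{b^{c}}\) we have \(\log q_{c}=b^{c}\log a\), so that
\[\frac{\log q_{c}}{\log q_{c-1}}=\frac{b^{c}\log a}{b^{c-1}\log a}=b\]
for every \(c\).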
Let
\[\mathcal{F}=\left\{\{a^{b^{c}}\}_{c\in\mathbb{N}}:(a,b)\in\mathbb{N}_{>1}^{2} \right\}\,.\]
Then Theorem 1.2 and the countable stability of the Hausdorff dimension (see Section 3.1 for more details) gives us that for any \(n\)-tuple \(\boldsymbol{\tau}\) of finite positive real numbers
\[\dim_{\mathcal{H}}\left\{\mathbf{x}\in[0,1]^{n}:\,\exists\,S\in\mathcal{F}\ \text{s.t.}\begin{array}{c}\|q_{j}x_{i}\|<q_{j}^{-\tau_{i}}\quad(1\leq i \leq n)\\ \text{for all sufficiently large }j\in\mathbb{N}\end{array}\right\}=\min_{1\leq j \leq n}\left\{\frac{n+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{ \tau_{j}+1}\right\}\,.\]
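Indeed, each sequence \(\{a^{b^{c}}\}_{c\in\mathbb{N}}\) with \(b>\max_{1\leq i\leq n}\tau_{i}+1\) falls under Theorem 1.2 with \(k=b\), and since countable stability takes the supremum of the resulting dimensions over \(b\), the term \(\frac{1}{b-1}\sum_{i=1}^{n}\tau_{i}\) disappears in the limit \(b\to\infty\):
\[\sup_{b\in\mathbb{N}_{>1}}\min_{1\leq j\leq n}\left\{\frac{n-\frac{1}{b-1}\sum\limits_{i=1}^{n}\tau_{i}+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\}=\min_{1\leq j\leq n}\left\{\frac{n+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}+1}\right\}\,.\]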
_Remark 1.4_.: Compared to the theorem of Rynne, note that the set defined in Theorem 1.2 is of strictly smaller dimension, even if the sequence \(\{q_{j}\}_{j\in\mathbb{N}}\) is taken to be increasingly sparse. In fact, one can deduce from [27] that for an \(n\)-tuple \(\boldsymbol{\tau}\) with \(\tau_{1}+\cdots+\tau_{n}>0\) and a sequence \(S\) satisfying (1) we have
\[\dim_{\mathcal{H}}\left\{\mathbf{x}\in[0,1]^{n}:\begin{array}{c}\|q_{j}x_{i} \|<q_{j}^{-\tau_{i}}\quad(1\leq i\leq n)\\ \text{for infinitely many }j\in\mathbb{N}\end{array}\right\}=\min_{1\leq j \leq n}\left\{\frac{n+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{ \tau_{j}+1}\right\}\,.\]
This follows from [27, Theorem 1] and the observation that the value \(v(Q)\) (\(v(S)\) in our case) appearing in [27, Theorem 1] is equal to zero for the sequences we are considering.
_Remark 1.5_.: Theorem 1.2 is an extension of the recent work of the first-named author and Shi, who proved the result in the non-weighted setting [15]. Theorem 1.2 is a corollary of our main result which we state in the next section. In particular, we do not require the limit (1) to exist; we only need to calculate the \(\liminf\) of the sequence. See Theorem 1.6 and Theorem 2.1 for more details.
In order to state our main theorem we recall and expand upon the definition of _abstract rationals_ as introduced in [11]. We then recall the notation of [15] on the exponential shrinking problem in the non-weighted case and redefine these sets for abstract rationals. Our main results are then given, which are followed by a series of applications. These include the real weighted inhomogeneous approximation, \(p\)-adic weighted approximation, complex Diophantine approximation, and approximation on missing digit sets. In §§ 3-5 we prove our main results.
### Abstract rationals
Fix \(n\in\mathbb{N}\) and for each \(1\leq i\leq n\), let \((F_{i},d_{i},\mu_{i})\) be a non-empty totally bounded metric space equipped with a \(\delta_{i}\)-Ahlfors regular measure \(\mu_{i}\) with support \(F_{i}\). That is, for any \(x_{i}\in F_{i}\) and \(0<r<r_{0}\), for some bounded \(r_{0}\), there exist constants \(0<c_{1,i}\leq c_{2,i}<\infty\) such that
\[c_{1,i}r^{\delta_{i}}\leq\mu_{i}(B_{i}(x_{i},r))\leq c_{2,i}r^{\delta_{i}},\]
where for any \(x_{i}\in F_{i}\) and \(r>0\) we write
\[B_{i}(x_{i},r)=\left\{y\in F_{i}:d_{i}(x_{i},y)<r\right\}.\]
Let
\[F=\prod_{i=1}^{n}F_{i},\quad d(\cdot,\cdot)=\max_{1\leq i\leq n}d_{i}(\cdot, \cdot),\quad\mu=\prod_{i=1}^{n}\mu_{i}\]
so that \((F,d,\mu)\) is the product metric space. For any two subsets \(A,B\subset F_{i}\) by \(d_{i}(A,B)\) we mean
\[d_{i}(A,B)=\inf\{d_{i}(a,b):a\in A,b\in B\},\]
and for any \(\mathbf{x}=(x_{1},\ldots,x_{n})\in F\) and \(r>0\) we write
\[B((x_{1},\ldots,x_{n}),r)=\prod_{i=1}^{n}B_{i}(x_{i},r).\]
Let \(\mathcal{N}\) be an infinite countable set, and let \(\beta:\mathcal{N}\to\mathbb{R}_{+}\) be a function which associates to each element a weight \(\alpha\mapsto\beta(\alpha)=\beta_{\alpha}\). In line with [11], for each \(1\leq i\leq n\) and each \(q\in\mathcal{N}\) define the \(\beta\)_-abstract rationals of level \(q\) in \(F_{i}\)_ by fixing a _maximal \(\beta_{q}^{-1}\)-separated_ set of points \(P_{i}(q)\subset F_{i}\), where
* \(\beta_{q}^{-1}\)_-separated_ means that for all \(p_{1},p_{2}\in P_{i}(q)\) we have that \(d_{i}(p_{1},p_{2})\geq\beta_{q}^{-1}\), and
* _maximal_ means that for all \(x\in F_{i}\) there exists \(p\in P_{i}(q)\) such that \(d_{i}(p,x)<\beta_{q}^{-1}\).
That is, all the points are reasonably separated from each other and we cannot fit any more level \(q\) abstract rationals without being too close to an already present abstract rational. Let
\(P(q)=\prod_{i=1}^{n}P_{i}(q)\) denote the product of abstract rationals of level \(q\) in \(F\), and let
\[\mathcal{Q}=\bigcup_{q\in\mathcal{N}}P(q)\]
be the set of \(\beta\)_-abstract rationals in \(F\)_.
As an example consider
\[F_{i}= \,[0,1]^{d}\,,\quad d_{i}=|\cdot|\text{ the maximum norm},\] \[\mathcal{N}= \,\mathbb{N}\,,\qquad\beta(q)=q\,,\] \[P_{i}(q)=\left\{\left(\frac{p_{1}}{q},\ldots,\frac{p_{d}}{q} \right):0\leq p_{1},\ldots,p_{d}\leq q\right\}=\frac{1}{q}\mathbb{Z}^{d}\cap[0,1]^{d}\,,\quad(1\leq i\leq n)\,.\]
Then \(\mathcal{Q}=\mathbb{Q}^{dn}\cap[0,1]^{dn}\). Generally one can take each \(F_{i}\) to be any bounded convex body in \(\mathbb{R}^{d}\), and the lattice \(\frac{1}{q}\mathbb{Z}^{d}\) can be shifted by any vector.
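To check that this example satisfies the definition, note that any two distinct points of \(P_{i}(q)\) differ by at least \(\frac{1}{q}\) in some coordinate, while any \(x\in[0,1]^{d}\) is within \(\frac{1}{2q}\) of the nearest lattice point in the maximum norm:
\[\left|\frac{\mathbf{p}}{q}-\frac{\mathbf{p}'}{q}\right|\geq\frac{1}{q}=\beta_{q}^{-1}\quad\text{ and }\quad\min_{\mathbf{p}\in P_{i}(q)}\left|x-\frac{\mathbf{p}}{q}\right|\leq\frac{1}{2q}<\beta_{q}^{-1}\,,\]
so each \(P_{i}(q)\) is both \(\beta_{q}^{-1}\)-separated and maximal.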
### Liminf approximation sets
In [15], Hussain and Shi introduced an exponential shrinking problem stated as follows. Given a sequence \(S=\{q_{j}\}_{j\in\mathbb{N}}\) of positive integers, fixed \(\theta=(\theta_{1},\ldots,\theta_{n})\in[0,1]^{n}\), and \(\tau\geq 1\), consider the set
\[\Lambda^{S}_{\mathbb{Q}^{n}}(\tau):=\left\{\mathbf{x}\in[0,1]^{n}:\max_{1\leq i \leq n}||q_{j}x_{i}-\theta_{i}||<q_{j}^{-\tau}\quad\text{ for all }j\geq 1\right\}.\]
This set was introduced to answer a question related to a problem posed by Schleischitz in [28]. Under certain conditions on \(\tau\) and \(S\), they provide the exact Hausdorff dimension of the set \(\Lambda^{S}_{\mathbb{Q}^{n}}(\tau)\). In this article, we generalise the above setting by considering weighted approximation (the approximation function can vary between coordinate axes) by abstract rationals (which includes approximation by rationals as a special case).
Fix a sequence \(S=\{q_{j}\}_{j\in\mathbb{N}}\) with each \(q_{j}\in\mathcal{N}\) and weight vector \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\). Define the set
\[\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau}):=\left\{\mathbf{x}\in F:\begin{array} []{c}d_{i}(x_{i},P_{i}(q_{j}))<\beta(q_{j})^{-\tau_{i}}\quad(1\leq i\leq n) \\ \text{ for all }j\in\mathbb{N}\end{array}\right\},\]
and the set
\[\widehat{\Lambda}^{S}_{\mathcal{Q}}(\boldsymbol{\tau}):=\left\{\mathbf{x}\in F :\begin{array}{c}d_{i}(x_{i},P_{i}(q_{j}))<\beta(q_{j})^{-\tau_{i}}\quad(1 \leq i\leq n)\\ \text{ for all sufficiently large }j\in\mathbb{N}\end{array}\right\}.\]
The latter set is a slight relaxation of the former set since we only require the points to be eventually always close to a sequence of abstract rationals. We may write the latter set in terms of the former by defining for any \(t\in\mathbb{N}\)
\[\sigma^{t}S=\{q_{i+t}\}_{i\in\mathbb{N}},\]
that is, \(\sigma\) is the left shift on the sequence \(S\). Then
\[\widehat{\Lambda}^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\bigcup_{t\in\mathbb{N }}\Lambda^{\sigma^{t}S}_{\mathcal{Q}}(\boldsymbol{\tau}).\]
### Main results
We prove the following results on the Hausdorff dimension of the sets introduced above.
**Theorem 1.6**.: _Let \((F,d,\mu)\) be a product space of non-empty totally bounded metric spaces, each equipped with an Ahlfors regular measure. Let \(\mathcal{N}\) be an infinite countable set, \(\beta:\mathcal{N}\to\mathbb{R}_{+}\), and \(\mathcal{Q}\) be a set of \(\beta\)-abstract rationals in \(F\). Fix an infinite sequence \(S\) contained in \(\mathcal{N}\) over which \(\beta\) is unbounded and strictly increasing. Suppose that_
\[\inf_{j\in\mathbb{N}}\frac{\log\beta(q_{j})}{\log\beta(q_{j-1})}=h_{S}>1,\ \ \text{and}\ \ \liminf_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}\log\beta(q_{i})}{\log\beta(q_ {j})}=\alpha_{S}.\]
_For any \(\boldsymbol{\tau}\) such that \(h_{S}>\tau_{i}>1\) for each \(1\leq i\leq n\), we have that_
\[\dim_{\mathcal{H}}\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\min_{1\leq k \leq n}\left\{\frac{1}{\tau_{k}}\left(\sum\limits_{i=1}^{n}\delta_{i}-\alpha _{S}\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}+\sum\limits_{j:\tau_{k}\geq \tau_{j}}(\tau_{k}-\tau_{j})\delta_{j}\right)\right\}.\]
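As a consistency check, in the non-weighted case \(\tau_{1}=\cdots=\tau_{n}=\tau\) and \(\delta_{1}=\cdots=\delta_{n}=\delta_{0}\) the final summation vanishes and the formula collapses to
\[\dim_{\mathcal{H}}\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\frac{n\delta_{0}}{\tau}\left(1-\alpha_{S}(\tau-1)\right)\,,\]
matching the shape of the non-weighted result of [15].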
_Remark 1.7_.: The infimum condition of Theorem 1.6 cannot be replaced by a \(\liminf\) condition. To see this consider for example \(F=[0,1]\), \(\beta(q)=q\),
\[\mathcal{Q}=\left\{\frac{p+\theta}{q}:(p,q)\in\mathbb{Z}\times\mathbb{N}\ \text{and}\ \frac{p+\theta}{q}\in[0,1]\right\},\]
and the sequence
\[S=\{2,3,3^{h},3^{h^{2}},\dots\}.\]
For large \(h\) and suitable choice of \(\theta\in[0,1]\) it can be shown that the sets of points
\[\{x\in[0,1]:\|2x+\theta\|<2^{-h+1}\}\ \text{and}\] \[\{x\in[0,1]:\|3x+\theta\|<3^{-h+1}\}\]
are disjoint and so \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\emptyset\).
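For instance, one may take \(\theta=\frac{1}{2}\): the first set consists of points within \(2^{-h}\) of \(\{\frac{1}{4},\frac{3}{4}\}\), the second of points within \(3^{-h}\) of \(\{\frac{1}{6},\frac{1}{2},\frac{5}{6}\}\), and since the smallest distance between these two sets of centres is \(\frac{1}{12}\), the two sets are disjoint once \(2^{-h}+3^{-h}<\frac{1}{12}\), that is, for all \(h\geq 4\).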
_Remark 1.8_.: In order to state our result for the general case of abstract rationals, it is necessary to suppose each \(\tau_{i}<h_{S}\). Without this assumption additional considerations are necessary. See [15, Remark 1.2] for a discussion on complications in the case of real simultaneous approximation (non-weighted).
The infimum condition can be replaced by a \(\liminf\) condition if we consider the set \(\widehat{\Lambda}^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\).
**Theorem 1.9**.: _Let \((F,d,\mu)\), \(\mathcal{N}\), \(\beta\), and \(\mathcal{Q}\) be constructed as above. Fix an infinite sequence \(S\) contained in \(\mathcal{N}\) over which \(\beta\) is unbounded and strictly increasing and suppose that_
\[\liminf_{j\to\infty}\frac{\log\beta(q_{j})}{\log\beta(q_{j-1})}=h>1,\ \ \text{and}\ \ \liminf_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}\log\beta(q_{i})}{\log\beta(q_ {j})}=\alpha_{S}.\]
_For any \(\boldsymbol{\tau}\) such that \(h>\tau_{i}>1\) for each \(1\leq i\leq n\), we have that_
\[\dim_{\mathcal{H}}\widehat{\Lambda}^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\min_ {1\leq k\leq n}\left\{\frac{1}{\tau_{k}}\left(\sum\limits_{i=1}^{n}\delta_{i}- \alpha_{S}\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}+\sum\limits_{j:\tau_{k} \geq\tau_{j}}(\tau_{k}-\tau_{j})\delta_{j}\right)\right\}.\]
It should be noted that Theorem 1.9 essentially follows from Theorem 1.6 and the countable stability of the Hausdorff dimension. For completeness, we provide the proof at the end of § 3.4.
**Acknowledgments:** The research of both authors is supported by the Australian Research Council discovery project 200100994.
## 2. Applications
We begin with the classical setting of real approximation by rational numbers, which we generalise to the weighted inhomogeneous setting. We then give similar statements in the case of \(p\)-adic approximation. In later applications, we give the statement in the simplified one-dimensional homogeneous setting. It should be clear from the application in the real weighted inhomogeneous setting that the one-dimensional case readily generalises to the higher dimensional weighted setting. Indeed, the only calculation required to apply Theorems 1.6-1.9 is to show that our setup aligns with some approximation by abstract rationals. Once this is done in one dimension it is clear that the product space also satisfies the criteria of abstract rationals.
The notion of abstract rationals admits a broad range of applications, though in some instances it is not immediately clear whether a set satisfies the properties of being abstract rationals. The main work in each of these applications is constructing the sets of abstract rationals. The list of applications presented below is far from exhaustive. For example, in increasing levels of difficulty, one could consider formal power series approximation, approximation by irrational rotations, and approximation of real manifolds by rational numbers. The latter two cases seem particularly challenging.
### Real approximation
The classical study of approximation of real numbers by rationals is extensive, see [5] for a survey of the foundational results and [16, 20, 22, 27, 30] for the more recent weighted analogues of such results. In this section let
\[F_{i}=[0,1]\,,\quad d_{i}= |\cdot|\,,\quad\mu_{i}=\lambda\,,\] \[\text{so}\ \left(F,d,\mu\right)= ([0,1]^{n},|\cdot|,\lambda_{n})\,,\]
for \(|\cdot|\) the usual max norm on real space, \(\lambda\) the Lebesgue measure, and \(\lambda_{n}\) the \(n\)-dimensional Lebesgue measure. Let
\[\mathcal{N}= \mathbb{N}\,,\qquad\beta(q)=q\,,\qquad\theta:\mathcal{N}\to[0,1] ^{n}\,,\text{ and }\] \[P_{i}(q)= \left\{\frac{p+\theta_{i}(q)}{q}:p\in\mathbb{Z}\ \ \text{ and }\ \frac{p+\theta_{i}(q)}{q}\in[0,1]\right\}\,,\ \ (1\leq i\leq n)\]
be the \(\beta\)-abstract rationals of level \(q\) in \([0,1]\), where \(\beta(q)=q\).
Note that we only have to show that each \(P_{i}(q)\) is a well-defined set of \(\beta\)-abstract rationals of level \(q\) in \([0,1]\); the higher dimensional product space result then follows immediately. Observe that each \(P_{i}(q)\) can be seen as a subset of the shifted lattice \(\frac{1}{q}\mathbb{Z}+\theta_{i}(q)\), thus it is clear that the points are \(q^{-1}\)-separated, and furthermore the set is maximal. Hence
\[\mathcal{Q}=\bigcup_{q\in\mathbb{N}}\prod_{i=1}^{n}\left\{\frac{p+\theta_{i} (q)}{q}:p\in\mathbb{Z}\ \ \text{ and }\ \frac{p+\theta_{i}(q)}{q}\in[0,1]\right\}\]
is a well-defined set of abstract rationals.
For \(\theta(q)=0\) for all \(q\in\mathbb{N}\) this is the standard homogeneous setting, and for \(\theta(q)=(\theta_{1},\ldots,\theta_{n})\) fixed this is the standard inhomogeneous setting.
Let \(S=\{q_{i}\}_{i\in\mathbb{N}}\) be an increasing sequence of positive integers and define the sets
\[W_{n}^{S}(\boldsymbol{\tau}) =\left\{\mathbf{x}\in[0,1]^{n}:\begin{array}{c}\|q_{j}x_{i}- \theta(q_{j})_{i}\|<q_{j}^{-\tau_{i}}\quad(1\leq i\leq n)\\ \text{ for all }j\in\mathbb{N}\end{array}\right\}\,,\] \[\widehat{W}_{n}^{S}(\boldsymbol{\tau}) =\left\{\mathbf{x}\in[0,1]^{n}:\begin{array}{c}\|q_{j}x_{i}- \theta(q_{j})_{i}\|<q_{j}^{-\tau_{i}}\quad(1\leq i\leq n)\\ \text{ for all sufficiently large }j\in\mathbb{N}\end{array}\right\}\,.\]
Note that dividing through by \(q_{j}\) in the inequalities in \(W_{n}^{S}(\boldsymbol{\tau})\) gives us \(W_{n}^{S}(\boldsymbol{\tau})=\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau}+1)\), and similarly \(\widehat{W}_{n}^{S}(\boldsymbol{\tau})=\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau}+1)\). Notice that, by the choice \(\beta(q)=q\), for any strictly increasing sequence of positive integers \(S\) we immediately have that \(\beta\) is unbounded and strictly increasing on \(S\). Applying Theorem 1.6 to this setting we immediately have the following.
**Theorem 2.1**.: _Let \(S\) be an increasing sequence of integers with_
\[\inf_{j\in\mathbb{N}}\frac{\log q_{j}}{\log q_{j-1}}=h_{S}>1,\ \text{ and }\ \liminf_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}=\alpha_{S}.\]
_For any \(\boldsymbol{\tau}\) such that \(h_{S}-1>\tau_{i}>0\) for each \(1\leq i\leq n\), we have that_
\[\dim_{\mathcal{H}}W_{n}^{S}(\boldsymbol{\tau})=\min_{1\leq k\leq n}\left\{ \frac{1}{\tau_{k}+1}\left(n-\alpha_{S}\sum\limits_{i=1}^{n}\tau_{i}+\sum_{i: \tau_{k}\geq\tau_{i}}(\tau_{k}-\tau_{i})\right)\right\}.\]
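As a worked example, take \(n=2\), \(S=\{2^{3^{c}}\}_{c\in\mathbb{N}}\) (so that \(h_{S}=3\) and \(\alpha_{S}=\frac{1}{2}\)) and \(\boldsymbol{\tau}=(1,\frac{1}{2})\). Then
\[\dim_{\mathcal{H}}W_{2}^{S}(\boldsymbol{\tau})=\min\left\{\frac{2-\frac{3}{4}+\frac{1}{2}}{2},\ \frac{2-\frac{3}{4}}{\frac{3}{2}}\right\}=\min\left\{\frac{7}{8},\frac{5}{6}\right\}=\frac{5}{6}\,.\]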
_Remark 2.2_.: This result is a generalisation of [15, Theorem 1.1] to the weighted setting, where it was proven that for \(\tau=\tau_{1}=\cdots=\tau_{n}\) with \(h_{S}-1>\tau>0\) that
\[\dim_{\mathcal{H}}W_{n}^{S}(\tau)=\frac{n}{\tau+1}(1-\alpha_{S}\tau),\]
which agrees with the theorem above. Unlike [15], in our setting the inhomogeneity \(\theta\) is allowed to vary over the sequence \(S\).
By applying Theorem 1.9 instead of Theorem 1.6 we have the following result.
**Theorem 2.3**.: _Let \(S\) be an increasing sequence of integers with_
\[\liminf_{j\to\infty}\frac{\log q_{j}}{\log q_{j-1}}=h_{S}>1,\ \text{ and }\ \liminf_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}= \alpha_{S}.\]
_For any \(\boldsymbol{\tau}\) such that \(h_{S}-1>\tau_{i}>0\) for each \(1\leq i\leq n\), we have that_
\[\dim_{\mathcal{H}}\widehat{W}_{n}^{S}(\boldsymbol{\tau})=\min_{1\leq k\leq n}\left\{\frac{1}{\tau_{k}+1}\left(n-\alpha_{S}\sum\limits_{i=1}^{n}\tau_{i}+\sum_{i:\tau_{k}\geq\tau_{i}}(\tau_{k}-\tau_{i})\right)\right\}.\]
In later applications we will only give results aligning with setups of the form \(\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\). It should be clear that the statements relating to \(\widehat{\Lambda}_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\) follow immediately.
### \(p\)-adic weighted approximation
Fix a prime number \(p\) and let
\[F=\mathbb{Z}_{p}\,,\quad d=\mid\cdot\mid_{p},\quad\mu=\mu_{p}\,,\]
for \(\mathbb{Z}_{p}\) the ring of \(p\)-adic integers, \(\mid\cdot\mid_{p}\) the \(p\)-adic norm, and \(\mu_{p}\) the \(p\)-adic Haar measure normalised so that \(\mu_{p}(\mathbb{Z}_{p})=1\). For metric properties of the classical sets of Diophantine approximation in \(p\)-adic space see [1, 17, 24], and for the more recent weighted setting see [4, 12, 22].
Due to the ultrametric properties of \(p\)-adic space, it is slightly more complicated to construct layers of abstract rationals. We opt for the following setup. Let
\[\mathcal{N}=\{p^{k}:k\in\mathbb{N}\}\,,\quad\text{ and }\quad P(q)=\left\{ \frac{a}{q-1}:1\leq a\leq q\right\}\,.\]
Note that for \(\frac{a}{q-1},\frac{a^{\prime}}{q-1}\in P(q)\) with \(a\neq a^{\prime}\) we have that
\[\left|\frac{a}{q-1}-\frac{a^{\prime}}{q-1}\right|_{p}=|q-1|_{p}^{-1}|a-a^{ \prime}|_{p}=|a-a^{\prime}|_{p}\geq p^{-k}=|q|^{-1},\]
and so \(P(q)\) is \(|q|^{-1}\)-separated for each \(q\in\mathcal{N}\). Thus define \(\beta(q)=|q|\).
To show each set of \(\beta\)-abstract rationals of level \(q\) is maximal, observe that there are \(p^{k}\) points contained in \(P(p^{k})\), each of which lies in \(\mathbb{Z}_{p}\) since \(p^{k}-1\) is coprime to \(p\). The \(p\)-adic balls
\[\bigcup_{\frac{a}{p^{k}-1}\in P(p^{k})}B\left(\frac{a}{p^{k}-1},p^{-k}\right) \tag{2}\]
are disjoint and furthermore cover \(\mathbb{Z}_{p}\). To see this, write each \(x\in\mathbb{Z}_{p}\) via its \(p\)-adic expansion
\[x=\sum_{i=0}^{\infty}x_{i}p^{i}\,,\quad x_{i}\in\{0,1,\ldots,p-1\}.\]
Since the points \(\frac{a}{p^{k}-1}\in P(p^{k})\) are pairwise \(p^{-k}\)-separated, their corresponding \(p\)-adic expansions must differ among the first \(k\) coefficients. There are \(p^{k}\) different \(p\)-adic expansions over the first \(k\) coefficients, so the \(p^{k}\) abstract rationals of level \(p^{k}\) realise each possible expansion. Thus any \(x\in\mathbb{Z}_{p}\) belongs to some ball in the union (2), and so every \(x\in\mathbb{Z}_{p}\) is \(p^{-k}\)-close to an abstract rational of level \(p^{k}\).
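As a small illustration, take \(p=2\) and \(k=1\), so that \(q=2\) and \(P(2)=\{1,2\}\): every \(x\in\mathbb{Z}_{2}\) agrees with exactly one of these two points modulo \(2\), the point \(1\) capturing the odd \(2\)-adic integers and the point \(2\) the even ones, so \(P(2)\) is indeed maximal at scale \(2^{-1}\).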
Let \(\mathcal{Q}_{p}=\bigcup_{q\in\mathcal{N}}P(q)\subset\mathbb{Z}_{p}\). For \(\tau\in\mathbb{R}_{+}\) and a sequence \(S=\{p^{k_{j}}\}_{j\in\mathbb{N}}\), with \(\{k_{j}\}_{j\in\mathbb{N}}\subset\mathbb{N}\) increasing, define
\[\mathcal{W}_{\mathbb{Z}_{p}}^{S}(\tau)=\left\{x\in\mathbb{Z}_{p}:\left|x- \frac{a}{p^{k_{j}}-1}\right|_{p}<p^{-k_{j}\tau}\ \ \text{for some}\ \frac{a}{p^{k_{j}}-1}\in P(p^{k_{j}})\ \text{and for all}\ j\in\mathbb{N}\right\}\]
Note that \(\mathcal{W}_{\mathbb{Z}_{p}}^{S}(\tau)=\Lambda_{\mathcal{Q}_{p}}^{S}(\tau)\), and observe that since the sequence \(S\) is strictly increasing \(\beta(q)=|q|\) is unbounded and strictly increasing. Hence applying Theorem 1.6 we have the following.
**Theorem 2.4**.: _Let \(S=\{p^{k_{j}}\}_{j\in\mathbb{N}}\) be a sequence of positive integers with \(k_{j}\in\mathbb{N}\) increasing and let_
\[\inf_{j\in\mathbb{N}}\frac{k_{j}}{k_{j-1}}=h_{S}>1,\ \text{ and }\ \liminf_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}k_{i}}{k_{j}}=\alpha_{S}.\]
_For any \(\tau\) such that \(h_{S}>\tau>1\) we have that_
\[\dim_{\mathcal{H}}\mathcal{W}^{S}_{\mathbb{Z}_{p}}(\tau)=\frac{1}{\tau}\left(1-\alpha_{S}(\tau-1)\right)\,.\]
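For example, taking \(k_{j}=2^{j}\) gives \(h_{S}=2\) and \(\alpha_{S}=\lim_{j\to\infty}\sum_{i=1}^{j-1}2^{i}/2^{j}=1\), so that for any \(1<\tau<2\)
\[\dim_{\mathcal{H}}\mathcal{W}^{S}_{\mathbb{Z}_{p}}(\tau)=\frac{1}{\tau}\left(1-(\tau-1)\right)=\frac{2-\tau}{\tau}\,.\]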
For completeness in this application, we also provide the weighted result. Redefine
\[F =\mathbb{Z}_{p}^{n}\,,\qquad d=\max_{1\leq i\leq n}|\cdot|_{p}\,,\qquad\mu=\mu_{p,n}=\prod_{i=1}^{n}\mu_{p}\,,\] \[\mathcal{N} =\{p^{k}:k\in\mathbb{N}\}\,,\quad P(q)=\prod_{i=1}^{n}P_{i}(q)=\prod_{i=1}^{n}\left\{\frac{a_{i}}{q-1}:1\leq a_{i}\leq q\right\}\,,\] \[\mathcal{Q}_{p,n} =\bigcup_{q\in\mathcal{N}}P(q)\subset\mathbb{Z}_{p}^{n}\,.\]
By our above calculations in the one dimensional case, each \(P_{i}(q)\) is a set of \(\beta\)-abstract rationals of level \(q\) in \(\mathbb{Z}_{p}\), so \(P(q)=\prod_{i=1}^{n}P_{i}(q)\) is a set of \(\beta\)-abstract rationals of level \(q\) in \(\mathbb{Z}_{p}^{n}\). For an \(n\)-tuple \(\boldsymbol{\tau}\in\mathbb{R}_{+}^{n}\) and a sequence \(S=\{p^{k_{j}}\}_{j\in\mathbb{N}}\), with \(\{k_{j}\}\) an increasing sequence, let
\[\mathcal{W}^{S}_{\mathbb{Z}_{p}^{n}}(\boldsymbol{\tau})=\left\{\mathbf{x}\in\mathbb{Z}_{p}^{n}:\quad\begin{array}{l}\left|x_{i}-\frac{a_{i}}{p^{k_{j}}-1}\right|_{p}<p^{-k_{j}\tau_{i}}\quad(1\leq i\leq n)\\ \text{ for some }\frac{\mathbf{a}}{p^{k_{j}}-1}\in P(p^{k_{j}})\text{ and for all }j\in\mathbb{N}\end{array}\right\}\,.\]
**Theorem 2.5**.: _Let \(S=\{p^{k_{j}}\}_{j\in\mathbb{N}}\) be a sequence of positive integers with \(k_{j}\in\mathbb{N}\) increasing and let_
\[\inf_{j\in\mathbb{N}}\frac{k_{j}}{k_{j-1}}=h_{S}>1,\ \text{ and }\ \liminf_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}k_{i}}{k_{j}}=\alpha_{S}.\]
_For any \(\boldsymbol{\tau}\) such that \(h_{S}>\tau_{i}>1\) for each \(1\leq i\leq n\) we have that_
\[\dim_{\mathcal{H}}\mathcal{W}^{S}_{\mathbb{Z}_{p}^{n}}(\boldsymbol{\tau})=\min_{1\leq k\leq n}\left\{\frac{1}{\tau_{k}}\left(n-\alpha_{S}\sum\limits_{i=1}^{n}(\tau_{i}-1)+\sum\limits_{i:\tau_{k}\geq\tau_{i}}(\tau_{k}-\tau_{i})\right)\right\}.\]
### Complex Diophantine approximation
The classical setting of Diophantine approximation of complex numbers by Gaussian integers has been studied by a range of authors, see for example [7, 13, 14] and [8, Sections 4-6]. In the complex case, we consider the following setup. Let
\[F=\mathfrak{F}=[-\tfrac{1}{2},\tfrac{1}{2}]\times[-\tfrac{1}{2},\tfrac{1}{2}]i \,,\quad d=\|\cdot\|_{2}\,,\quad\mu=\lambda_{2}\,,\]
for \(\|\cdot\|_{2}\) the Euclidean norm. Note that \(\delta=2\) in this setting. Let
\[\mathcal{N}=\{a+bi:a,b\in\mathbb{Z}\}\,,\ \ \beta(a+bi)=\|a+bi\|_{2}=\sqrt{a^{2}+b^{2}}\,.\]
Then, for any non-zero Gaussian integer \(q\in\mathcal{N}\), let
\[P(q)=\left\{\frac{p}{q}:p\in\mathcal{N}\text{ and }\frac{p}{q}\in\mathfrak{F} \right\}\,.\]
This set can be associated to a lattice on \(\mathbb{R}^{2}\) in which distinct points satisfy \(\|\frac{p}{q}-\frac{p^{\prime}}{q}\|_{2}\geq\|q\|_{2}^{-1}\), see [8, Section 4.5]. Hence \(P(q)\) is \(\|q\|_{2}^{-1}\)-separated and maximal, and so \(\mathcal{Q}_{\mathbb{C}}=\bigcup_{q\in\mathcal{N}}P(q)\) is a well defined set of \(\beta\)-abstract rationals on \(\mathfrak{F}\). Let \(S=\{q_{j}\}_{j\in\mathbb{N}}\) be a sequence of Gaussian integers with strictly increasing norm, and let \(\tau\in\mathbb{R}_{+}\). Define the set
\[\mathfrak{W}_{\mathbb{C}}^{S}(\tau)=\left\{z\in\mathfrak{F}:\left\|z-\frac{p} {q_{j}}\right\|_{2}<\|q_{j}\|_{2}^{-\tau-1}\text{ for some }\frac{p}{q_{j}}\in P(q_{j}) \text{ and for all }j\in\mathbb{N}\right\}.\]
Note that, by the condition that \(S\) is a sequence of Gaussian integers with strictly increasing norm, \(\beta(q)=\|q\|_{2}\) is strictly increasing and unbounded on \(S\). Furthermore, dividing through by \(\|q_{j}\|_{2}\) shows that \(\mathfrak{W}_{\mathbb{C}}^{S}(\tau)=\Lambda_{\mathcal{Q}_{\mathbb{C}}}^{S}(\tau+1)\). Applying Theorem 1.6 we have the following.
**Theorem 2.6**.: _Let \(S=\{q_{j}\}_{j\in\mathbb{N}}\) be a sequence of Gaussian integers with strictly increasing norm. Let \(h_{S}\) and \(\alpha_{S}\) be defined and satisfy the conditions as in Theorem 1.6. For any \(\tau\in\mathbb{R}_{+}\) such that \(h_{S}-1>\tau>0\) we have that_
\[\dim_{\mathcal{H}}\mathfrak{W}_{\mathbb{C}}^{S}(\tau)=\frac{2}{\tau+1}\left(1 -\alpha_{S}\tau\right).\]
### Approximation on missing digit sets
Diophantine approximation on fractals has been an area of intense study, particularly since Mahler's 1984 paper [25] in which he asked how well points inside the middle-third Cantor set could be approximated by rational points either i) inside, or ii) outside of the middle-third Cantor set. This question has since been generalised significantly. For classical approximation results in this setting see [2, 3, 10, 19, 21, 23, 29] and [20, 30] for the higher dimensional weighted setting. Let \(b\in\mathbb{N}_{\geq 3}\) be fixed and let \(J\subset\{0,1,\ldots,b-1\}\) denote a proper subset with \(\#J\geq 2\). For each \(j\in J\) define the maps \(f_{j}:[0,1]\to[0,1]\) by
\[f_{j}(x)=\frac{x+j}{b},\]
and let \(\Phi=\{f_{j}:j\in J\}\). Consider the self-similar iterated function system \(\Phi\) and let \(\mathcal{C}(b,J)\) be the attractor of \(\Phi\). That is, \(\mathcal{C}(b,J)\) is the unique non-empty compact subset of \([0,1]\) such that
\[\mathcal{C}(b,J)=\bigcup_{j\in J}f_{j}(\mathcal{C}(b,J)).\]
Call \(\mathcal{C}(b,J)\) the \((b,J)\)-missing digit set. As an example note that \(\mathcal{C}(3,\{0,2\})\) is the classical middle-third Cantor set. These sets can be equipped with a self similar measure \(\mu_{\mathcal{C}}\) defined as
\[\mu_{\mathcal{C}}(B)=\mathcal{H}^{\gamma(b,J)}(B\cap\mathcal{C}(b,J))\,,\]
which was shown by Mauldin and Urbański [26] to be \(\gamma(b,J)\)-Ahlfors regular where
\[\gamma(b,J)=\dim_{\mathcal{H}}\mathcal{C}(b,J)=\frac{\log\#J}{\log b}.\]
Concisely, let
\[F= \mathcal{C}(b,J)\,,\quad d=|\cdot|\,,\quad\mu=\mu_{\mathcal{C}}\,,\] \[\mathcal{N}=\mathbb{N}\,,\quad\text{ and }\ \beta(k)=b^{k}\,.\]
We now construct the abstract rationals in this setting. Henceforth, fix any point \(z\in\mathcal{C}(b,J)\) and for any \(k\in\mathcal{N}\) let
\[P(k)=\{f_{\mathbf{i}}(z):\mathbf{i}\in J^{k}\},\]
where \(f_{\mathbf{i}}=f_{i_{1}}\circ\cdots\circ f_{i_{k}}\) for a finite word \(\mathbf{i}=(i_{1},\ldots,i_{k})\in J^{k}\). Note that for each \(f_{\mathbf{v}}(z),f_{\mathbf{u}}(z)\in P(k)\) with \(\mathbf{v}\neq\mathbf{u}\) we have that
\[|f_{\mathbf{v}}(z)-f_{\mathbf{u}}(z)|= \left|\left(\sum_{i=1}^{k}v_{i}b^{-i}+\sum_{i=k+1}^{\infty}z_{i- k}b^{-i}\right)-\left(\sum_{i=1}^{k}u_{i}b^{-i}+\sum_{i=k+1}^{\infty}z_{i-k}b^{- i}\right)\right|\] \[= \left|\sum_{i=1}^{k}(v_{i}-u_{i})b^{-i}\right|.\]
Since \(\mathbf{u}\neq\mathbf{v}\) they must differ in at least one digit. Suppose that \((u_{1},\ldots,u_{t})=(v_{1},\ldots,v_{t})\) but \(u_{t+1}\neq v_{t+1}\) for some \(0\leq t\leq k-1\). Then
\[\left|\sum_{i=1}^{k}(v_{i}-u_{i})b^{-i}\right| =\left|\sum_{i=t+1}^{k}(v_{i}-u_{i})b^{-i}\right|\] \[\geq b^{-(t+1)}-\left|\sum_{i=t+2}^{k}(v_{i}-u_{i})b^{-i}\right|\] \[\geq b^{-(t+1)}-(b^{-(t+1)}-b^{-k})\] \[=b^{-k}.\]
Hence \(P(k)\) is \(b^{-k}\)-separated, and so it is natural to take \(\beta(k)=b^{k}\). To show it is maximal, consider any point \(x\in\mathcal{C}(b,J)\). Then there exists a word \(\mathbf{i}\in J^{\mathbb{N}}\) such that
\[x=\lim_{n\to\infty}f_{(i_{1},\ldots,i_{n})}(z).\]
Hence,
\[|f_{(i_{1},\ldots,i_{k})}(z)-x|=\left|f_{(i_{1},\ldots,i_{k})}(z)-\lim_{n\to \infty}f_{(i_{1},\ldots,i_{n})}(z)\right|\leq b^{-k}\]
and so \(P(k)\) is maximal. Let \(\mathcal{Q}(\mathcal{C}(b,J),z)=\bigcup_{k\in\mathbb{N}}P(k)\).
Let \(S=\{k_{j}\}_{j\in\mathbb{N}}\) be a sequence of increasing integers. For \(\tau\in\mathbb{R}_{+}\) define
\[\mathbf{W}^{S}_{\mathcal{C}(b,J)}(\tau,z):=\left\{x\in\mathcal{C}(b,J):|x-f_{\mathbf{i}}(z)|<b^{-k_{j}\tau}\ \text{ for some }\mathbf{i}\in J^{k_{j}}\text{ and for all }j\in\mathbb{N}\right\}.\]
Note that since \(S\) is strictly increasing, the function \(\beta(k)=b^{k}\) is increasing and unbounded on \(S\). Furthermore \(\mathbf{W}^{S}_{\mathcal{C}(b,J)}(\tau,z)=\Lambda^{S}_{\mathcal{Q}(\mathcal{C}(b,J),z)}(\tau)\) and so by Theorem 1.6 we have the following.
**Theorem 2.7**.: _Let \(S=\{k_{j}\}_{j\in\mathbb{N}}\) be an increasing sequence of positive integers and let \(h_{S}\) and \(\alpha_{S}\) be defined by \(S\) and satisfy the conditions as in Theorem 1.6. For any \(\tau\in\mathbb{R}_{+}\) such that \(h_{S}>\tau>1\) we have that_
\[\dim_{\mathcal{H}}\mathbf{W}^{S}_{\mathcal{C}(b,J)}(\tau,z)=\frac{\gamma(b,J) }{\tau}\left(1-\alpha_{S}(\tau-1)\right).\]
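In particular, for the middle-third Cantor set \(\mathcal{C}(3,\{0,2\})\) and the sequence \(k_{j}=2^{j}\) (so that \(h_{S}=2\) and \(\alpha_{S}=1\)) we obtain, for any \(1<\tau<2\),
\[\dim_{\mathcal{H}}\mathbf{W}^{S}_{\mathcal{C}(3,\{0,2\})}(\tau,z)=\frac{\log 2}{\log 3}\cdot\frac{2-\tau}{\tau}\,.\]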
## 3. Proof of Theorem 1.6
Before giving the proof of Theorem 1.6 we show that \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\) can be written as a countable intersection of unions of rectangles. This representation will make for a much easier calculation of the upper and lower bounds for \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\). For \(1\leq i\leq n\) and \(j\in\mathbb{N}\), let
\[E_{j,i}=\bigcup_{p_{i}\in P_{i}(q_{j})}B_{i}(p_{i},\beta(q_{j})^{-\tau_{i}}),\]
where \(q_{j}\) denotes the \(j\)th value in the sequence \(S\), and let
\[E_{j}=\prod_{i=1}^{n}E_{j,i}=\bigcup_{\mathbf{p}=(p_{1},\ldots,p_{n})\in P(q_{ j})}\prod_{i=1}^{n}B_{i}(p_{i},\beta(q_{j})^{-\tau_{i}}).\]
Then
\[\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\bigcap_{j\in\mathbb{N}}E_{j}.\]
This construction is crucial in the proof of Theorem 1.6. A brief sketch of the proof is as follows. As is standard with the calculation of the dimension of such sets, we split the proof into two parts, proving the upper and lower bound separately.
The upper bound is proven by considering the standard cover provided above. One small technicality is that the cover is a cover by rectangles, so in order to construct an efficient cover of \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\) by balls we need to consider several different coverings, depending on the side lengths of the rectangles in the layers \(E_{j}\).
The methodology of the lower bound is as follows. Firstly, we construct a Cantor set \(L^{S}_{\infty}\) inside of \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\). This Cantor set is a very natural construction: starting at the layer \(E_{1}\) as defined in the construction of \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\), we iteratively build the Cantor set by including all rectangles from \(E_{j}\) that are contained in some rectangle from the \(E_{j-1}\) layer. Hence, starting at \(E_{1}\), we construct a nested Cantor set \(L^{S}_{\infty}\) contained in \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\). For the second part, we construct a measure \(\nu\) on \(L^{S}_{\infty}\). This measure is defined naturally so that the mass is evenly distributed over the rectangles appearing in each layer, and the total mass of the rectangles contained in a larger rectangle of the previous layer is equal to the mass of said larger rectangle. The calculation of the Hölder exponent of the measure \(\nu\) on a general ball is quite technical since our constructed Cantor set is made of rectangles. To calculate the Hölder exponent we have to split into a number of cases determined by the size of the radius of our general ball relative to the side lengths of the rectangles in our Cantor set construction. Once the exponent is calculated we can apply the mass distribution principle to obtain our lower bound dimension result.
Before going into the proof we state the definitions of Hausdorff measure and dimension and state some easy, but essential, lemmas that will be required in the proof.
### Preliminaries
We begin by recalling the definition of Hausdorff measure and dimension, for a thorough exposition see [9]. Let \((F,d)\) be a metric space and \(X\subset F\). Then for any \(0<\rho\leq\infty\), any finite or countable collection \(\{B_{i}\}\) of subsets of \(F\) such that \(X\subset\bigcup_{i}B_{i}\) and
\[r(B_{i})=\inf\{r\geq 0:d(x,y)\leq r\quad(x,y\in B_{i})\}\leq\rho\]
is called a \(\rho\)_-cover_ of \(X\). Let
\[\mathcal{H}_{\rho}^{s}(X)=\inf\left\{\sum_{i}r(B_{i})^{s}\right\}\,,\]
where the infimum is taken over all possible \(\rho\)-covers \(\{B_{i}\}\) of \(X\). The \(s\)_-dimensional Hausdorff measure of \(X\)_ is defined to be
\[\mathcal{H}^{s}(X)=\lim_{\rho\to 0}\mathcal{H}_{\rho}^{s}(X).\]
For any set \(X\subset F\) denote by \(\dim_{\mathcal{H}}X\) the Hausdorff dimension of \(X\), defined as
\[\dim_{\mathcal{H}}X:=\inf\left\{s\geq 0\;:\;\mathcal{H}^{s}(X)=0\right\}\,.\]
A property that the Hausdorff dimension enjoys is that it is countably stable. That is, for a sequence of sets \(X_{i}\) we have that
\[\dim_{\mathcal{H}}\bigcup_{i}X_{i}=\sup_{i}\dim_{\mathcal{H}}X_{i}\,.\]
We will use the following lemma to calculate the lower bound dimension result.
**Lemma 3.1** (Mass Distribution Principle).: _Let \(\nu\) be a Borel probability measure supported on a subset \(X\subseteq F\). Suppose that for some \(s>0\) there exist constants \(c,r_{0}>0\) such that_
\[\nu(B)\leq c\,r(B)^{s}\]
_for all open balls \(B\subset F\) with \(r(B)<r_{0}\). Then \(\mathcal{H}^{s}(X)\geq\frac{1}{c}\)._
We now state some easy lemmas in relation to our setup that will be used in both the upper and lower bound dimension calculations. The first lemma essentially gives us good bounds on the number of abstract rationals contained in a ball of a given radius.
**Lemma 3.2**.: _For any \(x_{i}\in F_{i}\), \(q\in\mathcal{N}\) and \(r>0\) such that \(r>\beta(q)^{-1}\)_
\[\frac{c_{1,i}}{c_{2,i}}2^{-\delta_{i}}(r\beta(q))^{\delta_{i}}\leq\#\left\{p \in P_{i}(q):p\in B_{i}(x_{i},r)\right\}\leq\frac{c_{2,i}}{c_{1,i}}2^{\delta_{ i}+1}(r\beta(q))^{\delta_{i}}.\]
Proof.: Note that by the \(\beta(q)^{-1}\) separated condition of \(P_{i}(q)\) the balls
\[\bigcup_{p\in P_{i}(q)}B_{i}\left(p,\tfrac{1}{2}\beta(q)^{-1}\right)\]
are disjoint, and so using a volume argument one can deduce that
\[\#\left\{p\in P_{i}(q):p\in B_{i}(x_{i},r)\right\}\leq\frac{\mu_{i}\left(B_{i}(x_{i},r)\right)}{\mu_{i}\left(B_{i}(p,\tfrac{1}{2}\beta(q)^{-1})\right)}+1\leq\frac{c_{2,i}}{c_{1,i}}2^{\delta_{i}+1}(r\beta(q))^{\delta_{i}},\]
where the second inequality follows since \(r\beta(q)>1\) and \(c_{2,i}c_{1,i}^{-1}\geq 1\).
A similar argument can be done for the lower bound. Note that the maximal condition of \(P_{i}(q)\) ensures, regardless of the arrangement of \(P_{i}(q)\), that at least one \(p\in P_{i}(q)\) must be contained in any ball \(B_{i}(x,2\beta(q)^{-1})\) with \(x\in F_{i}\), and by a volume argument
\[\#\left\{p\in P_{i}(q):p\in B_{i}(x_{i},r)\right\}\geq\frac{\mu_{i}\left(B_{i }(x_{i},r)\right)}{\mu_{i}\left(B_{i}(p,2\beta(q)^{-1})\right)}\geq\frac{c_{1, i}}{c_{2,i}}2^{-\delta_{i}}(r\beta(q))^{\delta_{i}}.\]
The following lemma shows us that, for any sequence \(S\) satisfying the conditions of Theorem 1.6, the ratio \(j/\log\beta(q_{j})\) decays to zero.
**Lemma 3.3**.: _For any sequence \(S=\{q_{j}\}_{j\in\mathbb{N}}\) if_
\[\inf_{j\in\mathbb{N}}\frac{\log\beta(q_{j})}{\log\beta(q_{j-1})}=h_{S}>1\]
_then_
\[\lim_{j\to\infty}\frac{j}{\log\beta(q_{j})}=0.\]
Proof.: Observe the infimum condition on \(S\) implies that for any \(j\in\mathbb{N}_{>1}\)
\[\log\beta(q_{j})>h_{S}\log\beta(q_{j-1})>h_{S}^{2}\log\beta(q_{j-2})>\dots>h_{ S}^{j-1}\log\beta(q_{1}).\]
Thus
\[\frac{j}{\log\beta(q_{j})}<\frac{j}{h_{S}^{j-1}\log\beta(q_{1})}\to 0\]
as \(j\to\infty\) since \(h_{S}>1\).
### Upper bound of Theorem 1.6
Observe that the rectangles comprising \(E_{1}\cap\dots\cap E_{j}\) form a \(\beta(q_{j})^{-\min_{i}\tau_{i}}\)-cover of \(\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\) for any \(j\in\mathbb{N}\). Consider the cover \(E_{1}\cap E_{2}\) and note that for any \(x_{i}\in F_{i}\)
\[\#\{p_{i}\in P_{i}(q_{2}):p_{i}\in B_{i}(x_{i},\beta(q_{1})^{-\tau_{i}})\}\stackrel{{\text{Lemma 3.2}}}{{\leq}}\frac{c_{2,i}}{c_{1,i}}2^{\delta_{i}+1}(\beta(q_{2})\beta(q_{1})^{-\tau_{i}})^{\delta_{i}},\]
and so for any \(\mathbf{x}\in F\)
\[\#\left\{\mathbf{p}\in P(q_{2}):\mathbf{p}\in\prod_{i=1}^{n}B_{i}(x_{i},\beta(q_{1})^{-\tau_{i}})\right\}\stackrel{{\text{Lemma 3.2}}}{{\leq}}\left(\prod_{i=1}^{n}\frac{c_{2,i}}{c_{1,i}}\right)2^{n+\sum\limits_{i=1}^{n}\delta_{i}}\beta(q_{2})^{\sum\limits_{i=1}^{n}\delta_{i}}\beta(q_{1})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}.\]
We should remark here that Lemma 3.2 is applicable since
\[\beta(q_{1})^{-\tau_{i}}>\beta(q_{1})^{-h_{S}}>\beta(q_{2})^{-1}\]
by our definition of \(h_{S}\) and assumption on \(\boldsymbol{\tau}\). From now on, for ease of notation, we will write
\[\delta=\sum_{i=1}^{n}\delta_{i}\,,\quad c_{1}=\prod_{i=1}^{n}c_{1,i}\,,\quad c _{2}=\prod_{i=1}^{n}c_{2,i}\,.\]
This tells us that the cover \(E_{1}\cap E_{2}\) is composed of at most
\[\frac{c_{2}}{c_{1}}2^{n+\delta}\beta(q_{2})^{\delta}\beta(q_{1})^{-\sum \limits_{i=1}^{n}\tau_{i}\delta_{i}}\]
rectangles of the form
\[\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{2})^{-\tau_{i}}\right)\]
for \(\mathbf{p}=(p_{1},\dots,p_{n})\in P(q_{2})\). We can repeat this process iteratively to obtain that \(E_{1}\cap\dots\cap E_{j}\) can be covered by at most
\[G_{j}:=\#P(q_{1})\left(\frac{c_{2}}{c_{1}}2^{n+\delta}\right)^{j-1}\prod_{t=1}^{j-1}\beta(q_{t+1})^{\delta}\beta(q_{t})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}\]
rectangles of the form
\[\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{j})^{-\tau_{i}}\right)\]
for some \(\mathbf{p}\in P(q_{j})\).
For now, fix some \(1\leq k\leq n\). Observe that \(\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{j})^{-\tau_{i}}\right)\) can be covered by
\[\asymp_{c_{1},c_{2}}\prod_{i=1}^{n}\max\left\{1,\beta(q_{j})^{(\tau_{k}-\tau_{ i})\delta_{i}}\right\}\]
\(n\)-dimensional balls, in \(F\), of radius \(\beta(q_{j})^{-\tau_{k}}\). Note that the implied constant is independent of \(j\). Hence
\[\mathcal{H}^{s}\left(\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\right) \ll_{c_{1},c_{2}}\liminf_{j\to\infty}G_{j}\prod_{i=1}^{n}\max\left\{1,\beta(q_{j})^{(\tau_{k}-\tau_{i})\delta_{i}}\right\}\left(\beta(q_{j})^{-\tau_{k}}\right)^{s}\] \[\ll_{c_{1},c_{2},\#P(q_{1})}\liminf_{j\to\infty}\left(\frac{c_{2}}{c_{1}}2^{n+\delta}\right)^{j-1}\prod_{t=1}^{j-1}\beta(q_{t+1})^{\delta}\beta(q_{t})^{-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}\beta(q_{j})^{\sum\limits_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i}}\left(\beta(q_{j})^{-\tau_{k}}\right)^{s}. \tag{3}\]
For
\[s>\frac{1}{\tau_{k}}\left(\delta+\frac{(j-1)\log\left(\frac{c_{2}}{c_{1}}2^{n +\delta}\right)}{\log\beta(q_{j})}-\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i} \right)\frac{\sum\limits_{i=1}^{j-1}\log\beta(q_{i})}{\log\beta(q_{j})}+\sum_ {i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i}\right) \tag{4}\]
we have that (3) is finite. Furthermore, for any \(\epsilon>0\) there exists subsequence \(\{j_{t}\}_{t\in\mathbb{N}}\) and large \(t_{0}\in\mathbb{N}\) such that for any
\[s>\frac{1}{\tau_{k}}\left(\delta+\epsilon-\left(\sum_{i=1}^{n}(\tau_{i}-1) \delta_{i}\right)\alpha_{S}+\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i}) \delta_{i}\right),\]
the expression under the \(\liminf\) in (3) is bounded for all \(j_{t}\) with \(t>t_{0}\). This follows from the definition of \(\alpha_{S}\) and Lemma 3.3. Thus
\[\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\leq\frac{1}{ \tau_{k}}\left(\delta-\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{ S}+\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})\delta_{i}\right)\,.\]
We can repeat the above steps for each \(1\leq k\leq n\), and so
\[\dim_{\mathcal{H}}\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\leq\min_{1\leq k \leq n}\left\{\frac{1}{\tau_{k}}\left(\delta-\left(\sum_{i=1}^{n}(\tau_{i}-1) \delta_{i}\right)\alpha_{S}+\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i}) \delta_{i}\right)\right\}.\]
### Lower bound of Theorem 1.6
We begin with the construction of the Cantor subset \(L_{\infty}^{S}\) of \(\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\) before constructing a measure \(\nu\) with support \(L_{\infty}^{S}\). Finally, we calculate the Hölder exponent of a general ball for such a measure.
#### 3.3.1. Cantor construction of \(L^{S}_{\infty}\)
The Cantor set \(L^{S}_{\infty}\) is constructed as follows:
Level 1: Recalling that
\[\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\bigcap_{j\in\mathbb{N}}E_{j}\]
let \(L^{S}_{1}=E_{1}\).
Level 2: Let
\[L^{S}_{2}=\bigcup_{\begin{subarray}{c}\mathbf{p}\in P(q_{2}):\\ \mathbf{p}\in\prod_{i=1}^{n}B_{i}\left(p^{\prime}_{i},\frac{1}{2}\beta(q_{1})^{-\tau_{i}}\right)\text{ for some }\mathbf{p}^{\prime}\in P(q_{1})\end{subarray}}\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{2})^{-\tau_{i}}\right).\]
Observe that for any \(\mathbf{x}\in F\)
\[\#\left\{\mathbf{p}\in P(q_{2}):\mathbf{p}\in\prod_{i=1}^{n}B_{i}\left(x_{i}, \frac{1}{2}\beta(q_{1})^{-\tau_{i}}\right)\right\}\stackrel{{ \text{Lemma \ref{lem:1}}}}{{\geq}}\prod_{i=1}^{n}\frac{c_{1,i}}{c_{2,i}}2^{-2 \delta_{i}}(\beta(q_{2})\beta(q_{1})^{-\tau_{i}})^{\delta_{i}}\]
and so \(L^{S}_{2}\) is composed of at least
\[\frac{c_{1}}{c_{2}}2^{-2\delta}\beta(q_{2})^{\delta}\beta(q_{1})^{-\sum_{i=1 }^{n}\tau_{i}\delta_{i}} \tag{5}\]
rectangles of the form
\[\prod_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{2})^{-\tau_{i}}\right)\]
for some \(\mathbf{p}=(p_{1},\ldots,p_{n})\in P(q_{2})\). Note that the quantity (5) may be small, but since the number of rectangles is an integer bounded below by the positive quantity (5), there is always at least one rectangle.
Level \(j\): The argument above can be applied inductively. Namely, for each rectangle \(R_{j-1}\in L^{S}_{j-1}\)1 let
Footnote 1: We should note here that technically \(L^{S}_{j-1}\) is a union of rectangles, rather than a collection of rectangles, so strictly speaking "\(R_{j-1}\in L^{S}_{j-1}\)" does not make sense. Hopefully it is clear that in this notation we mean one of the rectangles appearing in the union defining \(L^{S}_{j-1}\).
\[L^{S}_{j}(R_{j-1})=\bigcup_{\begin{subarray}{c}\mathbf{p}\in P(q_{j}):\\ \mathbf{p}\in R_{j-1}\end{subarray}}\prod_{i=1}^{n}B_{i}(p_{i},\beta(q_{j})^{-\tau_{i}}),\]
and
\[L^{S}_{j}=\bigcup_{R_{j-1}\in L^{S}_{j-1}}L^{S}_{j}(R_{j-1}).\]
Observe that using the same calculation as in the construction of \(L^{S}_{2}\) that
\[\#\{\mathbf{p}\in P(q_{j}):\mathbf{p}\in R_{j-1}\}\geq\frac{c_{1}}{c_{2}}2^{-2 \delta}\beta(q_{j})^{\delta}\beta(q_{j-1})^{-\sum\limits_{i=1}^{n}\tau_{i} \delta_{i}}\]
and so each set \(L_{j}^{S}(R_{j-1})\) is composed of at least
\[\frac{c_{1}}{c_{2}}2^{-2\delta}\beta(q_{j})^{\delta}\beta(q_{j-1})^{-\sum\limits _{i=1}^{n}\tau_{i}\delta_{i}} \tag{6}\]
rectangles of the form
\[\prod\limits_{i=1}^{n}B_{i}\left(p_{i},\beta(q_{j})^{-\tau_{i}}\right).\]
Note that, by our definition of \(h_{S}\) and condition that \(h_{S}>\tau_{i}\) for each \(1\leq i\leq n\), for sufficiently large \(j\) (6) will be strictly larger than \(1\). Hence the constructed Cantor set does not form a singleton.
To finish the construction define
\[L_{\infty}^{S}=\bigcap\limits_{j\in\mathbb{N}}L_{j}^{S}.\]
#### 3.3.2. Construction of measure \(\nu\) on \(L_{\infty}^{S}\)
Define the measure \(\nu\) on \(L_{\infty}^{S}\) by
\[\nu(R_{0})=1\quad\text{ for }R_{0}=F,\]
and for each \(R_{j}\in L_{j}^{S}(R_{j-1})\) define
\[\nu(R_{j})=\nu(R_{j-1})\frac{1}{\#L_{j}^{S}(R_{j-1})}. \tag{7}\]
That is, we equally distribute the mass between the rectangles in each layer. It is easy to see that this mass distribution is consistent between layers, and so \(\nu\), defined by
\[\nu(A)=\inf\left\{\sum\limits_{i}\nu(R_{i}):\bigcup_{i}R_{i}\supseteq A\quad\text{ and }\ R_{i}\in\bigcup\limits_{j\in\mathbb{N}}L_{j}^{S}\right\}\]
for any Borel set \(A\subseteq F\), is a Borel probability measure with support \(L_{\infty}^{S}\subset\Lambda_{\mathcal{Q}}^{S}(\boldsymbol{\tau})\).
Consider a rectangle \(R_{k}\in L_{k}^{S}\). Observe that
\[r(R_{k})=\max\limits_{1\leq i\leq n}\beta(q_{k})^{-\tau_{i}}=\beta(q_{k})^{- \min\limits_{1\leq i\leq n}\tau_{i}},\]
and by definition that
\[\nu(R_{k})\stackrel{{(7)}}{{=}}\prod_{j=1}^{k}\frac{1}{\#L_{j}^{S}(R_{j-1})}\stackrel{{(6)}}{{\leq}}\left(\frac{c_{2}}{c_{1}}2^{2\delta}\right)^{k-1}\beta(q_{k})^{-\delta}\prod_{j=1}^{k-1}\beta(q_{j})^{\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}}. \tag{8}\]
Thus the Hölder exponent can be calculated to be
\[\frac{\log\nu(R_{k})}{\log r(R_{k})} \geq\frac{\delta\log\beta(q_{k})+(k-1)\log\left(\frac{c_{1}}{c_{2}}2 ^{-2\delta}\right)-\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\left(\sum _{j=1}^{k-1}\log\beta(q_{j})\right)}{\min\limits_{1\leq i\leq n}\tau_{i}\log \beta(q_{k})}\] \[=\frac{1}{\min\limits_{1\leq i\leq n}\tau_{i}}\left(\delta+\log \left(\frac{c_{1}}{c_{2}}2^{-2\delta}\right)\frac{(k-1)}{\log\beta(q_{k})}- \left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\frac{\sum\limits_{j=1}^{k-1} \log\beta(q_{j})}{\log\beta(q_{k})}\right).\]
By our condition on the sequence \(S\) and Lemma 3.3 for any \(\varepsilon>0\) there exists \(k_{\varepsilon}\) such that for all \(k>k_{\varepsilon}\)
\[\left|\frac{\sum\limits_{j=1}^{k-1}\log\beta(q_{j})}{\log\beta(q_{k})}-\alpha _{S}\right|<\frac{\varepsilon}{2}\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i} \right)^{-1},\quad\text{ and }\quad\left|\log\left(\frac{c_{1}}{c_{2}}2^{-2 \delta}\right)\frac{(k-1)}{\log\beta(q_{k})}\right|<\frac{\varepsilon}{2}\,. \tag{9}\]
Hence we have that
\[\frac{\log\nu(R_{k})}{\log r(R_{k})}\geq\frac{1}{\min\limits_{1\leq i\leq n}\tau_{i}}\left(\delta-\left(\sum_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{S}-\varepsilon\right):=s_{\min}. \tag{10}\]
Thus
\[\nu(R_{k})\ll r(R_{k})^{s_{\min}}.\]
#### 3.3.3. Hölder exponent of a general ball
Fix any \(\varepsilon>0\) and let \(k_{\varepsilon}\in\mathbb{N}\) be large enough such that (10) holds for all \(k>k_{\varepsilon}\) and
\[\left|\frac{\log\left(\frac{c_{2}}{c_{1}}4^{n+\delta}\right)+(k-1)\log\left( \frac{c_{1}}{c_{2}}2^{\delta}\right)}{\log\beta(q_{k-1})}\right|<\frac{ \varepsilon}{2}\,,\quad\text{ and }\quad\left|\frac{k\log\left(\frac{c_{2}}{c_{1}}2^{2 \delta}\right)+\log 2^{n+3\delta}}{\log\beta(q_{k-1})}\right|<\frac{\varepsilon}{2}\,. \tag{11}\]
Note such a choice of \(k_{\varepsilon}\) is possible by Lemma 3.3. Consider an arbitrary ball \(B(x,r)\subset F\) with \(x\in L^{S}_{\infty}\) and \(0<r<r_{0}:=\beta(q_{k_{\varepsilon}})^{-\min_{1\leq i\leq n}\tau_{i}}\). If \(B(x,r)\) intersects exactly one rectangle at each layer of \(L^{S}_{\infty}\) then
\[\nu(B(x,r))\leq\nu(R_{k})\to 0\quad\text{ as }k\to\infty,\]
and so trivially \(\nu(B(x,r))\leq r^{s}\). Thus we may assume there exists some \(k\in\mathbb{N}_{>k_{\varepsilon}}\) such that \(B(x,r)\) intersects exactly one rectangle in \(L^{S}_{k-1}\), say \(R_{k-1}\), and at least two rectangles in \(L^{S}_{k}(R_{k-1})\). Note that
\[\nu(B(x,r))=\nu(B(x,r)\cap R_{k-1}).\]
Consider the following cases:
* i) \(r>\beta(q_{k-1})^{-\min\limits_{1\leq i\leq n}\tau_{i}}\): Then \[\nu(B(x,r))\leq\nu(R_{k-1})\leq\left(\beta(q_{k-1})^{-\min\limits_{1\leq i\leq n}\tau_{i}}\right)^{s_{\min}}\leq r^{s_{\min}}.\]
* ii) \(\beta(q_{k-1})^{-\min\limits_{1\leq i\leq n}\tau_{i}}\geq r\geq\beta(q_{k-1})^{-\max\limits_{1\leq i\leq n}\tau_{i}}\): We need to find an upper bound of \[\lambda=\#\left\{\mathbf{p}\in P(q_{k}):\mathbf{p}\in\prod_{i=1}^{n}B_{i}\left(x_{i},\min\left\{r,2\beta(q_{k-1})^{-\tau_{i}}\right\}\right)\right\},\] that is, the number of centers corresponding to rectangles from \(L_{k}^{S}(R_{k-1})\) contained in \(B(x,r)\cap R_{k-1}\). Write \(\mathcal{T}_{1}=\left\{i:r\leq\beta(q_{k-1})^{-\tau_{i}}\right\}\) and \(\mathcal{T}_{2}=\{1,\ldots,n\}\setminus\mathcal{T}_{1}\). Observe that \[\lambda\stackrel{{\text{Lemma 3.2}}}{{\leq}}\frac{c_{2}}{c_{1}}2^{n+2\delta}\beta(q_{k})^{\delta}\,r^{\sum\limits_{i\in\mathcal{T}_{1}}\delta_{i}}\prod_{i\in\mathcal{T}_{2}}\beta(q_{k-1})^{-\tau_{i}\delta_{i}},\] and so, by (8), \[\nu(B(x,r))\leq\lambda\,\nu(R_{k})\leq 2^{n}\left(\frac{c_{2}}{c_{1}}2^{2\delta}\right)^{k}r^{\sum\limits_{i\in\mathcal{T}_{1}}\delta_{i}}\prod_{i\in\mathcal{T}_{2}}\beta(q_{k-1})^{-\tau_{i}\delta_{i}}\prod_{t=1}^{k-1}\beta(q_{t})^{\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}}.\] Choose \(u,v\in\{1,\ldots,n\}\) such that \(\beta(q_{k-1})^{-\tau_{u}}\geq r\geq\beta(q_{k-1})^{-\tau_{v}}\) and set \(N_{\nu,r,k}:=\frac{\log\nu(B(x,r))}{\log r}\).
Then we may write
\[N_{\nu,r,k} \geq\min_{j=u,v}\left\{\frac{\left(\sum\limits_{i\in\mathcal{T}_{1}}\delta_{i}\right)(-\tau_{j})\log\beta(q_{k-1})-\left(\sum\limits_{i\in\mathcal{T}_{2}}\tau_{i}\delta_{i}\right)\log\beta(q_{k-1})-\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(\sum\limits_{t=1}^{k-1}\log\beta(q_{t})\right)}{-\tau_{j}\log\beta(q_{k-1})}\right\}\] \[\geq\min_{j=u,v}\left\{\frac{1}{\tau_{j}}\left(\sum\limits_{i\in\mathcal{T}_{1}}\delta_{i}\tau_{j}+\sum\limits_{i\in\mathcal{T}_{2}}\tau_{i}\delta_{i}+\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\left(1+\frac{\sum\limits_{t=1}^{k-2}\log\beta(q_{t})}{\log\beta(q_{k-1})}\right)-\frac{\varepsilon}{2}\right)\right\}\,,\]
by (11). Now, by (9),
\[N_{\nu,r,k}\geq\min_{j=u,v}\left\{\frac{1}{\tau_{j}}\left(\sum\limits_{i\in\mathcal{T}_{1}}\delta_{i}\tau_{j}+\sum\limits_{i\in\mathcal{T}_{2}}\tau_{i}\delta_{i}+\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)(1+\alpha_{S})-\varepsilon\right)\right\}\,.\]
Splitting the third summation into its two components we get
\[N_{\nu,r,k} \geq\min_{j=u,v}\left\{\frac{1}{\tau_{j}}\left(\delta-\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}+\sum\limits_{i\in\mathcal{T}_{1}}\delta_{i}\tau_{j}+\sum\limits_{i\in\mathcal{T}_{2}}\tau_{i}\delta_{i}+\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\alpha_{S}-\varepsilon\right)\right\}\] \[=\min_{j=u,v}\left\{\frac{1}{\tau_{j}}\left(\delta+\sum\limits_{i\in\mathcal{T}_{1}}(\tau_{j}-\tau_{i})\delta_{i}+\left(\sum\limits_{i=1}^{n}(1-\tau_{i})\delta_{i}\right)\alpha_{S}-\varepsilon\right)\right\}\] \[\geq\min_{j=u,v}\left\{\frac{1}{\tau_{j}}\left(\delta-\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\alpha_{S}+\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})\delta_{i}-\varepsilon\right)\right\}=:s_{u,v}\,.\]
The last line follows from the observation that, for \(k\) sufficiently large and \(j\) fixed, the sets \(\{i:\tau_{j}>\tau_{i}\}\) and \(\mathcal{T}_{1}\cup\{j\}\) coincide; clearly the omission of the index \(j\) does not affect the summation. This completes case ii).
* iii) \(r\leq\beta(q_{k-1})^{-\max_{1\leq i\leq n}\tau_{i}}\): We calculate \[\nu\left(B(x,r)\cap R_{k-1}\right)\leq\sum_{\begin{subarray}{c}R_{k}\in L_{k}^{S}(R_{k-1})\\ B(x,r)\cap R_{k}\neq\emptyset\end{subarray}}\nu(R_{k}).\]
Since \(B(x,r)\) intersects at least two rectangles in \(L^{S}_{k}\), and \(x\) is contained in one of them, we have that \(r>\frac{1}{2}\beta(q_{k})^{-1}\). Hence
\[\#\left\{R_{k}\in L^{S}_{k}:B(x,r)\cap R_{k}\neq\emptyset\right\} \leq\#\left\{\mathbf{p}\in\prod_{i=1}^{n}P_{i}(q_{k}):\mathbf{p}\in\prod_{i=1}^{n}B_{i}(x_{i},4r)\right\}\] \[\stackrel{{\text{Lemma 3.2}}}{{\leq}}\frac{c_{2}}{c_{1}}2^{3\delta+n}(r\beta(q_{k}))^{\delta}.\]
So
\[\nu(B(x,r)\cap R_{k-1}) \leq\frac{c_{2}}{c_{1}}2^{n+3\delta}(r\beta(q_{k}))^{\delta}\nu(R_{k})\] \[\leq\frac{c_{2}}{c_{1}}2^{n+3\delta}(r\beta(q_{k}))^{\delta}\prod_{t=1}^{k}\frac{1}{\#L^{S}_{t}(R_{t-1})}\] \[\stackrel{{(6)}}{{\leq}}(r\beta(q_{k}))^{\delta}\left(\frac{c_{2}}{c_{1}}\right)^{k+1}2^{(n+3\delta)+2k\delta}\beta(q_{1})^{-\delta}\prod_{t=2}^{k}\beta(q_{t})^{-\delta}\beta(q_{t-1})^{\sum\limits_{i=1}^{n}\tau_{i}\delta_{i}}\] \[\leq 2^{n+3\delta}\left(\frac{c_{2}}{c_{1}}2^{2\delta}\right)^{k+1}r^{\delta}\prod_{j=1}^{k-1}\beta(q_{j})^{\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}}.\]
Hence the Hölder exponent can be calculated to be
\[\frac{\log\nu(B(x,r))}{\log r} \geq\frac{\delta\log r+\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\left(\sum\limits_{j=1}^{k-1}\log\beta(q_{j})\right)}{\log r}+\frac{(k+1)}{\log r}\log\left(\frac{c_{2}}{c_{1}}2^{2\delta}\right)+\frac{\log 2^{n+3\delta}}{\log r}\] \[\stackrel{{(11)}}{{\geq}}\delta+\frac{\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\left(\sum\limits_{j=1}^{k-1}\log\beta(q_{j})\right)}{\log r}-\left(\max\limits_{1\leq i\leq n}\tau_{i}\right)^{-1}\frac{\varepsilon}{2}\] \[\geq\frac{1}{\max_{1\leq i\leq n}\tau_{i}}\left(\delta\max\limits_{1\leq i\leq n}\tau_{i}-\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)\frac{\sum\limits_{j=1}^{k-1}\log\beta(q_{j})}{\log\beta(q_{k-1})}\right)-\left(\max\limits_{1\leq i\leq n}\tau_{i}\right)^{-1}\frac{\varepsilon}{2}\] \[\stackrel{{(9)}}{{\geq}}\frac{1}{\max_{1\leq i\leq n}\tau_{i}}\left(\delta\max\limits_{1\leq i\leq n}\tau_{i}-\left(\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}\right)(\alpha_{S}+1)-\frac{\varepsilon}{2}\right)-\left(\max\limits_{1\leq i\leq n}\tau_{i}\right)^{-1}\frac{\varepsilon}{2}\] \[=\frac{1}{\max\limits_{1\leq i\leq n}\tau_{i}}\left(\delta-\alpha_{S}\sum\limits_{i=1}^{n}(\tau_{i}-1)\delta_{i}+\sum\limits_{i=1}^{n}\left(\left(\max\limits_{1\leq i\leq n}\tau_{i}\right)-\tau_{i}\right)\delta_{i}-\varepsilon\right)\] \[=s_{\max}\,,\]
where the third inequality uses the case assumption \(r\leq\beta(q_{k-1})^{-\max_{1\leq i\leq n}\tau_{i}}\).
This completes case iii).
Compiling the results of i)-iii) we have that
\[\nu(B(x,r))\ll r^{\min\limits_{1\leq u,v\leq n}\{s_{\min},\,s_{\max},\,s_{u,v}\}}.\]
Note \(\varepsilon>0\) was arbitrary in the values of \(s_{\min},s_{\max}\) and each \(s_{u,v}\). Thus letting \(\varepsilon\to 0\) and noting that
\[\min_{1\leq u,v\leq n}\left\{s_{\min},s_{\max},s_{u,v}\right\}=s\]
gives us
\[\dim_{\mathcal{H}}\Lambda^{S}_{\mathcal{Q}}(\boldsymbol{\tau})\geq s\]
via Lemma 3.1, hence completing the proof of the lower bound.
### Proof of Theorem 1.9
To prove Theorem 1.9 via Theorem 1.6 note that by the countable stability of the Hausdorff dimension
\[\dim_{\mathcal{H}}\widehat{\Lambda}^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\sup \limits_{t\in\mathbb{N}}\dim_{\mathcal{H}}\Lambda^{\sigma^{t}S}_{\mathcal{Q}}( \boldsymbol{\tau}).\]
Observe that \(h\geq h_{\sigma^{t}S}\) for any \(t\in\mathbb{N}\). For \(\boldsymbol{\tau}\) chosen as in Theorem 1.9 it may be true that there exists \(\tau_{i}\) with \(\tau_{i}>h_{\sigma^{t}S}\) for some \(t\). However, the corresponding sets have dimension less than or equal to the exact dimension result of Theorem 1.6, so they can be ignored when considering the supremum over \(t\in\mathbb{N}\). Thus, without loss of generality, assume that \(h>\tau_{i}>1\) for each \(1\leq i\leq n\). Furthermore, choose \(t_{0}\) sufficiently large such that
\[h>h_{\sigma^{t}S}>\tau_{i}\quad\forall 1\leq i\leq n\]
for all \(t\geq t_{0}\). Thus by Theorem 1.6
\[\dim_{\mathcal{H}}\widehat{\Lambda}^{S}_{\mathcal{Q}}(\boldsymbol{\tau})=\sup \limits_{t\in\mathbb{N}}s=s\]
completing the proof.
### Proof of Theorem 1.2
Note that if
\[\lim_{j\to\infty}\frac{\log q_{j}}{\log q_{j-1}}=k\]
then for any sufficiently small \(\varepsilon>0\) there exists \(j_{0}\in\mathbb{N}\) such that for all \(j>j_{0}\)
\[(k-\varepsilon)\log q_{j-1}<\log q_{j}<(k+\varepsilon)\log q_{j-1}.\]
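Iterating the left-hand inequality along the chain from index \(i\) up to \(j\) (for \(i>j_{0}\); the finitely many terms with \(i\leq j_{0}\) do not affect the limits below) gives
\[\log q_{i}<(k-\varepsilon)^{-(j-i)}\log q_{j},\qquad\sum_{m=1}^{j-1}(k-\varepsilon)^{-m}=\frac{1-(k-\varepsilon)^{-(j-1)}}{k-1-\varepsilon}.\]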
So
\[\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}<\sum\limits_{i=1}^{j-1}(k-\varepsilon)^{-i}=\frac{1-(k-\varepsilon)^{-(j-1)}}{k-1-\varepsilon},\]
and similarly for the lower bound
\[\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}>\frac{1-(k+\varepsilon)^{-(j-1)}}{k-1+\varepsilon}.\]
Thus, as \(k>1\) and \(0<\varepsilon\) can be chosen arbitrarily small (\(\varepsilon<k-1\)), taking the limit as \(j\to\infty\) gives us that
\[\alpha_{S}=\lim_{j\to\infty}\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{ j}}=\frac{1}{k-1}.\]
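For instance, for the illustrative sequence \(q_{j}=2^{2^{j}}\) (so that \(k=2\)),
\[\frac{\sum\limits_{i=1}^{j-1}\log q_{i}}{\log q_{j}}=\frac{\sum\limits_{i=1}^{j-1}2^{i}}{2^{j}}=\frac{2^{j}-2}{2^{j}}\longrightarrow 1=\frac{1}{k-1}.\]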
Applying Corollary 2.1 with \(h_{S}=k\), \(\alpha_{S}=\frac{1}{k-1}\), and \(0<\tau_{i}<h_{S}-1\) for each \(1\leq i\leq n\) gives us Theorem 1.2. |
2309.15991 | Targeted Image Data Augmentation Increases Basic Skills Captioning
Robustness | Artificial neural networks typically struggle in generalizing to
out-of-context examples. One reason for this limitation is that datasets often
incorporate only partial information regarding the potential
correlational structure of the world. In this work, we propose TIDA (Targeted
Image-editing Data Augmentation), a targeted data augmentation method focused
on improving models' human-like abilities (e.g., gender recognition) by filling
the correlational structure gap using a text-to-image generative model. More
specifically, TIDA identifies specific skills in captions describing images
(e.g., the presence of a specific gender in the image), changes the caption
(e.g., "woman" to "man"), and then uses a text-to-image model to edit the image
in order to match the novel caption (e.g., uniquely changing a woman to a man
while maintaining the context identical). Based on the Flickr30K benchmark, we
show that, compared with the original data set, a TIDA-enhanced dataset related
to gender, color, and counting abilities induces better performance in several
image captioning metrics. Furthermore, on top of relying on the classical BLEU
metric, we conduct a fine-grained analysis of the improvements of our models
against the baseline in different ways. We compared text-to-image generative
models and found different behaviors of the image captioning models in terms of
visual encoding and textual decoding. | Valentin Barriere, Felipe del Rio, Andres Carvallo De Ferari, Carlos Aspillaga, Eugenio Herrera-Berg, Cristian Buc Calderon | 2023-09-27T20:12:41Z | http://arxiv.org/abs/2309.15991v2 | # Targeted Image Data Augmentation
###### Abstract
Artificial neural networks typically struggle in generalizing to out-of-context examples. One reason for this limitation is that datasets often incorporate only partial information regarding the potential correlational structure of the world. In this work, we propose TIDA (Targeted Image-editing Data Augmentation), a targeted data augmentation method focused on improving models' human-like abilities (e.g., gender recognition) by filling the correlational structure gap using a text-to-image generative model. More specifically, TIDA identifies specific skills in captions describing images (e.g., the presence of a specific gender in the image), changes the caption (e.g., "woman" to "man"), and then uses a text-to-image model to edit the image in order to match the novel caption (e.g., uniquely changing a woman to a man while maintaining the context identical). Based on the Flickr30K benchmark, we show that, compared with the original data set, a TIDA-enhanced dataset related to gender, color, and counting abilities induces better performance in several image captioning metrics. Furthermore, on top of relying on the classical BLEU metric, we conduct a fine-grained analysis of the improvements of our models against the baseline in different ways. We compared text-to-image generative models and found different behaviors of the image captioning models in terms of visual encoding and textual decoding.1
Footnote 1: Code will be available online after submission.
## 1 Introduction
Humans and animals develop all kinds of cognitive abilities from a very early age that allow them to interact with their world (Spelke et al., 1992; Spelke and Kinzler, 2007). For instance, infants display numerical cognition abilities (Feigenson et al., 2004; Xu and Spelke, 2000), can recognize emotions (Bornstein and Arterberry, 2003) or even the danger associated with other agents' action plans (Liu et al., 2022). Comparatively, animals also display similar numerical cognition abilities (Davis and Memmott, 1982; Dacke and Srinivasan, 2008), or recognize emotions in order to better communicate within a social group (Hantke et al., 2018). These abilities are crucial in order to build models of the world that are necessary for planning, reasoning, and solving complex decision-making tasks (Lake et al., 2017).
Deep learning systems can solve these tasks by optimizing an objective function via supervised, semi-supervised or unsupervised learning (LeCun et al., 2015). Within this framework, it has been shown that deeper layers progressively represent increasingly abstract concepts (Krizhevsky et al., 2017), akin to what has been observed in the human visual or auditory processing pathways (Cichy et al., 2016; Caucheteux et al., 2023). Moreover, empirical work has shown that pretrained state-of-the-art transformer models (Devlin et al., 2019) encode factual knowledge within sets of knowledge neurons (Dai et al., 2022), strongly related to the concept of "grandmother" cells in neuroscience (Quiroga et al., 2005). Importantly, not only factual knowledge but also conceptual knowledge (such as "sentiment" in a text or "written language" in an image) is encoded by nodes in deep layers (Radford et al., 2017; Yosinski et al., 2015). Whereas recent methods have been proposed to access and edit factual knowledge (Meng et al., 2022), and thus evaluate how and where facts are being encoded in deep networks (Meng et al., 2022), it is much harder to evaluate the abilities associated with conceptual knowledge stored in these networks. Yet, possessing such a conceptual knowledge base is crucial for out-of-distribution generalization (Bosselut et al., 2019).
Although deep networks seem to encode conceptual knowledge that allows them to display human-like abilities such as counting, emotion, gender,
color, and sentiment recognition/categorization Wallace et al. (2019); Barriere et al. (2022); Hendricks et al. (2018); Anderson et al. (2016); Barriere (2017), these same networks typically struggle in producing out-of-context (or out-of-distribution) generalizations Marcus (2018); Lake and Baroni (2018); Ruis et al. (2020); del Rio et al. (2023); Ribeiro et al. (2020). These limitations are due to the inherent functioning of Artificial Neural Networks (ANNs). Indeed, generalization performances of ANNs largely depend on their ability to extract the correlational structure in the training data set, memorize this structure, and extrapolate it to a novel (test) data set Krizhevsky et al. (2017); Saxe et al. (2019).
Indeed, given that the performance of vanilla deep networks is constrained by the structural correlation observed in the training data set, a straightforward way to maximize the generalization performance in ANNs is to augment data sets in _targeted_ ways Sharmanska et al. (2020); He et al. (2023). Thereby, targeted data augmentation would increase the span of potential correlations that could be observed in the world, and as such improve the human-like abilities of deep networks. By targeting specific human-like abilities and augmenting the data set to encapsulate unseen examples associated with these abilities, we hypothesize that models can increase their conceptual knowledge, and thus improve their performance on specific benchmarks we discuss below. Moreover, similar to editing unique factual knowledge Meng et al. (2022), one would ideally want to target unique conceptual knowledge (e.g., gender, color, numerosity, emotion, shape...) to induce such ability-selective performance, which has been widely studied Anderson et al. (2016); Hu et al. (2023).
We will propose a simple way to overcome the issues raised above for the Image Captioning (IC) task. Interestingly, novel text-to-image generation models Rombach et al. (2022) in combination with text generation or manipulation He et al. (2023); Mitkov (2022); Murty et al. (2022) afford novel possibilities for targeted data augmentation for vision-language tasks. Hence, we propose to enhance the capabilities of an Image Captioning model by using targeted data augmentation focused on several specific abilities (or skills). We use simple regular expressions (regex) to identify these skills in the caption, to change the caption into another version of it, and to generate the image related to this caption. The main contributions of this work are twofold. First, we propose a simple method to identify data related to a specific human-like ability in image captioning (e.g., color identification, emotion recognition...). Second, we propose a novel data augmentation method based on text-to-image generation models that allows one to generate data sets that can selectively improve a single human-like skill, or combinations of them, in image captioning performance. Instead of manipulating or fine-tuning information processing within image captioning models, our method increases the span of potential object correlations and thus allows us to generalize image captioning abilities to a broader spectrum of situations that can be observed in the real world Zhang et al. (2021). In what follows, we first describe related work while specifying the original contribution of our work. Subsequently, we describe the Targeted Image-editing Data Augmentation (TIDA; see Figure 1) method and present the results associated with fine-tuning models with our TIDA-augmented data sets. Finally, we discuss the implications of our work.
Figure 1: TIDA Framework (Example generated with Null-Text-Inversion Mokady et al. (2022))
## 2 Previous and Related Work
Image CaptioningImage captioning (IC) models provide human-like captions to images (Cornia et al., 2020; Herdade et al., 2019). Such an ability lies in the intersection between computer vision and natural language processing (Devlin et al., 2015), and is therefore, in essence, a multimodal problem. Early IC models proposed to sequentially combine convolutional neural networks (CNN) with recurrent neural networks (RNN) into a single imaged-conditioned language model (Karpathy and Fei-Fei, 2015; Chen and Lawrence Zitnick, 2015; Fang et al., 2015). Given the success of these models and their potential industrial applications, subsequent work has focused on improving the models' image captioning ability by focusing on specific properties of IC models. For instance, it has been shown that top-down visual attention mechanisms improve captioning performance (Anderson et al., 2018; Lu et al., 2017). Alternatively, focusing on the learning process, it has been shown that implementing self-critical sequence training (a variant of the REINFORCE algorithm) improves IC performances by avoiding the exposure bias (Ranzato et al., 2016) and directly optimizing the relevant task metrics (Rennie et al., 2017). Furthermore, many IC models are pre-trained using tasks like Masked Language Modeling (MLM) and Image-Text Matching (ITM). These tasks possess losses that differ from image captioning (or other downstream tasks), and thus IC models require further fine-tuning. Hence, recent work has focused on unifying generative vision-language models through text generation (Cho et al., 2021; Wang et al., 2022, 2022), in order to optimize knowledge transfer from train to test. Lastly, novel methods have focused on optimally leveraging language caption supervision during pre-training, as small datasets with large caption variability can lead to detrimental effects (Santurkar et al., 2023).
Symbolic KnowledgeVision-language (VL) tasks can also be improved by incorporating symbolic knowledge into the VL models. For instance, providing a knowledge base, instantiated as subject-relation-object triplets associated with the images, both improve performance in vision-question answering (VQA) tasks, on top of allowing to explain the VQA model's predictions (Riquelme et al., 2020). In the same vein, adding high-level (semantic) attributes as inputs to IC models can increase captioning benchmarks (You et al., 2016; Yao et al., 2017). Alternative efforts have shown that using object tags to facilitate the semantic image-text alignment during pre-training, and improves benchmark metrics in downstream fine-tuned image captioning tasks (Li et al., 2020). Moreover, aligning directional semantic and spatial relationships between text and image (i.e., relation-level alignment) improves compositional reasoning (Pandey et al., 2022). Finally, symbolic knowledge and reasoning capability aim to enhance textual model's robustness when faced with out-of-distribution examples, thereby enabling them to engage in more human-like reasoning (Collins et al., 2022).
Bias/Bug detection, and EvaluationTIDA enhances the likelihood of simultaneously observing distinct attributes in an image within the augmented dataset. Thereby, our work relates to studies that focus on improving the predictive abilities of models in domains that suffer from bias-induced incorrect predictions. In line with this idea, the _Equalizer_ model is constrained to attend to the person attribute in images, increasing the IC abilities to detect the gender in the image (Hendricks et al., 2018). Interestingly, other attributes such as numeracy (e.g., counting) naturally emerge in standard embeddings (Wallace et al., 2019), and may thus be less prone to biased predictions. Alternative debiasing methods focus on "decoupling" biased directions within text embeddings (Chuang et al., 2023).
Other approaches focus on discovering the specific images where IC models fail (i.e., bugs). An instance of such a method uses a sequential pipeline that generates images from specific captions, classifies the object in the image, creates captions from the incorrectly classified images, generates captions of these images, and finally regenerates novel images based on the previously generated caption via a text-to-image generative process. These last images can be used to assess the robustness of vision models, as well as improve their performance (Wiles et al., 2022).
Moreover, while image captioning is usually scored on automatic metrics like SPICE (Anderson et al., 2016) or CIDEr (Vedantam et al., 2015), it has been suggested that metrics evaluating both precision _and_ recall lead to better correlations with human judgments (Kasai et al., 2022). Finally, (Hu et al., 2023) propose a method to compare image captioning models that correlates with human
judgment by leveraging an LLM (OpenAI, 2023).
**Data augmentation and Image generation** Data augmentation has been shown to improve performance both in vision (Ho et al., 2019; Cubuk et al., 2020) and language (Sennrich et al., 2015; Karimi et al., 2021; Andreas, 2020; Wei and Zou, 2019) tasks. Typically, data augmentation techniques involve procedures such as geometric transformations, color space augmentations, kernel filters, or mixing images (see (Shorten and Khoshgoftaar, 2019) for review). To further improve these augmentation methods, a multi-task view of augmentation proposes to incorporate both original data and augmented examples during the training procedure (Wei et al., 2021). This proposal has the benefit of relaxing the assumption that augmented examples cannot be too dissimilar from the original data. In the same vein, _Neurocounterfactuals_ is a method that allows augmenting data via large counterfactual perturbations that still bear resemblance to the original data but can nonetheless provide richer data augmentation (Howard et al., 2022). More recent studies have investigated data augmentation methods in multimodal settings such as VL tasks. For instance, LeMDA is a method that learns an augmentation network alongside a task-dedicated network (Liu et al., 2022). This method augments the latent representation of the network and thus preserves the semantic structure in each modality.
Moreover, not restricting data augmentation to the specificity of inputs can have detrimental effects, as augmented examples may possibly be associated with another label (e.g., changing a rock's color from green to red may induce a label change from emerald to ruby). To avoid this pitfall, instance-specific augmentation (_InstaAug_) learns to apply invariances to specific parts of the input space (Miao et al., 2022). Similar work suggests estimating invariances by learning a distribution over augmentations, and jointly optimizing both the network _and_ augmentation distribution parameters (Benton et al., 2020).
Other methods belong to a class of automated data augmentation algorithms. These algorithms can for example use reinforcement learning (RL) to optimize a data augmentation policy (e.g., (Cubuk et al., 2019)). Furthermore, differentiable data augmentation proposes a method that relaxes the discrete state search assumption of RL, and allows for a more efficient data augmentation by implementing an end-to-end differentiable search procedure (Hataya et al., 2020). Notably, other methods such as _AdaAug_ extend previous research by focusing not only on instance-depend data augmentation but also on class-dependent ones through the implementation of adaptive augmentation policies (Cheung and Yeung, 2022).
Our method differs from policy-based methods for data augmentation but remains automated, class-dependent, and targeted (i.e., we can focus on specific attributes such as gender, counting, or color). In particular, we leverage the impressive natural language-driven image synthesis abilities of text-to-image generative models (Yu et al., 2022; Saharia et al., 2022; Ramesh et al., 2022) (see methods). In particular, we focus on their image editing or inpainting ability, which is a difficult challenge for these models given that only part of the image has to be changed while the rest has to be maintained. To solve this issue, traditional methods make use of explicit masks to circumscribe the inpainting region (Nichol et al., 2022; Avrahami et al., 2022). However, masking methods are both time-consuming and do not leverage structural information in the image. To circumvent this issue, recent work proposes the use of a prompt-to-prompt procedure in combination with a cross-attentional control mechanism that allows editing specific objects in the image while taking into account the contextual information (Hertz et al., 2022). Another method proposes the use of null-text inversion to achieve maskless image editing (Mokady et al., 2022).
Interestingly, these state-of-the-art inpainting models open up the possibility to implement novel data augmentation methods. For instance, a recent paper showed that fine-tuning large-scale text-to-image generative models allows producing high-quality synthetic data that can improve ImageNet benchmark scores (Azizi et al., 2023). TIDA extends this idea in VL models, in order to improve specific target skills of these models within the framework of image captioning tasks.
## 3 Method and Experiments
We propose a two-step method that allows retrieving certain images using their captions, regarding a specific concept that we call _skill_. These skills refer to human- and animal-like basic abilities, such as gender categorization, counting, or recognizing colors. We first use a text mining method to
detect whether or not a caption contains specific words that are related to the skill (Subsection 3.1). Second, we generate variants of the original skill-related captions and create new images with these new captions in order to augment the dataset for each type of skill (Subsection 3.2). An overview of the method is shown in Figure 1.
### Skill-related retrieval
We assume a list of \(S\) skills \(\{\mathcal{S}_{i},i=1...S\}\), a training dataset of captions and images \(\mathcal{D}^{\text{train}}=\{(\mathbf{C}_{k},I_{k}),k=1..k_{\text{train}}\}\), \(\mathbf{C}_{k}\) being a set of ground truth captions.
For each skill \(\mathcal{S}_{i}\) we create a binary classifier \(f_{\mathcal{S}_{i}}\) that detects whether or not the skill \(\mathcal{S}_{i}\) is present in a pair of image and associated captions. By applying this function to a dataset \(\mathcal{D}\), it is possible to create a subpart of this dataset \(\mathcal{D}_{\mathcal{S}_{i}}\) containing samples related to the aforementioned skill. By using this method and for each skill \(\mathcal{S}_{i}\), we retrieve a subpart of the train \(\mathcal{D}^{\text{train}}\) dataset that we call \(\mathcal{D}^{\text{train}}_{\mathcal{S}_{i}}\) and a subpart of the test \(\mathcal{D}^{\text{test}}\) dataset that we call \(\mathcal{D}^{\text{test}}_{\mathcal{S}_{i}}\). The former will be used for data-augmentation and the latter will be used for the evaluation of the different models.
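A minimal sketch of such a detector \(f_{\mathcal{S}_{i}}\), assuming regex keyword matching as described in the implementation details below; the word lists here are illustrative placeholders, not the paper's exact lists:

```python
import re

# Illustrative keyword patterns per skill; the paper's exact lists may differ.
SKILL_PATTERNS = {
    "gender": r"\b(man|men|woman|women|boy|girl)\b",
    "color": r"\b(red|blue|green|yellow|black|white|brown|pink)\b",
    "counting": r"\b(two|three|four|five|six|seven|eight|nine|ten)\b",
}

def detect_skill(captions, skill):
    """f_S(C_k): True if any ground-truth caption of an image mentions the skill."""
    pattern = re.compile(SKILL_PATTERNS[skill], re.IGNORECASE)
    return any(pattern.search(caption) for caption in captions)

# Building D_S from D = [(captions_k, image_k), ...]:
# D_gender = [(C, I) for (C, I) in D if detect_skill(C, "gender")]
```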
### Targeted Data Augmentation
In order to improve the performance of the model with regard to several skills, we augment the dataset with sets of new examples. Those examples are created so that they depict new situations that are not necessarily in the training set, but should help the model generalize. For this purpose, we create a set of text generator functions \(\{\mathcal{G}_{t,\mathcal{S}_{i}},i=1...S\}\) taking as input a text caption containing a skill \(\mathcal{S}_{i}\) and generating a slightly different version of this caption. The generator function perturbs the caption's text in such a way that it remains related to the skill. For example, it would invert the gender of one of the words in the sentence: the caption "a man is playing basketball" would be changed (or perturbed) to "a woman is playing basketball". Mathematically, for any caption \({c_{kl}}\)2 containing the skill \(\mathcal{S}_{i}\), we create another caption \(c_{kli}=\mathcal{G}_{t,\mathcal{S}_{i}}(c_{kl})\).
Footnote 2: caption \(l\) of the image \(k\)
Finally, for every perturbed caption \(c_{kli}\) we use a text-to-image generator \(\mathcal{G}_{V}\) in order to create an image \(I_{kli}\) associated with the novel caption. We obtain an artificial set of image-caption pairs which, together with the original pairs, gives the dataset \(\mathcal{D}^{train}_{\mathcal{G}_{V}-\mathcal{S}_{i}}\).
Those augmented datasets \(\mathcal{D}^{train}_{\mathcal{G}_{V}-\mathcal{S}_{i}}\) are used to train several image captioning models, which should focus more on the specific skill \(\mathcal{S}_{i}\). Each of the models is then evaluated on the different test sets \(\mathcal{D}^{\text{test}}_{\mathcal{S}_{i}}\), which contain the pairs of images and lists of captions that are related to the skill \(\mathcal{S}_{i}\). The pseudo-code is visible in Algorithm 1.
```
Require: Skills \(\mathcal{S}_{i}\), textual skill detectors \(f_{\mathcal{S}_{i}}\), text generators \(\mathcal{G}_{t,\mathcal{S}_{i}}\), image generator \(\mathcal{G}_{V}\), train set \(\mathcal{D}^{\text{train}}=\{(c_{kl},I_{k})\}\)
for \(i\) in \(1...S\) do
    \(\mathcal{D}^{\text{train}}_{\mathcal{G}_{V}-\mathcal{S}_{i}}\leftarrow\mathcal{D}^{\text{train}}\)  \(\triangleright\) Initialize
    \(\mathcal{D}^{\text{train}}_{\mathcal{S}_{i}}\leftarrow f_{\mathcal{S}_{i}}(\mathcal{D}^{\text{train}})\)  \(\triangleright\) IC pairs with skill \(i\)
    for \((c^{\prime}_{kl},I^{\prime}_{k})\) in \(\mathcal{D}^{\text{train}}_{\mathcal{S}_{i}}\) do
        \(c^{\prime}_{kli}\leftarrow\mathcal{G}_{t,\mathcal{S}_{i}}(c^{\prime}_{kl})\)  \(\triangleright\) Caption perturbation
        \(I^{\prime}_{kli}\leftarrow\mathcal{G}_{V}(c^{\prime}_{kli})\)  \(\triangleright\) Image generation
        \(\mathcal{D}^{\text{train}}_{\mathcal{G}_{V}-\mathcal{S}_{i}}\leftarrow\mathcal{D}^{\text{train}}_{\mathcal{G}_{V}-\mathcal{S}_{i}}\cup\{(c^{\prime}_{kli},I^{\prime}_{kli})\}\)  \(\triangleright\) Adding the new pair
    end for
end for
```
**Algorithm 1** The TIDA method on the train set
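Below is a minimal Python sketch of Algorithm 1 for a single skill; `detect`, `perturb`, and `generate_image` are hypothetical stand-ins for \(f_{\mathcal{S}_{i}}\), \(\mathcal{G}_{t,\mathcal{S}_{i}}\), and \(\mathcal{G}_{V}\):

```python
def tida_augment(train_set, detect, perturb, generate_image):
    """Return the augmented train set D^train_{G_V - S} for one skill.

    train_set: list of (caption, image) pairs; detect/perturb/generate_image
    play the roles of f_S, G_t,S, and G_V from Algorithm 1."""
    augmented = list(train_set)  # start from D^train
    for caption, _image in train_set:
        if not detect(caption):  # keep only pairs exhibiting the skill (D^train_S)
            continue
        new_caption = perturb(caption)           # c'_kli <- G_t,S(c'_kl)
        new_image = generate_image(new_caption)  # I'_kli <- G_V(c'_kli)
        augmented.append((new_caption, new_image))
    return augmented

# Toy usage with trivial stand-ins:
toy = tida_augment(
    [("a man is playing basketball", None)],
    detect=lambda c: "man" in c.split(),
    perturb=lambda c: c.replace("man", "woman"),
    generate_image=lambda c: None,  # a real G_V would call a diffusion model
)
```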
### Dataset
For the image captioning task, we use the Flickr30K dataset Young et al. (2014), which is composed of 31K photographs of everyday activities, events, and scenes harvested from Flickr and 159K captions. Each image is described independently by five annotators who are not familiar with the specific entities and circumstances depicted in them. We follow Karpathy's split (Karpathy and Fei-Fei, 2017), which gives 29.8k/1k/1k images for train/val/test.
### Methodology
Skills used. We augment the data regarding three basic human skills: gender detection, counting capability, and color recognition. We focus on these skills for consistency with previous work Anderson et al. (2016), and because they are considered essential, acquired early in humans, and present in animals Wang et al. (2010); Dacke and Srinivasan (2008); Davis and Memmott (1982).
Text generation. For each skill, and for each of the captions that were retrieved as containing it, we changed the caption text by using an alternative attribute of the targeted skill. For this, we employed
a list of defined words that were related to the targeted skills. Each of the skill-related words has a list of other words that can be used as a replacement. For gender, masculine words like "man" were replaced by their feminine counterparts like "woman". For color, we swapped the different colors altogether. For counting, we either added or subtracted 1 from the detected written number in the sentence (\(\pm 1\)). See Appendix A for more details.
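A sketch of one such perturbation function \(\mathcal{G}_{t,\mathcal{S}_{i}}\); the replacement tables are illustrative placeholders, and the paper's exact word lists (its Appendix A) may differ:

```python
import random

# Illustrative replacement tables; the paper's real lists may be longer.
GENDER_SWAP = {"man": "woman", "woman": "man", "boy": "girl", "girl": "boy"}
COLORS = ["red", "blue", "green", "yellow", "black", "white"]
NUMBERS = ["one", "two", "three", "four", "five", "six"]

def perturb_caption(caption, skill):
    """G_t,S: replace one skill-related word, leaving the rest of the caption intact."""
    words = caption.split()
    for i, word in enumerate(words):
        w = word.lower()
        if skill == "gender" and w in GENDER_SWAP:
            words[i] = GENDER_SWAP[w]
            break
        if skill == "color" and w in COLORS:
            words[i] = random.choice([c for c in COLORS if c != w])
            break
        if skill == "counting" and w in NUMBERS:
            j = NUMBERS.index(w) + random.choice([-1, 1])  # the paper's +/- 1 rule
            words[i] = NUMBERS[min(max(j, 0), len(NUMBERS) - 1)]
            break
    return " ".join(words)

print(perturb_caption("a man is playing basketball", "gender"))
# -> "a woman is playing basketball"
```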
Baseline. We compared our method with a data augmentation that consists of generating images from random captions of the dataset. In this way, we aim to show that the improvements in performance come not only from having a larger training set, but from having a larger and more diverse one. In the following, we call this augmented training set \(\mathcal{D}^{train}{}_{SD-rnd}\).
### Implementation details
Text generator. We used simple regular expressions to find the different attributes of each skill. The replacement words were chosen randomly within the list of possible alternatives. More details are available in Appendix A.4
Footnote 4: All our code will be made available after publication.
Image generator. We tested a classical text-to-image generation technique with Stable Diffusion (Rombach et al., 2022) and generated 20k images per skill. For Stable Diffusion, we used version 1.5 as described in (Rombach et al., 2022), leveraging the Diffusers library for its implementation (von Platen et al., 2022). We used a 16-bit floating-point data type and a guidance scale set at 8, which controls how strongly the textual prompts condition the resulting images. The resolution of the generated images was 128 x 128 pixels. The remainder of the parameters were set to their defaults, as specified by the Diffusers library. In Appendix B, we show experiments with more generators.
Footnote 5: [https://huggingface.co/runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
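A sketch of this generation setup with the Diffusers library, using the settings reported above (SD v1.5, 16-bit floats, guidance scale 8, 128 x 128 outputs):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate_image(caption):
    """G_V: render one image for a perturbed caption."""
    return pipe(caption, guidance_scale=8.0, height=128, width=128).images[0]

generate_image("a woman is playing basketball").save("augmented_sample.png")
```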
Image captioning. We used the BLIP model (Li et al., 2022) because of its state-of-the-art performance on Image Captioning, with publicly available code and pre-trained weights. We kept the original hyper-parameters, adjusting only the batch size from 32 to 24 and using the ViT Base model as the image encoder, due to hardware limitations. For the training, we also kept the original AdamW (Loshchilov and Hutter, 2019) optimization algorithm with an initial learning rate of \(10^{-5}\) that is decayed during training following a cosine schedule until it reaches \(0\). In order to compare models with different amounts of available data, we used early stopping with a patience of \(5\).
Metrics. We used the classical BLEU metric (Papineni et al., 2002) to evaluate the performances of the models. Moreover, we used another metric that relies on learned representations. We computed RefCLIPScore (Hessel et al., 2021), which is based on the similarity between the embedding of the caption and the embedding of the image coming from CLIP (Radford et al., 2021). This metric was shown to have a better correlation with human judgments than other classical metrics (Kasai et al., 2022).
\begin{table}
\begin{tabular}{c|c|c|c|c|c||c|c|c|c||c|c|c} & **\#DA** & \multicolumn{3}{c||}{**BLEU@1-4**} & \multicolumn{3}{c||}{**RefCLIPScore**} & \multicolumn{3}{c}{**Spice**} \\
**Test** & & \(\mathcal{D}^{test}{}_{dr}\) & \(\mathcal{D}^{test}{}_{dg}\) & \(\mathcal{D}^{test}{}_{gdr}\) & \(\mathcal{D}^{test}{}_{dr}\) & \(\mathcal{D}^{test}{}_{dg}\) & \(\mathcal{D}^{test}{}_{gdr}\) & \(\mathcal{D}^{test}{}_{gdr}\) & \(\mathcal{D}^{test}{}_{1dr}\) & F1\({}_{dg}\) & F1\({}_{all}\) \\
**Train** & & & & & & & & & & & & & \\ \hline \hline \(\mathcal{D}^{train}\) (Vanilla) & 0 & 51.8 & 44.0 & 49.9 & 49.7 & 79.9 & 79.3 & 79.8 & 80.3 & 24.1 & 19.7 & 20.7 \\ \hline \(\mathcal{D}^{train}{}_{SD-rnd}\) & 60k & 51.3 & 44.1 & 49.2 & 49.6 & 80.0 & 79.5 & 79.7 & 80.2 & **24.7** & **25.2** & 20.6 \\ \hline \(\mathcal{D}^{train}{}_{SD-dr}\) & 20k & _51.7_ & 44.0 & _49.3_ & 49.5 & 79.8 & 79.4 & 79.6 & 80.1 & 24.3 & 19.8 & 20.2 \\ \(\mathcal{D}^{train}{}_{SD-dg}\) & 20k & _51.7_ & _44.4_ & 49.2 & 49.7 & 79.9 & _79.5_ & 79.7 & 80.2 & 23.4 & 22.0 & 20.4 \\ \(\mathcal{D}^{train}{}_{SD-drdr}\) & 20k & 51.2 & 43.4 & 48.5 & 48.8 & _80.0_ & _79.2_ & _79.9_ & 80.3 & 24.5 & 24.4 & 20.6 \\ \hline \(\mathcal{D}^{train}{}_{SD-all}\) & 60k & 51.8 & 44.9 & **50.1** & **50.5** & 80.1 & **79.7** & 80.1 & **80.5** & **24.7** & 23.6 & **21.0** \\ \hline \end{tabular}
\end{table}
Table 1: Average of the BLEU@1-4 scores of the different TIDA-enhanced models on the different test sets. The TIDA models depicted used different image generation strategies: _SD_ uses Stable Diffusion and _AAE_ Attend-and-Excite. The first line contains the performance of the model trained with the Vanilla train set. Then, the first to third line of each TIDA model contain the results of the model trained with data-augmentation on the color, counting, and gender skills, respectively. And, the last line of each, depicts the results of the model trained with all three types of data-augmentation. The scores in bold are the best scores on each test set, while the scores in italic are the best scores of each of the models trained with (skill-related) data-augmentation.
## 4 Results and Analysis
### Results
The results of the models trained with different skill-based data-augmentation on different test sets are shown in Table 1. We can see that the overall best scores on each test set are obtained with the model using the three types of data-augmentation techniques, either using BLEU (from 49.7 to 50.5) or RefCLIPScore (from 80.3 to 80.5).
We also provide the F1-scores computed with SPICE, especially the ones related to counting and color, because we aim to quantify the performance of the models on those skills. The data augmentation improves both of these metrics individually more than the overall one.
### Analysis
We analyze the results in three different ways: (i) by using classical natural language generation metrics for image captioning, (ii) by assessing the use of skill words regarding the captions and quantifying the right use of the skill-related terms, (iii) by probing the representation of the image on a skill detection task for a finer comprehension of the image encoder and text decoder behavior.
Classical metrics. By analyzing the classical metrics we can make several observations. Contrary to what we would have expected, the skill-related TIDAs do not necessarily lead to the best scores on their respective test sets. Note however that the metrics are not homogeneous. The counting-related TIDA obtains the best results on the counting test set for BLEU and RefCLIPScore, but Spice F1-counting is better with gender. Interestingly, counting (compared with color and gender) leads to the worst metrics with BLEU but the best ones when focusing on the RefCLIPScore and Spice metrics. More details and metrics are available in Appendix C.
Skill-related words. In order to analyze the results of the model by going beyond the classical opaque metrics like BLEU and RefCLIPScore, we used a method similar to SPICE [1] that allows investigating specific semantic words. TIDA relies on using certain variations of words; hence we evaluate the propensity of the model to use those words in the right context. If a word associated with a skill is present in the ground truth or in the generated caption, it allows us to quantify the results of the model as false/true positive/negative. Specifically, when the model uses a word associated with a skill in the generated caption, and this skill is indeed associated with the image-caption ground truth, we count this as a true positive. If the model does not use any word associated with a skill and the skill is not present in the ground truth, we count this as a true negative. The other combinations are regarded as false positives or negatives. The precision, recall, and F1 for color, counting, and gender TIDAs are available in Table 2.
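A minimal sketch of this bookkeeping for one skill; `detect` is any boolean predicate over a list of captions (e.g., the regex detector sketched earlier), and the paper's exact matching rules may differ:

```python
def skill_word_scores(samples, detect):
    """Precision/recall/F1 for skill-word usage, as in the Table 2 analysis.

    samples: list of (generated_caption, reference_captions) pairs;
    detect: boolean predicate over a list of captions."""
    tp = fp = fn = tn = 0
    for generated, references in samples:
        predicted = detect([generated])  # the model used a skill word
        expected = detect(references)    # the ground truth contains a skill word
        if predicted and expected:
            tp += 1
        elif predicted:
            fp += 1
        elif expected:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```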
For the color TIDA, the precision and recall both increase for the positive and negative cases. This means that the model uses color words more often when the caption should contain one and less often when it should not. For the counting TIDA, the recall of the negative class increases from 39.1 to 45.9, which means that the model uses fewer counting-related words when it should not. At the same time, the precision for the positive class increases, which means the use of counting-related words is more pertinent. For the gender TIDA, the model uses more gender words (positive recall going from 88.8 to 92.4) while being a bit less precise (negative recall decreasing from 79.0 to 77.8). Overall, we observe that the color TIDA gives better results for color, but surprisingly the counting TIDA is better for gender and the gender TIDA is better for counting.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c|c c c c}
**Skill** & \multicolumn{4}{c|}{**Color**} & \multicolumn{4}{c|}{**Counting**} & \multicolumn{4}{c}{**Gender**} \\
**Train** & P+ & R+ & P- & R- & F1 & P+ & R+ & P- & R- & F1 & P+ & R+ & P- & R- & F1 \\ \hline \hline \(\mathcal{D}^{train}\) & 64.4 & 89.8 & 80.5 & 45.8 & 66.7 & 73.6 & 97.9 & 91.7 & 39.1 & 69.4 & 46.5 & 88.8 & 97.2 & 79.0 & **74.1** \\ \hline \(\mathcal{D}^{train}\)\({}_{SD-rnd}\) & 64.8 & 88.1 & 78.6 & 47.7 & 67.0 & 77.2 & 97.5 & 92.0 & 50.0 & **75.5** & 45.4 & 89.4 & 97.3 & 78.0 & 73.4 \\ \hline \(\mathcal{D}^{train}\)\({}_{SD-dr}\) & 66.0 & 86.8 & 78.0 & 51.3 & **68.4** & 73.4 & 98.4 & 93.3 & 38.3 & 69.2 & 43.8 & 91.8 & 97.8 & 75.9 & 72.4 \\ \(\mathcal{D}^{train}\)\({}_{SD-dg}\) & 65.5 & 88.5 & 79.7 & 49.2 & 68.1 & 74.4 & 98.1 & 92.7 & 41.5 & 71.0 & 44.8 & 91.8 & 97.9 & 76.9 & 73.2 \\ \(\mathcal{D}^{train}\)\({}_{SD-gdr}\) & 64.1 & 88.5 & 78.5 & 45.8 & 66.1 & 75.3 & 96.8 & 89.2 & 45.1 & 72.3 & 43.9 & 90.6 & 97.5 & 76.3 & 72.4 \\ \hline \(\mathcal{D}^{train}\)\({}_{SD-all}\) & 65.7 & 90.8 & 82.8 & 48.3 & **68.6** & 75.8 & 97.8 & 92.3 & 45.9 & **73.4** & 46.0 & 92.4 & 98.0 & 77.8 & **74.1** \\ \end{tabular}
\end{table}
Table 2: Precision, Recall and F1-score regarding the use of skill-related words in the captions generated by the BLIP models trained using different TIDA techniques on the different test sets. The two best F1 scores are highlighted in bold.
Probing with visual representations. We analyzed how TIDA influences the model not only through the raw outputs of the text decoder but also through the representations of the image encoder. For this purpose, we probed the image representations to predict whether or not the image is associated with a specific skill.
As we previously did, we used the text-mining method to label whether or not a sample is associated with one of the three skills. We then trained a multi-layer perceptron on the representations produced by the image encoder and these labels. As is usual with transformer-based models, we used the class embedding coming from the image encoder as the image representation embedding. We used binary cross-entropy loss and SGD to train the probe, and performed early stopping and a grid search on each model to find the best hidden size and learning rate. The results with the five TIDA models are shown in Table 3.
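A sketch of one such probe; the embedding width (768 for a ViT-Base encoder) and the hidden size and learning rate here are placeholders for the grid-searched values:

```python
import torch
import torch.nn as nn

probe = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.SGD(probe.parameters(), lr=1e-2)
criterion = nn.BCEWithLogitsLoss()

def probe_step(cls_embeddings, skill_labels):
    """One training step on frozen [CLS] image embeddings.

    cls_embeddings: (B, 768) features from the image encoder (not updated);
    skill_labels:   (B,) floats in {0., 1.} from the text-mining labeller."""
    optimizer.zero_grad()
    logits = probe(cls_embeddings).squeeze(-1)
    loss = criterion(logits, skill_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```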
Looking at the F1-score, it seems that none of the TIDAs bring any significant change regarding the skill-related information in the image encoding. However, the models do improve in terms of general Image Captioning performance (Table 1), and we saw previously that they use the targeted words more frequently when they should (Table 2). We can conclude that TIDA-related improvements are caused by changes in the text decoder rather than the image encoder.
## 5 Conclusion and Future Work
This paper assesses the effectiveness of generative data augmentation with current diffusion models for improving specific skills of image captioning models. We used the Flickr30k image captioning dataset and ran experiments with BLIP, a recent state-of-the-art vision-language model. We show that TIDA, our targeted image data-augmentation technique, allows for gains on classical metrics recognized by the community, like BLEU or RefCLIPScore. On top of that, we also propose a fine-grained analysis of the results of the model that goes beyond the classical opaque metrics by investigating the occurrences of specific semantic words related to the target skills. We found that TIDA helps the image captioning model use those words more appropriately. Finally, investigating the visual part, we probe the representations from the visual encoder and reveal that they do not contain more skill-related information, meaning the improvement relies on the textual decoder.
Our results open several avenues for further research. For instance, it remains unclear why we observe a boost in results on a specific skill when using data augmentation on another skill. It would also be useful to investigate in more detail the reasons for the performance gains in the text decoder or the visual encoder, or to use more precise interpretable metrics powered by an LLM, like the Text-to-Image Faithfulness Evaluation with Question Answering (Hu et al., 2023).
It would also be interesting to see whether text-to-image models known to be better at generating images related to color and counting, like Attend-and-Excite (Chefer et al., 2023), or newer versions of Stable Diffusion, bring further improvements. We would also like to extend our method to Visual Question Answering: using symbolic knowledge to extract the objects of the image-caption pairs and their relations, as implemented in (Riquelme et al., 2020), we can adapt the model to new situations and help to debias a VQA model. Finally, given the recent results of (Azizi et al., 2023), we should run a random data-augmentation on the train set and see whether this procedure may help to improve the results compared with TIDA.
## 6 Limitations
The focus of this work has been on abstract skills shown to be learned by humans at an early age, but it is not clear which skills are the most important for image captioning in particular or for another task in general; determining which skills result in the most improvement on a task remains an empirical question. This makes it not straightforward to add new skills, requiring thoughtfulness and empirical validation.
\begin{table}
\begin{tabular}{c|c|c|c}
**Skill** & **Color** & **Counting** & **Gender** \\ \hline \hline \(\mathcal{D}^{train}\) & 72.0 & 88.2 & 84.1 \\ \hline \(\mathcal{D}^{train}{}_{SD-rnd}\) & 73.0 & 88.3 & 84.3 \\ \hline \(\mathcal{D}^{train}{}_{SD-dr}\) & 72.9 & 88.6 & 84.7 \\ \(\mathcal{D}^{train}{}_{SD-ctg}\) & 71.6 & 88.7 & 84.1 \\ \(\mathcal{D}^{train}{}_{SD-gdr}\) & 71.7 & 89.0 & 84.0 \\ \hline \(\mathcal{D}^{train}{}_{SD-all}\) & 71.8 & 87.7 & 84.3 \\ \hline \end{tabular}
\end{table}
Table 3: F1-score for skill probing using the models learned with different targeted data-augmentations.
In terms of computational cost, TIDA needs to generate a number of new examples comparable to the original dataset size using costly neural image generation models, which makes it challenging to apply to larger datasets and means the technique does not scale well with dataset size. Although each generated example can be leveraged many times, the process remains heavily limited by the available computation.
## Acknowledgments
This work was funded by National Center for Artificial Intelligence CENIA FB210017, Basal ANID.
|
2309.04644 | Towards Understanding Neural Collapse: The Effects of Batch
Normalization and Weight Decay | Neural Collapse (NC) is a geometric structure recently observed at the
terminal phase of training deep neural networks, which states that last-layer
feature vectors for the same class would "collapse" to a single point, while
features of different classes become equally separated. We demonstrate that
batch normalization (BN) and weight decay (WD) critically influence the
emergence of NC. In the near-optimal loss regime, we establish an asymptotic
lower bound on the emergence of NC that depends only on the WD value, training
loss, and the presence of last-layer BN. Our experiments substantiate
theoretical insights by showing that models demonstrate a stronger presence of
NC with BN, appropriate WD values, lower loss, and lower last-layer feature
norm. Our findings offer a novel perspective in studying the role of BN and WD
in shaping neural network features. | Leyan Pan, Xinyuan Cao | 2023-09-09T00:05:45Z | http://arxiv.org/abs/2309.04644v3 | # Towards Understanding Neural Collapse: The Effects of Batch Normalization and Weight Decay
###### Abstract
Neural Collapse (\(\mathcal{NC}\)) is a geometric structure recently observed in the final layer of neural network classifiers. In this paper, we investigate the interrelationships between batch normalization (BN), weight decay, and proximity to the \(\mathcal{NC}\) structure. Our work introduces the geometrically intuitive intra-class and inter-class cosine similarity measure, which encapsulates multiple core aspects of \(\mathcal{NC}\). Leveraging this measure, we establish theoretical guarantees for the emergence of \(\mathcal{NC}\) under the influence of last-layer BN and weight decay, specifically in scenarios where the regularized cross-entropy loss is near-optimal. Experimental evidence substantiates our theoretical findings, revealing a pronounced occurrence of \(\mathcal{NC}\) in models incorporating BN and appropriate weight-decay values. This combination of theoretical and empirical insights suggests a greatly influential role of BN and weight decay in the emergence of \(\mathcal{NC}\).
## 1 Introduction
Over the past decade, deep learning and neural networks have revolutionized the field of machine learning and artificial intelligence, enabling machines to perform complex tasks previously thought to be beyond their capabilities. However, despite tremendous empirical advances, a comprehensive theoretical and mathematical understanding of the success behind neural networks, even for the simplest types, is still unsatisfactory. Analyzing Neural Networks using traditional statistical learning theory has encountered significant difficulties due to the high level of non-convexity, over-parameterization, and optimization-dependent properties.
Papyan et al. (2020) recently empirically observed an elegant mathematical structure in multiple successful neural network-based visual classifiers and named the phenomenon "Neural Collapse" (abbreviated \(\mathcal{NC}\) in this work). Specifically, \(\mathcal{NC}\) is a geometric structure of the learned last-layer/penultimate-layer features and weights at the terminal phase of deep neural network training. Neural Collapse states that after sufficient training of successful neural networks: **NC1**) The intra-class variability of the last-layer feature vectors tends to zero _(Variability Collapse)_; **NC2**) The mean class feature vectors become equal-norm and form a Simplex Equiangular Tight Frame (ETF) around the center up to re-scaling _(Convergence to Simplex ETF)_; **NC3**) The last layer weight vectors converge to match the feature class means up to re-scaling _(Self-Duality)_; **NC4**) The last layer of the network behaves the same as a "Nearest Class Center" decision rule _(Convergence to NCC)_.
Notably, an Equiangular Tight Frame (ETF) is a set of vectors in a high-dimensional space that are evenly spaced from each other, such that they form equal angles with one another and are optimally arranged for maximal separability. In our context of \(\mathcal{NC}\), a simplex Equiangular Tight Frame in Euclidean space is defined as follows:
**Definition 1.1** (Simplex ETF, Papyan et al. (2020)).: _A simplex ETF is a collection of \(C\) points in \(\mathbb{R}^{d}\) specified by the columns of_
\[\mathbf{M}^{\star}=\alpha\mathbf{U}\left(\mathbf{I}_{C}-\frac{1}{C}\mathbf{1}_{C}\mathbf{1}_{C}^{ \top}\right)\]
_where \(\alpha\in\mathbb{R}^{+}\) and \(\mathbf{U}\in\mathbb{R}^{d\times C}\) is a partially orthogonal matrix (\(\mathbf{U}^{\top}\mathbf{U}=\mathbf{I}\))._
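As a quick numerical illustration of this definition (a sketch with arbitrary \(C\), \(d\), and \(\alpha\)), one can construct such a frame and check that all pairwise cosine similarities equal \(-\frac{1}{C-1}\):

```python
import numpy as np

C, d, alpha = 4, 10, 1.0
U, _ = np.linalg.qr(np.random.randn(d, C))         # partially orthogonal: U^T U = I_C
M = alpha * U @ (np.eye(C) - np.ones((C, C)) / C)  # columns are the C ETF points

M_unit = M / np.linalg.norm(M, axis=0)             # equal column norms, safe to normalize
print(np.round(M_unit.T @ M_unit, 3))              # off-diagonals are -1/(C-1) = -0.333
```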
These observations of Neural Collapse reveal compelling insights into the symmetry and mathematical preferences of over-parameterized neural network classifiers. Intuitively, the last-layer features acquire the most suitable geometric feature representation for their specific classification task that maximizes inter-class separation while simultaneously discarding information about variations within individual classes. Subsequently, further work has demonstrated that Neural Collapse may play a significant role in generalization, transfer learning (Galanti et al. (2022b)), depth minimization (Galanti et al. (2022a)), and the implicit bias of neural networks (Poggio and Liao (2020)). Additionally, insights provided by Neural Collapse have been a powerful tool in exploring the intermediate layers of neural network classifiers and representations learned by self-supervised learning models (Ben-Shaul et al. (2023); Ben-Shaul and Dekel (2022)).
### Our Contributions
In this paper, we theoretically and empirically investigate the question:
**What is a minimal set of conditions that would guarantee the emergence of \(\mathcal{NC}\)?**
Our results show that batch normalization, large weight decay, and near-optimal cross-entropy loss are sufficient conditions for several core properties of \(\mathcal{NC}\), and \(\mathcal{NC}\) is most significant when all these conditions are satisfied. Specifically, we provide the following contributions:
* We propose the intra-class and inter-class cosine similarity measure, a simple and geometrically intuitive quantity that measures the proximity of a set of feature vectors to several core structural properties of \(\mathcal{NC}\). (Section 2.2)
* Under the cosine similarity measure, we show a theoretical guarantee of the proximity to \(\mathcal{NC}\) for any unbiased neural network classifier with near-optimal regularized cross-entropy loss, batch-normalized last-layer feature vectors, and last-layer weight decay. (Theorem 2.2)
* Our empirical evidence shows that \(\mathcal{NC}\) is most significant with both batch normalization and high weight decay values under the cosine similarity measure. (Section 3)
Combining our theoretical and empirical results, we conclude that batch normalization along with weight decay may be greatly influential conditions for the emergence of \(\mathcal{NC}\).
Figure 1: Visualization of \(\mathcal{NC}\) (Papyan et al. (2020)). We use an example of three classes and denote the last-layer features \(\mathbf{h}_{c,i}\), mean class features \(\tilde{\mathbf{h}}_{c}\), and last-layer class weight vectors \(\mathbf{w}_{c}\). Circles denote individual last-layer features, while compound and filled arrows denote class weight and mean feature vectors, respectively. As training progresses, the last-layer features of each class collapse to their corresponding class means (NC1), different class means converge to the vertices of the simplex ETF (NC2), and the class weight vector of the last-layer linear classifier approaches the corresponding class means (NC3).
### Related Theoretical Works on the Emergence of Neural Collapse
The empirical \(\mathcal{NC}\) phenomenon has inspired a recent line of work to theoretically investigate its emergence under different settings. Several studies have focused on the unconstrained features model or layer-peeled model, first introduced by Mixon et al. (2020), where the last layer features are treated as free optimization variables. Such simplification is based on the observation that most modern neural networks are highly over-parameterized and are capable of learning any feature representations. Following this model, several works have demonstrated that solutions satisfying Neural Collapse are the only global optimizers under both CE (Ji et al. (2022); Zhu et al. (2021); Lu and Steinerberger (2022)) and MSE loss (Han et al. (2022); Zhou et al. (2022)) under different settings such as regularization and normalization. Recent works have also focused on analyzing the unconstrained features model's gradient dynamics and optimization landscape (Mixon et al. (2020); Zhu et al. (2021); Ji et al. (2022); Han et al. (2022); Yaras et al. (2022)). Collectively, these works establish that, under both CE and MSE loss, the unconstrained features model has a benign global optimization landscape where every local minima solution satisfies the Neural Collapse structure and other critical points are strict saddle points with negative curvature. Furthermore, following the gradient flow or first-order optimization method would lead to solutions satisfying the Neural Collapse structure. Although works have been done in an idealized setting where gradient-based optimization is performed directly on the last layer features, it should be noted that this assumption is unrealistic. Optimizing the weights in earlier layers can have a significantly different effect from directly optimizing the last-layer features, even in over-parameterized networks. Besides the layer-peeled model, Poggio and Liao (2020) have demonstrated the Neural Collapse structure for binary classification when each individual sample achieves zero gradients with MSE loss, while Tirer and Bruna (2022) and Sukenik et al. (2023) extends the analysis under MSE loss to deeper models.
For a table comparing the model and contributions of prior work theoretically investigating the emergence of \(\mathcal{NC}\), see Appendix section C.
Our Work: \(\mathcal{NC}\) Proximity Under Near-optimal Loss. Building on the layer-peeled model from prior research, our theoretical approach offers a unique perspective, focusing on the _near-optimal regime_ and avoiding less realistic assumptions of achieving exact optimal loss and directly optimizing the last-layer feature vectors. Our approach provides further insights into \(\mathcal{NC}\) in realistic neural network training as 1) the near-optimal regime is often more reflective of the realities of neural network training, with the theoretical optimal loss often being unattainable in practice; 2) in contrast to landscape or gradient flow analyses on the layer-peeled model, our findings are optimization-agnostic and applicable in practical scenarios where direct optimization of the last-layer features is unfeasible; 3) our emphasis on measuring the _proximity_ to \(\mathcal{NC}\), rather than achieving exact \(\mathcal{NC}\), unveils additional insights, especially in instances where exact \(\mathcal{NC}\) is unattainable.
## 2 Theoretical Results
### Problem Setup and Notations
Neural Network with Cross-Entropy Loss. In this work, we consider unbiased neural network classifiers trained using cross-entropy loss functions on a balanced dataset. A vanilla deep neural network classifier is composed of a feature representation function \(\phi_{\boldsymbol{\theta}}(\boldsymbol{x})\) and a linear classifier parameterized by \(\mathbf{W}\). Specifically, an \(L\)-layer vanilla deep neural network can be mathematically formulated as:
\[f(\boldsymbol{x};\boldsymbol{\theta})=\underbrace{\boldsymbol{W}^{(L)}}_{ \text{Last layer weight }\mathbf{W}=\mathbf{W}^{(L)}}\underbrace{BN\left(\sigma\left( \boldsymbol{W}^{(L-1)}\cdots\sigma\left(\boldsymbol{W}^{(1)}\boldsymbol{x}+ \boldsymbol{b}^{(1)}\right)+\cdots+\boldsymbol{b}^{(L-1)}\right)\right)}_{ \text{last-layer feature }\boldsymbol{h}=\phi_{\boldsymbol{\theta}}(\boldsymbol{x})}\]
Each layer is composed of an affine transformation parameterized by a weight matrix \(\boldsymbol{W}^{(l)}\) and bias \(\boldsymbol{b}^{(l)}\), followed by a non-linear activation \(\sigma\), which may contain element-wise transformations such as \(\text{ReLU}(x)=\max\{x,0\}\) as well as normalization techniques such as batch normalization.
The network is trained by minimizing the empirical risk over all samples \(\{(\boldsymbol{x}_{c,i},\boldsymbol{y}_{c})\},c\in[C],i\in[N]\) where each class contains \(N\) samples and \(\boldsymbol{y}_{c}\) is the one-hot encoded label vector for class \(c\). We also denote \(\mathbf{h}_{c,i}=\phi_{\boldsymbol{\theta}}(\boldsymbol{x}_{c,i})\) as the last-layer feature corresponding to \(\boldsymbol{x}_{c,i}\). The training process
minimizes the average cross-entropy loss
\[\mathcal{L}=\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}\mathcal{L}_{\mathrm{CE}}\left( f(\mathbf{x}_{c,i};\mathbf{\theta}),\mathbf{y}_{c}\right)=\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N} \mathcal{L}_{\mathrm{CE}}\left(\mathbf{W}\mathbf{h}_{c,i},\mathbf{y}_{c}\right),\]
where the cross entropy loss function for a one-hot encoding \(\mathbf{y}_{c}\) is:
\[\mathcal{L}_{\mathrm{CE}}(\mathbf{z},\mathbf{y}_{c})=-\log\left(\frac{\exp(z_{c})}{\sum_{c^{\prime}=1}^{C}\exp(z_{c^{\prime}})}\right)\]
Batch Normalization and Weight Decay. For a given batch of vectors \(\mathbf{v}_{1},\mathbf{v}_{2},\cdots\mathbf{v}_{b}\subset\mathbb{R}^{d}\), let \(v_{i}^{(k)}\) denote the \(k\)'th element of \(\mathbf{v}_{i}\). Batch Normalization (BN), developed by Ioffe and Szegedy (2015), performs the following operation along each dimension \(k\in[d]\):
\[BN(\mathbf{v}_{i})^{(k)}=\frac{v_{i}^{(k)}-\mu^{(k)}}{\sigma^{(k)}}\times \gamma^{(k)}+\beta^{(k)}\]
Where \(\mu^{(k)}\) and \((\sigma^{(k)})^{2}\) are the mean and variance along the \(k\)'th dimension of all the vectors in the batch. The vectors \(\mathbf{\gamma}\) and \(\mathbf{\beta}\) are trainable parameters that represent the desired variance and mean after BN. BN has been empirically demonstrated to facilitate convergence and generalization and is adopted in many popular network architectures.
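For concreteness, a per-batch sketch of this operation (with a small \(\epsilon\) added for numerical stability, which the formula above omits):

```python
import torch

def batch_norm(v, gamma, beta, eps=1e-5):
    """v: (b, d) batch of pre-BN features; gamma, beta: (d,) learnable parameters."""
    mu = v.mean(dim=0)                           # per-dimension batch mean
    sigma = v.var(dim=0, unbiased=False).sqrt()  # per-dimension batch std
    return (v - mu) / (sigma + eps) * gamma + beta

h = batch_norm(torch.randn(32, 8), gamma=torch.ones(8), beta=torch.zeros(8))
print(h.mean(dim=0), h.var(dim=0, unbiased=False))  # ~beta and ~gamma^2 per dimension
```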
Weight decay is a technique in deep learning training that facilitates generalization by penalizing large weight vectors. Specifically, the squared Frobenius norm of each weight matrix \(\mathbf{W}^{(l)}\) and the squared norm of each batch normalization weight vector \(\mathbf{\gamma}^{(l)}\) are added as penalty terms to the final cross-entropy loss. Thus, the final loss function with weight decay parameter \(\lambda\) is
\[\mathcal{L}_{\mathrm{reg}}=\mathcal{L}+\frac{\lambda}{2}\sum_{l=1}^{L}(\|\bm {\gamma}^{(l)}\|^{2}+\|\mathbf{W}^{(l)}\|_{F}^{2})\]
where \(\gamma^{(l)}=0\) for layers without batch normalization. In our theoretical analysis, we consider the simplified layer-peeled model that only applies weight decay on the network's final linear and batch normalization layer. Under this setting, the final regularized loss is:
\[\mathcal{L}_{\mathrm{reg}}=\mathcal{L}+\frac{\lambda}{2}(\|\mathbf{\gamma}\|^{2} +\|\mathbf{W}\|_{F}^{2})\]
where \(\mathbf{W}\) is the last layer weight matrix and \(\mathbf{\gamma}=\mathbf{\gamma}^{(L-1)}\) is the weight of the batch normalization layer before the final linear transformation.
### Cosine Similarity Measure of Neural Collapse
Numerous measures of NC have been used in past literature, including within-class covariance (Papyan et al. (2020)), signal-to-noise (SNR) ratio (Han et al. (2022)), as well as class distance normalized variance (CDNV, Galanti et al. (2022)). While these measures all indicate the emergence of \(\mathcal{NC}\) when the measured value approaches zero and provide convergence guarantees to Neural Collapse, they do not provide a straightforward and geometrically intuitive measure of how close a given structure is to \(\mathcal{NC}\) when the values are non-zero.
In this work, we propose the cosine similarity measure of \(\mathcal{NC}\), which focuses on simplicity and geometric interpretability at the cost of discarding norm information.
For a given class \(c\), the average intra-class cosine similarity for class \(c\) is defined as the average cosine similarity of picking two feature vectors in the class:
\[intra_{c}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\cos_{\angle}(\mathbf{h} _{c,i},\mathbf{h}_{c,j})\]
where
\[\cos_{\angle}(\mathbf{x},\mathbf{y})=\frac{\mathbf{x}^{\intercal}\mathbf{y}}{ \|\mathbf{x}\|\cdot\|\mathbf{y}\|}\]
is the vector cosine similarity measure. Similarly, the inter-class cosine similarity between two classes \(c,c^{\prime}\) is defined as the average cosine similarity of picking one feature vector from class \(c\) and another from class \(c^{\prime}\):
\[inter_{c,c^{\prime}}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\cos_{\angle}( \mathbf{h}_{c,i},\mathbf{h}_{c^{\prime},j})\]
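In practice, these measures need not be evaluated via the \(O(N^{2})\) pairwise sums: as shown later in Lemma A.3 of the appendix, \(intra_{c}=\|\tilde{\tilde{\mathbf{h}}}_{c}\|^{2}\) and \(inter_{c,c^{\prime}}=\tilde{\tilde{\mathbf{h}}}_{c}\cdot\tilde{\tilde{\mathbf{h}}}_{c^{\prime}}\), where \(\tilde{\tilde{\mathbf{h}}}_{c}\) is the mean normalized feature of class \(c\). A minimal PyTorch sketch using this identity (function names are ours):

```python
import torch
import torch.nn.functional as F

def nc_cosine_measures(H, labels, num_classes):
    """H: [n, d] last-layer features; labels: [n] integer class labels.
    Returns intra_c for each class and the matrix of pairwise dot products,
    whose off-diagonal entries are the inter_{c,c'} values."""
    Hn = F.normalize(H, dim=1)  # h / ||h|| for every feature vector
    means = torch.stack([Hn[labels == c].mean(dim=0)
                         for c in range(num_classes)])  # mean normalized features
    gram = means @ means.T      # [C, C] matrix of dot products
    intra = gram.diagonal()     # intra_c = ||mean normalized feature||^2
    return intra, gram

# Summary statistics used in the experiments of Section 3 would then be
# intra.min() and gram[~torch.eye(num_classes, dtype=torch.bool)].max().
```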
Relationship with \(\mathcal{NC}\)While cosine-similarity does not measure the degree of norm equality, it can describe _necessary_ conditions for the core observations of \(\mathcal{NC}\) as follows:
1. _(Variability Collapse)_ NC1 implies that all features in the same class collapse to the class mean and have the same vector value. Therefore, all features in the same class must be in the same direction and achieve an intra-class cosine similarity \(intra_{c}=1\).
2. _(Convergence to Simplex ETF)_ NC2 implies that class means converge to the vertices of a simplex ETF. Combined with NC1, this implies that the cosine of the angle between every pair of features from different classes must be \(-\frac{1}{C-1}\) (a property of the simplex ETF over \(C\) points). Therefore, the inter-class cosine similarity between each pair of classes must be \(inter_{c,c^{\prime}}=-\frac{1}{C-1}\).
With the above problem formulation, we now present our main theorems for \(\mathcal{NC}\) in neural network classifiers with near-optimal training cross-entropy loss. Before presenting our core theoretical result on batch normalization and weight decay, we first present a more general preliminary theorem that provides theoretical bounds for the intra-class and inter-class cosine similarity for any classifier with near-optimal (unregularized) average cross-entropy loss.
### Main Results
Our first theorem states that if the average last-layer feature norm and the last-layer weight matrix norm are both _bounded_, then achieving _near-optimal loss_ implies that _most classes_ have intra-class cosine similarity near one and _most pairs of classes_ have inter-class cosine similarity near \(-\frac{1}{C-1}\).
**Theorem 2.1** (\(\mathcal{NC}\) proximity guarantee with bounded norms).: _For any unbiased neural network classifier trained on dataset with the number of classes \(C\geq 3\) and samples per class \(N\geq 1\), under the following assumptions:_
1. _The quadratic average of the last-layer feature norms_ \(\sqrt{\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}\|\mathbf{h}_{c,i}\|^{2}}\leq\alpha\)__
2. _The Frobenius norm of the last-layer weight_ \(\|\mathbf{W}\|_{F}\leq\sqrt{C}\beta\)__
3. _The average cross-entropy loss over all samples_ \(\mathcal{L}\leq m+\epsilon\) _for small_ \(\epsilon>0\)__
_where \(m=\log(1+(C-1)\exp(-\frac{C}{C-1}\alpha\beta))\) is the minimum achievable loss for any set of weight and feature vectors satisfying the norm constraints, then for at least \(1-\delta\) fraction of all classes, with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{intra}_{c}\geq 1-O\left(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}\sqrt{ \frac{\epsilon}{\delta}}\right)\]
_and for at least \(1-\delta\) fraction of all pairs of classes \(c,c^{\prime}\), with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{inter}_{c,c^{\prime}}\leq-\frac{1}{C-1}+O\left(\frac{e^{O(C\alpha\beta) }}{\alpha\beta}(\frac{\epsilon}{\delta})^{1/6}\right)\]
Remarks.:
* We only consider the near-optimal regime where \(\epsilon\ll 1\). However, a near-optimal cross-entropy training loss is demonstrated in most successful neural network classifiers exhibiting \(\mathcal{NC}\), including all the original experiments by Papyan et al. (2020), at the terminal phase of training.
* Since \(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}\) is a mostly increasing function of \(\alpha\beta\), lower last-layer feature and weight norms can provide stronger guarantees on Neural Collapse measured using cosine similarity.
Proof Sketch of Theorem 2.1.: Our proof is inspired by the optimal-case proof of Lu and Steinerberger (2022), which shows the global optimality conditions using Jensen's inequality. Our core lemma shows that if a set of variables achieves roughly equal value on the LHS and RHS of Jensen's inequality for a strongly convex function (such as \(\exp(x)\)), then the mean of every subset cannot deviate too far from the global mean:
**Lemma 2.1** (Subset mean close to global mean by Jensen's inequality on strongly convex functions).: _Let \(\{x_{i}\}_{i=1}^{N}\subset\mathcal{I}\) be a set of \(N\) real numbers, let \(\tilde{x}=\frac{1}{N}\sum_{i=1}^{N}x_{i}\) be the mean over all \(x_{i}\) and \(f\) be a function that is \(\lambda\)-strongly-convex on \(\mathcal{I}\). If_
\[\frac{1}{N}\sum_{i=1}^{N}f(x_{i})\leq f(\tilde{x})+\epsilon\]
_i.e., Jensen's inequality is satisfied with gap \(\epsilon\), then for any subset of samples \(S\subseteq[N]\), let \(\delta=\frac{|S|}{N}\), there is_
\[\tilde{x}+\sqrt{\frac{2\epsilon(1-\delta)}{\lambda\delta}}\geq\frac{1}{|S|} \sum_{i\in S}x_{i}\geq\tilde{x}-\sqrt{\frac{2\epsilon(1-\delta)}{\lambda \delta}}\]
This lemma can serve as a general tool to convert optimal-case conditions derived using Jensen's inequality into high-probability proximity bounds under near-optimal conditions.
Using the strong convexity of \(\exp(x)\) and \(\log(1+(C-1)\exp(x))\) along with Lemma 2.1 and the optimal-case proof of Lu and Steinerberger (2022), we show that most classes \(c\) must have high same-class weight-feature cosine similarity, and most pairs of classes \(c,c^{\prime}\) must have low inter-class weight-feature cosine similarity. These upper and lower bounds are then used to lower bound \(\|\tilde{\tilde{\mathbf{h}}}_{c}\|\) and to upper bound \(\langle\tilde{\tilde{\mathbf{h}}}_{c},\tilde{\tilde{\mathbf{h}}}_{c^{\prime}}\rangle\), where
\[\tilde{\tilde{\mathbf{h}}}_{c}=\frac{1}{N}\sum_{i=1}^{N}\frac{\mathbf{h}_{c,i }}{\|\mathbf{h}_{c,i}\|}\]
is the mean _normalized_ feature vector of class \(c\). The intra-class and inter-class cosine similarity follows immediately from these results.
Our preliminary theorem above shows that lower values of the average feature norm and weight Frobenius norm of the final layer provide stronger guarantees of the proximity to \(\mathcal{NC}\). Note that weight decay is used to regularize the norms of weight matrices and weight vectors. Therefore, higher weight decay values should result in smaller weight matrix and weight vector norms. Our following proposition shows that regularizing the weight vector of an unbiased batch normalization layer is equivalent to regularizing the quadratic average of the feature norms of its output vectors:
**Proposition 2.1** (BN normalizes quadratic average of feature norms).: _Let \(\{\mathbf{h}_{i}\}_{i=1}^{N}\) be a set of Batch Normalized feature vectors with variance vector \(\boldsymbol{\gamma}\) and bias term \(\boldsymbol{\beta}=0\) (i.e. \(\mathbf{h}_{i}=BN(\mathbf{x}_{i})\) for some \(\{\mathbf{x}_{i}\}_{i=1}^{N}\)). Then_
\[\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{i}\|_{2}^{2}}=\|\boldsymbol{ \gamma}\|_{2}\]
Therefore, regularizing the batch normalization variance vector is effectively equivalent to regularizing the quadratic average of the feature norms. Intuitively, with all other conditions held equal, a higher regularization coefficient in the training loss function should result in lower values of the regularized parameters. Therefore, a higher weight decay value (i.e., the regularization coefficient of the weight matrices and variance vectors) should result in a lower weight norm and last-layer feature norm, and hence a tighter bound in Theorem 2.1. This intuition is formalized in the following main theorem:
**Theorem 2.2** (\(\mathcal{NC}\) proximity guarantee with layer-peeled BN and WD).: _For an unbiased neural network classifier trained on a dataset with the number of classes \(C\geq 3\) and samples per class \(N\geq 1\), under the following assumptions:_
1. _The network contains an unbiased batch normalization layer before the final layer with trainable weight vector_ \(\mathbf{\gamma}\)_;_
2. _The layer-peeled regularized cross-entropy loss with weight decay_ \(\lambda\)__ \[\mathcal{L}_{\mathrm{reg}}=\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}\mathcal{L}_{ \mathrm{CE}}\left(\mathbf{W}\mathbf{h}_{c,i},\mathbf{y}_{c}\right)+\frac{\lambda}{2}(\|\bm {\gamma}\|^{2}+\|\mathbf{W}\|_{F}^{2})\] _satisfies_ \(\mathcal{L}_{\mathrm{reg}}\leq m_{\mathrm{reg}}+\epsilon\) _for small_ \(\epsilon\)_; where_ \(m_{reg}\) _is the minimum achievable regularized loss_
_then for at least \(1-\delta\) fraction of all classes, with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{intra}_{c}\geq 1-O\left(e^{O(C/\lambda)}\sqrt{\frac{\epsilon}{\delta}}\right)\]
_and for at least \(1-\delta\) fraction of all pairs of classes \(c,c^{\prime}\), with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{inter}_{c,c^{\prime}}\leq-\frac{1}{C-1}+O\left(e^{O(C/\lambda)}(\frac{ \epsilon}{\delta})^{1/6}\right)\]
Since \(e^{O(C/\lambda)}\) is a decreasing function of \(\lambda\), higher values of \(\lambda\) result in smaller values of both \(O(e^{O(C/\lambda)}(\frac{\epsilon}{\delta})^{1/6})\) and \(O(e^{O(C/\lambda)}\sqrt{\frac{\epsilon}{\delta}})\). As such, in the presence of batch normalization and weight decay on the final layer, larger values of weight decay provide stronger \(\mathcal{NC}\) guarantees, in the sense that the intra-class cosine similarity of most classes is nearer to 1 and the inter-class cosine similarity of most pairs of classes is nearer to \(-\frac{1}{C-1}\).
### Conclusion
Our theoretical result shows that last-layer BN, last-layer weight decay, and a near-optimal average cross-entropy loss are sufficient conditions to guarantee proximity to the \(\mathcal{NC}\) structure as measured using cosine similarity, regardless of the training method and the structure of the earlier layers. Moreover, such a guarantee is optimization-independent.
## 3 Empirical Results
In this section, we present empirical evidence on the importance of batch normalization and weight decay for the emergence of Neural Collapse. Specifically, we compare the emergence of Neural Collapse in terms of the minimum intra-class cosine similarity over all classes and the maximum inter-class cosine similarity over all pairs of classes. Our experiments show that **models with batch normalization and appropriate weight decay achieve the highest levels of \(\mathcal{NC}\) measured using cosine similarity**, which supports the predictions of Theorem 2.2.
### Experiments with Synthetic Datasets
Our first set of experiments considers the simple setting of using a vanilla neural network (i.e., Multi-Layer Perceptron) to classify a well-defined synthetic dataset of different separation complexities. We aim to use straightforward model architectures and well-defined datasets of different complexities to explore the effect of different hyperparameters in \(\mathcal{NC}\) under a controlled setting.
For datasets, we consider two different datasets of increasing classification difficulty: 1) The 4-class conic hull dataset, where two intersecting hyperplanes separate the input space into four classes; 2) the MLP3 dataset, where class labels are generated by the predicted labels of a 3-layer Neural Network with randomly generated weights. In the appendix, we also provide results for MLP6 and MLP9 datasets, created in a similar manner but for 6 and 9-layer neural networks.
The models used in the experiments are 3-layer and 6-layer multi-layer perceptron (MLP) models with ReLU activations. We compare models with and without batch normalization, where the batch-normalized models have a batch normalization layer between every pair of adjacent linear layers. We train each model on the same synthetic dataset with 8000 training samples over 15 weight decay values
ranging from 0.0001 to 0.1. For each experiment, we record the **minimum** intra-class cosine similarity among all classes and the **maximum** inter-class cosine similarity among all pairs of classes as defined in section 2.2. Each model is trained for 200 epochs using the SGD optimizer. We refer the readers to Appendix Section B.1 for more comprehensive experiments with different model depths, datasets, and training details.
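A minimal PyTorch sketch of this setup follows; the specific layer widths and learning rate are illustrative assumptions of ours, not values reported above:

```python
import torch
import torch.nn as nn

def make_mlp(dims, use_bn):
    """Build an MLP with ReLU activations and, optionally, a BatchNorm
    layer between every pair of adjacent linear layers."""
    layers = []
    for i in range(len(dims) - 2):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if use_bn:
            layers.append(nn.BatchNorm1d(dims[i + 1]))
        layers.append(nn.ReLU())
    layers.append(nn.Linear(dims[-2], dims[-1]))  # final classification layer
    return nn.Sequential(*layers)

model = make_mlp([2, 64, 64, 4], use_bn=True)  # a 3-layer MLP for 4 classes
# SGD's weight_decay penalizes all parameters, including the BN weights gamma
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-3)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
```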
Results.As can be observed in Figure 2, our experimental results indicate that \(\mathcal{NC}\) is most significant on models with batch normalization and high values of weight decay when using the MLP classifier on a conic hull dataset. Furthermore, the degree of \(\mathcal{NC}\) increases significantly along with the increase of weight decay for the batch-normalized model. Finally, the 6-layer model without batch normalization failed to classify the training data starting from weight decay 0.003, while the batch-normalized model successfully classified the data for all weight decay values in our experiments, corroborating the conventional wisdom that batch normalization facilitates model convergence.
### Experiment with Real-world Datasets
Our next set of experiments explores the effect of Batch Normalization and Weight Decay using standard computer vision datasets MNIST (LeCun et al. (2010)) and CIFAR-10 (Krizhevsky (2009)). Specifically, we explore the difference in the degree of Neural Collapse between convolutional neural network architectures with and without Batch Normalization across different weight decay parameters. Notably, we compare the results of 2 different implementations of the VGG (Simonyan and Zisserman (2015)) convolutional neural network, one of which applies batch normalization after each convolution layer. We also compare these results with ResNet (He et al. (2015)), which contains several batch normalization layers within its architecture by default. Results are presented in Figure 3
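All three architectures are available off-the-shelf in torchvision; a sketch of their instantiation (the training loop, data pipeline, and weight-decay sweep are omitted):

```python
import torchvision.models as models

vgg      = models.vgg11(num_classes=10)      # VGG11 without batch normalization
vgg_bn   = models.vgg11_bn(num_classes=10)   # VGG11 with BN after each convolution
resnet18 = models.resnet18(num_classes=10)   # ResNet18, BN inside every block
```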
For more comprehensive experiments on different model variants and datasets, including experiments with MNIST and VGG19, we refer the readers to section B.2 in the appendix.
### Conclusion
Our experiments show that, in both synthetic and realistic scenarios, the highest level of \(\mathcal{NC}\) is achieved by models with BN and appropriate weight decay. Moreover, BN allows the degree of
Figure 2: Minimum intra-class and maximum inter-class Cosine Similarity for 3-layer and 6-layer MLP under Different WD and BN. Higher values of intra-class and lower values of inter-class cosine similarity imply a higher degree of Neural Collapse. Error bars refer to the standard deviation over five different experiments.
\(\mathcal{NC}\) to increase smoothly along with the increase of weight decay within the range of perfect interpolation, while the degree of \(\mathcal{NC}\) is unstable or decreases with the increase of weight decay in non-BN models. Such a phenomenon is also more pronounced in simpler neural networks and easier classification tasks than in realistic classification tasks.
## 4 Limitations and Future Work
Our theoretical exploration into deep neural network phenomena, specifically \(\mathcal{NC}\), has its limitations and offers various avenues for further work. Based on our work, we have identified several directions for future efforts:
* Our work, like previous studies employing the layer-peeled model, primarily focuses on the last-layer features and posits that BN and weight decay are only applied to the penultimate layer. However, \(\mathcal{NC}\) has been empirically observed in deeper network layers (Ben-Shaul and Dekel (2022); Galanti et al. (2022)) and shown to be optimal for regularized MSE loss in deeper unconstrained features models (Tirer and Bruna (2022); Sukenik et al. (2023)). An insightful future direction would involve investigating how the proximity bounds to \(\mathcal{NC}\) can be generalized to deeper layers of neural networks and understanding how these theoretical guarantees evolve with network depth.
* The theoretical model we have developed is idealized, omitting several intricate details inherent to practical neural networks. These include bias in linear layers and BN layers, and the sequence of BN and activation layers. Consequently, a worthwhile avenue for future research would be to refine the \(\mathcal{NC}\) proximity bounds to accommodate more realistic network settings.
Figure 3: Intra-class and Inter-class Cosine Similarity for VGG11, VGG11 with batch normalization, and ResNet18 under Different WD and BN. Higher intra-class and lower inter-class cosine similarity indicate a higher degree of \(\mathcal{NC}\). Error bars refer to the standard deviation over three different experiments. The minimum intra-class cosine measure positively correlates with weight decay in BN models, and such correlation is not present in models without BN.
## References
* Ben-Shaul and Dekel (2022) Ido Ben-Shaul and Shai Dekel. Nearest class-center simplification through intermediate layers. In _Proceedings of Topological, Algebraic, and Geometric Learning Workshops_, volume 196 of _PMLR_, pages 37-47, 2022.
* Ben-Shaul et al. (2023) Ido Ben-Shaul, Ravid Shwartz-Ziv, Tomer Galanti, Shai Dekel, and Yann LeCun. Reverse engineering self-supervised learning, 2023.
* E and Wojtowytsch (2022) Weinan E and Stephan Wojtowytsch. On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers. In Joan Bruna, Jan Hesthaven, and Lenka Zdeborova, editors, _Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference_, volume 145 of _Proceedings of Machine Learning Research_, pages 270-290. PMLR, 16-19 Aug 2022. URL [https://proceedings.mlr.press/v145/e22b.html](https://proceedings.mlr.press/v145/e22b.html).
* Galanti et al. (2022a) Tomer Galanti, Liane Galanti, and Ido Ben-Shaul. On the implicit bias towards minimal depth of deep neural networks, 2022a. URL [https://arxiv.org/abs/2202.09028](https://arxiv.org/abs/2202.09028).
* Galanti et al. (2022b) Tomer Galanti, Andras Gyorgy, and Marcus Hutter. On the role of neural collapse in transfer learning, 2022b.
* Han et al. (2022) Xu Han, Vahe Papyan, and David L Donoho. Neural collapse under mse loss: Proximity to and dynamics on the central path. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=w1UbdvWH_R3](https://openreview.net/forum?id=w1UbdvWH_R3).
* He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
* Ioffe and Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International Conference on Machine Learning_, pages 448-456, 2015.
* Ji et al. (2022) Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, and Weijie J. Su. An unconstrained layer-peeled perspective on neural collapse, 2022.
* Krizhevsky (2009) Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
* LeCun et al. (2010) Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. _ATT Labs [Online]. Available: [http://yann.lecun.com/exdb/mnist_](http://yann.lecun.com/exdb/mnist_), 2, 2010.
* Lu and Steinerberger (2022) Jianfeng Lu and Stefan Steinerberger. Neural collapse under cross-entropy loss. _Applied and Computational Harmonic Analysis_, 59:224-241, 2022. ISSN 1063-5203. doi: [https://doi.org/10.1016/j.acha.2021.12.011](https://doi.org/10.1016/j.acha.2021.12.011). URL [https://www.sciencedirect.com/science/article/pii/S1063520321001123](https://www.sciencedirect.com/science/article/pii/S1063520321001123). Special Issue on Harmonic Analysis and Machine Learning.
* Merentes and Nikodem (2010) Nelson Merentes and Kazimierz Nikodem. Remarks on strongly convex functions. _Aequationes mathematicae_, 80(1):193-199, Sep 2010. ISSN 1420-8903. doi: 10.1007/s00010-010-0043-0. URL [https://doi.org/10.1007/s00010-010-0043-0](https://doi.org/10.1007/s00010-010-0043-0).
* Mixon et al. (2020) Dustin G. Mixon, Hans Parshall, and Jianzong Pi. Neural collapse with unconstrained features, 2020.
* Papyan et al. (2020) Vardan Papyan, X. Y. Han, and David L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. _Proceedings of the National Academy of Sciences_, 117(40):24652-24663, 2020. doi: 10.1073/pnas.2015509117. URL [https://www.pnas.org/doi/abs/10.1073/pnas.2015509117](https://www.pnas.org/doi/abs/10.1073/pnas.2015509117).
* Poggio and Liao (2020) Tomaso Poggio and Qianli Liao. Explicit regularization and implicit bias in deep network classifiers trained with the square loss, 2020.
* Simonyan and Zisserman (2015) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2015.
* Sukenik et al. (2023) Peter Sukenik, Marco Mondelli, and Christoph Lampert. Deep neural collapse is provably optimal for the deep unconstrained features model, 2023.
* Tirer and Bruna (2022) Tom Tirer and Joan Bruna. Extended unconstrained features model for exploring deep neural collapse, 2022.
* Yaras et al. (2022) Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, and Qing Qu. Neural collapse with normalized features: A geometric analysis over the riemannian manifold. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, _Advances in Neural Information Processing Systems_, volume 35, pages 11547-11560. Curran Associates, Inc., 2022. URL [https://proceedings.neurips.cc/paper_files/paper/2022/file/4b3cc0d1c897ebcf71aca92a4a26ac83-Paper-Conference.pdf](https://proceedings.neurips.cc/paper_files/paper/2022/file/4b3cc0d1c897ebcf71aca92a4a26ac83-Paper-Conference.pdf).
* Zhou et al. (2022) Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, and Zhihui Zhu. On the optimization landscape of neural collapse under mse loss: Global optimality with unconstrained features. _arXiv preprint arXiv:2203.01238_, 2022.
* Zhu et al. (2021) Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, and Qing Qu. A geometric analysis of neural collapse with unconstrained features. 2021.
## A Proofs
### Proof of Proposition 2.1
**Proposition 2.1**.: _Let \(\{\mathbf{h}_{i}\}_{i=1}^{N}\) be a set of feature vectors immediately after Batch Normalization with variance vector \(\boldsymbol{\gamma}\) and bias term \(\boldsymbol{\beta}=0\) (i.e. \(\mathbf{h}_{i}=BN(\mathbf{x}_{i})\) for some \(\{\mathbf{x}_{i}\}_{i=1}^{N}\)). Then_
\[\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{i}\|_{2}^{2}}=\|\boldsymbol{\gamma }\|_{2}\]
Proof.: Let \(\boldsymbol{\gamma}\) be the variance vector of the Batch Normalization layer, let \(\{\mathbf{x}_{i}\}_{i=1}^{B}\) be a single batch of \(B\) vectors, and let
\[h_{i}^{(k)}=\frac{x_{i}^{(k)}-\tilde{x}^{(k)}}{\sigma^{(k)}}\times\gamma^{(k)}\]
for all \(i\in[B]\) and \(k\in[d]\). By construction, \(\hat{x}_{i}^{(k)}=\frac{x_{i}^{(k)}-\tilde{x}^{(k)}}{\sigma^{(k)}}\) has mean 0 and standard deviation 1 over the batch. As a result, \(\sum_{i=1}^{B}\hat{x}_{i}^{(k)}=0\) and \(\frac{1}{B}\sum_{i=1}^{B}(\hat{x}_{i}^{(k)})^{2}=1\). Therefore,
\[\sum_{i=1}^{B}(h_{i}^{(k)})^{2}=\sum_{i=1}^{B}(\gamma^{(k)})^{2}(\hat{x}_{i}^{(k)})^{2}=B(\gamma^{(k)})^{2}\]
and
\[\sum_{i=1}^{B}\|\mathbf{h}_{i}\|^{2}=\sum_{k=1}^{d}\sum_{i=1}^{B}(h_{i}^{(k)})^{2}=\sum_{k=1}^{d}B(\gamma^{(k)})^{2}=B\|\boldsymbol{\gamma}\|^{2}\]
Now, consider a set of \(N\) vectors divided into \(m\) batches of sizes \(\{B_{j}\}_{j=1}^{m}\). (This accounts for the fact that, during training, the last mini-batch may have a different size than the other mini-batches if the number of training samples is not a multiple of the batch size.) Then,
\[\sum_{i=1}^{N}\|\mathbf{h}_{i}\|^{2}=\sum_{j=1}^{m}\sum_{i=1}^{B_{j}}\| \mathbf{h}_{j,i}\|^{2}=\sum_{j=1}^{m}B_{j}\|\boldsymbol{\gamma}\|^{2}=N\| \boldsymbol{\gamma}\|^{2}\]
Therefore, \(\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{i}\|^{2}}=\|\boldsymbol{\gamma}\|\)
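The proposition is easy to verify numerically; the following NumPy sketch of ours applies an unbiased BN with \(\boldsymbol{\beta}=0\) to a random batch and checks that the quadratic average of the output norms equals \(\|\boldsymbol{\gamma}\|\):

```python
import numpy as np

rng = np.random.default_rng(0)
B, d = 32, 8
X = 3.0 * rng.normal(size=(B, d)) + 1.5           # an arbitrary input batch
gamma = rng.uniform(0.5, 2.0, size=d)             # BN weight vector, beta = 0

H = (X - X.mean(axis=0)) / X.std(axis=0) * gamma  # unbiased BN output
quad_avg = np.sqrt(np.mean(np.linalg.norm(H, axis=1) ** 2))
assert np.isclose(quad_avg, np.linalg.norm(gamma))  # the identity holds exactly
```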
### Proof of Lemma 2.1
**Lemma 2.1**.: _Let \(\{x_{i}\}_{i=1}^{N}\subset\mathcal{I}\) be a set of \(N\) real numbers, let \(\tilde{x}=\frac{1}{N}\sum_{i=1}^{N}x_{i}\) be the mean over all \(x_{i}\) and \(f\) be a function that is \(\lambda\)-strongly-convex on \(\mathcal{I}\). If_
\[\frac{1}{N}\sum_{i=1}^{N}f(x_{i})\leq f(\tilde{x})+\epsilon\]
_Then for any subset of samples \(S\subseteq[N]\), let \(\delta=\frac{|S|}{N}\), there is_
\[\tilde{x}+\sqrt{\frac{2\epsilon(1-\delta)}{\lambda\delta}}\geq\frac{1}{|S|} \sum_{i\in S}x_{i}\geq\tilde{x}-\sqrt{\frac{2\epsilon(1-\delta)}{\lambda\delta}}\]
Proof.: For the proof, we use a result from Merentes and Nikodem (2010) which bounds the Jensen inequality gap using the variance of the variables for strongly convex functions:
**Lemma A.1** (Theorem 4 from Merentes and Nikodem (2010)).: _If \(f:I\rightarrow\mathbb{R}\) is strongly convex with modulus \(c\), then_
\[f\left(\sum_{i=1}^{n}t_{i}x_{i}\right)\leq\sum_{i=1}^{n}t_{i}f(x_{i})-c\sum_{i= 1}^{n}t_{i}(x_{i}-\bar{x})^{2}\]
_for all \(x_{1},\ldots,x_{n}\in I\), \(t_{1},\ldots,t_{n}>0\) with \(t_{1}+\cdots+t_{n}=1\) and \(\bar{x}=t_{1}x_{1}+\cdots+t_{n}x_{n}\)_
In the original definition of the authors, a strongly convex function with modulus \(c\) is equivalent to a \(2c\)-strongly-convex function. We can apply \(t_{i}=\frac{1}{N}\) for all \(i\) and substitute the definition for strong convexity measure to obtain the following corollary:
**Corollary A.1**.: _If \(f:I\to\mathbb{R}\) is \(\lambda\)-strongly-convex on \(\mathcal{I}\), and_
\[\frac{1}{N}\sum_{i=1}^{N}f(x_{i})=f\left(\frac{1}{N}\sum_{i=1}^{N}x_{i}\right)+\epsilon\]
_for \(x_{1},\ldots,x_{N}\in\mathcal{I}\), then \(\frac{1}{N}\sum_{i}(x_{i}-\tilde{x})^{2}\leq\frac{2\epsilon}{\lambda}\)_
From Corollary A.1, we know that \(\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\tilde{x})^{2}\leq\frac{2\epsilon}{\lambda}\). Let \(D=\sum_{i\in S}(x_{i}-\tilde{x})\); by the convexity of \(x^{2}\), there is
\[\sum_{i=1}^{N}(x_{i}-\tilde{x})^{2} =\sum_{i\in S}(x_{i}-\tilde{x})^{2}+\sum_{i\notin S}(x_{i}-\tilde{x})^{2}\] \[\geq|S|(\frac{1}{|S|}\sum_{i\in S}(x_{i}-\tilde{x}))^{2}+(N-|S|)(\frac{1}{N-|S|}\sum_{i\notin S}(x_{i}-\tilde{x}))^{2}\] \[=\frac{1}{|S|}(\sum_{i\in S}(x_{i}-\tilde{x}))^{2}+\frac{1}{N-|S|}(\sum_{i\notin S}(x_{i}-\tilde{x}))^{2}\] \[=\frac{1}{|S|}D^{2}+\frac{1}{N-|S|}(-D)^{2}\] \[=\frac{D^{2}}{N}(\frac{1}{\delta}+\frac{1}{1-\delta})\] \[=\frac{D^{2}}{N}\cdot\frac{1}{\delta(1-\delta)}\]
Therefore \(\frac{D^{2}}{N}(\frac{1}{\delta(1-\delta)})\leq\frac{2\epsilon N}{\lambda}\), and \(|D|\leq\sqrt{\frac{2\epsilon\delta(1-\delta)N^{2}}{\lambda}}\). Using \(\frac{1}{|S|}\sum_{i\in S}x_{i}=\frac{1}{|S|}(|S|\tilde{x}+D)\) and \(|S|=\delta N\) completes the proof.
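As a quick numerical sanity check of the lemma (an illustration of our own, not part of the proof), the sketch below samples numbers clustered near their mean, computes the Jensen gap for \(\exp(x)\) (which is \(e^{a}\)-strongly convex on \([a,\infty)\)), and confirms that a subset mean stays within the stated deviation bound:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.0 + 0.05 * rng.normal(size=1000)  # values clustered near 1
x_bar = x.mean()
eps = np.exp(x).mean() - np.exp(x_bar)  # Jensen gap for f(x) = exp(x)
lam = np.exp(x.min())                   # strong-convexity modulus on the data range

S = x[:100]                             # an arbitrary subset, delta = 0.1
delta = len(S) / len(x)
bound = np.sqrt(2 * eps * (1 - delta) / (lam * delta))
assert abs(S.mean() - x_bar) <= bound   # Lemma 2.1's deviation bound holds
```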
### Proof of Theorem 2.1
**Theorem 2.1**.: _For any unbiased neural network classifier trained on dataset with the number of classes \(C\geq 3\) and samples per class \(N\geq 1\), under the following assumptions:_
1. _The quadratic average of the feature norms_ \(\sqrt{\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}\|\mathbf{h}_{c,i}\|^{2}}\leq\alpha\)__
2. _The Frobenius norm of the last-layer weight_ \(\|\mathbf{W}\|_{F}\leq\sqrt{C}\beta\)__
3. _The average cross-entropy loss over all samples_ \(\mathcal{L}\leq m+\epsilon\) _for small_ \(\epsilon\)__
_where \(m=\log(1+(C-1)\exp(-\frac{C}{C-1}\alpha\beta))\) is the minimum achievable loss for any set of weight and feature vectors satisfying the norm constraints, then for at least \(1-\delta\) fraction of all classes, with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{intra}_{c}\geq 1-O(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}\sqrt{\frac{ \epsilon}{\delta}})\]
_and for at least \(1-\delta\) fraction of all pairs of classes \(c,c^{\prime}\), with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{inter}_{c,c^{\prime}}\leq-\frac{1}{C-1}+O(\frac{e^{O(C\alpha\beta)}}{ \alpha\beta}(\frac{\epsilon}{\delta})^{1/6})\]
We first present several lemmas that facilitate the proof technique used in the main proof. The first two lemmas demonstrate that if a set of variables achieves roughly equal value on the LHS and RHS of Jensen's inequality for a strongly convex function, then the mean of every subset cannot deviate too far from the global mean.
Our first lemma (Lemma 2.1 above) states that, for a \(\lambda\)-strongly-convex function \(f\) and a set of numbers \(\{x_{i}\}_{i=1}^{N}\), if Jensen's inequality has its gap bounded by \(\epsilon\), then the mean of any subset containing a \(\delta\) fraction of all samples cannot deviate from the global mean of all samples by more than \(\sqrt{\frac{2\epsilon(1-\delta)}{\lambda\delta}}\).
Our second lemma states a similar result specific to the function \(e^{x}\) and provides only the upper bound. Note that, within any predefined range \([a,b]\), \(\exp(x)\) can only be guaranteed to be \(e^{a}\)-strongly convex, which is weak if the lower bound \(a\) is small or does not exist. The following lemma instead gives an upper bound on the subset mean that depends only on \(\exp(\tilde{x})\) and requires no prior knowledge of the range of the \(x_{i}\):
**Lemma A.2**.: _Let \(\{x_{i}\}_{i=1}^{N}\subset\mathbb{R}\) be any set of \(N\) real numbers, let \(\tilde{x}=\frac{1}{N}\sum_{i=1}^{N}x_{i}\) be the mean over all \(x_{i}\). If_
\[\frac{1}{N}\sum_{i=1}^{N}\exp(x_{i})\leq\exp(\tilde{x})+\epsilon\]
_then for any subset \(S\subseteq[N]\), letting \(\delta=\frac{|S|}{N}\), there is_
\[\frac{1}{|S|}\sum_{i\in S}x_{i}\leq\tilde{x}+\sqrt{\frac{2\epsilon}{\delta\exp (\tilde{x})}}\]
_._
Proof.: Let \(D=\sum_{i\in S}(x_{i}-\tilde{x})\). Note that if \(D\leq 0\) then the upper bound is trivially satisfied, since the subset mean is no larger than the global mean. Therefore, we only consider the case when \(D>0\).
\[\sum_{i=1}^{N}\exp(x_{i}) =\sum_{i\in S}\exp(x_{i})+\sum_{i\notin S}\exp(x_{i})\] \[\geq|S|\exp(\frac{1}{|S|}\sum_{i\in S}x_{i})+(N-|S|)\exp(\frac{1} {N-|S|}\sum_{i\notin S}x_{i})\] \[\geq|S|\exp(\tilde{x}+\frac{D}{|S|})+(N-|S|)\exp(\tilde{x}-\frac {D}{N-|S|})\] \[\geq|S|\exp(\tilde{x})(1+\frac{D}{|S|}+\frac{D^{2}}{2|S|^{2}})+( N-|S|)\exp(\tilde{x})(1-\frac{D}{N-|S|})\] \[=(N+\frac{D^{2}}{2|S|})\exp(\tilde{x})\] \[N\exp(\tilde{x})+N\epsilon \geq(N+\frac{D^{2}}{2|S|})\exp(\tilde{x})\] \[D^{2} \leq\frac{2|S|N\epsilon}{\exp(\tilde{x})}\] \[D \leq N\sqrt{\frac{2\delta\epsilon}{\exp(\tilde{x})}}\]
Using \(\frac{1}{|S|}\sum_{i\in S}x_{i}=\frac{1}{|S|}(|S|\tilde{x}+D)\) and \(|S|=\delta N\) completes the proof.
Directly approaching the average intra-class and inter-class cosine similarity of vector sets is a relatively difficult task. Our following lemma shows that the intra-class and inter-class cosine similarities can be computed as the squared norm of, and the dot product between, the vectors \(\tilde{\mathbf{h}}_{c}\), respectively, where \(\tilde{\mathbf{h}}_{c}\) is the mean _normalized_ vector among all vectors in a class.
**Lemma A.3**.: _Let \(c,c^{\prime}\) be 2 classes, each containing \(N\) feature vectors \(\mathbf{h}_{c,i}\in\mathbb{R}^{d}\). Define the average intra-class cosine similarity of picking two vectors from the same class \(c\) as_
\[\mathrm{intra}_{c}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\cos_{\angle}( \mathbf{h}_{c,i},\mathbf{h}_{c,j})\]
_and the inter-class cosine similarity between two classes \(c,c^{\prime}\) is defined as the average cosine similarity of picking one feature vector of class \(c\) and another from class \(c^{\prime}\) as_
\[\mathrm{inter}_{c,c^{\prime}}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\cos_{\angle}(\mathbf{h}_{c,i},\mathbf{h}_{c^{\prime},j})\]
_Let \(\tilde{\mathbf{h}}_{c}=\frac{1}{N}\sum_{i=1}^{N}\frac{\mathbf{h}_{c,i}}{\| \mathbf{h}_{c,i}\|}\). Then \(\text{intra}_{c}=\|\tilde{\mathbf{h}}_{c}\|^{2}\) and \(\text{inter}_{c,c^{\prime}}=\tilde{\mathbf{h}}_{c}\cdot\tilde{\mathbf{h}}_{c^ {\prime}}\)_
Proof.: For the intra-class cosine similarity,
\[\text{intra}_{c} =\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\tilde{\mathbf{h}}_{c,i}\cdot\tilde{\mathbf{h}}_{c,j}\] \[=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\mathbf{h}_{c,i}}{\|\mathbf{h}_{c,i}\|}\cdot\frac{\mathbf{h}_{c,j}}{\|\mathbf{h}_{c,j}\|}\] \[=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\mathbf{h}_{c,i}\cdot\mathbf{h}_{c,j}}{\|\mathbf{h}_{c,i}\|\|\mathbf{h}_{c,j}\|}\] \[=\left(\frac{1}{N}\sum_{i=1}^{N}\frac{\mathbf{h}_{c,i}}{\| \mathbf{h}_{c,i}\|}\right)\cdot\left(\frac{1}{N}\sum_{j=1}^{N}\frac{\mathbf{h }_{c,j}}{\|\mathbf{h}_{c,j}\|}\right)\] \[=\|\tilde{\mathbf{h}}_{c}\|^{2}\]
and for the inter-class cosine similarity,
\[\text{inter}_{c,c^{\prime}} =\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\tilde{\mathbf{h}}_{ c,i}\cdot\tilde{\mathbf{h}}_{c^{\prime},j}\] \[=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\mathbf{h}_{c,i}}{\|\mathbf{h}_{c,i}\|}\cdot\frac{\mathbf{h}_{c^{\prime},j}}{\|\mathbf{h}_ {c^{\prime},j}\|}\] \[=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\mathbf{h}_{c,i}\cdot\mathbf{h}_{c^{\prime},j}}{\|\mathbf{h}_{c,i}\|\|\mathbf{h}_{c^{\prime },j}\|}\] \[=\left(\frac{1}{N}\sum_{i=1}^{N}\frac{\mathbf{h}_{c,i}}{\| \mathbf{h}_{c,i}\|}\right)\cdot\left(\frac{1}{N}\sum_{j=1}^{N}\frac{\mathbf{h }_{c^{\prime},j}}{\|\mathbf{h}_{c^{\prime},j}\|}\right)\] \[=\tilde{\mathbf{h}}_{c}\cdot\tilde{\mathbf{h}}_{c^{\prime}}\]
We prove the intra-class cosine similarity bound by first showing that the norm of the mean (un-normalized) class-feature vector is near the quadratic average of the feature norms (i.e., \(\|\tilde{\mathbf{h}}_{c}\|=\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{h}_{c,i}\|\approx\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{c,i}\|^{2}}\)). However, to bound the intra-class cosine similarity, we instead need a bound on \(\|\tilde{\tilde{\mathbf{h}}}_{c}\|=\|\frac{1}{N}\sum_{i=1}^{N}\bar{\mathbf{h}}_{c,i}\|\), where \(\bar{\mathbf{h}}_{c,i}=\frac{\mathbf{h}_{c,i}}{\|\mathbf{h}_{c,i}\|}\). The following lemma provides a conversion between these two requirements:
**Lemma A.4**.: _Suppose \(\mathbf{u}\in\mathbb{R}^{d}\) and \(\|\mathbf{u}\|\leq\beta\). Let \(\{\mathbf{v}_{i}\}_{i=1}^{N}\subset\mathbb{R}^{d}\) such that \(\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{v}_{i}\|^{2}\leq\alpha^{2}\). If_
\[\frac{1}{N}\sum_{i=1}^{N}\langle\mathbf{u},\mathbf{v}_{i}\rangle\geq c\]
_for \(\frac{\alpha\beta}{\sqrt{2}}\leq c\leq\alpha\beta\), and let \(\bar{\mathbf{v}}_{i}=\frac{\mathbf{v}_{i}}{\|\mathbf{v}_{i}\|}\), then_
\[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{\mathbf{v}}_{i}\right\|\geq 2\left(\frac{c}{\alpha\beta}\right)^{2}-1\]
Proof.: Divide into 2 cases: the set of indices
\[pos=\{i\in[N]|\langle\mathbf{u},\mathbf{v}_{i}\rangle>0\}\]
and
\[neg=\{i\in[N]|\langle\mathbf{u},\mathbf{v}_{i}\rangle<0\}\]
Let \(M=|pos|\), then note that
\[\sum_{i\in pos}\langle\mathbf{u},\mathbf{v}_{i}\rangle \geq Nc\] \[\sum_{i\in pos}\langle\mathbf{u},\mathbf{v}_{i}\rangle \leq\|\mathbf{u}\|\sum_{i\in pos}\|\mathbf{v}_{i}\|\] \[\leq\beta\sqrt{M\sum_{i\in pos}\|\mathbf{v}_{i}\|^{2}} E[X^{2}]\geq E[X]^{2}\] \[\leq\beta\sqrt{MN\alpha^{2}}\] \[=\alpha\beta\sqrt{MN}\]
Therefore
\[N\geq M\geq N(\frac{c}{\alpha\beta})^{2}\]
First, consider \(\sum_{i\in pos}\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle\). Note that \(\sum_{i\in pos}\|\mathbf{v}_{i}\|^{2}\leq N\alpha^{2}\) and \(\sum_{i\in pos}\langle\mathbf{u},\mathbf{v}_{i}\rangle\geq Nc\). We will use the following proposition, which can be shown through Lagrange multipliers: given \(\{a_{i}\}_{i=1}^{N}\) and \(\{b_{i}\}_{i=1}^{N}\) such that \(a_{i}\geq 0\) and \(b_{i}>0\) for all \(i\), if \(\sum_{i=1}^{N}a_{i}=A\) and \(\sum_{i=1}^{N}b_{i}^{2}\leq B\), then \(\sum_{i=1}^{N}\frac{a_{i}}{b_{i}}\geq\frac{A\sqrt{N}}{\sqrt{B}}\).
Therefore
\[\sum_{i\in pos}\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle =\sum_{i\in pos}\frac{\langle\mathbf{u},\mathbf{v}_{i}\rangle}{ \|\mathbf{v}_{i}\|}\] \[\geq\frac{c}{\alpha}\cdot\sqrt{MN}\] Proposition \[\geq\frac{c}{\alpha}\cdot(N\frac{c}{\alpha\beta})\] \[=N\beta(\frac{c}{\alpha\beta})^{2}\]
On the other hand, for \(neg\), since \(\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle\geq-\|u\|\geq-\beta\), we get
\[\sum_{i\in neg}\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle \geq\sum_{i\in neg}-\beta\] \[=-\beta(N-M)\] \[\geq-\beta N(1-(\frac{c}{\alpha\beta})^{2})=N\beta((\frac{c}{\alpha\beta})^{2}-1)\]
Therefore
\[\|\mathbf{u}\|\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{\mathbf{v}}_{i}\right\| \geq\frac{1}{N}\sum_{i=1}^{N}\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle =\frac{1}{N}\left(\sum_{i\in pos}\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle+\sum_{i\in neg}\langle\mathbf{u},\bar{\mathbf{v}}_{i}\rangle\right)\] \[\geq\frac{1}{N}\left(N\beta((\frac{c}{\alpha\beta})^{2}-1)+N\beta(\frac{c}{\alpha\beta})^{2}\right)\] \[=\beta\left(2(\frac{c}{\alpha\beta})^{2}-1\right)\]
Dividing both sides by \(\|\mathbf{u}\|\leq\beta\) (the right-hand side is non-negative since \(c\geq\frac{\alpha\beta}{\sqrt{2}}\)) gives
\[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{\mathbf{v}}_{i}\right\|\geq 2\left(\frac{c}{\alpha\beta}\right)^{2}-1\]
To make this lemma generalize to other proofs in future work, we provide the generalized corollary of the above lemma by setting \(\mathbf{u}\) to be the normalized mean vector of \(\mathbf{v}\):
**Corollary A.2**.: _Let \(\{\mathbf{v}_{i}\}_{i=1}^{N}\subset\mathbb{R}^{d}\) such that \(\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{v}_{i}\|^{2}\leq\alpha^{2}\). If_
\[\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{v}_{i}\|\geq c\]
_for \(\frac{\alpha}{\sqrt{2}}\leq c\leq\alpha\), and let \(\bar{\mathbf{v}}_{i}=\frac{\mathbf{v}_{i}}{\|\mathbf{v}_{i}\|}\), then_
\[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{\mathbf{v}}_{i}\right\|\geq 2\left(\frac{c}{\alpha}\right)^{2}-1\]
Similarly, for inter-class cosine similarity, we have the following lemma:
**Lemma A.5**.: _Let \(\mathbf{w}\in\mathbb{R}^{d}\), \(\{\mathbf{h}_{i}\}_{i=1}^{N}\subset\mathbb{R}^{d}\). Let \(\tilde{\mathbf{h}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{h}_{i}\) and \(\tilde{\tilde{\mathbf{h}}}=\frac{1}{N}\sum_{i=1}^{N}\frac{\mathbf{h}_{i}}{\| \mathbf{h}_{i}\|}\). If the following condition is satisfied:_
\[\frac{1}{N}\mathbf{w}\cdot\sum_{i=1}^{N}\mathbf{h}_{i} \leq c \text{for}\,c<0\] \[\|\mathbf{w}\| \leq\beta\] \[\frac{1}{N}\sum_{i=1}^{n}\|\mathbf{h}_{i}\|^{2} \leq\alpha^{2}\] \[\exists\mathbf{w}^{\prime}\in\mathbb{R}^{d},\|\mathbf{w}^{\prime }\|\leq\beta,\frac{1}{N}\mathbf{w}^{\prime}\sum_{i=1}^{N}\mathbf{h}_{i} \geq\alpha\beta-\epsilon^{\prime}\] \[\epsilon^{\prime} \ll\alpha\beta\]
_Then \(\cos_{\angle}(\mathbf{w},\tilde{\tilde{\mathbf{h}}})\leq\frac{c}{\alpha\beta}+4(\frac{\epsilon^{\prime}}{\alpha\beta})^{1/3}\)_
Proof.: For \(\mathbf{w}\in\mathbb{R}^{d}\) and \(\{\mathbf{h}_{i}\}_{i=1}^{N}\subset\mathbb{R}^{d}\), let \(a_{i}=\frac{1}{N}\mathbf{w}\mathbf{h}_{i}\), \(b_{i}=\|\mathbf{h}_{i}\|\), and \(\epsilon=\frac{\epsilon^{\prime}}{\beta}\); then the constraints of the above problem can be relaxed as follows:
\[\max\sum_{i=1}^{N}\frac{a_{i}}{b_{i}}\] \[s.t.\sum_{i=1}^{N}a_{i} \leq c\] \[\frac{1}{N}\sum_{i=1}^{N}b_{i}^{2} =\alpha^{2}\] \[\frac{1}{N}\sum_{i=1}^{N}b_{i} \geq\alpha-\epsilon\] \[\forall i,\left|\frac{a_{i}}{b_{i}}\right| \leq\beta\]
First, consider the case when \(\frac{1}{N}\sum_{i=1}^{N}b_{i}\geq\alpha-\epsilon\). Consider a random variable \(B\) that uniformly picks a value from \(\{b_{i}\}_{i=1}^{N}\). Then \(\mathbb{E}[B]\geq\alpha-\epsilon\), \(\mathbb{E}[B^{2}]=\alpha^{2}\), and therefore \(\sigma_{B}=\sqrt{\mathbb{E}[B^{2}]-\mathbb{E}[B]^{2}}\leq\sqrt{2\alpha\epsilon}\). According to Chebyshev's inequality,
\[P(|B-(\alpha-\epsilon)|\geq k\sqrt{2\alpha\epsilon})\leq\frac{1}{k^{2}}\]
Note that for positive \(a_{i}\), smaller \(b_{i}\) means larger \(\frac{a_{i}}{b_{i}}\), and for negative \(a_{i}\), larger \(b_{i}\) means larger \(\frac{a_{i}}{b_{i}}\). Suppose that \(\epsilon\) is sufficiently small such that \(\epsilon\ll\sqrt{\epsilon}\). Therefore, an upper bound for \(\frac{a_{i}}{b_{i}}\) when \(a_{i}>0\) is
\[\frac{a_{i}}{b_{i}}\leq\begin{cases}\frac{a_{i}}{\alpha-k\sqrt{2\alpha\epsilon }}&b_{i}\geq\alpha-k\sqrt{2\alpha\epsilon}\\ \beta&b_{i}<\alpha-k\sqrt{2\alpha\epsilon}\end{cases}\]
and an upper bound for \(a_{i}<0\) is
\[\frac{a_{i}}{b_{i}}\leq\begin{cases}\frac{a_{i}}{\alpha+k\sqrt{2\alpha\epsilon }}&b_{i}\leq\alpha+k\sqrt{2\alpha\epsilon}\\ 0&b_{i}>\alpha+k\sqrt{2\alpha\epsilon}\end{cases}\]
Suppose that \(k\sqrt{\frac{2\epsilon}{\alpha}}\) is less than \(\frac{1}{2}\), then
\[\frac{a_{i}}{\alpha-k\sqrt{2\alpha\epsilon}}=\frac{a_{i}}{\alpha}\cdot\frac{ 1}{1-k\sqrt{\frac{2\epsilon}{\alpha}}}<\frac{a_{i}}{\alpha}\cdot(1+2k\sqrt{ \frac{2\epsilon}{\alpha}})=\frac{a_{i}}{\alpha}+|\frac{a_{i}}{\alpha}|\cdot 2k \sqrt{\frac{2\epsilon}{\alpha}}\]
when \(a_{i}>0\), and similarly
\[\frac{a_{i}}{\alpha+k\sqrt{2\alpha\epsilon}}=\frac{a_{i}}{\alpha}\cdot\frac{ 1}{1+k\sqrt{\frac{2\epsilon}{\alpha}}}<\frac{a_{i}}{\alpha}\cdot(1-2k\sqrt{ \frac{2\epsilon}{\alpha}})=\frac{a_{i}}{\alpha}+|\frac{a_{i}}{\alpha}|\cdot 2k \sqrt{\frac{2\epsilon}{\alpha}}\]
when \(a_{i}<0\). Note that
\[\sum_{i=1}^{N}|\frac{a_{i}}{\alpha}|\cdot 2k\sqrt{\frac{2\epsilon}{\alpha}} \leq\sum_{i=1}^{N}\frac{\beta}{N}\cdot 2k\sqrt{\frac{2\epsilon}{\alpha}}=2k \beta\sqrt{\frac{2\epsilon}{\alpha}}\]
Therefore, an upper bound on the total sum would be:
\[\frac{c}{\alpha}+2k\beta\sqrt{\frac{2\epsilon}{\alpha}}+\frac{\beta}{k^{2}}\]
Set \(k=(\sqrt{\frac{8\epsilon}{\alpha}})^{-\frac{1}{3}}\) to get:
\[\frac{c}{\alpha}+2\beta(\sqrt{\frac{8\epsilon}{\alpha}})^{\frac{2}{3}}=\frac {c}{\alpha}+4\beta(\frac{\epsilon}{\alpha})^{\frac{1}{3}}\]
Now, substituting \(\epsilon=\frac{\epsilon^{\prime}}{\beta}\), we get \(\mathbf{w}\cdot\tilde{\tilde{\mathbf{h}}}\leq\frac{c}{\alpha}+4\beta(\frac{\epsilon^{\prime}}{\alpha\beta})^{1/3}\). Since \(\|\mathbf{w}\|\leq\beta\) and \(\|\tilde{\tilde{\mathbf{h}}}\|\leq 1\), we get that
\[\cos_{\angle}(\mathbf{w},\tilde{\tilde{\mathbf{h}}})\leq\frac{c}{\alpha\beta}+4(\frac{\epsilon^{\prime}}{\alpha\beta})^{1/3}\]
Now we proceed to the main proof. First, consider the minimum achievable average loss for a single class \(c\), where \(L_{c,i}=\mathcal{L}_{\mathrm{CE}}(\mathbf{W}\mathbf{h}_{c,i},\mathbf{y}_{c})\):
\[\frac{1}{N}\sum_{i=1}^{N}L_{c,i} =-\frac{1}{N}\sum_{i=1}^{N}\log\text{softmax}(\mathbf{W}\mathbf{h}_{c,i})_{c} \tag{1}\] \[\geq-\log\text{softmax}(\frac{1}{N}\sum_{i=1}^{N}\mathbf{W}\mathbf{h}_{c,i})_{c}\] (2) \[=\log\left(1+\sum_{c^{\prime}\neq c}\exp(\frac{1}{N}\sum_{i=1}^{N}(\mathbf{w}_{c^{\prime}}-\mathbf{w}_{c})\mathbf{h}_{c,i})\right)\] (3) \[=\log\left(1+\sum_{c^{\prime}\neq c}\exp((\mathbf{w}_{c^{\prime}}-\mathbf{w}_{c})\mathbf{\tilde{h}}_{c})\right)\] (4) \[\geq\log\left(1+(C-1)\exp(\frac{1}{(C-1)}(\sum_{c^{\prime}=1}^{C}\mathbf{w}_{c^{\prime}}\mathbf{\tilde{h}}_{c}-C\mathbf{w}_{c}\mathbf{\tilde{h}}_{c}))\right)\] (5) \[=\log\left(1+(C-1)\exp(\frac{1}{(C-1)}(\sum_{c^{\prime}=1}^{C}\mathbf{w}_{c^{\prime}}-C\mathbf{w}_{c})\mathbf{\tilde{h}}_{c})\right)\] (6) \[=\log\left(1+(C-1)\exp(\frac{C}{C-1}(\tilde{\mathbf{w}}-\mathbf{w}_{c})\mathbf{\tilde{h}}_{c})\right)\] (7) \[=\log\left(1+(C-1)\exp(-\frac{C}{C-1}\mathbf{\dot{w}}_{c}\mathbf{\tilde{h}}_{c})\right) \tag{8}\]
Let \(\overrightarrow{\mathbf{w}}=[\mathbf{w}_{1}-\tilde{\mathbf{w}},\mathbf{w}_{2}-\tilde{\mathbf{w}},\ldots,\mathbf{w}_{C}-\tilde{\mathbf{w}}]=[\mathbf{\dot{w}}_{1},\mathbf{\dot{w}}_{2},\ldots,\mathbf{\dot{w}}_{C}]\), and \(\overrightarrow{\mathbf{h}}=[\mathbf{\tilde{h}}_{1},\mathbf{\tilde{h}}_{2},\ldots,\mathbf{\tilde{h}}_{C}]\in\mathbb{R}^{Cd}\). Note that
\[\|\overrightarrow{\mathbf{w}}\|^{2} =\sum_{c=1}^{C}\|\mathbf{w}_{c}-\tilde{\mathbf{w}}\|^{2}=\sum_{c =1}^{C}\left(\|\mathbf{w}_{c}\|^{2}-2\mathbf{w}_{c}\tilde{\mathbf{w}}+\| \tilde{\mathbf{w}}\|^{2}\right)\] \[=\sum_{c=1}^{C}\|\mathbf{w}_{c}\|^{2}-C\|\tilde{\mathbf{w}}\|^{2 }\leq\sum_{c=1}^{C}\|\mathbf{w}_{c}\|^{2}=\|\mathbf{W}\|_{F}^{2}\leq C\beta^{2}\]
and also
\[\|\overrightarrow{\mathbf{h}}\|^{2} =\sum_{c=1}^{C}\|\mathbf{\tilde{h}}_{c}\|^{2}=\sum_{c=1}^{C}\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{h}_{c,i}\|^{2}\leq\sum_{c=1}^{C}\left(\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{c,i}\|\right)^{2}\] \[\leq\frac{1}{N}\sum_{c=1}^{C}\sum_{i=1}^{N}\|\mathbf{h}_{c,i}\|^{2}\leq C\alpha^{2}\]
The first inequality uses the triangle inequality and the second uses \(\mathbb{E}[X^{2}]\geq\mathbb{E}[X]^{2}\). Now consider the total average loss over all classes:
\[\mathcal{L} =\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}L_{c,i}\] \[\geq\frac{1}{C}\sum_{c=1}^{C}\log\left(1+(C-1)\exp(\frac{C}{C-1}(\tilde{\mathbf{w}}-\mathbf{w}_{c})\mathbf{\tilde{h}}_{c})\right)\] \[\geq\log\left(1+(C-1)\exp(\frac{C}{C-1}\cdot\frac{1}{C}\sum_{c=1}^{C}(\tilde{\mathbf{w}}-\mathbf{w}_{c})\mathbf{\tilde{h}}_{c})\right)\] Jensen's \[=\log\left(1+(C-1)\exp(-\frac{1}{C-1}\overrightarrow{\mathbf{w}}\cdot\overrightarrow{\mathbf{h}})\right)\] \[\geq\log\left(1+(C-1)\exp(-\frac{C}{C-1}\alpha\beta)\right)\] \[=m\]
showing that \(m\) is indeed the minimum achievable average loss among all samples.
Now we instead consider when the final average loss is near-optimal with value \(m+\epsilon\), \(\epsilon\ll 1\). We use a new \(\epsilon^{\prime}\) term to represent the gap introduced by each inequality in the above proof, and write \(\hat{\mathbf{w}}_{c}=\mathbf{w}_{c}-\tilde{\mathbf{w}}\) (identical to \(\mathbf{\dot{w}}_{c}\) above). Additionally, since the average loss is near-optimal, there must be \(\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\geq 0\) for any sufficiently small \(\epsilon\):
\[\frac{1}{N}\sum_{i=1}^{N}L_{c,i} =-\frac{1}{N}\sum_{i=1}^{N}\log\text{softmax}(\mathbf{Wh}_{c,i})_{c} \tag{9}\] \[\geq-\log\text{softmax}(\frac{1}{N}\sum_{i=1}^{N}\mathbf{Wh}_{c,i})_{c}\] (10) \[=\log\left(1+\sum_{c^{\prime}\neq c}\exp(\frac{1}{N}\sum_{i=1}^{N}(\mathbf{w}_{c^{\prime}}-\mathbf{w}_{c})\mathbf{h}_{c,i})\right)\] (11) \[=\log\left(1+\sum_{c^{\prime}\neq c}\exp((\mathbf{w}_{c^{\prime}}-\mathbf{w}_{c})\tilde{\mathbf{h}}_{c})\right)\] (12) \[=\log\left(1+(C-1)\exp(\frac{1}{(C-1)}(\sum_{c^{\prime}=1}^{C}\mathbf{w}_{c^{\prime}}\tilde{\mathbf{h}}_{c}-C\mathbf{w}_{c}\tilde{\mathbf{h}}_{c}))+\epsilon_{1}^{\prime}\right)\] (13) \[=\log\left(1+(C-1)\exp(\frac{1}{(C-1)}(\sum_{c^{\prime}=1}^{C}\mathbf{w}_{c^{\prime}}-C\mathbf{w}_{c})\tilde{\mathbf{h}}_{c})+\epsilon_{1}^{\prime}\right)\] (14) \[=\log\left(1+(C-1)\exp(\frac{C}{C-1}(\tilde{\mathbf{w}}-\mathbf{w}_{c})\tilde{\mathbf{h}}_{c})+\epsilon_{1}^{\prime}\right)\] (15) \[\geq\log\left(1+(C-1)\exp(-\frac{C}{C-1}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c})\right)+\frac{\epsilon_{1}^{\prime}}{1+(C-1)\exp(-\frac{C}{C-1}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c})}\] (16) \[\geq\log\left(1+(C-1)\exp(-\frac{C}{C-1}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c})\right)+\frac{\epsilon_{1}^{\prime}}{C} \tag{17}\]
and also
\[\mathcal{L} =\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}L_{c,i}\] \[\geq\frac{1}{C}\sum_{c=1}^{C}\left(\log\left(1+(C-1)\exp(-\frac{C} {C-1}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c})\right)+\frac{\epsilon_{1,c}^{ \prime}}{C}\right)+\epsilon_{2}^{\prime}\] \[\geq\log\left(1+(C-1)\exp(-\frac{C}{C-1}\cdot\frac{1}{C}\sum_{c= 1}^{C}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c})\right)+\frac{1}{C}\sum_{c=1} ^{C}\frac{\epsilon_{1,c}^{\prime}}{C}+\epsilon_{2}^{\prime}\] Jensen's \[=\log\left(1+(C-1)\exp(-\frac{1}{C-1}\overrightarrow{\mathbf{w} }\cdot\overrightarrow{\mathbf{h}})\right)+\frac{1}{C}\sum_{c=1}^{C}\frac{ \epsilon_{1,c}^{\prime}}{C}+\epsilon_{2}^{\prime}\] \[=\log\left(1+(C-1)\exp(-\frac{C}{C-1}\alpha\beta+\epsilon_{3}^{ \prime})\right)+\frac{1}{C}\sum_{c=1}^{C}\frac{\epsilon_{1,c}^{\prime}}{C}+ \epsilon_{2}^{\prime}\]
Consider \(\log(1+(C-1)\exp(-\frac{C\alpha\beta}{C-1}+\epsilon_{3}^{\prime}))\): Let \(\gamma^{\prime}=(C-1)\exp(-\frac{C\alpha\beta}{C-1})\)
\[\log(1+(C-1)\exp(-\frac{C\alpha\beta}{C-1}+\epsilon_{3}^{\prime})) =\log(1+(C-1)\exp(-\frac{C\alpha\beta}{C-1})\exp(\epsilon_{3}^{\prime}))\] \[=\log(1+\gamma^{\prime}\exp(\epsilon_{3}^{\prime}))\] \[\geq\log(1+\gamma^{\prime}(1+\epsilon_{3}^{\prime}))\] \[=\log(1+\gamma^{\prime}+\gamma^{\prime}\epsilon_{3}^{\prime})\] \[\approx\log(1+\gamma^{\prime})+\frac{\gamma^{\prime}}{1+\gamma^{\prime}}\epsilon_{3}^{\prime}\]
Thus using the fact that \(1+(C-1)\exp(-\frac{C\alpha\beta}{C-1})\leq C\)
\[\mathcal{L} \geq\log(1+(C-1)\exp(-\frac{C\alpha\beta}{C-1}))+\frac{1}{C}\sum_ {c=1}^{C}\frac{\epsilon_{1,c}^{\prime}}{C}+\epsilon_{2}^{\prime}+\frac{\gamma ^{\prime}}{1+\gamma^{\prime}}\epsilon_{3}^{\prime}\] \[\epsilon \geq\frac{1}{C}\sum_{c=1}^{C}\frac{\epsilon_{1,c}^{\prime}}{C}+ \epsilon_{2}^{\prime}+\frac{\gamma^{\prime}}{1+\gamma^{\prime}}\epsilon_{3}^{\prime}\]
Note that while we do not know how \(\epsilon\) is distributed among the different gaps, all the bounds involving \(\epsilon_{1,c}^{\prime},\epsilon_{2}^{\prime},\epsilon_{3}^{\prime}\) hold in the worst-case scenario subject to the constraint \(\epsilon\geq\frac{1}{C}\sum_{c=1}^{C}\frac{\epsilon_{1,c}^{\prime}}{C}+\epsilon_{2}^{\prime}+\frac{\gamma^{\prime}}{1+\gamma^{\prime}}\epsilon_{3}^{\prime}\). Note that \(\|\tilde{\mathbf{h}}_{c}\|\leq\alpha\), therefore \((\mathbf{w}_{c^{\prime}}-\mathbf{w}_{c})\tilde{\mathbf{h}}_{c}\geq-C\alpha\beta\), and the second-order derivative of \(\log(1+(C-1)\exp(x))\) is
\[\frac{(C-1)\exp(x)}{(1+(C-1)\exp(x))^{2}}\]
, which is \(e^{-O(C\alpha\beta)}\) for any \(x\in[-C\alpha\beta,C\alpha\beta]\). Therefore, the function \(\log(1+(C-1)\exp(x))\) is \(\lambda\)-strongly-convex for \(\lambda=e^{-O(C\alpha\beta)}\). Thus, for any subset \(S\subseteq[C]\), let \(\delta=\frac{|S|}{C}\); by Lemma 2.1:
\[-\frac{C}{C-1}\sum_{c\in S}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c} \leq\delta C(-\frac{1}{C-1}\overrightarrow{\mathbf{w}}\cdot\overrightarrow{\mathbf{h}})+C\sqrt{\frac{2\epsilon_{2}^{\prime}\delta(1-\delta)}{\lambda}}\] \[\sum_{c\in S}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c} \geq\delta\overrightarrow{\mathbf{w}}\cdot\overrightarrow{\mathbf{h}}-(C-1)\sqrt{\frac{2\epsilon_{2}^{\prime}\delta(1-\delta)}{\lambda}}\] \[\sum_{c\in S}\alpha_{c}\beta_{c} =\sum_{c\in[C]}\alpha_{c}\beta_{c}-\sum_{c\notin S}\alpha_{c}\beta_{c}\] \[\leq\sum_{c\in[C]}\alpha_{c}\beta_{c}-\sum_{c\notin S}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\] \[\leq C\alpha\beta-\sum_{c\in[C]-S}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\] \[\leq C\alpha\beta-(1-\delta)\overrightarrow{\mathbf{w}}\cdot\overrightarrow{\mathbf{h}}+(C-1)\sqrt{\frac{2\epsilon_{2}^{\prime}\delta(1-\delta)}{\lambda}}\]
Let \(\alpha_{c}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{c,i}\|^{2}}\) and \(\beta_{c}=\|\hat{\mathbf{w}}_{c}\|\). Note that since \(-\frac{1}{C-1}\overrightarrow{\mathbf{w}}\cdot\overrightarrow{\mathbf{h}}=- \frac{C}{C-1}\alpha\beta+\epsilon_{3}^{\prime}\), there is \(\overrightarrow{\mathbf{w}}\cdot\overrightarrow{\mathbf{h}}=C\alpha\beta-(C -1)\epsilon_{3}^{\prime}\). Therefore,
\[\sum_{c\in S}\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\geq\delta C \alpha\beta-\delta(C-1)\epsilon_{3}^{\prime}-(C-1)\sqrt{\frac{2\epsilon_{2}^ {\prime}\delta(1-\delta)}{\lambda}}\] \[\sum_{c\in S}\alpha_{c}\beta_{c}\leq\delta C\alpha\beta+(1- \delta)(C-1)\epsilon_{3}^{\prime}+(C-1)\sqrt{\frac{2\epsilon_{2}^{\prime} \delta(1-\delta)}{\lambda}}\]
Therefore, there are at most \(\delta C\) classes for which
\[\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\leq\alpha\beta-\frac{(C-1)}{C} \epsilon_{3}^{\prime}-\frac{C-1}{C}\sqrt{\frac{2\epsilon_{2}^{\prime}(1- \delta)}{\delta\lambda}} \tag{18}\]
and also there are at most \(\delta C\) classes for which
\[\alpha_{c}\beta_{c}\geq\alpha\beta+\frac{(1-\delta)(C-1)}{\delta C}\epsilon_{3 }^{\prime}+\frac{C-1}{C}\sqrt{\frac{2\epsilon_{2}^{\prime}(1-\delta)}{\delta \lambda}} \tag{19}\]
Thus, for at least \((1-2\delta)C\) classes, there is
\[\frac{\hat{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}}{\alpha_{c}\beta_{c}}\geq 1-( \frac{C-1}{C\alpha\beta})(\frac{\epsilon_{3}^{\prime}}{\delta}-2\sqrt{\frac{2 \epsilon_{2}^{\prime}(1-\delta)}{\delta\lambda}}) \tag{20}\]
By setting \(\epsilon_{2}^{\prime}=\epsilon\) and \(\epsilon_{3}^{\prime}=0\), we get the following lower bound:
\[\cos_{\angle}(\hat{\mathbf{w}}_{c},\tilde{\mathbf{h}}_{c})\geq 1-2\sqrt{\frac{2 \epsilon(1-\delta)}{\delta\lambda}}\]
Using \(\lambda=e^{-O(C\alpha\beta)}\) gives the NC3 bound in the theorem. Therefore, applying Lemma A.4 to these classes, there is
\[\text{intra}_{c}\geq 1-4(\frac{C-1}{C\alpha\beta})(\frac{\epsilon_{3}^{\prime }}{\delta}-2\sqrt{\frac{2\epsilon_{2}^{\prime}(1-\delta)}{\delta\lambda}})\]
Assuming that \(\epsilon\ll 1\), we have \(\epsilon\ll\sqrt{\epsilon}\). Therefore, the worst-case bound subject to \(\epsilon\geq\epsilon_{2}^{\prime}+\frac{\gamma^{\prime}}{1+\gamma^{\prime}}\epsilon_{3}^{\prime}\) is achieved when \(\epsilon_{2}^{\prime}=\epsilon\):
\[\text{intra}_{c}\geq 1-8(\frac{C-1}{C\alpha\beta})\sqrt{\frac{2\epsilon(1- \delta)}{\delta\lambda}}\]
Plug in \(\lambda=\exp(-O(C\alpha\beta))\) and with simplification we get:
\[\text{intra}_{c}\geq 1-\frac{\exp(O(C\alpha\beta))}{\alpha\beta}\sqrt{\frac{128 \epsilon}{\delta}}=1-O(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}\sqrt{\frac{ \epsilon}{\delta}})\]
Now consider the inter-class cosine similarity. Let \(m_{c}=-\frac{C}{C-1}\dot{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\); by Lemma A.2 we know that for any set \(S\) of \(\delta(C-1)\) classes in \([C]-\{c\}\), using the definition \(\dot{\mathbf{w}}_{c}=\mathbf{w}_{c}-\tilde{\mathbf{w}}\), there is
\[\sum_{c^{\prime}\in S}(\dot{\mathbf{w}}_{c^{\prime}}-\dot{\mathbf{w}}_{c}) \tilde{\mathbf{h}}_{c}=\sum_{c^{\prime}\in S}(\mathbf{w}_{c^{\prime}}- \mathbf{w}_{c})\tilde{\mathbf{h}}_{c}\leq\delta(C-1)m_{c}+(C-1)\sqrt{\frac{2 \delta\epsilon^{\prime}_{1,c}}{\exp(m_{c})}}\]
Therefore, for at least \((1-\delta)(C-1)\) classes, there is
\[(\dot{\mathbf{w}}_{c^{\prime}}-\dot{\mathbf{w}}_{c})\tilde{\mathbf{h}}_{c} \leq m_{c}+\sqrt{\frac{2\epsilon^{\prime}_{1,c}}{\exp(m_{c})\delta}}=-\frac{C}{C-1}\dot{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}+\sqrt{\frac{2\epsilon^{\prime}_{1,c}}{\exp(m_{c})\delta}} \tag{21}\] \[\dot{\mathbf{w}}_{c^{\prime}}\tilde{\mathbf{h}}_{c} \leq-\frac{1}{C-1}\dot{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}+\sqrt{\frac{2\epsilon^{\prime}_{1,c}}{\exp(m_{c})\delta}} \tag{22}\]
Combining with (18) and (19), we get that there are at least \((1-2\delta)C\times(1-3\delta)C\geq(1-5\delta)C^{2}\) pairs of classes \(c,c^{\prime}\) that satisfy the following: for both \(c\) and \(c^{\prime}\), equations (18) and (19) are not satisfied (i.e., they are satisfied in the reverse direction), and (21) is satisfied for the pair \(c^{\prime},c\). Note that this implies
\[m_{c}=-\frac{C}{C-1}\dot{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\leq-\frac{C}{C- 1}\alpha\beta+\epsilon^{\prime}_{3}+\sqrt{\frac{2\epsilon^{\prime}_{2}(1- \delta)}{\delta\lambda}}\]
and
\[\dot{\mathbf{w}}_{c^{\prime}}\tilde{\mathbf{h}}_{c}\leq-\frac{\alpha\beta}{C- 1}+\frac{1}{C}(\epsilon^{\prime}_{3}+\sqrt{\frac{2\epsilon^{\prime}_{2}(1- \delta)}{\delta\lambda}})+\sqrt{\frac{2\epsilon^{\prime}_{1,c}}{\exp(m_{c}) \delta}}\]
We now seek to simplify the above bounds using the constraint that \(\epsilon\geq\frac{1}{C}\sum_{c=1}^{C}\frac{\epsilon^{\prime}_{1,c}}{C}+ \epsilon^{\prime}_{2}+\frac{\gamma^{\prime}}{1+\gamma^{\prime}}\epsilon^{ \prime}_{3}\). Note that \(\epsilon\ll\sqrt{\epsilon}\), and both \(\lambda\) and \(\exp(m_{c})\) are \(\exp(-O(C\alpha\beta))\), therefore, we can achieve the maximum bound by setting \(\epsilon^{\prime}_{1,c}=\epsilon\),
\[\dot{\mathbf{w}}_{c^{\prime}}\tilde{\mathbf{h}}_{c}\leq-\frac{\alpha\beta}{C- 1}+\exp(O(C\alpha\beta))\sqrt{\frac{2\epsilon}{\delta}}\]
Similarly, we can achieve the smallest bound on \(\alpha_{c}\beta_{c}\) (the reverse of (19)) by setting \(\epsilon_{2}^{\prime}=\epsilon\); using \(\lambda=\exp(-O(C\alpha\beta))\), we get for both \(c\) and \(c^{\prime}\)
\[\alpha_{c}\beta_{c}\leq\alpha\beta+\exp(O(C\alpha\beta))\sqrt{\frac{2\epsilon}{\delta}}\]
and obtain the reverse of (18) by setting \(\epsilon_{2}^{\prime}=\epsilon\); we get for both \(c\) and \(c^{\prime}\):
\[\dot{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\geq\alpha\beta-\exp(O(C\alpha\beta))\sqrt{\frac{2\epsilon}{\delta}}\]
Therefore, we can apply Lemma A.5 with \(\alpha=\alpha_{c}\), \(\beta=\beta_{c}\), and \(\epsilon^{\prime}=\alpha_{c}\beta_{c}-\dot{\mathbf{w}}_{c}\tilde{\mathbf{h}}_{c}\leq 2\exp(O(C\alpha\beta))\sqrt{\frac{2\epsilon}{\delta}}\) to get:
\[\cos_{\angle}(\dot{\mathbf{w}}_{c^{\prime}},\tilde{\tilde{\mathbf{ h}}}_{c}) \leq-\frac{1}{C-1}+\frac{C}{C-1}\frac{\exp(O(C\alpha\beta))}{\alpha \beta}\sqrt{\frac{2\epsilon}{\delta}}+4(\frac{2\exp(O(C\alpha\beta))}{\alpha \beta}\sqrt{\frac{2\epsilon}{\delta}})^{1/3}\] \[\leq-\frac{1}{C-1}+O(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}( \frac{\epsilon}{\delta})^{1/6})\]
where the last inequality is because \(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}>1\) and \(\frac{\epsilon}{\delta}<1\). Finally, we derive an upper bound on \(\cos_{\angle}(\tilde{\tilde{\mathbf{h}}}_{c^{\prime}},\tilde{\tilde{\mathbf{h}}}_{c})\), and thus on the inter-class cosine similarity, by combining the above bounds. Note that for \(\frac{\pi}{2}<a<\pi\) and \(0<b<\frac{\pi}{2}\) we have:
\[\cos(a-b) =\cos(a)\cos(b)+\sin(a)\sin(b)\] \[\leq\cos(a)+\sin(b)\] \[\leq\cos(a)+\sqrt{1-\cos^{2}(b)}\] \[\leq\cos(a)+\sqrt{2-2\cos(b)}\] Here the last step uses \(1-\cos^{2}(b)=(1-\cos(b))(1+\cos(b))\leq 2(1-\cos(b))\).
by (20) we get that
\[\cos_{\angle}(\dot{\mathbf{w}}_{c^{\prime}},\tilde{\mathbf{h}}_{c^{\prime}}) \geq 1-(\frac{C-1}{C\alpha\beta})(\frac{\epsilon_{3}^{\prime}}{\delta}-2 \sqrt{\frac{2\epsilon_{2}^{\prime}(1-\delta)}{\delta\lambda}})\geq 1-\frac{ \exp(O(C\alpha\beta))}{\alpha\beta}\sqrt{\frac{2\epsilon}{\delta}}\]
Therefore,
\[\cos_{\angle}(\tilde{\mathbf{h}}_{c^{\prime}},\tilde{\mathbf{h}} _{c}) \leq-\frac{1}{C-1}+O(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}( \frac{\epsilon}{\delta})^{1/6})+O((\frac{e^{O(C\alpha\beta)}}{\alpha\beta})^{1 /2}(\frac{\epsilon}{\delta})^{1/4})\] \[\leq-\frac{1}{C-1}+O(\frac{e^{O(C\alpha\beta)}}{\alpha\beta}( \frac{\epsilon}{\delta})^{1/6})\]
Since \(\|\tilde{\mathbf{h}}_{c}\|\leq 1\), there is
\[\tilde{\mathbf{h}}_{c^{\prime}}\cdot\tilde{\mathbf{h}}_{c}=\|\tilde{\mathbf{h }}_{c^{\prime}}\|\|\tilde{\mathbf{h}}_{c}\|\cos_{\angle}(\tilde{\mathbf{h}}_{c ^{\prime}},\tilde{\mathbf{h}}_{c})\leq-\frac{1}{C-1}+O(\frac{e^{O(C\alpha \beta)}}{\alpha\beta}(\frac{\epsilon}{\delta})^{1/6})\]
Applying A.3 shows the bound on inter-class cosine similarity. Note that although this bound holds only for \(1-5\delta\) fraction of pairs of classes, changing the fraction to \(1-\delta\) only changes \(\delta\) by a constant factor and does not affect the asymptotic bound.
### Proof of Theorem 2.2
**Theorem 2.2**.: _For an unbiased neural network classifier trained on a dataset with the number of classes \(C\geq 3\) and samples per class \(N\geq 1\), under the following assumptions:_
1. _The network contains an unbiased batch normalization layer before the final layer with trainable weight vector_ \(\boldsymbol{\gamma}\)_;_
2. _The layer-peeled regularized cross-entropy loss with weight decay_ \(\lambda<\frac{1}{\sqrt{C}}\) \[\mathcal{L}_{\mathrm{reg}}=\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}\mathcal{L }_{\mathrm{CE}}\left(f(\boldsymbol{x}_{c,i};\boldsymbol{\theta}),\boldsymbol{ y}_{c}\right)+\frac{\lambda}{2}(\|\boldsymbol{\gamma}\|^{2}+\|\mathbf{W}\|_{F}^{2})\] _satisfies_ \(\mathcal{L}_{\mathrm{reg}}\leq m_{\mathrm{reg}}+\epsilon\) _for small_ \(\epsilon\)_, where_ \(m_{\mathrm{reg}}\) _is the minimum achievable regularized loss;_
_then for at least \(1-\delta\) fraction of all classes, with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{intra}_{c}\geq 1-O(e^{O(C/\lambda)}\sqrt{\frac{\epsilon}{\delta}})\]
_and for at least \(1-\delta\) fraction of all pairs of classes \(c,c^{\prime}\), with \(\frac{\epsilon}{\delta}\ll 1\), there is_
\[\text{inter}_{c,c^{\prime}}\leq-\frac{1}{C-1}+O(e^{O(C/\lambda)}(\frac{ \epsilon}{\delta})^{1/6})\]
Proof.: Let \(\boldsymbol{\gamma}^{*}\) and \(\boldsymbol{W}^{*}\) be the weight vector and weight matrix that achieve the minimum achievable regularized loss. Let \(\alpha=\|\boldsymbol{\gamma}\|\) and \(\beta=\frac{\|\boldsymbol{W}\|_{F}}{\sqrt{C}}\), and let \(\alpha^{*}\) and \(\beta^{*}\) denote the corresponding values at the minimum loss. According to Proposition 2.1, we know that \(\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{h}_{i}\|_{2}^{2}}=\|\boldsymbol{\gamma} \|_{2}=\alpha\).
From Theorem 2.1 we know that, under fixed \(\alpha\beta\), the minimum achievable unregularized loss is \(\log(1+(C-1)\exp(-\frac{C}{C-1}\alpha\beta))\). Since only the product \(\gamma=\alpha\beta\) is of interest to Theorem 2.1, we make the following observation:
\[\mathcal{L}_{\mathrm{reg}} =\frac{1}{CN}\sum_{c=1}^{C}\sum_{i=1}^{N}\mathcal{L}_{\mathrm{CE} }\left(f(\mathbf{x}_{c,i};\mathbf{\theta}),\mathbf{y}_{c}\right)+\frac{\lambda}{2}(\|\mathbf{ \gamma}\|^{2}+\|\textbf{W}\|_{F}^{2})\] \[\geq\log(1+(C-1)\exp(-\frac{C}{C-1}\alpha\beta))+\frac{\lambda}{2 }(\alpha^{2}+C\beta^{2})\] \[\geq\log(1+(C-1)\exp(-\frac{C}{C-1}\gamma))+\sqrt{C}\lambda\gamma\] \[\geq\min_{\gamma}\log(1+(C-1)\exp(-\frac{C}{C-1}\gamma))+\sqrt{C }\lambda\gamma\]
Here the second inequality uses the AM-GM inequality \(\alpha^{2}+C\beta^{2}\geq 2\sqrt{C}\alpha\beta\). Now we analyze the properties of this function. For simplicity, we absorb \(\sqrt{C}\lambda\) into \(\lambda\) in the following proposition:
**Proposition A.1**.: _The function \(f_{\lambda}(\gamma)=\log\left(1+(C-1)\exp(-\frac{C}{C-1}\gamma)\right)+\lambda\gamma\) has minimum value_
\[f_{\lambda}(\gamma^{*})=-\log(1-\frac{C-1}{C}\lambda)+\frac{C-1}{C}\lambda\log \left(\frac{C-(C-1)\lambda}{\lambda}\right)\]
_achieved at \(\gamma^{*}=O(\log(\frac{1}{\lambda}))\) for \(\lambda<1\). Furthermore, for any \(\gamma\) such that \(f_{\lambda}(\gamma)-f_{\lambda}(\gamma^{*})\leq\epsilon\ll\lambda\), there is \(|\gamma-\gamma^{*}|\leq\sqrt{O(1/\lambda)\epsilon}\)_
Proof.: Consider the optimum of the function by setting the derivative to 0:
\[f_{\lambda}^{\prime}(\gamma) =-\frac{C}{C-1}\frac{(C-1)\exp(-\frac{C}{C-1}\gamma)}{\left(1+(C- 1)\exp(-\frac{C}{C-1}\gamma)\right)}+\lambda=0\] \[\frac{C-1}{C}\lambda =1-\frac{1}{1+(C-1)\exp(-\frac{C}{C-1}\gamma)}\] \[1+(C-1)\exp(-\frac{C}{C-1}\gamma) =\frac{1}{1-\frac{C-1}{C}\lambda}\] \[\gamma =\frac{C-1}{C}\log\left(\frac{C-(C-1)\lambda}{\lambda}\right)=O( \log(\frac{C}{\lambda}))\]
Plugging in \(\gamma^{*}=\frac{C-1}{C}\log\left(\frac{C-(C-1)\lambda}{\lambda}\right)\) to the original formula we get:
\[f_{\lambda}(\gamma^{*})=-\log(1-\frac{C-1}{C}\lambda)+\frac{C-1}{C}\lambda\log \left(\frac{C-(C-1)\lambda}{\lambda}\right)\]
Note that since \(\gamma\geq 0\), the optimum point is only positive when \(\lambda\leq 1\).
Now consider the case where the loss is near-optimal and \(\gamma=\gamma^{*}+\epsilon^{\prime}\) for \(\epsilon^{\prime}\ll 1\):
\[\log\left(1+(C-1)\exp(-\frac{C}{C-1}(\gamma^{*}+\epsilon^{\prime} ))\right)+\lambda(\gamma^{*}+\epsilon^{\prime})\] \[\geq \log\left(1+(C-1)\exp(-\frac{C}{C-1}\gamma^{*})(1-\frac{C}{C-1} \epsilon^{\prime}+\frac{\epsilon^{\prime 2}}{2})\right)+\lambda(\gamma^{*}+\epsilon^{ \prime})\] \[\geq \log\left(1+(C-1)\exp(-\frac{C}{C-1}\gamma^{*})\right)+\frac{(C- 1)\exp(-\frac{C}{C-1}\gamma^{*})}{\left(1+(C-1)\exp(-\frac{C}{C-1}\gamma^{*}) \right)}(-\frac{C}{C-1}\epsilon^{\prime}+\frac{\epsilon^{\prime 2}}{2})+\lambda(\gamma^{*}+ \epsilon^{\prime})\]
By definition of \(\gamma^{*}\), the linear term w.r.t. \(\epsilon^{\prime}\) must cancel out. Also, by plugging in \(\gamma^{*}=O(\log(\sqrt{C}/\lambda))\) the coefficient of \(\frac{\epsilon^{\prime 2}}{2}\) is \(1-\frac{1}{1+\Omega(\lambda)}=\Omega(\lambda)\). Therefore,
\[\log\left(1+(C-1)\exp(-\frac{C}{C-1}(\gamma^{*}+\epsilon^{\prime} ))\right)+\lambda(\gamma^{*}+\epsilon^{\prime})\] \[\geq \log\left(1+(C-1)\exp(-\frac{C}{C-1}\gamma^{*})\right)+\lambda \gamma^{*}+\Omega(\lambda)\epsilon^{\prime 2}\]
Conversely, for any \(\epsilon\ll 1\) for which \(f_{\lambda}(\gamma)\leq f_{\lambda}(\gamma^{*})+\epsilon\), there must be \(|\gamma-\gamma^{*}|\leq\sqrt{\frac{\epsilon}{\Omega(\lambda)}}=\sqrt{O(1/\lambda)\epsilon}\)
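As a quick numerical sanity check of Proposition A.1 (a standalone sketch; the values \(C=10\) and \(\lambda=0.05\) are illustrative, and NumPy/SciPy are assumed available), the closed-form minimizer and minimum value agree with a direct numerical minimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

C, lam = 10, 0.05  # illustrative values with lam < 1

# f_lambda(gamma) from Proposition A.1
f = lambda g: np.log(1 + (C - 1) * np.exp(-C / (C - 1) * g)) + lam * g

# closed-form minimizer and minimum value derived above
g_star = (C - 1) / C * np.log((C - (C - 1) * lam) / lam)
f_star = (-np.log(1 - (C - 1) / C * lam)
          + (C - 1) / C * lam * np.log((C - (C - 1) * lam) / lam))

num = minimize_scalar(f, bounds=(0.0, 100.0), method="bounded")
print(g_star, num.x)     # both approximately 4.727
print(f_star, f(num.x))  # both approximately 0.282
```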
Thus, the minimum achievable value of the regularized loss is
\[m_{\mathrm{reg}}=-\log(1-\frac{C-1}{\sqrt{C}}\lambda)+\frac{C-1}{\sqrt{C}} \lambda\log\left(\frac{\sqrt{C}}{\lambda}-(C-1)\right)\]
Now, consider any \(\mathbf{W}\) and \(\boldsymbol{\gamma}\) that achieves near-optimal regularized loss \(\mathcal{L}_{\mathrm{reg}}=m_{\mathrm{reg}}+\epsilon\) for very small \(\epsilon\). Recall that \(\alpha=\|\boldsymbol{\gamma}\|\), \(\beta=\frac{\|\mathbf{W}\|_{F}}{\sqrt{C}}\), \(\gamma=\alpha\beta\). According to Proposition A.1 we know that \(|\gamma-\gamma^{*}|\leq\sqrt{O(1/\sqrt{C}\lambda)\epsilon}\ll\frac{1}{\lambda}\). Therefore, \(\gamma\leq\gamma^{*}+O(\frac{1}{\lambda})=O(\log(\sqrt{C}/\lambda))+O(\frac{1 }{\lambda})=O(1/\lambda)\). Also, note that \(\mathcal{L}_{\mathrm{reg}}-f_{\sqrt{C}\lambda}(\gamma)\leq\mathcal{L}_{ \mathrm{reg}}-f_{\sqrt{C}\lambda}(\gamma^{*})=\epsilon\), where \(f_{\sqrt{C}\lambda}(\gamma)\) is the minimum unregularized loss according to Theorem 2.1. Therefore, we can apply Theorem 2.1 with \(\alpha\beta=O(\frac{1}{\lambda})\) and the same \(\epsilon\) to get:
\[\text{intra}_{c}\geq 1-O(\frac{e^{O(C\gamma)}}{\gamma}\sqrt{\frac{\epsilon}{ \delta}})\geq 1-O(e^{O(C/\lambda)}\sqrt{\frac{\epsilon}{\delta}})\]
and
\[\text{inter}_{c,c^{\prime}}\leq-\frac{1}{C-1}+O(\frac{e^{O(C\gamma)}}{\gamma}( \frac{\epsilon}{\delta})^{1/6})\leq-\frac{1}{C-1}+O(e^{O(C/\lambda)}(\frac{ \epsilon}{\delta})^{1/6})\]
## Appendix B Additional Experiments
This section presents more comprehensive experimental results that support our conclusion.
### Experiments on Synthetic Datasets
#### b.1.1 Experimental Setup
Our results in the main paper show the intra-class and inter-class cosine similarity results for 3-layer and 6-layer multi-layer perceptrons on the conic hull datasets. To further investigate the effect of Batch Normalization and Weight Decay on more complex synthetic datasets, we randomly initialize the weights of 3-layer and 6-layer MLP networks with the same architecture as the models used in training. We then sample random vectors from a standard Gaussian distribution and use the index of the maximum element of the output of the randomly initialized MLP as the label. The datasets generated using the 3-layer and 6-layer randomly initialized models are called the MLP3 and MLP6 datasets, respectively. Our intuition is that by generating data using a randomly initialized network, we can control the complexity of the underlying distribution, unlike vision datasets such as MNIST (LeCun et al., 2010) and CIFAR10 (Krizhevsky, 2009), where the distribution cannot be strictly defined. We run our experiments on models of 3 different depths (3, 6, 9). For each model depth, we create a version with batch normalization between each pair of adjacent hidden layers and a version without any batch normalization. We use 8000 training samples sampled from each distribution (the conic hull dataset, the MLP3 dataset, and the MLP6 dataset). Other hyperparameters are the same as described in the main paper. All experiments in this subsection are performed on Google Colab.
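The dataset construction above can be sketched as follows (a minimal illustration; the hidden width, the assumed 10 output classes, and the seed are our illustrative choices, not values fixed by the paper):

```python
import torch
import torch.nn as nn

def make_mlp_dataset(depth=3, dim=64, n_classes=10, n_samples=8000, seed=0):
    """Label standard-Gaussian inputs with a randomly initialized MLP,
    following the MLP3 / MLP6 construction described above."""
    torch.manual_seed(seed)
    layers = []
    for _ in range(depth - 1):
        layers += [nn.Linear(dim, dim), nn.ReLU()]
    layers += [nn.Linear(dim, n_classes)]
    teacher = nn.Sequential(*layers)  # randomly initialized, never trained
    x = torch.randn(n_samples, dim)
    with torch.no_grad():
        y = teacher(x).argmax(dim=1)  # label = index of the maximum output
    return x, y

x3, y3 = make_mlp_dataset(depth=3)  # the MLP3 dataset
x6, y6 = make_mlp_dataset(depth=6)  # the MLP6 dataset
```

|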
2309.12998 | Audience-specific Explanations for Machine Translation | In machine translation, a common problem is that the translation of certain
words even if translated can cause incomprehension of the target language
audience due to different cultural backgrounds. A solution to solve this
problem is to add explanations for these words. As a first step, we therefore
need to identify these words or phrases. In this work we explore techniques to
extract example explanations from a parallel corpus. However, the sparsity of
sentences containing words that need to be explained makes building the
training dataset extremely difficult. In this work, we propose a semi-automatic
technique to extract these explanations from a large parallel corpus.
Experiments on the English->German language pair show that our method is able to
extract sentences so that more than 10% of the sentences contain explanations,
while only 1.9% of the original sentences contain explanations. In addition,
experiments on English->French and English->Chinese language pairs also show
similar conclusions. This is therefore an essential first automatic step to
create an explanation dataset. Furthermore, we show that the technique is robust
for all three language pairs. | Renhan Lou, Jan Niehues | 2023-09-22T17:00:45Z | http://arxiv.org/abs/2309.12998v1 | # Audience-specific Explanations for Machine Translation
###### Abstract
In machine translation, a common problem is that the translation of certain words, even if translated, can cause incomprehension in the target language audience due to different cultural backgrounds. A solution to this problem is to add explanations for these words. As a first step, we therefore need to identify these words or phrases. In this work we explore techniques to extract example explanations from a parallel corpus. However, the sparsity of sentences containing words that need to be explained makes building the training dataset extremely difficult. In this work, we propose a semi-automatic technique to extract these explanations from a large parallel corpus. Experiments on the English\(\rightarrow\)German language pair show that our method is able to extract sentences such that more than 10\(\%\) of the sentences contain an explanation, while only 1.9\(\%\) of the original sentences contain explanations. In addition, experiments on the English\(\rightarrow\)French and English\(\rightarrow\)Chinese language pairs show similar conclusions. This is therefore an essential first automatic step towards creating an explanation dataset. Furthermore, we show that the technique is robust for all three language pairs.
We conduct experiments on English\(\rightarrow\)German, English\(\rightarrow\)French and English\(\rightarrow\)Chinese language pairs. The results show that the method we propose can greatly reduce the final manual selection work, and at the same time, it can stably and efficiently find the target sentences among the last remaining sentences for all three language pairs. This reduces the difficulty of building a training dataset and also facilitates the training of models in the future. 1
Footnote 1: Code and data available at: [https://github.com/RHL1014/Audience-specific-Explanations-for-MT](https://github.com/RHL1014/Audience-specific-Explanations-for-MT)
The contributions of our work are as follows:
1. We propose a method for finding sentences with words that need to be explained. Using this method we can find the target sentences stably and efficiently.
2. This method works for multiple language pairs, and the final effect is independent of the input data distribution. This shows that the method is robust.
## 2 Audience-specific Explanations
We want to address the problem of eliminating the target language audience's incomprehension during translation. Following the solution used by human translators, this problem can be reformulated as a more specific one: how can we model the audience's specific need for additional information during translation? This means we need to develop a model that can predict which words will cause incomprehension for the target language audience during translation. In order to accurately predict the words that need to be explained in machine translation, it is necessary to build a dataset for training and evaluation. In this paper we develop a methodology to identify these words in a parallel corpus. In a second step, this data can then be used to train and test models that are able to predict these words.
Here is an example of translation with explanation for English\(\rightarrow\)German:
1. **En**: **John Bunyan** said, "He who runs from God in the morning will scarcely find Him the rest of the day." **De**: **John Bunyan**, _der Autor der bekannten Pilgerreise_, hat einmal gesagt: „Wer morgens vor Gott wegläuft, wird Ihn den Rest des Tages kaum noch finden." In the German translation, "_der Autor der bekannten Pilgerreise_" is the explanation for **John Bunyan**, telling the German audience that **John Bunyan** is a writer. We started with an initial manual inspection of explanations. The main challenge is that these explanations are extremely infrequent. In addition, the uncertainty in the position of the explanation further increases the difficulty of finding sentence pairs with explanations. To reduce this complexity, we only consider the most common form of sentence pairs with explanations, that is, the case where the explanation immediately follows the word that needs to be explained. Based on our initial investigation, we identified several key characteristics of sentences containing explanations. Based on these characteristics, we build filters to find such sentences:
1. The word being explained or the word in the phrase being explained is rare in the target language.
2. The explanation is a redundant part of the sentence in the target language.
3. The explanation follows the word or phrase being explained.
4. The explanation contains punctuation.
5. Words that differ from the word or phrase being explained are also included in the explanation.
6. The word or phrase being explained is more likely to be a named entity.
7. Information about words or phrases that need to be explained can be found using Wikipedia.
## 3 Identifying candidate source phrases
Based on the summarized characteristics of sentence pairs with explanations, we propose a heuristic method for efficiently searching for sentence pairs with explanations. Considering the sparsity of the sentence pairs with explanations that need to be found, the goal of this method is to find as many sentence pairs with explanations as possible while minimizing the number of sentence pairs without explanations.
This heuristic method is divided into four processes. The first process is to identify words that may need explanation based on corpus statistics. The second process is to identify sentence pairs that may contain an explanation with the help of word alignment. After utilizing this internal knowledge, we also integrate external knowledge. The third process is to use a named entity recognition (NER) model to identify the words that need to be explained. The last process is to exploit Wikipedia to identify target sentence pairs more accurately.
### Filtering based on corpus statistics
Intuitively, when translating, if a word is rare in the target language, it is more likely to be explained than other words. In order to decide which words
are rare, the word count within a certain range can be used. A word can be considered rare if its count is below a certain threshold. For the purpose of finding as many rare words as possible, the word count over all Wikipedia articles is used to check whether a word is rare. However, if only uncommon words in the target language are considered, many non-candidates are introduced in the experiment; therefore, the word count in the source language must be considered in addition to the word count in the target language.
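A minimal sketch of this frequency filter is given below; the `Counter`-based implementation and the toy corpus are illustrative (in the experiments, the counts come from all Wikipedia articles), while the threshold of 15000 matches the initial setting in Section 4.3:

```python
from collections import Counter

def build_counts(tokenized_sentences):
    """Count word occurrences over a tokenized corpus (list of token lists)."""
    counts = Counter()
    for tokens in tokenized_sentences:
        counts.update(tokens)
    return counts

def rare_word_positions(tokens, counts, threshold=15000):
    """Indices of tokens whose corpus count is below the rarity threshold."""
    return [i for i, tok in enumerate(tokens) if counts[tok] < threshold]

corpus = [["John", "Bunyan", "said", "hello"], ["hello", "world"]]
counts = build_counts(corpus)
print(rare_word_positions(["John", "Bunyan", "said", "hello"], counts, threshold=2))
```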
### Integrating word alignment inference
Although the final model for identifying the words that need to be explained does not have access to the translation, we can use the target side to identify examples in the parallel data; in this setting, the target side is in fact the most valuable source of information. With the help of the word alignment of the determined rare word and the word following it, the corresponding words in the target language sentence and their positions in the sentence can be found. If there is a redundant part between the corresponding words in the target language sentence, it can be assumed that the redundant part may contain an explanation for the rare word.
The explanation part is often accompanied by punctuation marks, such as commas and parentheses. This means that it is possible to determine whether a redundant part contains an explanation by checking for possible punctuation in the redundant part. In addition, as the explanation contains additional information about the object being explained, the explanation should contain other words besides the explained word, so it can be judged whether there is an explanation in the redundant part by checking the words in the redundant part and their word alignment. If the redundant part also contains words other than the explained word, and none of the words in the redundant part have a word alignment, the redundant part can be considered as likely to contain the true explanation.
Depending on the form of the sentence pairs that need to be identified and the summarized characteristics, we can visualize a candidate sentence pair with an explanation. Figure 1 shows an ideal candidate sentence pair.
Consider the source language sentence \(s\) and the corresponding target language translation sentence \(t\). The length of sentence \(s\) is \(N\), which means it is composed of \(N\) tokens. Similarly, the length of sentence \(t\) is \(M\). In sentence \(s\), the \(k\)-th token \(token_{k}\) is an uncommon word; meanwhile, the \(m\)-th token \(token_{m}\) aligned with \(token_{k}\) in sentence \(t\) is also a rare word. The next token \(token_{k+1}\) after \(token_{k}\) in sentence \(s\) is aligned with token \(token_{m+n+1}\) in sentence \(t\), and in sentence \(t\), \(token_{m+n+1}\) is not the token immediately following \(token_{m}\), which means that there is a redundant part of length \(n\) after \(token_{m}\) in sentence \(t\). The redundant part from \(token_{m+1}\) to \(token_{m+n}\) contains punctuation marks; for example, \(token_{m+1}\) is likely a comma or a parenthesis. All tokens from \(token_{m+1}\) to \(token_{m+n}\) in the redundant part should have no word alignment results, and the redundant part should contain words other than \(token_{m}\).
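The following sketch implements this test; it assumes alignments are given as (source position, target position) pairs, as produced by tools such as awesome-align, and the handling of one-to-many alignments and the punctuation check are illustrative simplifications:

```python
import string

def has_candidate_explanation(align, k, tgt_tokens, min_len=3):
    """Check whether the target-side span between the tokens aligned to
    source positions k and k+1 looks like an explanation."""
    src2tgt = {}
    for s, t in align:
        src2tgt.setdefault(s, []).append(t)
    if k not in src2tgt or (k + 1) not in src2tgt:
        return False
    m = max(src2tgt[k])           # target position of the rare word
    m_next = min(src2tgt[k + 1])  # target position aligned to the next source token
    gap = list(range(m + 1, m_next))  # the redundant part
    if len(gap) < min_len:
        return False
    aligned_tgt = {t for _, t in align}
    words = [tgt_tokens[j] for j in gap]
    has_punct = any(w in string.punctuation for w in words)
    unaligned = all(j not in aligned_tgt for j in gap)
    has_content = any(w not in string.punctuation and w != tgt_tokens[m] for w in words)
    return has_punct and unaligned and has_content
```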
### Using NER
From the summarized characteristics about the target sentence pair we know that the word or phrase being explained is more likely to be a named entity. This provides another way to determine candidate sentence pairs. If the named entities in a sentence, such as person names, place names or organization names, can be identified and located, then the range of candidate sentences can be narrowed down better. Named entity recognition (NER) is an effective tool for identifying named entity in a sentence. NER can also give the location of each named entity. So NER can be used to further identify possible candidates while also reducing the number of non-candidates.
The word being explained is either itself a named entity, or it is part of a named entity. So the candidate sentence pairs can be further determined by comparing the recognized named entities with the previously confirmed words that may be explained.
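As a minimal sketch with spaCy (the model name is an illustrative choice and must be downloaded separately, e.g., via `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def named_entity_containing(sentence, rare_word):
    """Return the named entity containing the rare word, or None.
    A candidate is kept only if the rare word is (part of) a named entity."""
    doc = nlp(sentence)
    for ent in doc.ents:
        if rare_word in ent.text.split():
            return ent.text
    return None

print(named_entity_containing("John Bunyan said he would come.", "Bunyan"))
```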
### Using Wikipedia
After NER, all the named entities in a sentence are available. Based on these named entities, another method that can further identify target sentence pairs with explanations is to use Wikipedia. This step is inspired by the work of Nothman et al. (2008, 2009, 2013), which confirms that Wikipedia can be used to build datasets for training NER models and that the resulting NER models can achieve optimal results.
For most phrases that need to be explained, corresponding articles can be found in Wikipedia, so the titles of Wikipedia articles can be used to determine target sentence pairs. If a source language named entity is the title of a Wikipedia article, then it is likely a candidate that needs to be explained. However, this only considers the source language side; if the target language is also taken into account, candidates can be identified more precisely. If a source language named entity is the title of a Wikipedia article, and the corresponding target language named entity is not the title
Figure 1: Candidate sentence pair
of a Wikipedia article, this means that the source language audience has a source of information to understand this named entity, while the target language audience does not have a corresponding information source. In this case, this named entity is more likely to be a good candidate that needs to be explained.
On the other hand, in addition to the titles of Wikipedia articles, the Wikipedia articles themselves can be used to determine candidates. If both the named entity in the source language and the corresponding named entity in the target language are titles of Wikipedia articles, then the articles corresponding to the titles can be compared. More precisely, candidates can be determined by comparing the sizes of the Wikipedia articles. If the size of the Wikipedia article in the source language is larger than the size of the Wikipedia article in the target language, then although both the source and target language audiences have a source of information to understand the candidate, the candidate is less common in the target language, and the title of the source language Wikipedia article might therefore be a good candidate that needs to be explained. Figure 2 shows the process of using Wikipedia to identify candidates.
There are many text comparisons in this step, for example, the comparisons between named entities and Wikipedia titles. The possible form inconsistencies between them will make the comparison difficult. In order to simplify the text comparison when using Wikipedia, the stemming algorithm will be applied.
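The decision rule can be summarized as in the sketch below; representing the parsed Wikipedia dumps as mappings from stemmed article titles to article sizes is our illustrative simplification:

```python
from nltk.stem.snowball import SnowballStemmer

src_stemmer = SnowballStemmer("english")
tgt_stemmer = SnowballStemmer("german")

def keep_candidate(src_entity, tgt_entity, src_titles, tgt_titles):
    """src_titles / tgt_titles: dicts from stemmed Wikipedia titles to
    article sizes (in bytes), built from the Wikipedia dumps."""
    s = " ".join(src_stemmer.stem(w) for w in src_entity.split())
    t = " ".join(tgt_stemmer.stem(w) for w in tgt_entity.split())
    if s not in src_titles:
        return False                      # no information source on either side
    if t not in tgt_titles:
        return True                       # only the source audience has an article
    return src_titles[s] > tgt_titles[t]  # the source article is larger
```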
## 4 Evaluation
### Setup
We choose CCMatrix (Schwenk et al., 2019; Fan et al., 2021) as the corpus required for the experiment. All CCMatrix data are downloaded from OPUS (Tiedemann, 2012). The version of Wikipedia database backup dumps used in the experiments is 20221101. We use wikiextractor 2 to extract Wikipedia articles. Meanwhile, we use the tool wikipedia-parallel-titles 3 to create the Wikipedia parallel titles corpus. For preprocessing, we choose spaCy (Honnibal et al., 2020) as the word tokenization tool. For Chinese word tokenization, we use pkuseg (Luo et al., 2019) under the framework of spaCy. We use awesome-align (Dou and Neubig, 2021) to extract word alignment results. We choose the Snowball algorithm from NLTK (Bird et al., 2009) to perform stemming to simplify text comparison. When using word alignment to find the target sentence pair, the length of the redundant part in the target language sentence is set to be greater than or equal to 3.
Footnote 2: [https://github.com/attardi/wikiextractor](https://github.com/attardi/wikiextractor)
Footnote 3: [https://github.com/clab/wikipedia-parallel-titles](https://github.com/clab/wikipedia-parallel-titles)
### Evaluation Metric
Filtering the target sentences from the corpus to construct the training dataset is a classification problem. Therefore, a translation metric such as BLEU does not apply here. For a classification problem, the F1-score is a good evaluation metric. The calculation of the F1-score requires the number of positive examples and the number of negative examples. However, due to the sparsity of sentences with explanations, we only focus on whether the target sentences can be found, which means that in our experiments the evaluation of the method for finding target sentences only involves sentences with explanations, i.e., only the number of positive examples is considered. In other words, we compute the F1-score with respect to the manually confirmed positive examples only.
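For concreteness, and consistent with Tables 1 and 2 below, treating the 173 manually confirmed pairs as the full set of positives gives, after the initial filtering,

\[P=\frac{173}{8977}\approx 1.93\%,\qquad R=\frac{173}{173}=100\%,\qquad F1=\frac{2PR}{P+R}\approx 3.78\%,\]

and, after the word-count step, \(P=134/3102\approx 4.32\%\), \(R=134/173\approx 77.5\%\), and thus \(F1\approx 8.18\%\).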
### Initial Result
We first focus only on the English\(\rightarrow\)German (En\(\rightarrow\)De) language pair. In order to compare and evaluate the subsequent steps of our proposed method, we first run our method to the step before NER, i.e., using only word counts and word alignments to find target sentence pair candidates. The first 5 million sentence pairs from the corpus are taken as the input, and the word count thresholds for both source and target languages are set to 15000. This means that a word is considered rare if its count is below 15000. The results are in Table 1.
Table 1 shows that for En\(\rightarrow\)De there are still 8977 sentence pairs remaining. On
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**En\(\rightarrow\)De** & **Num. of sentences** & **F1** \\ \hline Total & 5000000 & - \\ \hline
1. Initial result & 8977/173 & \(3.78\%\) \\ \hline \end{tabular}
\end{table}
Table 1: The initial results of the En\(\rightarrow\)De language pair
Figure 2: An example of identifying candidates using Wikipedia
the basis of the remaining sentence pairs, manual work is performed to select the sentence pairs that contain explanations. Finally, 173 sentence pairs with explanations are found among the 8977 remaining sentence pairs. This means that only 1.93\(\%\) of these remaining sentence pairs contain explanations. The F1-score is also calculated and given in the table.
### Follow-up results
Based on the results in Table 1, the subsequent experiments can be performed. Since the models provided by different NER tools and different word count thresholds will affect the final experimental results, different combinations of NER models and thresholds need to be considered and compared to obtain the best results. We compared the performance of the NER models from Flair (Akbik et al., 2019), spaCy (Honnibal et al., 2020), and Stanza (Qi et al., 2020). We also tried five different word count thresholds for both source and target languages under each NER model: 100, 1000, 5000, 10000 and 15000. We found that for En\(\rightarrow\)De, optimal results can be obtained when using the NER model provided by Flair with a word count threshold of 5000 (i.e., the word count is under 5000) for both the source and target languages. The final overall results are in Table 2.
Comparing the initial result and the result of the first step, we can find that as the word count threshold becomes smaller, the number of remaining sentence pairs decreases, and the number of target sentence pairs with explanations in the remaining sentence pairs also decreases. But on the other hand, the F1-score increased from \(3.78\%\) to \(8.18\%\). Comparing the results of the first step and the second step, we can find that after using NER, the number of remaining sentence pairs and target sentence pairs both decreased, but the F1-score increased from \(8.18\%\) to \(19.29\%\). The significant improvement in the F1-score shows that NER is a powerful and effective tool for identifying target sentence pairs.
Finally, comparing the results of the second step with the third step, we can find that after using Wikipedia to continue to identify target sentence pairs, the number of remaining sentence pairs and target sentence pairs are reduced. Meanwhile, the F1-score is also decreased from \(19.29\%\) to \(17.74\%\). This shows that Wikipedia can greatly reduce manual work while losing a relatively small F1-score.
The results in Table 2 prove that our proposed method can greatly improve the efficiency of finding target sentence pairs. In order to verify the general effectiveness of our method, we also test it on other inputs. We randomly select 5 million sentence pairs from all remaining sentence pairs in the corpus except for the first 5 million sentence pairs. These randomly selected sentence pairs are used as input, and the same configurations are used for testing. In order to avoid accidental errors, we conduct five test experiments. The results of the five experiments are very similar, so we only provide the results of one of them. The results of the test experiments are in Table 3 and Table 4.
If we compare the results in Table 3 with those in Table 2, we can find a non-negligible gap between the two results regarding the number of remaining sentence pairs. In order to check the proportion of target sentence pairs with explanations in the remaining sentence pairs, we also check the number of target sentence pairs among the last remaining sentence pairs. We select the results of the last step (i.e. the step to use Wikipedia) for validation. The proportion results are in Table 4.
When the input is the first 5 million sentence pairs of the corpus, 44 sentence pairs with explanations can be found in the remaining 323 sentence pairs. When 5 million randomly selected sentence pairs are taken as input, we found 294 target sentence pairs with explanations out of 2832 remaining sentence pairs. This means that when the input is random, the number of remaining sentence pairs and the sentence pairs with explanations are all increased. However, when we check the proportion results, we can find that for the random input, among the remaining sentence pairs we can find \(10.38\%\) target sentence pairs. And for the first 5 million sentence pairs, the proportion is \(13.62\%\). Although the proportion result of the random input is lower than that of the first 5 million sentence pairs,
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**En\(\rightarrow\)De** & **Num. of sent.** & **Perc.** \\ \hline
3. Wiki (First 5M) & 323/44 & 13.62\(\%\) \\
3. Wiki (Rand. 5M) & 2832/294 & 10.38\(\%\) \\ \hline \end{tabular}
\end{table}
Table 4: The percentage results of En\(\rightarrow\)De
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Step** & **Num. of sent.** & **F1** \\ \hline Initial result & 8977/173 & \(3.78\%\) \\ \hline
1. Using word count & 3102/134 & \(8.18\%\) \\ \hline
2. Using NER & 791/93 & \(19.29\%\) \\ \hline
3. Using Wiki & 323/44 & \(17.74\%\) \\ \hline \end{tabular}
\end{table}
Table 2: The final results of the En\(\rightarrow\)De language pair
it is still higher than \(10\%\), which is an acceptable result.
The results in Table 4 illustrate that our proposed method for finding sentence pairs with explanations is robust against different input data and efficient for the En\(\rightarrow\)De. Our proposed method is independent of the distribution of data, and a large number of non-target sentence pairs can be removed. Therefore, the number of last remaining sentence pairs is extremely small. Among the last remaining sentences, no matter how the number of remaining sentences changes, more than \(10\%\) of the target sentence pairs can always be found.
Finally, we also check whether each named entity that is explained in the found target sentence pairs also always needs to be explained in other sentences. Based on 44 named entities that require explanation found in experiments, in the input of 5 million sentence pairs, for each named entity, we check whether each sentence pair containing this named entity also contains an explanation for it. The result is in Figure 3 and Figure 4.
After removing duplicate named entities, there are 42 explained named entities left for En\(\rightarrow\)De. From Figure 3 we can see that not all named entities always need to be explained. Only 15 named entities are always explained (with 100\(\%\) probability). Based on these 15 named entities that always require explanation, we also examine how often these named entities are explained; the result is in Figure 4. Among the 15 named entities that are always explained, 14 named entities are explained only once, and only 1 named entity is explained 4 times.
### Multi-language results
We also conduct experiments for the English\(\rightarrow\)French (En\(\rightarrow\)Fr) and English\(\rightarrow\)Chinese (En\(\rightarrow\)Zh) language pairs. The first 5 million sentence pairs in the corpus are still used as the input. For the initial results, the word count thresholds for both the source and target languages are set to 15000. We found that for En\(\rightarrow\)Fr, optimal results can be obtained when using the NER model provided by Stanza (Qi et al., 2020) with a word count threshold of 5000 for both the source and target languages. And for En\(\rightarrow\)Zh, optimal results can be obtained when using the NER model provided by HanLP (He and Choi, 2021) with a word count threshold of 5000 for both the source and target languages. Although the word count threshold that yields the optimal results is the same for all language pairs, the number of remaining sentence pairs and the number of sentence pairs with explanations differ for each language pair.
Table 5 is the result for En\(\rightarrow\)Fr. The result for En\(\rightarrow\)Zh is in Table 6. The results for these two language pairs are similar to those for En\(\rightarrow\)De. As the word count threshold gets smaller, the number of remaining and target sentence pairs decreases, while the F1-score rises. NER is also a powerful and effective tool for identifying target sentence pairs for both language pairs. After using NER, although the number of remaining sentence pairs and target sentence pairs are further reduced, the F1-score is significantly improved. And the use of Wikipedia can still achieve an acceptable F1-score while greatly reducing manual work.
Similarly, we also test the effect of our method on
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Step** & **Num. of sent.** & **F1** \\ \hline Initial Result & 13541/402 & \(5.77\%\) \\ \hline
1. Using word count & 7360/302 & \(7.78\%\) \\ \hline
2. Using NER & 2557/194 & \(13.11\%\) \\ \hline
3. Using Wiki & 1083/87 & \(11.72\%\) \\ \hline \end{tabular}
\end{table}
Table 6: The final results of En\(\rightarrow\)Zh language pair
Figure 4: Explanation frequency of NE that is always explained
Figure 3: Probability distribution of NE to be explained
random inputs for the En\(\rightarrow\)Fr and En\(\rightarrow\)Zh language pairs. The results are in Table 7 and Table 8. In order to avoid accidental errors, we also conduct five experiments. The results of the five experiments for En\(\rightarrow\)Fr and En\(\rightarrow\)Zh are also very similar, so we also provide the results of one of them. Compared to the experimental results with the first 5 million sentence pairs as input, when the input is 5 million random sentence pairs, a non-negligible gap regarding the number of remaining sentence pairs can always be found.
We also calculated the proportion of target sentence pairs with explanations in the results of experiments for validation for En\(\rightarrow\)Fr and En\(\rightarrow\)Zh. The proportion results are in Table 9 and Table 10. For En\(\rightarrow\)Fr, a surprising result is obtained. For the results with the random input, \(8.24\%\) of the target sentence pairs can be found in the last remaining 4051 sentence pairs, which is much higher than the \(4.56\%\) of the results with the first 5 million sentence pairs as the input. Due to the identical experimental parameters, the significant difference in proportion results is presumably caused by the distribution of input data. For En\(\rightarrow\)Zh, \(7.40\%\) of the target sentence pairs can be found in the remaining 3149 sentence pairs when the input is random, which is very close to the \(8.03\%\) of the results when the input is the first 5 million sentence pairs. This means that our proposed method is also robust against different input data and efficient for En\(\rightarrow\)Fr and En\(\rightarrow\)Zh. The number of last remaining sentence pairs is extremely small. For En\(\rightarrow\)Fr, more than \(5\%\) of the target sentence pairs can always be found among the last remaining sentence pairs. While for En\(\rightarrow\)Zh, more than \(7\%\) of the target sentence pairs can always be found in the last remaining sentence pairs.
We can also provide some examples with explanations found for En\(\rightarrow\)Fr and En\(\rightarrow\)Zh.
1. **En**: 'Even if **WMO** agrees, I will still not pass on the data.'
## 5 Conclusion
We propose a heuristic method to find target sentence pairs with explanations. In this method, both internal and external knowledge are utilized: word count, word alignment, named entity recognition, and Wikipedia. We conduct experiments on three language pairs: English\(\rightarrow\)German, English\(\rightarrow\)French and English\(\rightarrow\)Chinese. The results show that for each language pair, our proposed method can reduce the number of remaining sentence pairs to a very small number. Moreover, our method is robust: among the remaining sentence pairs, a certain proportion of target sentence pairs can always be found for each language pair. Among the remaining sentence pairs, more than 10\(\%\) of the target sentence pairs can be found for English\(\rightarrow\)German, more than 7\(\%\) for English\(\rightarrow\)Chinese, and more than 5\(\%\) for English\(\rightarrow\)French. This means that a sufficient number of target sentence pairs can be found efficiently using our method, so that we can construct a training dataset for model training in the future.
|
2310.00227 | Scaling for Training Time and Post-hoc Out-of-distribution Detection
Enhancement | The capacity of a modern deep learning system to determine if a sample falls
within its realm of knowledge is fundamental and important. In this paper, we
offer insights and analyses of recent state-of-the-art out-of-distribution
(OOD) detection methods - extremely simple activation shaping (ASH). We
demonstrate that activation pruning has a detrimental effect on OOD detection,
while activation scaling enhances it. Moreover, we propose SCALE, a simple yet
effective post-hoc network enhancement method for OOD detection, which attains
state-of-the-art OOD detection performance without compromising in-distribution
(ID) accuracy. By integrating scaling concepts into the training process to
capture a sample's ID characteristics, we propose Intermediate Tensor SHaping
(ISH), a lightweight method for training time OOD detection enhancement. We
achieve AUROC scores of +1.85\% for near-OOD and +0.74\% for far-OOD datasets
on the OpenOOD v1.5 ImageNet-1K benchmark. Our code and models are available at
https://github.com/kai422/SCALE. | Kai Xu, Rongyu Chen, Gianni Franchi, Angela Yao | 2023-09-30T02:10:54Z | http://arxiv.org/abs/2310.00227v1 | # Scaling for Training Time and Post-hoc Out-of-distribution Detection Enhancement
###### Abstract
The capacity of a modern deep learning system to determine if a sample falls within its realm of knowledge is fundamental and important. In this paper, we offer insights and analyses of recent state-of-the-art out-of-distribution (OOD) detection methods - extremely simple activation shaping (ASH). We demonstrate that activation pruning has a detrimental effect on OOD detection, while activation scaling enhances it. Moreover, we propose SCALE, a simple yet effective post-hoc network enhancement method for OOD detection, which attains state-of-the-art OOD detection performance without compromising in-distribution (ID) accuracy. By integrating scaling concepts into the training process to capture a sample's ID characteristics, we propose **I**ntermediate Tensor **SH**aping (ISH), a lightweight method for training time OOD detection enhancement. We achieve AUROC scores of +1.85% for near-OOD and +0.74% for far-OOD datasets on the OpenOOD v1.5 ImageNet-1K benchmark. Our code and models are available at [https://github.com/kai422/SCALE](https://github.com/kai422/SCALE).
## 1 Introduction
In deep neural networks, out-of-distribution (OOD) detection distinguishes samples which deviate from the training distribution. Standard OOD detection concerns semantic shifts (Yang et al., 2022; Zhang et al., 2023), where OOD data is defined as test samples from semantic categories unseen during training. Ideally, the neural network should be able to reject such samples as being OOD, while still maintaining strong performance on in-distribution (ID) test samples belonging to seen training categories.
Methods for detecting OOD samples work by scoring network outputs such as logits or softmax values (Hendrycks and Gimpel, 2017; Hendrycks et al., 2022), post-hoc network adjustment during inference to improve OOD scoring (Sun and Li, 2022; Sun et al., 2021; Djurisic et al., 2023), or by adjusting model training (Wei et al., 2022; Ming et al., 2023; DeVries and Taylor, 2018). These approaches can be used either independently or in conjunction with one another. Typically, post-hoc adjustments together with OOD scoring is the preferred combination since it is highly effective at discerning OOD samples with minimal ID drop and can also be applied directly to already-trained models off-the-shelf. Examples include ReAct (Sun et al., 2021), DICE (Sun and Li, 2022) and more recently, ASH (Djurisic et al., 2023).
On the surface, each method takes different and sometimes even contradictory approaches. ReAct rectifies penultimate activations which exceed a threshold; ASH, on the other hand, prunes penultimate activations that are too low while amplifying remaining activations. While ASH currently achieves state-of-the-art performance, it lacks a robust explanation of its underlying operational principles. This limitation highlights the need for a comprehensive explanatory framework.
This work seeks to understand the working principles behind ASH. Through observations and mathematical derivations, we reveal that OOD datasets tend to exhibit a lower rate of pruning due to distinct mean and variance characteristics. We also demonstrate the significant role of scaling in enhancing OOD detection in ASH, while highlighting that the lower-part pruning approach, in contrast to ReAct, hinders the OOD detection process. This understanding leads to new state-of-the-art results by leveraging scaling, achieving significant improvements without compromising ID accuracy.
Through the lens of studying the distributions, we highlight the importance of scaling as a key metric for assessing a sample's ID nature. We integrate this concept into the training process, hypothesizing the feasibility of shaping the ID-ness objective even without the inclusion of OOD samples. The ID-ness objective introduces an optimization weighting factor for different samples through proposed intermediate tensor shaping (ISH). Remarkably, ISH achieves outstanding performance in both near-OOD and far-OOD detection tasks, with only one-third of the training effort required compared to current state-of-the-art approaches.
Our contributions can be summarized as follows:
* We analyze and explain the working principles of pruning and scaling for OOD detection and reveal that pruning, in some scenarios, actually hurts OOD detection.
* Based on our analysis, we devise SCALE, a new post-hoc network enhancement method for OOD detection, which achieves state-of-the-art results on OOD detection without any ID accuracy trade-off.
* By incorporating scaling concepts into the training process to capture a sample's ID characteristics, we introduce ISH, a lightweight and innovative method for improving OOD detection during training. ISH yields remarkable OOD detection results.
## 2 Related Work
**OOD scoring methods** indicate how likely a sample comes from the training distribution, _i.e_. is in-distribution, based on sample features or model outputs. From a feature perspective, Lee et al. (2018) proposed to score a sample via the minimum Mahalanobis distance of that sample's features to the nearest ID class centroid. For model outputs, two common variants are based on the maximum softmax prediction (Hendrycks and Gimpel, 2017) and the maximum logit scores (Hendrycks et al., 2022). The raw softmax or logit scores are susceptible to the overconfidence issue, therefore, Liu et al. (2020) proposed to use an energy-based function to transform the logits as an improved score. A key benefit of deriving OOD scores from feature or model outputs is that it does not impact the model or the inference procedure, so the ID accuracy will not be affected.
**Post-hoc model enhancement methods** modify the inference procedure to improve OOD detection and are often used together with OOD scoring methods. Examples include ReAct (Sun et al., 2021), which rectifies the penultimate activations for inference, DICE (Sun and Li, 2022), which sparsifies
Figure 1: **ID-OOD Trade-off on ImageNet on Near-OOD Dataset. Unlike existing methods such as ASH, ReAct and Dice, our proposed SCALE does not have any ID accuracy trade-off while improving OOD accuracy. Our training method, ISH, achieves outstanding OOD results by emphasizing the training of samples with high ID characteristics.**
the network's weights in the last layer, and ASH (Djurisic et al., 2023), which scales and prunes the penultimate activations. Each of these methods is then combined with the energy-based score (Liu et al., 2020) to detect the OOD data. While effective at identifying OOD data, these methods reduce ID accuracy because the inference procedure is altered. Our proposed SCALE is also a post-hoc model enhancement, but ID accuracy is not affected: we apply a per-sample scaling factor based on the sample's activation shape, which does not alter the class prediction for an individual sample but emphasizes differences among samples.
**Training-time model enhancement** techniques aim to make OOD data more distinguishable directly at training time. Various strategies exist, including the incorporation of additional network branches (DeVries and Taylor, 2018), alternative training strategies (Wei et al., 2022), and data augmentation (Pinto et al., 2022; Hendrycks et al., 2020). The underlying assumption behind each of these techniques is that training towards an OOD detection objective can provide more discriminative features for OOD detection. A significant drawback of training-time enhancement is the additional computational cost. For example, AugMix (Hendrycks et al., 2020) requires double the training time and extra GPU memory. Our intermediate tensor shaping (ISH) improves OOD detection with one-third of the computational cost compared to the most lightweight of these methods, without modifying the model architecture.
**Intermediate tensor shaping:** Activation shaping has been explored in deep learning for various purposes. Dropout was the first to utilize this idea, sparsifying activations for regularization. Similar ideas have been applied to transformers (Li et al., 2023). Activation shaping can also enable efficient training and inference through compression (Kurtz et al., 2020; Chen et al., 2023b). Shaping operations on intermediate tensors differ from those on activations. Activation shaping affects both forward-pass inference and backward gradient computation during training. In contrast, shaping intermediate tensors exclusively influences the backward gradient computation. Since intermediate tensors tend to consume a significant portion of GPU memory, techniques for compressing intermediate tensors have gained widespread use in memory-efficient training, all without altering the forward pass (Evans and Aamodt, 2021; Liu et al., 2022; Chen et al., 2023a).
## 3 Activation Scaling for Post-hoc Model Enhancement
We start by presenting the preliminaries of Out-of-Distribution (OOD) detection in Sec. 3.1 to set the stage for our subsequent discussion and analysis of the ASH method in Sec. 3.2. The results of our analysis directly lead to our own OOD criterion in Sec. 3.3. Finally, we introduce our intermediate tensor shaping approach for training-time OOD detection enhancement in Sec. 3.4.
### Preliminaries
While OOD detection is relevant to many domains, we follow previous works (Yang et al., 2022) and focus specifically on semantic shifts in image classification. During training, the classification model is trained with ID data that fall into a pre-defined set of \(K\) semantic categories: \(\forall(\mathbf{x},y)\sim\mathcal{D}_{\text{ID}},y\in\mathcal{Y}_{\text{ID}}\). During inference, there are both ID and OOD samples; the latter are samples drawn from categories unobserved during training, _i.e._, \(\forall(\mathbf{x},y)\sim\mathcal{D}_{\text{OOD}},y\notin\mathcal{Y}_{\text{ID}}\).
Now consider a neural network consisting of two parts: a feature extractor \(f(\cdot)\), and a linear classifier parameterized by a weight matrix \(\mathbf{W}\in\mathbb{R}^{K\times D}\) and a bias vector \(\mathbf{b}\in\mathbb{R}^{K}\). The network logit can be mathematically represented as
\[\mathbf{z}=\mathbf{W}\cdot\mathbf{a}+\mathbf{b},\qquad\mathbf{a}=f(\mathbf{x}), \tag{1}\]
where \(\mathbf{a}\in\mathbb{R}^{D}\) is the \(D\)-dimensional feature vector in the penultimate layer of the network and \(\mathbf{z}\in\mathbb{R}^{K}\) is the logit vector from which the class label can be estimated by \(\hat{y}=\arg\max(\mathbf{z})\). In line with other OOD literature (Sun et al., 2021), an individual dimension of feature \(\mathbf{a}\), denoted with index \(j\) as \(\mathbf{a}_{j}\), is referred to as an "activation".
For a given test sample \(\mathbf{x}\), an OOD score can be calculated to indicate the confidence that \(\mathbf{x}\) is in-distribution. By convention, scores above a threshold \(\tau\) are ID, while those equal or below are considered OOD. A common setting is the energy-based OOD score \(S_{\text{{EBO}}}(\mathbf{x})\) together with indicator function \(G(\cdot)\) that applies the thresholding (Liu et al., 2020):
\[G(\mathbf{x};\tau)=\begin{cases}0&\text{if }S_{\textit{EBO}}(\mathbf{x})\leq\tau\quad(\text{OOD}),\\ 1&\text{if }S_{\textit{EBO}}(\mathbf{x})>\tau\quad(\text{ID}),\end{cases}\qquad S_{\textit{EBO}}( \mathbf{x})=T\cdot\log\sum_{k}^{K}e^{\mathbf{z}_{k}/T}, \tag{2}\]
where \(T\) is a temperature parameter, \(k\) is the logit index for the \(K\) classes.
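As a minimal sketch (PyTorch; the batch size, class count, and threshold are illustrative), the score and indicator of Eq. 2 can be computed as:

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """S_EBO(x) = T * log sum_k exp(z_k / T); higher values indicate ID."""
    return T * torch.logsumexp(logits / T, dim=-1)

def indicator(logits: torch.Tensor, tau: float, T: float = 1.0) -> torch.Tensor:
    """G(x; tau): 1 for ID, 0 for OOD."""
    return (energy_score(logits, T) > tau).long()

logits = torch.randn(4, 10)  # a toy batch with K = 10 classes
print(energy_score(logits), indicator(logits, tau=2.0))
```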
### Analysis on ASH
A state-of-the-art method for OOD detection is ASH (Djurisic et al., 2023). ASH stands for activation shaping and is a simple post-hoc method that applies a rectified scaling to the feature vector \(\mathbf{a}\). Activations in \(\mathbf{a}\) up to the \(p^{\text{th}}\) percentile across the \(D\) dimensions are rectified ("pruned" in the original text); activations above the \(p^{\text{th}}\) percentile are scaled. More formally, ASH introduces a shaping function \(s_{f}\) that is applied to each activation \(\mathbf{a}_{j}\) in a given sample. If we define \(P_{p}(\mathbf{a})\) as the \(p^{\text{th}}\) percentile of the elements in \(\mathbf{a}\), ASH produces the logit \(\mathbf{z}_{\text{ASH}}\):
\[\mathbf{z}_{\text{ASH}}=\mathbf{W}\cdot(\mathbf{a}\circ s_{f}(\mathbf{a}))+\mathbf{b},\quad \text{where }s_{f}(\mathbf{a})_{j}=\begin{cases}0&\text{if }\mathbf{a}_{j}\leq P_{p}(\mathbf{a}),\\ \exp(r)&\text{if }\mathbf{a}_{j}>P_{p}(\mathbf{a}),\end{cases}, \tag{3}\]
and \(\circ\) denotes an element-wise matrix multiplication, and the scaling factor \(r\) is defined as the ratio of the sum of all activations versus the sum of un-pruned activations in \(\mathbf{a}\):
\[r=\frac{Q}{Q_{p}},\qquad\text{where }Q=\sum_{j}^{D}\mathbf{a}_{j}\qquad\text{ and }\ Q_{p}=\sum_{\mathbf{a}_{j}>P_{p}(\mathbf{a})}\mathbf{a}_{j}. \tag{4}\]
Since \(Q_{p}\leq Q\), the factor \(r\geq 1\); the higher the percentile \(p\), _i.e._ the greater the extent of pruning, the smaller \(Q_{p}\) is with respect to \(Q\) and the larger the scaling factor \(r\). To distinguish OOD data, ASH then passes the logit from Eq. 3 to the score and indicator functions given in Eq. 2.
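A minimal sketch of this shaping, per Eqs. 3 and 4 (batched PyTorch; computing the per-sample percentile via `torch.quantile` is our implementation choice):

```python
import torch

def ash_s(a: torch.Tensor, p: float = 0.90) -> torch.Tensor:
    """ASH-style shaping of penultimate features a with shape (B, D):
    prune activations up to the p-th percentile, scale survivors by exp(r)."""
    thresh = torch.quantile(a, p, dim=1, keepdim=True)      # P_p(a), per sample
    q = a.sum(dim=1, keepdim=True)                          # Q
    kept = torch.where(a > thresh, a, torch.zeros_like(a))  # pruning
    q_p = kept.sum(dim=1, keepdim=True)                     # Q_p
    return kept * torch.exp(q / q_p)                        # scaling by exp(Q / Q_p)
```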
While ASH is highly effective, the original paper offers no explanation of its working mechanism.1 We analyze the rectification and scaling components of ASH below and reveal that scaling helps to separate ID versus OOD energy scores, while rectification has an adverse effect.
Footnote 1: In fact, the authors put forth a call for explanation in their Appendix L.
**Assumptions:** Our analysis is based on two assumptions. (1) The penultimate activations of ID and OOD samples follow two differing rectified Gaussian distributions parameterized by \((\mu^{\text{ID}},\sigma^{\text{ID}})\) and \((\mu^{\text{OOD}},\sigma^{\text{OOD}})\). The Gaussian assumption is commonly used in the literature (Sun et al., 2021) and we verify it in Tab. 1; the rectification follows naturally if a ReLU is applied as the final operation of the penultimate layer. (2) Normalized ID activations are higher than those of OOD activations; this assumption is supported by Liu et al. (2020), who suggested that well-trained networks have
higher responses to samples resembling those seen in training. Fig. 2 and Fig. 3 visualize statistical corroboration of these assumptions.
**Proposition 3.1**.: _Assume that ID activations \(\mathbf{a}_{j}^{(\text{ID})}\sim\mathcal{N}^{R}(\mu^{\text{ID}},\sigma^{\text{ID}})\) and OOD activations \(\mathbf{a}_{j}^{(OOD)}\sim\mathcal{N}^{R}(\mu^{\text{OOD}},\sigma^{\text{OOD}})\) where \(\mathcal{N}^{R}\) denotes a rectified Gaussian distribution. If \(\mu^{\text{ID}}/\sigma^{\text{ID}}>\mu^{OOD}/\sigma^{\text{OOD}}\), then there is a range of percentiles \(p\) for which a factor \(C(p)=\frac{\varphi(\sqrt{2}\operatorname{erf}^{-1}(2p-1))}{1-\Phi(\sqrt{2} \operatorname{erf}^{-1}(2p-1))}\) is large enough such that \(Q_{p}^{\text{ID}}/Q^{\text{ID}}<Q_{p}^{OOD}/Q^{OOD}\)._
The full proof is given in Appendix A. Above, \(\varphi\) and \(\Phi\) denote the probability density function and cumulative distribution function of the standard normal distribution, respectively. The factor \(C(p)\), plotted in Fig. 4a, relates the percentile of activations that distinguishes ID from OOD data.
**Rectification (Pruning)** The relative reduction of activations can be expressed as:
\[D^{Pruning}=(Q-Q_{p})/Q. \tag{5}\]
Note that a reduction in activations also leads to a reduction in the OOD energy. Since \(Q_{p}^{\text{ID}}/Q^{\text{ID}}<Q_{p}^{OOD}/Q^{OOD}\), it directly follows that the relative decrease for ID samples is greater than that for OOD samples, i.e., \(D_{ID}^{Pruning}>D_{OOD}^{Pruning}\). From this result, we can show that the expected relative decrease in energy scores under rectification is greater for ID samples than for OOD samples, following Remark 2 in Sun et al. (2021), which shows that changes in logits are proportional to changes in activations.
Our result above shows that rectification or pruning creates a greater overlap in energy scores between ID and OOD samples, making it more difficult to distinguish them. Empirically, this result is shown in Fig. 4b, where AUROC steadily decreases with stand-alone pruning as the percentile \(p\) increases.
**Scaling** on the other hand behaves in a manner opposite to the derivation above and enlarges the separation between ID and OOD scores.
Given \(Q_{p}^{\text{ID}}/Q^{\text{ID}}<Q_{p}^{\text{OOD}}/Q^{\text{OOD}}\) and \(r=Q/Q_{p}\), we have \(r^{\text{ID}}>r^{\text{OOD}}\), which motivates the separation of \(r\) between ID and OOD. Fig. 4c depicts histograms of these respective distributions; they are well separated and therefore scale activations of ID and OOD samples differently. The relative increase in activations can be expressed as:
\[I^{\text{Scaling}}=(r-1) \tag{6}\]
from which we get \(I_{\text{ID}}^{\text{Scaling}}>I_{\text{OOD}}^{\text{Scaling}}\). This increase is then transferred to the logit space \(\mathbf{z}\) and the energy-based scores \(S_{\text{EBO(ID)}}\) and \(S_{\text{EBO(OOD)}}\), widening the gap between ID and OOD samples.
**Discussion on percentile \(p\):** Note that \(C(p)\) is not monotonically increasing with respect to \(p\) (see Fig. 4a). When \(p\approx 0.95\), there is an inflection point and \(C(p)\) decreases. A similar inflection follows
Figure 4: (a) The relationship between the parameter \(C(p)\) and the percentile \(p\). A higher value of \(C(p)\) indicates better separation of scales. (b) AUROC vs. percentile \(p\). Up to \(p=0.85\), as highlighted by the orange box, the AUROC for scaling increases while for pruning it decreases. The results of ASH sit between the two, as the method is a combination of pruning plus scaling. (c) Histograms of scales \(Q/Q_{p}\) for the ID dataset (ImageNet) and OOD dataset (iNaturalist) exhibit a clear separation from each other.
on the AUROC for scaling (see Fig. 4b), though it is not exactly aligned with \(C(p)\). The difference is likely due to the approximations made in estimating \(C(p)\). Also, as \(p\) gets progressively larger, fewer of the \(D=2048\) activations are considered for estimating \(r\), leading to unreliable logits for the energy score. Curiously, pruning also drops off, which we believe similarly stems from the extreme reduction in activations.
### SCALE Criterion for OOD Detection
From our analyses and findings above, we propose a new post-hoc model enhancement criterion, which we call _SCALE_. As the name suggests, it shapes the activation with (only) a scaling:
\[\mathbf{z}^{\prime}=\mathbf{W}\cdot(\mathbf{a}\circ s_{f}(\mathbf{a}))+\mathbf{b},\qquad\text {where }s_{f}(\mathbf{a})_{j}=\exp(r)\;\;\text{and}\;\;r=\frac{\sum_{j}\mathbf{a}_{j}}{\sum_{ \mathbf{a}_{j}>P_{p}(\mathbf{a})}\mathbf{a}_{j}}. \tag{7}\]
Fig. 5a illustrates how SCALE works. SCALE applies the same scaling factor \(r\) as ASH, based on the percentile \(p\). Instead of pruning, it retains and scales _all_ the activations. Doing so has two benefits. First, it enhances the separation in energy scores between ID and OOD samples. Second, scaling all activations equally preserves the ordinality of the logits \(\mathbf{z}^{\prime}\) compared to \(\mathbf{z}\). As such, the \(\arg\max\) is not affected and there is no trade-off in ID accuracy; this is not the case with rectification, be it pruning, as in ASH, or clipping, as in ReAct (see Fig. 1). Results in Tab. 2 and 3 verify that SCALE outperforms ASH-S on all datasets and model architectures.
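A minimal NumPy sketch of the SCALE criterion in Eq. 7, applied to mock activations and a mock final linear layer (all names and statistics here are illustrative assumptions, not the released implementation). Since every activation is multiplied by the same positive factor \(\exp(r)\), the \(\arg\max\) over the logits is unchanged when the bias is zero:

```python
import numpy as np

def scale_logits(a, W, b, p=0.85):
    """Sketch of SCALE (Eq. 7): compute r = Q/Q_p from the percentile p,
    scale *all* activations by exp(r), then apply the final linear layer."""
    threshold = np.percentile(a, 100 * p)
    r = a.sum() / a[a > threshold].sum()
    return W @ (a * np.exp(r)) + b          # ordinality of logits preserved

def energy_score(z):
    """Energy-based OOD score: numerically stable logsumexp over the
    logits; higher values indicate a more ID-like sample."""
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

rng = np.random.default_rng(0)
a = np.clip(rng.normal(1.0, 1.0, 2048), 0.0, None)  # mock ReLU activations
W, b = 0.01 * rng.standard_normal((1000, 2048)), np.zeros(1000)
z = scale_logits(a, W, b)
assert z.argmax() == (W @ a + b).argmax()            # arg max unchanged
print(energy_score(z))
```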
### Incorporating SCALE into Training
In practice, the semantic shift between ID and OOD data may be ambiguous. For example, the iNaturalist dataset features different species of plants, and similar objects may be found in ImageNet. Our hypothesis is that, during training, we can emphasize the impact of samples possessing the most distinctive in-distribution characteristics, denoted as "ID-ness". Quantifying the ID-ness of specific samples is challenging, so we rely on a well-trained network to assist us in this endeavor. In particular, for a well-trained network, we can reacquire the activations of all training samples. We proceed on the assumption that normalized ID activations are greater than normalized OOD activations. To measure the degree of ID-ness within the training data, we compute the scale factor of each sample, represented as \(Q/Q_{p}\). Armed with this measurement of ID-ness, we can then re-optimize the network using the high ID-ness data. Our approach draws inspiration from the concept of intermediate tensor compression found in memory-efficient training methods (Chen et al., 2023), where modifications are exclusively applied to the backward pass, leaving the forward pass unchanged.
Fig. 5b illustrates our training-time enhancement method for OOD detection. We fine-tune a well-trained network by introducing a modification to the gradient of the weights of the fully connected
Figure 5: Illustrations of our post-hoc model enhancement method SCALE and training-time model enhancement method ISH.
layer. The modified gradient is defined as follows:
\[\mathbf{W}^{t+1}=\mathbf{W}^{t}-\eta\sum_{i}[(\mathbf{a}_{i}\circ s_{f}(\mathbf{a}_{i}))^ {\top}\nabla\mathbf{z}_{i}] \tag{8}\]
where \(i\) denotes the sample index in the batch, \(\nabla\) denotes the gradient with respect to the cross-entropy loss, \(t\) denotes the training step, and \(\eta\) represents the learning rate.
Modifying activations exclusively in the backward pass offers several advantages. First, it leaves the forward pass unaffected, resulting in only a minimal loss in ID accuracy. Second, the model architecture remains exactly the same during inference, making this training strategy compatible with any OOD post-processing technique. Since the saved activations in the backward pass are also referred to as intermediate tensors, we term this method Intermediate tensor SHaping (ISH).
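The following PyTorch sketch shows one way Eq. 8 could be realized with a custom autograd function; this is our illustrative reconstruction under the stated assumptions, not the authors' released code:

```python
import torch

class ISHLinear(torch.autograd.Function):
    """Final linear layer whose forward pass uses the raw activations, while
    the weight gradient is computed from ISH-scaled activations (Eq. 8)."""

    @staticmethod
    def forward(ctx, a, weight, bias, p=0.85):
        thresh = torch.quantile(a, p, dim=1, keepdim=True)   # P_p(a) per sample
        q = a.sum(dim=1, keepdim=True)
        q_p = (a * (a > thresh)).sum(dim=1, keepdim=True)
        s = torch.exp(q / q_p)                               # s_f(a), per sample
        ctx.save_for_backward(a * s, weight)                 # used in backward only
        return a @ weight.t() + bias                         # forward is unchanged

    @staticmethod
    def backward(ctx, grad_z):
        a_scaled, weight = ctx.saved_tensors
        grad_a = grad_z @ weight                  # ordinary input gradient
        grad_w = grad_z.t() @ a_scaled            # ISH-modified weight gradient
        grad_b = grad_z.sum(dim=0)
        return grad_a, grad_w, grad_b, None

# Usage inside a model: z = ISHLinear.apply(activations, fc.weight, fc.bias)
```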
## 4 Experiments
### Settings
To verify SCALE as a post-hoc OOD method, we conduct experiments using CIFAR10, CIFAR100 (Krizhevsky, 2009), and ImageNet-1k (Deng et al., 2009) as in-distribution (ID) data sources.
**CIFAR.** We used SVHN (Netzer et al., 2011), LSUN-Crop (Yu et al., 2015), LSUN-Resize (Yu et al., 2015), iSUN (Xu et al., 2015), Places365 (Zhou et al., 2018), and Textures (Cimpoi et al., 2014) as OOD datasets. For consistency with previous work, we use the same model architecture and pretrained weights, namely DenseNet-101 (Huang et al., 2017), in accordance with the other post-hoc approaches DICE, ReAct, and ASH. Table 3 compares the FPR@95 and AUROC averaged across all six datasets; detailed results are provided in Appendix B.
**ImageNet.** In our ImageNet experiments, we follow the OpenOOD v1.5 (Zhang et al., 2023) benchmark, which separates OOD datasets into near-OOD and far-OOD groups. We employed SSB-hard (Vaze et al., 2022) and NINCO (Bitterwolf et al., 2023) as near-OOD datasets and iNaturalist (Horn et al., 2018), Textures (Cimpoi et al., 2014), and OpenImage-O (Wang et al., 2022) as far-OOD datasets. Our reported metrics are the average FPR@95 and AUROC values across these categories; detailed results are given in Appendix B. The OpenOOD benchmark includes improved hyperparameter selection with a dedicated OOD validation set to prevent overfitting to the testing set. Additionally, we provide results following the same dataset and test/validation split settings as ASH and ReAct in the appendix. We adopted the ResNet50 (He et al., 2016) model architecture and obtained the pretrained network from the torchvision library.
**Metrics.** We evaluate with two measures. The first is FPR@95, which measures the false positive rate at a fixed true positive rate of 95% (lower scores are better). The second is AUROC (Area Under the ROC Curve); it represents the probability that a positive in-distribution (ID) sample will have a higher detection score than a negative out-of-distribution (OOD) sample (higher scores indicate superior discrimination).
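For reference, a small sketch of how these two metrics can be computed from arrays of ID and OOD detection scores (using scikit-learn for the AUROC; function names are ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_95_tpr(scores_id, scores_ood):
    """FPR@95: fraction of OOD samples whose score exceeds the threshold
    that keeps 95% of ID samples (higher score = more ID-like)."""
    thresh = np.percentile(scores_id, 5)      # 95% of ID scores lie above this
    return float((scores_ood >= thresh).mean())

def auroc(scores_id, scores_ood):
    """AUROC with ID as the positive class."""
    labels = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    return roc_auc_score(labels, np.concatenate([scores_id, scores_ood]))
```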
### SCALE for Post-Hoc OOD Detection
Comparisons of OOD score methods and post-hoc model enhancement methods (separated by a solid line) on the ImageNet and CIFAR benchmarks are shown in Tables 2 and 3. Notably, SCALE attains the highest OOD detection scores.
**OOD Detection Accuracy.** Compared to the current state-of-the-art ASH-S, SCALE demonstrates significant improvements on ImageNet: 1.73 on near-OOD and 0.26 on far-OOD when considering AUROC. For FPR@95, it outperforms ASH-S by 2.27 and 0.33. On CIFAR-10 and CIFAR-100, SCALE brings even greater improvements of 2.48 and 2.41 for FPR@95, as well as 0.66 and 0.72 for AUROC, respectively.
**ID Accuracy.** One of SCALE's key advantages is that it only applies a linear transformation to the features, so ID accuracy is guaranteed to stay the same. This differentiates it from other post-hoc enhancement methods that rectify or prune activations, thereby modifying inference and invariably compromising ID accuracy. SCALE's performance surpasses ASH-S by a substantial margin of 0.67 on the ID
dataset, ImageNet-1k. This capability is pivotal for establishing a unified pipeline that excels on both ID and OOD data.
**Comparison with TempScale.** Temperature scaling (TempScale) is widely used for confidence calibration (Guo et al., 2017). SCALE and TempScale both leverage scaling for OOD detection, but with two distinctions. Firstly, TempScale directly scales logits for calibration, whereas SCALE applies scaling at the penultimate layer. Secondly, TempScale employs a uniform scaling factor for all samples, whereas SCALE applies a sample-specific scaling factor based on the sample's activation statistics. The sample-specific scaling is a crucial differentiator that enables the discrimination between ID and OOD samples. Notably, our SCALE model significantly outperforms TempScale in both Near-OOD and Far-OOD scenarios.
**SCALE with different percentiles \(p\).** Table 2 uses \(p=0.85\) for SCALE and ASH-S, which is verified on the validation set. As detailed in Section 3.2, to ensure the validity of scaling, the percentile \(p\) must fall within a range where the parameter \(C(p)\) is sufficiently high to meet the required condition. Our experimental observations align with this theoretical premise. Specifically, we empirically observe that, up to the 85% percentile threshold, the AUROC values for both near-OOD and far-OOD scenarios consistently show an upward trend, while a noticeable decline becomes apparent beyond this threshold. This empirical finding corroborates our theoretical insight, indicating that the parameter \(C(p)\) decreases in magnitude as \(p\) approaches 90%.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \(p\) & 65 & 70 & 75 & 80 & 85 & 90 & 95 \\ \hline Near-OOD & 62.45 / 79.31 & 61.65 / 79.83 & 61.12 / 80.41 & 60.12 / 81.01 & **59.76 / 81.36** & 63.19 / 80.14 & 78.62 / 73.40 \\ Far-OOD & 24.08 / 94.43 & 22.21 / 95.02 & 20.20 / 95.61 & 18.26 / 96.17 & **16.53 / 96.53** & 18.58 / 96.20 & 32.42 / 93.28 \\ \hline \hline \end{tabular}
\end{table}
Table 4: FPR@95 / AUROC results on ImageNet benchmarks under different \(p\).
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline \multirow{3}{*}{**Model**} & \multirow{3}{*}{**Postprocessor**} & \multicolumn{2}{c}{**Near-OOD**} & \multicolumn{2}{c}{**Far-OOD**} & \multirow{3}{*}{**ID ACC**} \\ & & FPR@95 & AUROC & FPR@95 & AUROC & \\ & & \(\downarrow\) & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) & \(\uparrow\) \\ \hline \multirow{9}{*}{ResNet50} & EBO (Liu et al., 2020) & 68.56 & 75.89 & 38.40 & 89.47 & 76.18 \\ & MSP (Hendrycks \& Gimpel, 2017) & 65.67 & 76.02 & 51.47 & 85.23 & 76.18 \\ & ML (Hendrycks et al., 2022) & 67.82 & 76.46 & 38.20 & 89.58 & 76.18 \\ & GEN (Liu et al., 2023) & 65.30 & 76.85 & 35.62 & 89.77 & 76.18 \\ & RMDS (Ren et al., 2021) & 65.04 & 76.99 & 40.91 & 86.38 & 76.18 \\ \cline{2-7} & TempScale (Guo et al., 2017) & 64.51 & 77.14 & 46.67 & 87.56 & 76.18 \\ & ReAct (Sun et al., 2021) & 66.75 & 77.38 & 26.31 & 93.67 & 75.58 \\ & ASH-S (Djurisic et al., 2023) & 62.03 & 79.63 & 16.86 & 96.47 & 75.51 \\ & **SCALE (Ours)** & **59.76** & **81.36** & **16.53** & **96.53** & **76.18** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **OOD detection results on ImageNet-1K benchmarks.** Model choice and protocol are the same as existing works. SCALE outperforms other OOD score methods and post-hoc model enhancement methods, achieving the highest OOD detection scores and excelling in the ID-OOD trade-off. Detailed results for each dataset are given in Appendix B.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Postprocessor**} & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100**} \\ & & FPR@95 & AUROC & FPR@95 & AUROC \\ & & \(\downarrow\) & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) \\ \hline \multirow{6}{*}{DenseNet-101} & MSP & 48.73 & 92.46 & 80.13 & 74.36 \\ & EBO & 26.55 & 94.57 & 68.45 & 81.19 \\ & ReAct & 26.45 & 94.95 & 62.27 & 84.47 \\ & DICE & 20.83\({}^{\pm 1.58}\) & 95.24\({}^{\pm 1.02}\) & 49.72\({}^{\pm 1.69}\) & 87.23\({}^{\pm 1.03}\) \\ & ASH-S & 15.05 & 96.61 & 41.40 & 90.02 \\ & **SCALE (Ours)** & **12.57** & **97.27** & **38.99** & **90.74** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **OOD detection results on CIFAR benchmarks.** SCALE outperforms all postprocessors. Detailed results for each dataset are in the appendix.
### ISH for Training-Time Model Enhancement
We used the same dataset splits as the post-hoc experiments in Sec. 4.1. For training, we fine-tuned the torchvision pretrained model with ISH for 10 epochs, using a cosine annealing learning-rate schedule initialized at 0.003 with a minimum of 0. We additionally observed that using a smaller weight decay value (5e-6) enhances OOD detection performance. The results are presented in Table 5. We compare ISH with other training-time model enhancement methods.
**Comparison with OOD training methods.**
LogitNorm (Wei et al., 2022) focuses on diagnosing the gradual narrowing of the gap between the logit magnitudes of ID and OOD distributions during later stages of training. Their proposed approach normalizes the logits, with the scaling factor applied in the logit space during the backward pass.
The key distinction between their LogitNorm method and our ISH approach lies in the purpose of scaling. LogitNorm scales logits primarily for confidence calibration, aiming to align the model's confidence with the reliability of its predictions. In contrast, ISH scales activations to prioritize weighted optimization, emphasizing the impact of high ID-ness data on the fine-tuning process.
**Comparisons with data augmentation-based methods.** Zhang et al. (2023) indicate that data augmentation methods, while not originally designed to improve OOD detection, can simultaneously enhance both ID and OOD accuracy.
In comparison to AugMix and RegMixup, our ISH approach, while slightly reducing ID accuracy, delivers superior OOD performance with significantly fewer computational resources. Compared to AugMix, ISH achieves substantial improvements, enhancing AUROC by 0.46 and 0.8 for near-OOD and far-OOD, respectively, with just 0.1x the additional training epochs. Notably, ISH sets the highest AUROC records among all methods on the OpenOOD v1.5 benchmark, reaching 84.01% on near-OOD and 96.79% on far-OOD.
## 5 Conclusion
In this paper, we have conducted an in-depth investigation into the efficacy of scaling techniques in enhancing out-of-distribution (OOD) detection. Our study is grounded in the analysis of activation distribution disparities between in-distribution (ID) and OOD data. To this end, we introduce SCALE, a post-hoc model enhancement method that achieves state-of-the-art OOD accuracy when integrated with energy scores, without compromising ID accuracy. Furthermore, we extend the application of scaling to the training phase, introducing ISH, a training-time enhancement method that significantly bolsters OOD accuracy.
|
2309.06735 | GelFlow: Self-supervised Learning of Optical Flow for Vision-Based
Tactile Sensor Displacement Measurement | High-resolution multi-modality information acquired by vision-based tactile
sensors can support more dexterous manipulations for robot fingers. Optical
flow is low-level information directly obtained by vision-based tactile
sensors, which can be transformed into other modalities like force, geometry
and depth. Current vision-tactile sensors employ optical flow methods from
OpenCV to estimate the deformation of markers in gels. However, these methods
need to be more precise for accurately measuring the displacement of markers
during large elastic deformation of the gel, as this can significantly impact
the accuracy of downstream tasks. This study proposes a self-supervised optical
flow method based on deep learning to achieve high accuracy in displacement
measurement for vision-based tactile sensors. The proposed method employs a
coarse-to-fine strategy to handle large deformations by constructing a
multi-scale feature pyramid from the input image. To better deal with the
elastic deformation caused by the gel, the Helmholtz velocity decomposition
constraint combined with the elastic deformation constraint are adopted to
address the distortion rate and area change rate, respectively. A local flow
fusion module is designed to smooth the optical flow, taking into account the
prior knowledge of the blurred effect of gel deformation. We trained the
proposed self-supervised network using an open-source dataset and compared it
with traditional and deep learning-based optical flow methods. The results show
that the proposed method achieved the highest displacement measurement
accuracy, thereby demonstrating its potential for enabling more precise
measurement of downstream tasks using vision-based tactile sensors. | Zhiyuan Zhang, Hua Yang, Zhouping Yin | 2023-09-13T05:48:35Z | http://arxiv.org/abs/2309.06735v1 | GelFlow: Self-supervised Learning of Optical Flow for Vision-Based Tactile Sensor Displacement Measurement
###### Abstract
High-resolution multi-modality information acquired by vision-based tactile sensors can support more dexterous manipulations for robot fingers. Optical flow is low-level information directly obtained by vision-based tactile sensors, which can be transformed into other modalities like force, geometry and depth. Current vision-tactile sensors employ optical flow methods from OpenCV to estimate the deformation of markers in gels. However, these methods are not precise enough to accurately measure the displacement of markers during large elastic deformations of the gel, which can significantly impact the accuracy of downstream tasks. This study proposes a self-supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision-based tactile sensors. The proposed method employs a coarse-to-fine strategy to handle large deformations by constructing a multi-scale feature pyramid from the input image. To better deal with the elastic deformation caused by the gel, the Helmholtz velocity decomposition constraint and the elastic deformation constraint are adopted to address the distortion rate and the area change rate, respectively. A local flow fusion module is designed to smooth the optical flow, taking into account the prior knowledge of the blurred effect of gel deformation. We trained the proposed self-supervised network using an open-source dataset and compared it with traditional and deep learning-based optical flow methods. The results show that the proposed method achieved the highest displacement measurement accuracy, thereby demonstrating its potential for enabling more precise measurement of downstream tasks using vision-based tactile sensors.
Keywords: Vision-based tactile sensor · Optical flow · Elastic deformation estimation · Deep learning.
## 1 Introduction
Vision and tactile are crucial sources of information for perceiving and interacting with the world [1]. With computer vision and robotics advancement, vision-based tactile sensors fusing both modalities are becoming increasingly popular
for enabling intelligent robots to perceive and manipulate delicate objects precisely. A typical visual-tactile sensor hardware comprises three components: a contact module, a camera module, and an illumination module [2]. The contact module requires resilient and optically favorable materials, as its performance directly affects the accuracy of subsequent optical measurements. It is often embedded with a marker layer to visualize the contact material's deformation. The camera and illumination modules can be classified into two main categories based on the measurement principle: a monocular camera system with multi-color illumination systems and multi-camera systems with a monochromatic illumination system. The integration of these three modules allows vision-based tactile sensors to capture and measure various types of information, including force [3], geometry reconstruction [4], sliding detection [5], and object recognition [1].
The displacement of markers in the contact module provides valuable information for measuring additional physical properties, such as the shape of the contacting object, the forces applied, and the roughness of its surface. This information can be analyzed by the robot for downstream tasks. Accurate and dense displacement measurements improve the resolution of other modal information, providing better input for subsequent tasks, thereby enhancing the accuracy of robot perception and manipulation [6]. However, the contact module composed of gel material is characterized by large elastic deformation, which can cause errors when existing vision-based tactile sensors estimate displacement using the optical flow algorithms in OpenCV [7]. These displacement measurement errors can lead to inaccuracies in the final estimated physical information [8]. Therefore, our motivation is to develop an accurate pixel-level optical flow estimation method that better handles the deformation properties of gel materials.
In this study, we introduce a self-supervised learning optical flow approach, named GelFlow, for a more precise measurement of displacement in gel-like materials. Our proposed method offers two novel loss terms, namely the Helmholtz velocity decomposition constraint and elastic deformation constraint, and a practical local flow fusion module to track the movement of gel materials captured by a monocular camera. These contributions improve displacement measurement accuracy and enhance vision-based tactile sensors' capability to estimate physical information. The rest of this paper is organized as follows. Section 2 provides an introduction to previous works related to vision-based tactile sensors and dense displacement processing. In Section 3, the structure and individual modules of the proposed GelFlow method are elaborated on and discussed in detail. The comparison results with other optical flow methods and the ablation study are presented in Section 4. Finally, the conclusions of this work are discussed in Section 5.
## 2 Related Work
The ability to perceive and model the contact surface's three-dimensional (3D) geometry is a fundamental feature that distinguishes vision-based tactile sensors
from conventional tactile sensors. Based on different principles of 3D surface reconstruction, vision-based tactile sensors can be divided into two categories: sensors based on photometric stereo reconstruction and sensors based on multi-view geometry reconstruction. Among the first type of sensors, the GelSight [9] sensor uses an RGB trichromatic light source to illuminate the contact layer and a monocular camera to capture the image. This algorithm enables it to obtain the normal gradient of each pixel, resulting in high accuracy for contact geometry. However, this method requires rigorous structural design and light source calibration. The GelSlim [10] sensor improves GelSight's optical path system by using a mirror-reflective structure so that the camera no longer has to face the contact body, making the entire sensor compact. The DIGIT [11] sensor is low-cost, compact, and provides high-resolution tactile perception, making it more practical for robotic finger manipulation. Unlike the previous flat structure contact layer, DenseTact [12] sensor uses a spherical contact layer, making it more suitable for sensing complex object surfaces. Among the second type of sensors, OmniTact [13] sensor uses multiple micro-cameras to capture multi-directional high-resolution deformation information to obtain accurate and reliable measurement results. GelStereo [14] sensor simplifies the number of cameras required by using binocular cameras to calculate the depth information of the contact surface through the disparity map in left and right views. Tac3D [15] further simplifies the number of cameras by using a monocular camera with a mirror system to achieve a pseudo-binocular imaging effect, achieving the same 3D reconstruction purpose.
In addition to 3D reconstruction, other valuable information, such as force estimation and sliding detection, is also crucial for robot perception. This information is obtained by measuring the deformation of the contact layer and then converting it according to specific criteria. Optical flow is an essential technique for deformation estimation, and accurate optical flow estimation with high resolution can provide more detailed information for more precise and dexterous operations. There are two primary approaches to enhancing the reliability of optical flow estimation: utilizing more precise optical flow algorithms and designing better marker layers. During the early stages of vision-based tactile sensor development, the Lucas-Kanada optical flow method [16] was utilized to track the movement of markers, which could only produce a sparse vector field. Interpolation methods are used for upsampling the sparse field, and significant errors occur during this process [6]. Subsequently, more robust and accurate pixel-level optical flow methods such as Farneback [17] and Dense Inverse Search (DIS) [18] methods were adopted to avoid the interpolation error and improve estimation precision [6]. The conventional marker layer consists of an array of sparse black dot markers, which does not provide rich deformation information. Moreover, a single color pattern lacked robustness in optical flow estimation due to the brightness conservation hypothesis. When using a single color pattern, the similarity of pixels made the optical flow estimation confusing. In order to overcome the limitations mentioned above, researchers have proposed various types of marker layers. [19] added high-density internal markers, which enabled dense
optical flow tracking for estimating shear-induced membrane displacement. [6] replaced the sparse black dot markers with a novel random color pattern, which achieved more accurate and higher resolution two-dimensional (2D) deformation estimation.
In this work, our purpose is to propose a pixel-level optical flow method with high accuracy and robustness. Our method takes advantage of the powerful tools of deep learning, and we hope it can be helpful in deformation estimation and other downstream tasks in vision-based tactile sensors.
## 3 Method
### Network Architecture
Fig. 1 shows the framework of GelFlow. First, the encoder module of PWC-Net [20] is adopted to extract multi-scale features from the input image pair \(I_{1}\) and \(I_{2}\), denoted as \(F_{1}^{s}(I_{1})\) and \(F_{2}^{s}(I_{2})\), where the superscript \(s\) represents the \(s\)th scale of the pyramid. In the Optical Flow Estimation Module, apart from basic operators of PWC-Net such as the Cost Volume Operator, Coarse Flow Estimator and Refined Flow Estimator, we add a Local Flow Fusion Module (LFFM) for better estimation of gel-like material deformation. The output flow at the current pyramid scale \(V_{1\to 2}^{s}\) is upsampled by a factor of 2 using bilinear interpolation and then used as the initial flow for the subsequent scale \(V_{1\to 2}^{s-1}\). Following the coarse-to-fine strategy, the features of the second image \(F_{2}^{s}(I_{2})\) are warped [21] using the output flow (the result is denoted by \(\tilde{F}_{2}^{s}(I_{2})\)) to reduce the feature distance to the extracted features of the first image \(F_{1}^{s}(I_{1})\), enabling better handling of large-displacement motion. Note that the input flow at the top scale is set to \(\mathbf{0}\), and the final output flow of GelFlow \(V_{1\to 2}^{0}\) is obtained by bilinear upsampling of the output flow by a factor of 4 at the bottom scale. The multi-scale strategy allows for the extraction of richer feature information by convolving the input image at different scales, improving the robustness of optical flow estimation.
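The warping operator used in this coarse-to-fine step is standard backward (bilinear) warping; a PyTorch sketch is given below (our illustrative version, assuming PyTorch >= 1.10 for the `indexing` argument of `torch.meshgrid`):

```python
import torch
import torch.nn.functional as F

def backward_warp(feat, flow):
    """Warp second-image features toward the first image with the current
    flow estimate, via bilinear sampling at flow-displaced coordinates."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0)   # (1, 2, h, w)
    grid = base + flow                                  # sampling coordinates
    gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0         # normalise to [-1, 1]
    gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

feat = torch.randn(1, 16, 32, 40)        # mock feature map F_2^s(I_2)
flow = torch.zeros(1, 2, 32, 40)         # zero flow: warping is the identity
assert torch.allclose(backward_warp(feat, flow), feat, atol=1e-5)
```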
### Local Flow Fusion Module
Since the cross-linked network structure of the gel material causes its deformation to be smooth overall [22], the generated optical flow field exhibits a blurred effect. Taking advantage of this property, we designed a local optical flow fusion module; Fig. 2 shows the implementation details of this module. Context features \(C^{s}\) extracted by PWC-Net at each scale are utilized to construct the weight matrix. At each position, the dot product between the feature vector and the feature vectors of its neighboring positions (the number of neighboring positions is determined by the fusion range, usually \(3\times 3\) or \(5\times 5\)) is computed to obtain similarities. The results are then normalized using a softmax function. The weight matrix enables the flow to consider not only the position itself but also its surrounding area, thereby achieving the blurred effect. A sketch of this computation is given below.
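A PyTorch sketch of this fusion, assuming a 2-channel flow map and a \(k\times k\) fusion range (our illustrative reconstruction, not the released code):

```python
import torch
import torch.nn.functional as F

def local_flow_fusion(flow, context, k=3):
    """Local Flow Fusion sketch: at every position, dot products between the
    context feature and its kxk neighbours give softmax weights that are
    used to average the neighbouring flow vectors."""
    b, c, h, w = context.shape
    pad = k // 2
    ctx_patches = F.unfold(context, k, padding=pad)           # (b, c*k*k, h*w)
    ctx_patches = ctx_patches.view(b, c, k * k, h * w)
    center = context.view(b, c, 1, h * w)
    weights = (ctx_patches * center).sum(dim=1)               # similarities
    weights = torch.softmax(weights, dim=1)                   # (b, k*k, h*w)
    flow_patches = F.unfold(flow, k, padding=pad).view(b, 2, k * k, h * w)
    fused = (flow_patches * weights.unsqueeze(1)).sum(dim=2)  # weighted fusion
    return fused.view(b, 2, h, w)

fused = local_flow_fusion(torch.randn(1, 2, 32, 40), torch.randn(1, 64, 32, 40))
```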
### Helmholtz Velocity Decomposition Constraint
The deformation of gel materials is complex due to their elastic properties. Based on the Helmholtz velocity decomposition theorem, compressible motion can be decomposed into four components: translational motion, linear deformation motion, shear deformation motion, and rotational motion, given by
\[\mathbf{u}(\mathbf{x}+\delta\mathbf{x})=\mathbf{u}(\mathbf{x})+\mathrm{X}\delta \mathbf{x}+\Theta\delta\mathbf{x}+\mathrm{Z}\delta\mathbf{x}, \tag{1}\]
\[\mathrm{X}=\left[\begin{array}{cc}\varepsilon_{xx}&0\\ 0&\varepsilon_{yy}\end{array}\right],\Theta=\left[\begin{array}{cc}0& \varepsilon_{xy}\\ \varepsilon_{xy}&0\end{array}\right],\mathrm{Z}=\left[\begin{array}{cc}0&- \omega\\ \omega&0\end{array}\right] \tag{2}\]
where \(\mathrm{X},\Theta\) and \(\mathrm{Z}\) denote the linear distortion rate tensor, shear distortion rate tensor, and rotation tensor, respectively; \(\varepsilon_{xx}=\partial u/\partial x\) and \(\varepsilon_{yy}=\partial v/\partial y\) are the linear distortion rates in the \(x\) and \(y\) directions, respectively; \(\varepsilon_{xy}=(\partial u/\partial y+\partial v/\partial x)/2\) is the shear distortion rate; \(\omega=(\partial v/\partial x-\partial u/\partial y)/2\) is the rotational angular rate. By decomposing the flow, we can impose more refined constraints on each component, achieving high-precision flow estimation. Eq. 1 can be further transformed as
\[\frac{\mathbf{u}(\mathbf{x}+\delta\mathbf{x})-\mathbf{u}(\mathbf{x})}{\delta \mathbf{x}}=\mathrm{X}+\Theta+\mathrm{Z}. \tag{3}\]
Thus, the values of the linear distortion rate tensor \(\mathrm{X}\), the shear distortion tensor \(\Theta\), and the rotational tensor \(\mathrm{Z}\) are constrained to satisfy the small-motion assumption, i.e.
\[\left\|\mathrm{vec(X)}\right\|_{1}+\lambda_{\Theta}\left\|\mathrm{vec(} \Theta)\right\|_{1}+\lambda_{\mathrm{Z}}\left\|\mathrm{vec(Z)}\right\|_{1}, \tag{4}\]
Figure 1: GelFlow, the proposed architecture for deformation measurement of gel-like materials, using a coarse-to-fine strategy, a local flow fusion module, and self-supervised losses.
where,
\[\left\|\mathrm{vec(X)}\right\|_{1} =\left|\varepsilon_{xx}\right|+\left|\varepsilon_{yy}\right|, \tag{5}\] \[\left\|\mathrm{vec(\Theta)}\right\|_{1} =2|\varepsilon_{xy}|,\] (6) \[\left\|\mathrm{vec(Z)}\right\|_{1} =2|\omega|, \tag{7}\]
\(\lambda_{\Theta}\) and \(\lambda_{\mathrm{Z}}\) are coefficients of the distortion and rotation tensors, respectively, whose values affect the smoothness of the optical flow; the function \(\mathrm{vec(\cdot)}\) is used to convert the input into a vector representation. With the help of the Helmholtz velocity decomposition theorem, compressible motion can be better estimated [23].
Furthermore, the optical flow estimated at image edges is more precise due to the larger gradients, while the deformation of the gel is similar between edges and other flat areas. Thus, we adopt an anti-edge-aware weight for the motion smoothness term, and the final decomposition loss is written as
\[\mathcal{L}_{dc}^{s}=\left(1-e^{-\beta\left\|\nabla I_{1}^{s}\right\|_{1}} \right)\left(\left\|\mathrm{vec(X)}\right\|_{1}+\lambda_{\Theta}\left\| \mathrm{vec(\Theta)}\right\|_{1}+\lambda_{\mathrm{Z}}\left\|\mathrm{vec(Z)} \right\|_{1}\right) \tag{8}\]
In Eq. 8, when an edge is detected (indicated by a larger value in \(\left\|\nabla I_{1}^{s}\right\|_{1}\)), the weight \(1-e^{-\beta\left\|\nabla I_{1}^{s}\right\|_{1}}\) increases. Consequently, the remaining term in \(\mathcal{L}_{dc}^{s}\) must decrease during the minimization process, which enhances the smoothness between the detected edge and the surrounding area.
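A sketch of Eq. 8 with finite-difference approximations of the flow gradients (an illustrative reconstruction; the paper does not specify its discretization):

```python
import torch

def grad_xy(t):
    """Forward differences along x and y, cropped to a common grid."""
    gx = (t[..., :, 1:] - t[..., :, :-1])[..., :-1, :]
    gy = (t[..., 1:, :] - t[..., :-1, :])[..., :, :-1]
    return gx, gy

def decomposition_loss(flow, img, beta=10.0, lam_theta=0.01, lam_z=0.01):
    """Sketch of Eq. 8: L1 penalties on the linear/shear distortion rates
    and the rotation rate, under the anti-edge-aware weight."""
    u, v = flow[:, 0:1], flow[:, 1:2]
    du_dx, du_dy = grad_xy(u)
    dv_dx, dv_dy = grad_xy(v)
    e_xx, e_yy = du_dx, dv_dy                 # linear distortion rates
    e_xy = 0.5 * (du_dy + dv_dx)              # shear distortion rate
    omega = 0.5 * (dv_dx - du_dy)             # rotational angular rate
    gx, gy = grad_xy(img.mean(dim=1, keepdim=True))
    weight = 1.0 - torch.exp(-beta * (gx.abs() + gy.abs()))
    term = (e_xx.abs() + e_yy.abs()
            + lam_theta * 2.0 * e_xy.abs() + lam_z * 2.0 * omega.abs())
    return (weight * term).mean()

loss = decomposition_loss(torch.randn(2, 2, 64, 64), torch.rand(2, 3, 64, 64))
```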
Figure 2: Structure of the Local Flow Fusion Module. The dot product between the feature vector at each position and the feature vectors in its surrounding area is computed. All of the results within the local range are then passed through a softmax function and used as weights for a weighted fusion with the estimated optical flow. This process results in a locally smooth optical flow field.
### Elastic Deformation Constraint
To further constrain the motion change during the elastic deformation of the gel materials, we propose a novel regularization term named the deformation loss term. As shown in Fig. 3, there are two typical deformations in the motion of gel materials, i.e., shrinking and stretching. To enforce spatial consistency in gel motion, we incorporate a constraint that regulates the change in area between adjacent pixels before and after deformation. First, the pixel-level area change ratio is estimated between the input image pairs. Then, the estimated motion is smoothed over the entire gel material by constraining the gradient of this ratio. Different from [24], we calculate the deformation ratio separately for the \(x\) and \(y\) directions:
\[(x^{{}^{\prime}}-x^{{}^{\prime}}_{c}) =\mathcal{R}_{x}(x-x_{c}),\quad x\in\mathcal{N}_{3\times 3}(x_{c}), \tag{9}\] \[(y^{{}^{\prime}}-y^{{}^{\prime}}_{c}) =\mathcal{R}_{y}(y-y_{c}),\quad y\in\mathcal{N}_{3\times 3}(y_{c}), \tag{10}\]
where \(x^{\prime}\) and \(y^{\prime}\) denote the positions \(x\) and \(y\) warped by the optical flow \(V_{1\to 2}\); the subscript \(c\) represents the center of the local window; \(\mathcal{N}_{3\times 3}\) represents the local window size of \(3\times 3\). The final deformation ratio is obtained by multiplying the two ratios together:
\[\mathcal{R}=\mathcal{R}_{x}\mathcal{R}_{y}. \tag{11}\]
Finally, the combined anti-edge-aware weight can be utilized to define the deformation loss as follows:
\[\mathcal{L}^{s}_{df}=(1-e^{-\beta\|\nabla I^{s}_{1}\|_{1}})\| \nabla\mathcal{R}\|_{1}. \tag{12}\]
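A sketch of Eqs. 9-12, estimating \(\mathcal{R}_{x}\) and \(\mathcal{R}_{y}\) with central differences over the \(3\times 3\) window (one possible discretization; the anti-edge-aware weight of Eq. 12 is omitted here for brevity):

```python
import torch

def deformation_loss(flow):
    """Sketch of Eqs. 9-12: area-change ratio R = R_x * R_y computed from
    the warped coordinate grid, penalising the spatial gradient of R."""
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=flow.dtype),
                            torch.arange(w, dtype=flow.dtype), indexing="ij")
    xp = xs + flow[:, 0]                      # warped x-coordinates x'
    yp = ys + flow[:, 1]                      # warped y-coordinates y'
    # Central differences over the 3x3 window: (x'(x+1) - x'(x-1)) / 2, etc.
    r_x = 0.5 * (xp[..., :, 2:] - xp[..., :, :-2])[..., 1:-1, :]
    r_y = 0.5 * (yp[..., 2:, :] - yp[..., :-2, :])[..., :, 1:-1]
    ratio = r_x * r_y                         # Eq. 11
    gx = (ratio[..., :, 1:] - ratio[..., :, :-1]).abs()
    gy = (ratio[..., 1:, :] - ratio[..., :-1, :]).abs()
    return gx.mean() + gy.mean()

print(deformation_loss(torch.zeros(1, 2, 32, 32)))  # zero flow -> R = 1 -> loss 0
```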
### Loss Function
The widely used photometric loss function in optical flow tasks is adopted for robust flow estimation, which takes the form:
\[\mathcal{L}^{s}_{ph}=\alpha\frac{1-\text{SSIM}(\tilde{I}^{s}_{1}, I^{s}_{1})}{2}+(1-\alpha)\Big{\|}\tilde{I}^{s}_{1}-I^{s}_{1}\Big{\|}_{1}, \tag{13}\]
Figure 3: Illustration of two typical changes of the gel deformation, with the red vectors indicating the direction of markers’ movement.
where SSIM denotes the structural similarity index; \(\alpha\) represents the balance between SSIM and \(L_{1}\) distance; \(\tilde{I}_{1}^{s}\) indicates the warped image \(I_{2}^{s}\) using the optical flow \(V_{1\to 2}^{s}\) at scale \(s\). The photometric loss, combined with the proposed decomposition loss and deformation loss, constructs the loss function at each scale. The multi-scale loss is defined as the weighted sum of the losses at each scale, denoted by:
\[\mathcal{L}=\sum_{s=0}^{l-2}\lambda_{s}\mathcal{L}_{self}^{s}= \sum_{s=0}^{l-2}\lambda_{s}(\mathcal{L}_{ph}^{s}+\lambda_{dc} \mathcal{L}_{dc}^{s}+\lambda_{df}\mathcal{L}_{df}^{s}), \tag{14}\]
where \(l\) is the number of total scales created by PWC-Net; \(\lambda_{dc}\) and \(\lambda_{df}\) are coefficients that control the balance between each loss; \(\lambda_{s}\) are parameters that weigh the importance of each scale.
## 4 Experimental Analysis
### Experiment Setup
The proposed self-supervised learning method does not require labeled training data. Therefore, we extract 1327 image pairs with a resolution of \(480\times 640\) pixels from videos captured by [25]. We reserve 8 image pairs with the typical motion of gel deformation (large displacement, shrinking and stretching) for validation and comparison with other optical flow methods. We train the network for 200 epochs on the training dataset, with a batch size of 4 image pairs per epoch. Subsequently, we fine-tune the network for an additional 800 epochs using the validation dataset. The number of pyramid scales, \(l\), is set to 8. The fusion region size of the LFFM is set to \(3\times 3\). In the photometric loss term, \(\alpha\) is set to 0.85. In the decomposition loss term, \(\beta\) is set to 10, and both \(\lambda_{\Theta}\) and \(\lambda_{\mathrm{Z}}\) are set to 0.01. In the deformation loss term, \(\beta\) is set to 10. In the multi-scale loss term, \(\lambda_{s}\) is set to 1.0 for each scale, while \(\lambda_{dc}\) and \(\lambda_{df}\) are set to 75 and 0.01, respectively. The images are initially resized to a resolution of \(512\times 640\) pixels before being fed into the network. The output optical flows are then resized to the original resolution of the images for validation.
### Evaluation Metrics
Since there are no ground truth labels in the dataset, we need to warp the second images into pseudo-first images using the estimated optical flows and compare the similarity between pseudo-first and authentic-first images. The higher the similarity, the better the estimation. Two widely used metrics for evaluating image similarity are PSNR (Peak Signal-to-Noise Ratio) and SSIM. They are defined as follows:
\[\mathrm{PSNR}(I,\tilde{I}) =10\times\log_{10}\left(\frac{\left(2^{n}-1\right)^{2}}{\mathrm{ MSE}(I,\tilde{I})}\right), \tag{15}\] \[\mathrm{SSIM}(I,\tilde{I}) =\frac{\left(2\mu_{x}\mu_{y}+c_{1}\right)\left(2\sigma_{xy}+c_{2} \right)}{\left(\mu_{x}^{2}+\mu_{y}^{2}+c_{1}\right)\left(\sigma_{x}^{2}+\sigma _{y}^{2}+c_{2}\right)}, \tag{16}\]
where \(n\) represents the bit depth of the pixels; \(\text{MSE}(I,\tilde{I})\) is the mean square error between the input image \(I\) and the warped image \(\tilde{I}\); \(\mu_{x}\) and \(\mu_{y}\) are the means of \(I\) and \(\tilde{I}\), respectively; \(\sigma_{x}^{2}\) and \(\sigma_{y}^{2}\) are the variances of \(I\) and \(\tilde{I}\), and \(\sigma_{xy}\) represents their covariance; \(c_{1}\) and \(c_{2}\) are constants used to maintain stability. Therefore, we will use these two metrics for comparisons and evaluations in the following subsections.
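For instance, PSNR as defined in Eq. 15 can be computed as follows (a straightforward NumPy sketch):

```python
import numpy as np

def psnr(img, warped, n_bits=8):
    """PSNR (Eq. 15) between the first image and the backward-warped second
    image; higher values mean the flow explains the motion better."""
    mse = np.mean((img.astype(np.float64) - warped.astype(np.float64)) ** 2)
    return 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse)

img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(round(psnr(img, img + 1), 2))  # one-level error everywhere: ~48.13 dB
```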
### Comparisons with Classical Optical Flow Methods
We compared traditional dense optical flow methods from OpenCV (Farneback, DIS, TV-L1) and deep learning-based optical flow methods (RAFT [26], ARFlow [27], and a self-supervised method, named SelfFlow, that uses the photometric loss mentioned before together with a first-order smoothness loss [28]). The comparison results are presented in Table 1. Notably, the self-supervised methods SelfFlow and GelFlow were fine-tuned using the strategy described in Section 4.1.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline \multicolumn{2}{c}{Method} & Metric & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 & \#8 \\ \hline \multirow{8}{*}{Farneback} & \multirow{2}{*}{Farneback} & PSNR & 38.48 & 35.02 & 32.57 & 32.03 & 31.73 & 32.31 & 33.70 & 32.20 \\ \cline{3-11} & & SSIM & 0.96 & 0.91 & 0.91 & 0.89 & 0.89 & 0.91 & 0.91 & 0.90 \\ \cline{2-11} & & PSNR & 39.64 & 35.12 & 32.89 & 33.23 & 32.63 & 32.92 & 34.93 & 32.39 \\ \cline{2-11} & & SSIM & 0.97 & 0.92 & 0.92 & 0.92 & 0.92 & 0.93 & 0.94 & 0.91 \\ \cline{2-11} & & PSNR & 39.75 & 35.26 & 32.89 & 33.28 & 32.60 & 32.97 & 35.11 & 32.49 \\ \cline{2-11} & & SSIM & 0.97 & 0.92 & 0.92 & 0.92 & 0.92 & 0.93 & 0.94 & 0.91 \\ \cline{2-11} & & PSNR & **40.25** & 35.41 & 33.25 & 33.86 & 33.27 & 33.24 & 35.41 & 32.73 \\ \cline{2-11} & & SSIM & **0.97** & **0.92** & **0.92** & **0.93** & **0.93** & **0.93** & **0.94** & **0.91** \\ \cline{2-11} & & PSNR & 39.98 & **35.44** & **33.31** & **33.95** & **33.68** & **33.42** & **35.64** & **32.88** \\ \cline{2-11} & & SSIM & 0.97 & 0.92 & 0.92 & 0.92 & 0.93 & 0.93 & 0.94 & 0.91 \\ \hline \multirow{8}{*}{Farneback} & \multirow{2}{*}{RAFT} & PSNR & 37.73 & 34.83 & 31.49 & 31.11 & 31.17 & 32.44 & 34.78 & 31.57 \\ \cline{2-11} & & SSIM & 0.97 & 0.91 & 0.91 & 0.91 & 0.91 & 0.92 & 0.94 & 0.90 \\ \cline{2-11} & & PSNR & 39.76 & 35.22 & 32.88 & 33.44 & 32.54 & 33.05 & 35.30 & 32.46 \\ \cline{2-11} & & SSIM & 0.97 & 0.92 & 0.92 & 0.92 & 0.92 & 0.93 & 0.94 & 0.91 \\ \cline{2-11} & & SSIM & 0.97 & 0.92 & 0.92 & 0.93 & 0.93 & 0.94 & 0.91 \\ \cline{2-11} & & SSIM & 0.97 & 0.92 & 0.92 & 0.93 & 0.93 & 0.94 & 0.91 \\ \cline{2-11} & & SSIM & 0.97 & 0.93 & 0.92 & 0.93 & 0.93 & 0.94 & 0.95 & 0.93 \\ \cline{2-11} & & \multirow{2}{*}{GelFlow} & PSNR & 40.57 & 35.90 & 33.43 & 34.34 & 33.65 & 33.96 & 35.94 & 33.11 \\ \cline{2-11} & & SSIM & 0.98 & 0.93 & 0.93 & 0.93 & 0.93 & 0.94 & 0.95 & 0.93 \\ \hline \multirow{8}{*}{Farneback} & \multirow{2}{*}{GelFlow+ft} & PSNR & **40.76** & **36.00** & **33.66** & **34.71** & **34.25** & **35.00** & **36.13** & **33.22** \\ \cline{2-11} & & SSIM & **0.98** & **0.93** & **0.93** & **0.94** & **0.94** & **0.95** & **0.95** & **0.93** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with traditional and deep learning-based optical flow methods using the validation dataset. ’#’ represents an image pair. The best and the second-best values within each category are marked as bold and underlined, respectively. The best value among all the methods is marked in red. ’ft’ denotes fine-tuning the model on the validation dataset.
The results on the validation dataset show that TV-L1 and DIS (medium) performed similarly and outperformed the other traditional methods. However, the solving strategy of TV-L1 is time-consuming, making it much slower than the optimized DIS method. Consequently, the DIS method is widely used as the dense optical flow estimator in current vision-based tactile sensors.
It is worth mentioning that we directly utilized the pre-trained models of RAFT and ARFlow, testing them on the vision-based tactile dataset. Therefore, their performance may not be satisfactory. On the other hand, SelfFlow and GelFlow were trained on the dataset and further fine-tuned. As a result, they outperformed the existing traditional methods. The excellent performance can be attributed to the strong learning ability of convolutional neural networks and the well-designed loss functions guiding the network output towards the ground truth. Among all the candidate methods, GelFlow achieved the best performance with its proposed flow fusion operation and motion decomposition and deformation loss, which guide the parameters of the network towards global optimization. In conclusion, the comparisons indicate that the proposed GelFlow method is particularly adept at handling gel materials' deformation.
## 5 Conclusion
In this study, we propose the GelFlow method, which incorporates several novel components to address the challenges posed by gel deformation. Firstly, GelFlow constructs a multi-scale feature pyramid to extract hidden features from the input image pairs and handle large displacements effectively. A local flow fusion module fuses the flow using neighboring flows with appropriate weights. This fusion process achieves a blurred effect, which is crucial for capturing the deformations occurring in gel materials. We propose two novel loss functions to better handle the intricate gel deformations: the velocity decomposition loss and the elastic deformation loss. A photometric loss combined with the proposed two novel motion smoothness losses is used to construct the multi-scale loss to better guide the network from global optimization. Finally, the network is trained in a self-supervised manner, and the comparison result with other optical flow methods indicates that the GelFlow method performs the best due to the superior capacity of the convolutional neural networks to extract valuable features and the strong ability of global optimization.
|
2309.08718 | On Languages Generated by Signed Grammars | We consider languages defined by signed grammars which are similar to
context-free grammars except productions with signs associated to them are
allowed. As a consequence, the words generated also have signs. We use the
structure of the formal series of yields of all derivation trees over such a
grammar as a method of specifying a formal language and study properties of the
resulting family of languages. | Ömer Eğecioğlu, Benedek Nagy | 2023-09-15T19:13:37Z | http://arxiv.org/abs/2309.08718v1 | # On Languages Generated by Signed Grammars
###### Abstract
We consider languages defined by signed grammars which are similar to context-free grammars except productions with signs associated to them are allowed. As a consequence, the words generated also have signs. We use the structure of the formal series of yields of all derivation trees over such a grammar as a method of specifying a formal language and study properties of the resulting family of languages.
## 1 Introduction
We consider properties of signed grammars, which are grammars obtained from context-free grammars (CFGs) by allowing right hand sides of productions to have negative signs in front. The concept of generation for such grammars is somewhat different from that of context-free grammars. A signed grammar is said to generate a language \(\mathcal{L}\) if the formal sum of the yields over all derivation trees over the grammar corresponds to the list of words in \(\mathcal{L}\). For a signed grammar, the yields of derivation trees may have negative signs attached to them, but the requirement is that when the arithmetic operations are carried out in the formal sum, the only remaining words are those of \(\mathcal{L}\), each appearing with multiplicity one.
The structure of context-free languages (CFLs) under a full commutation relation defined on the terminal alphabet is the central principle behind Parikh's theorem [25]. In partial commutation, the order of letters of some pairs of the terminal alphabet is immaterial, that is, if they appear consecutively, the word obtained by swapping their order is equivalent to the original one. These equivalence classes are also called traces and studied intensively in connection to parallel processes [19, 13, 22, 5]. Our motivation for this work is languages obtained by picking representatives of the equivalence classes in \(\Sigma^{*}\) under a partial commutativity relation, called Cartier-Foata languages [2]. In the description of these languages with Kleene-closure type expansions, words appear with negative signs attached to them. However such words are cancelled by those with positive signs, leaving only the sum of the words of the language. An example of this is \((a+b-ba)^{*}\) which is more familiarly denoted by the regular expression \(a^{*}b^{*}\). The interesting aspect of Cartier-Foata languages is that the words with negative signs cancel out automatically, leaving only the representative words, each appearing exactly once.
Motivated by these languages, we consider grammars which are obtained from context-free grammars by allowing signed productions, i.e., normal productions (in the role of positive productions) and productions of the form \(A\to-\alpha\) (negative productions). In this way, a derivation results in a signed word where the sign depends on the parity of the number of negative rules applied in the derivation. We consider those derivations equivalent that belong to the same derivation tree, and actually, the derivation tree itself defines the sign of the derived word. The language generated by such a grammar is obtained by taking all possible derivation trees for a given word (both its positive and negative derivations) and
requiring that the sum of the yields of all derivation trees over the grammar is simply a list of the words in a language \(\mathcal{L}\). This means that the simplified formal sum is of the form \(\sum_{w\in\mathcal{L}}w\), each word of the language appearing with multiplicity one. (Without loss of generality, in this study, we restrict ourselves to grammars having finitely many parse trees for each of the derived words.)
On one hand, the requirements in the specification of a language generated by a signed grammar may seem too restrictive. But at the same time this class of languages includes all unambiguous context-free languages and it is closed under complementation, and consequently can generate languages that are not even context-free. Therefore it is of interest to consider the interplay between the restrictions and various properties of languages generated by signed grammars.
## 2 Preliminaries
Given a language \(\mathcal{L}\) over an alphabet \(\Sigma\), we identify \(\mathcal{L}\) with the formal sum of its words denoted by \(f(\mathcal{L})\):
\[f(\mathcal{L})=\sum_{w\in\mathcal{L}}w. \tag{1}\]
The sum in (1) is also referred to as the _listing series_ of \(\mathcal{L}\). A _weighted series of \(\mathcal{L}\)_ is a formal series of the form \(\sum_{w\in\mathcal{L}}n_{w}\,w\) where \(n_{w}\) are integers. Thus a weighted series of \(\Sigma^{*}\)
\[\sum_{w\in\Sigma^{*}}n_{w}\,w\]
is the listing series of some language \(\mathcal{L}\) over \(\Sigma\) iff
\[n_{w}=\left\{\begin{array}{ll}1&\text{ if }w\in\mathcal{L}\\ 0&\text{ if }w\not\in\mathcal{L}\,.\end{array}\right. \tag{2}\]
We are allowed ordinary arithmetic operations on weighted series in a natural way. The important thing is that a weighted series is the listing series of a language \(\mathcal{L}\) iff the coefficients of the words in \(\mathcal{L}\) in the weighted series are 1, and all the others are 0. So for example over \(\Sigma=\{a,b,c\}\), the weighted series \(a+b+c+ba\) is the listing series of the finite language \(\mathcal{L}=\{a,b,c,ba\}\), whereas the weighted series \(a+b+c-ba\) does not correspond to a language over \(\Sigma\). This is because in the latter example \(n_{w}\) does not satisfy (2) for \(w=ba\). As another example, the difference of the weighted series \(2a+3b-c+ba\) and \(a+2b-2c+ba\) corresponds to the language \(\mathcal{L}=\{a,b,c\}\).
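These operations on weighted series are easy to experiment with; the following Python sketch represents a series as a word-to-coefficient map and checks condition (2) on the last example above:

```python
from collections import Counter

def subtract(f, g):
    """Difference of two weighted series, kept as a word -> coefficient map."""
    h = Counter(f)
    h.subtract(g)          # keeps zero and negative coefficients
    return h

def is_listing_series(f):
    """Condition (2): every coefficient n_w is exactly 0 or 1."""
    return all(c in (0, 1) for c in f.values())

f = Counter({"a": 2, "b": 3, "c": -1, "ba": 1})   # 2a + 3b - c + ba
g = Counter({"a": 1, "b": 2, "c": -2, "ba": 1})   # a + 2b - 2c + ba
d = subtract(f, g)
print(is_listing_series(d), sorted(w for w, c in d.items() if c == 1))
# True ['a', 'b', 'c']  -> the language {a, b, c}
```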
### CFGs and degree of ambiguity
Next we look at the usual CFGs \(G=(V,\Sigma,P,S)\). Here the start symbol is \(S\in V\). Let \(T\) be a parse (derivation) tree over \(G\) with root label \(S\) and terminal letters as labels of the leaves of \(T\). Let \(Y(T)\in\Sigma^{*}\) be the _yield_ of \(T\). Then the language generated by \(G\) is
\[\mathcal{L}(G)=\{Y(T)\mid T\text{ is a parse tree over }G\}\,.\]
This is equivalent to \(\mathcal{L}(G)=\{w\in\Sigma^{*}\mid S\stackrel{*}{\Rightarrow}w\}\). For a CFG \(G\), we can define the formal weighted sum
\[f(G)=\sum_{T\in\mathcal{T}_{G}}Y(T)=\sum_{w\in\Sigma^{*}}n_{w}w \tag{3}\]
where \(\mathcal{T}_{G}\) denotes all parse trees over \(G\). Various notions of ambiguity for CFLs can be expressed through the coefficients \(n_{w}\) that appear in (3). Rewriting some of the definitions in Harrison [8, pp. 240-242] in terms of these coefficients, we have
1. Given \(k\geq 1\), \(G\) is _ambiguous of degree \(k\)_ if \(n_{w}\leq k\) for all \(w\in\mathcal{L}(G)\).
2. \(\mathcal{L}\) is _inherently ambiguous of degree \(k\geq 2\)_ if \(\mathcal{L}\) cannot be generated by any grammar that is ambiguous of degree less than \(k\) but can be generated by a grammar that is ambiguous of degree \(k\). In other words the degree of ambiguity of a CFL is the least upper bound for the number of derivation trees which a word in the language can have.
3. \(\mathcal{L}\) is _finitely inherently ambiguous_ if there is some \(k\) and some \(G\) for \(\mathcal{L}\) so that \(G\) is inherently ambiguous of degree \(k\).
4. A CFG \(G\) is _infinitely ambiguous_ if for each \(i\geq 1\), there exists a word in \(\mathcal{L}(G)\) which has at least \(i\) parse trees. A language \(L\) is _infinitely inherently ambiguous_ if every grammar generating \(L\) is infinitely ambiguous.
The CFL \(\mathcal{A}=\{a^{i}b^{j}c^{k}\mid i=j\text{ or }j=k\}\) is inherently ambiguous of degree 2 [8, p. 240], \(\mathcal{A}^{m}\) is inherently ambiguous of degree \(2^{m}\)[8, Theorem 7.3.1], and \(\mathcal{A}^{*}\) is infinitely inherently ambiguous [8, Theorem 7.3.3]. Another interesting CFL which is infinitely inherently ambiguous is Crestin's language [3] of double palindromes over a binary alphabet \(\{w_{1}w_{2}\mid w_{1},w_{2}\in\{a,b\}^{*},w_{1}=w_{1}^{R},w_{2}=w_{2}^{R}\}\). Furthermore, for every \(k\geq 1\), there exist inherently ambiguous CFLs of degree \(k\). The behavior of the sequence \(n_{w}\) over all CFGs for a language was studied by Wich [25, 26].
## 3 Signed grammars
We consider _signed grammars_\(G\) which are like CFGs but with a sign associated with each production; that is, apart from the usual (say positive) productions, we allow productions of the form \(A\to-\alpha\). In the derivation relation we use the signs in the usual multiplicative manner: we start the derivation from the sentence symbol (with a \(+\) sign which, as usual, may be omitted as the default). The derivation steps, as rewriting steps, occur as in a CFG; the only extension is that the sign must also be tracked. When a positive production is applied to a sentential form, its sign does not change, while whenever a negative production is applied, the derivation step switches the sign of the sentential form. Thus, in this case the yield of a parse tree of \(G\) is a word over \(\Sigma\) with a \(\pm\) sign attached to it. Furthermore, the sign of a derived word depends only on the parity of the number of negative productions used during its derivation. Therefore, different derivation trees for the same word may lead to the word with different signs attached to it. We note that, in fact, any CFG is a signed grammar. For a signed grammar \(G\), let \(f(G)\) be defined as in (3), where again \(\mathcal{T}_{G}\) denotes all parse trees over \(G\). Without loss of generality, we may assume that in the grammar \(G\) there are only finitely many parse trees for any of the words generated by the grammar.
**Definition 1**: _We say that a signed grammar \(G\) generates a language \(\mathcal{L}\) iff the weighted series \(f(G)\) in (3) is the listing series of \(\mathcal{L}\), i.e. \(f(G)=f(\mathcal{L})\)._
### Examples of languages generated by signed grammars
**Example 1**: For the signed grammar \(G_{1}\) with start symbol \(A\) and productions \(A\to-aA\,|\,\lambda\), we have
\[f(G_{1})=\sum_{i\geq 0}a^{2i}-\sum_{i\geq 0}a^{2i+1}\,. \tag{4}\]
Therefore the signed grammar \(G\) with productions \(S\to A\,|\,B\), \(A\to-aA\,|\,\lambda\), \(B\to aaB|\,a\) generates the regular language \((aa)^{*}\). As this is our first example, we provide details of the derivations in \(G\):
* The empty word \(\lambda\) can be derived only in one way, by applying a positive production, thus it is in the language.
* By applying a negative and a positive production, \(S\Rightarrow A\Rightarrow-aA\Rightarrow-a\) yields \(-a\), and \(S\Rightarrow B\Rightarrow a\) yields \(+a\). These two are the only derivations over \(G\) for \(\pm a\). This means that the word \(a\) is not in the language.
* For the word \(aa\), the only derivation is \(S\Rightarrow A\Rightarrow-aA\Rightarrow aaA\Rightarrow aa\). Consequently \(aa\) is in the generated language.
* Finally, by induction, one can see that an even number of \(a\)-s can only be produced by starting the derivation by \(S\Rightarrow A\). Following this positive production, each usage of \(A\to-aA\) introduces a negative sign. Therefore each word of the form \(a^{2i}\) is generated once this way with a \(+\) sign. On the other hand there are two possible ways to produce a string \(a^{2i+1}\) of an odd number of \(a\)-s. One of these starts with \(A\Rightarrow-aA\) as before and produces \(-a^{2i+1}\) after an odd number of usages of \(A\to-aA\); the other one starts with \(S\Rightarrow B\) and produces \(a^{2i+1}\) after an even number of applications of \(B\to aaB\), followed by \(B\to a\). Therefore odd length words cancel each other out and are not in the language generated.
Another way to look at this is to note that for the (signed) grammar \(G_{2}\) with the start symbol \(B\) and productions \(B\to aaB|\,a\), we have
\[f(G_{2})=\sum_{i\geq 0}a^{2i+1}\,, \tag{5}\]
and the words generated by \(G\) are given by the formal sum of (4) and (5).
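The cancellation pattern in Example 1 is easy to check mechanically. Below is a minimal brute-force sketch (our own illustration, not part of the construction) that enumerates leftmost derivations of \(G\) and sums the signs per word; nonterminals are encoded as uppercase letters, and \(\lambda\) as the empty string.

```python
from collections import Counter

# Signed grammar of Example 1: S -> A | B, A -> -aA | lambda, B -> aaB | a.
# Each production carries a sign; a derivation's sign is the product of the
# signs of the productions it uses.
RULES = {
    "S": [(+1, "A"), (+1, "B")],
    "A": [(-1, "aA"), (+1, "")],
    "B": [(+1, "aaB"), (+1, "a")],
}

def signed_coefficients(start="S", max_len=8):
    """Sum the signs over all leftmost derivations of each terminal word."""
    coeff = Counter()
    stack = [(+1, start)]
    while stack:
        sign, form = stack.pop()
        nt = next((i for i, c in enumerate(form) if c.isupper()), None)
        if nt is None:            # no nonterminal left: a terminal word
            coeff[form] += sign
            continue
        for s, rhs in RULES[form[nt]]:
            new = form[:nt] + rhs + form[nt + 1:]
            if sum(c.islower() for c in new) <= max_len:
                stack.append((sign * s, new))
    return coeff

# Even-length words get coefficient 1, odd-length words cancel to 0.
print(sorted(signed_coefficients().items()))
```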
**Example 2**: The signed grammar with productions \(S\to aS|\,bS|-baS|\,\lambda\) generates the regular language denoted by the regular expression \(a^{*}b^{*}\). The first few applications of the productions give
\[\lambda;\] \[a+b-ba;\] \[a^{2}+ab-aba+ba+b^{2}-b^{2}a-ba^{2}-bab+baba;\]
in which the only immediate cancellation is of \(-ba\), though all words carrying negative signs will eventually cancel out. This is a special case of the Cartier-Foata result [2], [6, Section 8.4].
**Example 3**: Over the decimal (or the binary) alphabet we can construct an unambiguous regular grammar \(G\) that generates all nonnegative even numbers, e.g., \(S\to 9S|\,8A\,|\,7S|\,6A\,|\,5S|\,4A\,|\,3S|\,2A\,|\,1S|\,0A\) and \(A\to 9S|\,8A\,|\,7S|\,6A\,|\,5S|\,4A\,|\,3S|\,2A\,|\,1S|\,0A\,|\,\lambda\). Let, further, a regular grammar \(G^{\prime}\) generate the numbers that are divisible by \(6\) (e.g., based on the deterministic finite automaton that checks that the digit sum is divisible by \(3\) and that the last digit is even, we need nonterminals that track the digit sum mod \(3\) and treat the last digit as we did for \(G\)).
Then \(\mathcal{L}(G)\) consists of all even numbers and \(\mathcal{L}(G^{\prime})\) consists of all numbers divisible by \(6\). Now, from \(G^{\prime}\), we may make a signed grammar \(G^{\prime\prime}\) which allows us to derive every multiple of \(6\) with the sign \(-\). Then by combining the two grammars \(G\) and \(G^{\prime\prime}\), we can easily give a signed grammar that generates all even numbers that are not divisible by \(3\) (i.e., even numbers not divisible by \(6\)).
**Example 4**: Over the alphabet \(\{a,b\}\) consider the signed grammar with productions \(S\to aSa\,|\,bSb|\,a\,|\,b\). This so far generates odd length palindromes. Let us add the productions \(S\to-A,\ A\to-abAba\,|\,a\).
Then each odd-length palindrome with the letter \(b\) in the middle has exactly one derivation tree, with a \(+\) sign. There are no cancellations for these, and therefore all odd-length palindromes with \(b\) in the middle are in the language. If the middle of an odd-length palindrome \(w\) is \(a\) but not \(ababa\), then \(w\) is not in \(\cal L\), as it also has a derivation tree with a \(-\) sign. Similarly, if the middle of \(w\) is \(ababa\) but not \(ababababa\), \(w\) is in \(\cal L\). In general, if an odd-length palindrome \(w\) has \((ab)^{2k-1}a(ba)^{2k-1}\) in the middle, but it does not have \((ab)^{2k}a(ba)^{2k}\) in its middle, then it is in \(\cal L\). Here the number of derivation trees for a word with a \(+\) sign is either equal to the number of derivation trees with a \(-\) sign for the word, or it is exactly one more.
**Example 5**: For the following signed grammar
\[S_{1}\to-aA\,|\,Ba\,|\,a\] \[A\to-aA\,|\,Ba\,|\,a\] \[B\to-aB\,|\,Ba\,|\,-a\,|\,aa\]
for \(n\) odd, there are \(2^{n-1}\) parse trees for \(a^{n}\) and \(2^{n-1}-1\) parse trees for \(-a^{n}\). For \(n\) even, there are \(2^{n-1}-1\) parse trees for \(a^{n}\) and \(2^{n-1}\) parse trees for \(-a^{n}\). In other words for the above grammar
\[f(G) = \sum_{i\geq 0}2^{2i}a^{2i+1}+\sum_{i\geq 1}(2^{2i-1}-1)a^{2i}-\sum_{i\geq 0}(2^{2i}-1)a^{2i+1}-\sum_{i\geq 1}2^{2i-1}a^{2i}\] \[= \sum_{i\geq 0}(-1)^{i}a^{i+1}\,.\]
If we add the productions \(S\to S_{1}\,|\,S_{2},\ S_{2}\to aaS_{2}\,|\,aa\) then the resulting signed grammar generates the regular language \(a(aa)^{*}\). Even though the language generated is very simple, we see that signed grammars possess some interesting behavior.
## 4 Properties of languages generated by signed grammars
In this section our aim is twofold: on the one hand, we give some closure properties of the class of languages generated by our new approach; on the other hand, we give hierarchy-like results by establishing how this family of languages compares to various other classes.
We immediately observe that in the weighted sum (3) for a CFG \(G\) (i.e. a signed grammar \(G\) with no negative productions), the coefficient \(n_{w}\) is the number of parse trees for \(w\) over \(G\), in other words the degree of ambiguity of \(w\).
**Proposition 1**: _Any unambiguous CFL is generated by a signed grammar._
**Proof** An unambiguous CFL \(\cal L\) is generated by the signed grammar \(G\) where \(G\) is any unambiguous CFG for \(\cal L\). \(\bullet\)
Since the class of unambiguous CFLs contains all deterministic CFLs, \(LR(0)\) languages, regular languages, and subsets of \(w_{1}^{*}w_{2}^{*}\) [8, Theorem 7.1], all of these languages are generated by signed grammars. Further, all these classes are proper subsets of the class of languages generated by signed grammars.
Now we present a closure property.
**Proposition 2**: _Languages generated by signed grammars are closed under complementation._
**Proof** Take an unambiguous CFG for \(\Sigma^{*}\) with start symbol \(S_{1}\). If \({\cal L}\) is generated by a signed grammar with start symbol \(S_{2}\) (and no common nonterminal in the two grammars), then the productions of the two grammars together with \(S\to S_{1}\mid\ -S_{2}\) with a new start symbol \(S\) generates \(\overline{{\cal L}}\). \(\bullet\)
We continue the section by comparing our new class of languages with another well-known language class, the class of CFLs.
In 1966 Hibbard and Ullian constructed an unambiguous CFL whose complement is not a CFL [10, Theorem 2]. Recently Martynova and Okhotin constructed an unambiguous linear language whose complement is not context-free [15]. This shows that unambiguous linear CFLs are not closed under complementation while providing another proof of Hibbard and Ullian's result.
We know that languages generated by signed grammars are closed under complementation, and also every unambiguous CFL is generated by a signed grammar. A consequence of this is that signed grammars can generate languages that are not context-free.
**Proposition 3**: _There is a language generated by a signed grammar that is not context-free._
**Proof** If \({\cal L}\) is the unambiguous CFL constructed by Hibbard and Ullian, then \({\cal L}\) and therefore \(\overline{{\cal L}}\) are generated by signed grammars. But we know that \(\overline{{\cal L}}\) is not context-free. \(\bullet\)
Actually, our last proposition shows that the generative power of signed grammars is surprisingly large: the generated class contains, e.g., all deterministic and unambiguous CFLs as well as their complements. Thus, one can easily generate languages that are not in the class of CFLs.
Continuing with closure properties, recall that disjoint union is an operation that is defined only on disjoint sets which produces their union.
**Proposition 4**: _Languages generated by signed grammars are closed under disjoint union \(\uplus\)._
**Proof** Let \({\cal L}_{1}\) and \({\cal L}_{2}\) be two languages over an alphabet \(\Sigma\) such that \({\cal L}_{1}\cap{\cal L}_{2}=\emptyset\). Let \({\cal L}_{1}\) be generated by a signed grammar with start symbol \(S_{1}\) and \({\cal L}_{2}\) be generated by a signed grammar with start symbol \(S_{2}\), such that the sets of nonterminals of these two grammars are disjoint. Then the productions of the two grammars together with \(S\to S_{1}\mid S_{2}\) with a new start symbol \(S\) generates the disjoint union \({\cal L}_{1}\uplus{\cal L}_{2}\). \(\bullet\)
Now, let us define the set-theoretical operation "subset minus" (\(\ominus\)) as follows: let \(A\subseteq B\); then \(B\ominus A=B\setminus A\). This type of set-minus operation is defined only for sets where the subset condition holds.
**Proposition 5**: _Languages generated by signed grammars are closed under subset minus \(\ominus\)._
**Proof** Let \({\cal L}_{1}\subseteq{\cal L}_{2}\) be two languages over a given alphabet \(\Sigma\). Take a signed grammar for \({\cal L}_{1}\) with start symbol \(S_{1}\). If \({\cal L}_{2}\) is generated by a signed grammar with start symbol \(S_{2}\) (with no common nonterminals of the two grammars), then the productions of the two grammars together with \(S\to S_{2}\mid\ -S_{1}\), where \(S\) is a new start symbol, generate the language \({\cal L}_{2}\ominus{\cal L}_{1}\). \(\bullet\)
Let \({\cal L}_{1},{\cal L}_{2}\subseteq\Sigma^{*}\) be two languages and \(\$\not\in\Sigma\). The \(\$\)-concatenation of \({\cal L}_{1}\) and \({\cal L}_{2}\) is the language \({\cal L}_{1}\$\,{\cal L}_{2}\) over the alphabet \(\Sigma\cup\{\$\}\).
**Proposition 6**: _Languages generated by signed grammars are closed under \(\$\)-concatenation._
**Proof** The language \({\cal L}_{1}\$\) has the prefix property (i.e. it is prefix-free) due to the special role of the marker \(\$\). Let \(G_{1}\) and \(G_{3}\) be signed grammars with disjoint variables and start symbols \(S_{1}\) and \(S_{3}\) that generate \({\cal L}_{1}\) and \({\cal L}_{2}\), respectively. Consider also the signed grammar \(G_{2}\) with the single production
\(S_{2}\to\$\). Then the signed grammar which has all the productions of \(G_{1},G_{2},G_{3}\) together with the production \(S\to S_{1}S_{2}S_{3}\), where \(S\) is a new start symbol, generates the language \(\mathcal{L}_{1}\$\,\mathcal{L}_{2}\). The proof follows by observing that for \(u,u^{\prime}\in\mathcal{L}_{1}\) and \(v,v^{\prime}\in\mathcal{L}_{2}\), \(u\$v=u^{\prime}\$v^{\prime}\) iff \(u=u^{\prime}\) and \(v=v^{\prime}\), so that each word that appears in the expansion of
\[\left(\sum_{w\in\mathcal{L}_{1}}w\right)\$\left(\sum_{w\in\mathcal{L}_{2}}w \right)\]
has coefficient \(1\). \(\bullet\)
In a similar manner, it can also be seen that a similar statement holds for languages over disjoint alphabets, i.e., the class of languages generated by signed grammars is closed under "disjoint concatenation" \(\square\).
**Proposition 7**: _Let \(\mathcal{L}_{1}\subseteq\Sigma_{1}^{*}\) and \(\mathcal{L}_{2}\subseteq\Sigma_{2}^{*}\) be two languages that are generated by signed grammars, where \(\Sigma_{1}\cap\Sigma_{2}=\emptyset\). Then, the language \(\mathcal{L}_{1}\square\mathcal{L}_{2}=\mathcal{L}_{1}\mathcal{L}_{2}\) can be generated by a signed grammar._
In the following proposition, \(f(\mathcal{L})\) and \(f(G)\) are as defined in (1) and (3).
**Proposition 8**: _Suppose \(\mathcal{L}\) is generated by a signed grammar. Then there are CFGs \(G_{1}\) and \(G_{2}\) such that \(f(\mathcal{L})=f(G_{1})-f(G_{2})\)._
**Proof** Given a signed grammar over \(\Sigma\), add an extra letter \(t\) to \(\Sigma\) and replace all productions of the form \(A\to-\alpha\) by \(A\to t\alpha\). The words generated by this CFG over \(\Sigma\cup\{t\}\) with an even number of occurrences of \(t\) form a CFL, since they are the intersection of a CFL with a regular language, namely the language of all words over \(\Sigma\cup\{t\}\) with an even number of occurrences of \(t\). Similarly, the words generated with an odd number of occurrences of \(t\) form a CFL. We can then take the homomorphic images of these two languages under the homomorphism replacing \(t\) by \(\lambda\) and obtain two CFLs generated by CFGs \(G_{1}\) and \(G_{2}\). The weighted series \(f(G)\) is then the difference of two weighted series
\[f(G)=f(G_{1})-f(G_{2})=\sum_{w\in\Sigma^{*}}n_{w}w\ -\ \sum_{w\in\Sigma^{*}}n_{w}^{ \prime}w. \tag{6}\]
In (6), the coefficients \(n_{w}\) and \(n_{w}^{\prime}\) are nonnegative integers for all \(w\in\Sigma^{*}\) as they count the number of derivation trees for \(w\) over \(G_{1}\) and \(G_{2}\), respectively. \(\bullet\)
**Remark 1**: _In Proposition 8, \(f(G_{1})-f(G_{2})\) is the listing series of \(\mathcal{L}\), and therefore \(n_{w}-n_{w}^{\prime}=1\) or \(n_{w}-n_{w}^{\prime}=0\) for all \(w\in\Sigma^{*}\). In the first case \(w\in\mathcal{L}\), and in the second \(w\not\in\mathcal{L}\). Note that these conditions do not imply that \(\mathcal{L}=\mathcal{L}(G_{1})\setminus\mathcal{L}(G_{2})\)._
## 5 Partial commutativity
Addition of commutativity relations to CFGs was considered in [19]. Here we consider partial commutativity defined on \(\Sigma^{*}\) where \(\Sigma=\{x_{1},x_{2},\ldots,x_{m}\}\). Given an \(m\times m\) symmetric \(\{0,1\}\)-matrix \(A=[a_{i,j}]\) with 1s down the diagonal, a pair of letters \(x_{i},x_{j}\) is a commuting pair iff \(a_{i,j}=1\). This defines an equivalence relation and partitions \(\Sigma^{*}\) into equivalence classes, also known as traces. Thinking about the elements of the alphabet as processes and traces as their schedulings, commuting processes are considered independent of each other. In this way the theory of traces has been intensively studied in connection with parallel processes [11, 12]. A (linearization of a) trace language is a union of some of these equivalence classes. Trace languages based on regular, linear and context-free languages (adding a partial commutativity relation to the language) were studied, and accepted by various types of automata with translucent letters, in [22, 24, 23], respectively. Traces and trajectories are also analyzed in various grids [16, 17, 21]. On the other hand, the _Cartier-Foata language_ \(\mathcal{L}_{A}\) corresponding to the matrix \(A\) of a partial commutativity relation is constructed by picking a representative word from each equivalence class.
Let us define a set \(F\subseteq\Sigma\) to be _commuting_ if any pair of letters in \(F\) commute. Let \(\mathcal{C}(A)\) denote the collection of all nonempty commuting sets. Denote by \(w(F)\) the word obtained by juxtaposing the letters of \(F\). The order in which these letters are juxtaposed is immaterial since all arrangements are equivalent.
The central result is that the listing series \(f(\mathcal{L}_{A})\) can be constructed directly from the matrix \(A\):
\[f(\mathcal{L}_{A})=\left(\sum_{F\in\mathcal{C}(A)}(-1)^{\#F}w(F)\right)^{*}= \sum_{n\geq 0}\left(\sum_{F\in\mathcal{C}(A)}(-1)^{\#F}w(F)\right)^{n}\, \tag{7}\]
where \(\#F\) denotes the number of elements of \(F\).
Over \(\Sigma=\{a,b\}\) where \(a\) and \(b\) commute, the Cartier-Foata theorem gives \(\mathcal{L}_{A}\) as \((a+b-ba)^{*}\), which is to be interpreted as the weighted series \(\lambda+(a+b-ba)+(a+b-ba)^{2}+\cdots\). In this case the representatives of the equivalence classes are seen to be the words in \(a^{*}b^{*}\). The essence of the theorem is that this is a listing series, so there is exactly one representative word from each equivalence class that remains after the algebraic cancellations are carried out.
Similarly over \(\Sigma=\{a,b,c\}\) with \(a,b\) and \(a,c\) commuting pairs, the listing series is \(\lambda+(a+b+c-ba-ca)+(a+b+c-ba-ca)^{2}+\cdots\)
The words in this second language are generated by the signed grammar
\[S\to\lambda\,|\,aS\,|\,bS\,|\,cS\,|\,-baS\,|\,-caS\.\]
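The cancellation mechanism behind (7) can also be checked mechanically; the following sketch (our own illustration, with hypothetical names) expands the truncated star of \(a+b-ba\) over the free monoid and verifies that exactly the representatives in \(a^{*}b^{*}\) survive, each with coefficient 1.

```python
from collections import Counter

# Signed polynomial of the two-letter Cartier-Foata example.
base = Counter({"a": 1, "b": 1, "ba": -1})

def truncated_star(series, max_len):
    """Sum series**n for n >= 0, keeping only words of length <= max_len."""
    total, power = Counter({"": 1}), Counter({"": 1})
    for _ in range(max_len):
        nxt = Counter()
        for u, cu in power.items():          # convolve power with series
            for v, cv in series.items():
                if len(u) + len(v) <= max_len:
                    nxt[u + v] += cu * cv
        power = nxt
        total.update(power)
    return {w: c for w, c in total.items() if c != 0}

star = truncated_star(base, max_len=5)
# Only words of the form a^i b^j remain, each with coefficient exactly 1.
assert all(w == "a" * w.count("a") + "b" * w.count("b") for w in star)
assert all(c == 1 for c in star.values())
print(sorted(star))
```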
## 6 Conclusions and a conjecture
Proposition 8 provides an expression for the listing series of a language generated by a signed grammar in terms of the weighted listing series of two CFLs. However, this result falls short of a characterization in terms of CFLs. It is also possible to change the way signed grammars generate languages by requiring \(n_{w}\geq 1\) in (2) instead of equality. In this way, every signed grammar would generate a language, and obviously, the class of generated languages would also change. However, our restriction in this paper, allowing the signed sum to take only the values \(0\) and \(1\), gives a nice and immediate connection to Cartier-Foata languages in the regular case via special regular-like expressions.
Since signed grammars generate languages based on counting the number of (signed) derivation trees, the connection between our grammars and unambiguous CFLs is straightforward. On the other hand, there may be more than one derivation tree for a given word \(w\), with the proviso that the signed sum over its derivation trees yields a multiplicity \(n_{w}\in\{0,1\}\). Therefore signed grammars may also generate ambiguous CFLs. In this sense, the bottom of the hierarchy, the unambiguous CFLs, is included in the class we have investigated. On the other hand, if there are multiple derivation trees for a word generated by a grammar, then by playing with their signs we have a chance to bring their signed sum into \(\{0,1\}\). Thus, it may be possible to generate languages that are higher in the hierarchy based on ambiguity. However, this is still an open problem.
We have shown that signed grammars can generate languages that are not context-free. It would be of interest to use the fact that the languages generated by signed grammars are closed under complementation to show that signed grammars can generate inherently ambiguous CFLs. One way to do this
would be to start with an unambiguous CFL whose complement is an inherently ambiguous CFL. The standard examples of inherently ambiguous CFLs do not seem to have this property. By the Chomsky-Schutzenberger theorem [3], the generating function of an unambiguous CFL is algebraic. Using the contrapositive and analytical methods, Flajolet [7] and later Koechlin [14] devised ingenious ways to show that the generating function of a given language is transcendental, thereby proving its inherent ambiguity. However, if the generating function of \(\mathcal{L}\) is transcendental, then so is the generating function of its complement \(\overline{\mathcal{L}}\). This means that one needs to look among inherently ambiguous languages with algebraic generating functions (e.g. \(\{a^{i}b^{j}c^{k}\mid i=j\text{ or }j=k\}\), see [14, Proposition 14]) if the complement has any chance of being unambiguous.
So it would be nice to have an answer to the following question: _Is there an unambiguous CFL whose complement is an inherently ambiguous CFL?_
A related problem of showing the existence of an inherently ambiguous CFL whose complement is also an inherently ambiguous CFL was settled by Maurer [18].
|
2309.04136 | Shape-Morphing Dynamics of Soft Compliant Membranes for Drag and
Turbulence Modulation | We study the kinematics and dynamics of a highly compliant membrane disk
placed head-on in a uniform flow. With increasing flow velocity, the membrane
deforms nonlinearly into increasingly parachute-like shapes. These
aerodynamically elongated materials exhibit a modified drag law, which is
linked to the elastohydrodynamic interactions. We predict the unsteady
structural response of the membranes using a nonlinear, aeroelastic model -- in
excellent agreement with experimental measurements of deformations and force
fluctuations. With simultaneous membrane interface tracking, force measurements
and flow tracing, we reveal that a peculiar skewness in the membrane's
oscillations triggers turbulence production in the wake, thereby modulating the
drag. The present work provides a demonstration of the complex interplay
between soft materials and fluid turbulence, leading to new, emergent system
properties. | Varghese Mathai, Asimanshu Das, Dante L. Naylor, Kenneth S. Breuer | 2023-09-08T05:31:18Z | http://arxiv.org/abs/2309.04136v1 | # Shape-morphing dynamics of soft compliant membranes for drag and turbulence modulation
###### Abstract
We study the kinematics and dynamics of a highly compliant membrane disk placed head-on in a uniform flow. With increasing flow velocity, the membrane deforms nonlinearly into increasingly parachute-like shapes. These aerodynamically elongated materials exhibit a modified drag law, which is linked to the elastohydrodynamic interactions. We predict the unsteady structural response of the membranes using a nonlinear, aeroelastic model - in excellent agreement with experimental measurements of deformations and force fluctuations. With simultaneous membrane interface tracking, force measurements and flow tracing, we reveal that a peculiar skewness in the membrane's oscillations triggers turbulence production in the wake, thereby modulating the drag. The present work provides a demonstration of the complex interplay between soft materials and fluid turbulence, leading to new, emergent system properties.
The interaction of elastic structures with fluids is a problem of central importance to the mechanics of continua and interfaces. When a flexible structure is placed in a flow, its shape change can induce modified interactions between the structure and surrounding flow [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. This coupling can lead to complex, fluid-structure interactions; common examples range from the fluttering of a flag or a flexible structure in the wind, to the swimming of fish [14; 15; 16; 17; 18; 19; 20; 21; 22]. Often, these interactions affect the thrust/drag response of the systems involved [2; 3; 4; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 23; 24; 25; 26; 27; 28]. A few studies have focused on the steady and unsteady interactions of elastic materials [29; 30], wherein the materials were typically operated within the linear elastic limit of small strains (e.g., [31; 10; 32]). In contrast, a broader class of highly deformable (nonlinear) materials can be envisioned, with a complex, strain-dependent, elasto-fluidic response. In such situations, the interplay between the nonlinearities of the material and the nonlinearities in the flow could pave the way for emergent system properties.
A circular disk placed head-on in a uniform stream represents a classic example of a bluff body flow that has been extensively studied [33]. Ganedi et al. [9] recently studied how an oil film suspended by a circular ring deformed in an external flow. When large stretchability, coupled with strain-stiffening (or strain-softening) behavior, is introduced to this problem, the resulting system can exhibit rich variability in its dynamical behavior. The unsteady behavior in such situations emerges out of interactions between the material's oscillations and the induced flow field around it.
In this work we present a combined experimental, theoretical, and numerical study of the unsteady fluid-structure interactions of an ultrasoft, compliant membrane placed head-on in a uniform flow. The incoming fluid flow deforms the membrane into parachute-like shapes (Fig. 1a). We will quantitatively show how these aeroelastically morphed membranes can enter a state of skewed resonance, triggering a modified drag/thrust response when compared to similarly shaped rigid shells. Our analysis, which combines theoretical predictions with membrane interface tracking and time-resolved flow field tracing, reveals that the unsteady motions of the soft, elastic membrane can induce turbulence in the far wake.
The membranes were fabricated using an addition-cure type of silicone rubber material with a range of thicknesses, \(h=250-3500\)\(\mu\)m. The shear modulus, \(G\), ranged between \(4-34\) kPa, controlled by adding different amounts of thinner [34]. A circular sample of the
Figure 1: (a) Schematic of the side-view of an initially circular membrane disk of diameter, \(D\), deforming in response to a uniform incoming flow of velocity, \(U_{\infty}\), where \(r\) and \(z\) are the radial and axial coordinates, respectively. The membrane disk is placed inside a low-speed wind tunnel and the membrane bulges to a mean maximum deformation \(w_{0}\), where \(w(r)\) is the deformation profile of the membrane. The two squares at the top and bottom denote the cut-section of the rigid circular ring that is used to hold the pre-stretched membrane. The unsteady oscillations induced about the mean bulge are denoted as \(w^{\prime}\). The inset depicts a small portion of the membrane with a force balance between pressures, \(p_{i}\) and \(p_{c}\), and tension \(\mathcal{T}\) developed across the membrane due to stretching (quasi-static approximation). (b) A representative schematic showing the normalized tension (\(\mathcal{T}/\mathcal{T}_{0}\)) with increasing strain rate (\(\epsilon\)) for different classes of materials.
cured membrane was mounted with a desired pre-stretch, \(\lambda_{0}\), onto a rigid acrylic ring with inner diameter \(D=120\) mm, outer diameter of 128 mm, and thickness 3 mm (see Supplemental Material for details [35]).
The membrane disk was fixed to a non-intrusive steel "claw", attached to a six-axis load cell, and mounted head-on in the uniform air stream in a low-speed wind tunnel with a test section of 1.2 m \(\times\) 1.2 m cross section and 3.6 m length. Tests were conducted over a range of flow speeds, \(U_{\infty}=8-25\) m/s (Reynolds number, \(Re\equiv U_{\infty}D/\nu=10^{5}-10^{6}\)). The membrane's centerline deflection, \(w_{0}\), was varied in uniform steps from \(w_{0}/D=0.08-0.5\) by adjusting the flow speed, and force and torque data were collected at 20 kHz at each of the mean deflection values. A high-speed camera recorded side-view images at 500 frames/s. In a separate experiment, conducted in a different wind tunnel (test section: 0.61 m \(\times\) 0.61 m), velocity fields were measured at 700 Hz using Particle Image Velocimetry (PIV) (see Supplemental Material for details [35]).
As the velocity increases, the membrane starts to balloon from a flat disk shape toward increasingly parachute-like shapes, with a maximum deformation, \(w_{0}\), at the centerline. The resulting steady state deformations (Fig. 2a) show dependency on all of the experimental parameters: \(G\), \(h\), \(U_{\infty}\), and \(\lambda_{0}\), and varies monotonically, but nonlinearly with flow speed. At all deformations the membrane shape is well-approximated by a spherical cap (Supplemental Material [35]), with curvature,
\[\kappa=\frac{16w_{0}}{D^{2}+4w_{0}{}^{2}}. \tag{1}\]
The corresponding drag coefficient, \(C_{d}=F_{d}/(0.5\rho U_{\infty}^{2}A)\), where \(F_{d}\) is the drag force and \(A\) is the projected area of the disk, is shown in Fig. 2b. The drag coefficient for 3D-printed _rigid_ shells (Fig. 2b; black circles) increases monotonically from a value of 1.17 to a value of 1.42, in agreement with prior work [33]. In contrast, the membranes with the same _mean shape_ experience a higher drag. The drag coefficient for the membranes with the same shape varies non-monotonically with the membrane thickness (Fig. 2b). Furthermore, when compared to a rigid spherical cap the soft membranes exhibit oscillations about the mean shape (see Supplemental Movie [35]).
We can understand the membrane behavior using a simple analytical model: the unsteady deformation of a membrane, \(w(r,t)\), is given by [36]
\[\rho_{m}h\frac{\partial^{2}w}{\partial t^{2}}+\mathcal{T}\kappa=\Delta p, \tag{2}\]
where \(\rho_{m}\) is the membrane's mass density, \(\Delta p(r,t)\) is the pressure difference across the membrane, and \(\mathcal{T}\) is the membrane tension. Non-dimensionalizing Eq. 2 using length scale \(D\), time scale \(D/U_{\infty}\), and pressure scale \(0.5\rho U_{\infty}^{2}\), where \(\rho\) is the fluid density, we obtain
\[R\frac{\partial^{2}w^{*}}{\partial t^{*2}}+Ae_{i}\ \kappa^{*}=C_{p}, \tag{3}\]
where \(w^{*}\) and \(\kappa^{*}\) are the dimensionless deformation and three dimensional curvature, respectively, \(R=2\rho_{m}h/\rho D\) is the mass ratio, \(Ae_{i}=\mathcal{T}/(0.5\rho U_{\infty}^{2}D)\) is the so-called Aeroelastic number [23], and \(C_{p}\) is the pressure coefficient (see also Supplemental Material [35]).
While Eq. 3 appears to be linear, the membrane's material response and curvature introduce nonlinearity into the second term. The silicone material exhibits a hyperelastic stretch-strain response and strain-stiffens at large deformations [34]. Using a two-parameter Gent model for biaxial deformation [34; 37], the tension in the membrane can be expressed as \(\mathcal{T}=G_{m}h(1-\lambda^{-6})\), where \(G_{m}=GJ_{m}/(J_{m}-I_{1}+3)\), \(G\) is the material shear modulus, \(J_{m}\) is the locking parameter and \(I_{1}\) is the first invariant of the left Cauchy-Green deformation gradient
Figure 2: (a) Normalized mean deformation, \(w_{0}/D\), of various membranes as a function of the flow velocity, \(U_{\infty}\). Each of the dashed lines represents one value of membrane thickness. The corresponding shear moduli values were \(G=[1.7,\,1.9,\,2.1,\,3.5,\,5.4,\,9.5,\,19.1]\) kPa. (b) Drag coefficient, \(C_{d}\), of various membranes as a function of mean deformation. The color map denotes the membrane thickness. Both plots demonstrate the wide, non-monotonic scatter in the deformation and drag measurements.
tensor [38]. Further, the stretch-ratio for the spherical cap geometry can be written in terms of the curvature and pre-stretch as \(\lambda=(4\lambda_{0}/\kappa^{*})\sin^{-1}(\kappa^{*}/4)\). Combining these, we can express the tension, \(\mathcal{T}\), or the aeroelastic number, \(Ae_{i}\), in terms of the membrane curvature \(\kappa^{*}\) and other material and fluid properties in Eq. 3.
At steady state, Eq. 3 yields a relation for \(w_{0}/D\) in terms of \(Ae_{i}\), which needs to be solved implicitly for the entire range of deformation (Fig. 3a). In the small deformation limit (or large \(Ae_{i}\)), the solution can be approximated as
\[\frac{w_{0}}{D}\approx\frac{1}{16Ae_{i}}. \tag{4}\]
The measured mean deformation, \(w_{0}/D\), is in excellent agreement with the model predictions for all values of \(Ae_{i}\), \(\lambda_{0}\) and \(R\). Additionally, uniaxial and biaxial characterizations were conducted in order to model the material stresses in response to prescribed strains (see Supplemental Material for details [35]).
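To make the implicit steady-state relation concrete, the sketch below solves \(Ae_{i}\,\kappa^{*}=C_{p}\) numerically for \(w_{0}/D\) using the spherical-cap relations above. The material values, pre-stretch, Gent parameter \(J_{m}\), the equibiaxial form of \(I_{1}\), and \(C_{p}\approx 1\) are our illustrative assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch of the steady-state balance Ae_i * kappa* = C_p (Eq. 3 with
# the inertial term dropped). All numerical values are assumptions chosen
# within the ranges quoted in the text.
G, Jm, h = 9.5e3, 30.0, 1e-3      # shear modulus [Pa], Gent limit, thickness [m]
D, lam0 = 0.12, 1.05              # disk diameter [m], pre-stretch
rho, U, Cp = 1.2, 8.0, 1.0        # air density [kg/m^3], flow speed [m/s], C_p

def residual(x):                  # x = w0 / D, restricted to (0, 0.5]
    k = 16 * x / (1 + 4 * x**2)               # dimensionless curvature (Eq. 1)
    lam = lam0 * (4 / k) * np.arcsin(k / 4)   # spherical-cap stretch ratio
    I1 = 2 * lam**2 + lam**-4                 # equibiaxial first invariant
    Gm = G * Jm / (Jm - I1 + 3)               # effective Gent modulus
    T = Gm * h * (1 - lam**-6)                # membrane tension [N/m]
    Ae = T / (0.5 * rho * U**2 * D)           # aeroelastic number
    return Ae * k - Cp

print("w0/D =", round(brentq(residual, 1e-3, 0.5), 3))
```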
We turn our attention to the unsteady kinematics of the membrane. The origin of these fluctuations can be linked to vortex shedding, which is commonly observed to occur in flows over bluff bodies with a characteristic frequency, \(\omega_{s}\), at a constant Strouhal number, \(St=\omega_{s}D/2\pi U_{\infty}\) [39; 40]. The shedding generates an unsteady force, and one can expect the membrane to experience an inertial reaction force, \(F^{\prime}_{d}=m_{m}a^{\prime}_{m}\), where \(m_{m}\) and \(a^{\prime}_{m}\) are the membrane mass and the characteristic scale of membrane acceleration, respectively. The spectra of the force measurements at all speeds show a nearly constant Strouhal number, \(\mathrm{St}=0.12\) (see Fig. S-1). The acceleration of the oscillating membrane can be expected to scale as \(a^{\prime}_{m}\propto w^{\prime}\omega_{s}^{2}\), where \(w^{\prime}\) is the oscillation amplitude. Therefore, we can express \(F^{\prime}_{d}=\rho_{m}hAw^{\prime}\omega_{s}^{2}\) and, correspondingly, a fluctuating drag coefficient, \(C^{\prime}_{d}\approx 8\pi^{2}St^{2}Rw^{\prime}\). Comparing the experimental measurements of the force fluctuations with this inertial prediction (Fig. 3b), we observe excellent agreement. This demonstrates that the measured drag fluctuations (second moment of \(w^{\prime}\)) are predominantly due to the _breathing mode_ of membrane oscillations (first mode).
Considering the oscillating membrane as a dynamical system, forced at the vortex shedding frequency, we adopt a forced harmonic oscillator model to understand the oscillation amplitude, and to explain the nonmonotonic variation of \(w^{\prime}\) with \(R\). By invoking axisymmetry, and considering small oscillations about a mean shape, we linearize Eq. 2 to obtain
\[\rho_{m}h\frac{\partial^{2}w^{\prime}}{\partial t^{2}}-2\mathcal{T}\frac{ \partial^{2}w^{\prime}}{\partial r^{2}}=C_{s}F_{\mathrm{dyn}}\sin\omega_{s}t. \tag{5}\]
Here the pre-factor \(C_{s}\) is the relative strength of the unsteady vortex shedding forces with respect to the dynamic pressure. A typical bluff body experiences unsteady vortex-induced forces that are about 10% of \(F_{dyn}=0.5\rho U_{\infty}^{2}A\)[41; 42]. Measurements of the force fluctuations for a rigid hemisphere yield \(C_{s}\sim 0.1\) (see Supplemental Material [35]).
The unsteady membrane equation (Eq. 5) supports modes that resonate when the natural frequency of the membrane, \(\omega_{m}\), coincides with the frequency of the vortex shedding, \(\omega_{s}\). Approximating that the membrane oscillates similar to a stretched drum (small curvature), the first mode of Eq. 5 has a natural frequency, \(\omega_{m}=3\pi c/2D\), where \(c=\sqrt{2\mathcal{T}/\rho_{m}h}\) is the wave speed [43]. This can be re-written in terms of the Aeroelastic parameter and the mass ratio:
\[\frac{\omega_{m}D}{2\pi U_{\infty}}=\frac{3}{2\sqrt{2}}\sqrt{\frac{Ae_{i}}{R}}. \tag{6}\]
Resonance will occur when \(\omega_{s}/\omega_{m}=1\), a prediction that is confirmed in our experimental measurement shown in
Figure 3: (a) Comparison of steady state membrane deformation from experiment with the model predictions. Here, the normalized mean deformation is plotted against the effective aeroelastic number, \(Ae_{i}\). In the small deformation limit the model approaches a slope of -1. (b) Normalised drag fluctuation measurements vs. prediction based on inertial scaling. (c) Amplitude of membrane oscillations, \(w^{\prime}/D\) vs. membrane mass ratio, \(R\), for a fixed mean deformation (\(w_{0}/D=0.5\)). The dotted line shows the prediction of resonance from the analytical model (Eq. 5). The solid green line incorporates the membrane damping.
Fig. 3c, for \(w_{0}/D=0.5\) (see also Supplemental Material [35]). By measuring the amplitude decay of an oscillating membrane (a "ring-down" test; see Supplemental Material [35]), we can include an empirical damping term in Eq. 5, which provides an upper bound on the amplitude at resonance (solid green curve in Fig. 3c). We observe a nearly parameter-independent resonance point, i.e., a single physical membrane (with \(R=18\pm 2\)) can resonate over a broad range of flow conditions. This is achieved because the membrane passively adapts its shape and natural frequency in proportion to the change in the flow speed (see Supplemental Material [35] for further details).
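A back-of-the-envelope check of this resonance condition can be scripted directly from Eq. 6; the membrane density and tension below are assumed, illustrative values rather than measured ones.

```python
import numpy as np

# Check of Eq. 6: resonance occurs when the first-mode membrane frequency
# matches the shedding frequency (St = 0.12). rho_m and T are assumptions.
rho, D, U, St = 1.2, 0.12, 15.0, 0.12
rho_m, h, T = 1070.0, 500e-6, 3.0          # density [kg/m^3], thickness [m], tension [N/m]

R = 2 * rho_m * h / (rho * D)              # mass ratio
Ae = T / (0.5 * rho * U**2 * D)            # aeroelastic number
f_shed = St * U / D                        # vortex-shedding frequency [Hz]
f_mem = 3 / (2 * np.sqrt(2)) * np.sqrt(Ae / R) * U / D   # Eq. 6, in Hz
print(f"R = {R:.1f}, omega_s/omega_m = {f_shed / f_mem:.2f}")
```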
Note the subtle asymmetry observed in the measurements of \(w^{\prime}/D\) about the resonance point (Fig. 3c). The origin of this asymmetry lies in the nonlinearity introduced by the finite curvature of the membrane, which is unaccounted for in the simplified drum-head model but is captured numerically by solving the unsteady membrane structural equation at large oscillation amplitudes (see Supplemental Material [35] for details). Lastly, with the mean deformation and unsteady oscillations explained, we focus on the mechanism responsible for the modified mean drag coefficient, \(C_{d}\), of the membranes. Despite the relatively low oscillation amplitude of the membrane, \(w^{\prime}/D\sim\mathcal{O}(10^{-2})\), the drag coefficient for the membrane is noticeably higher (by up to 20%) than that of a similarly shaped rigid shell (Fig. 2b). It is interesting to note that such small amplitudes of oscillation can induce a significant drag modification.
The drag on the body is reflected in the wake momentum deficit, which can be obtained by radial integration of the mean and unsteady wake momentum contributions (see Supplemental Material for details [35]):
\[C_{d}=\frac{16}{D^{2}}\int_{0}^{R}\Bigg{(}\underbrace{\frac{\bar{u}_{z}}{U_{ \infty}}\bigg{[}1-\frac{\bar{u}_{z}}{U_{\infty}}\bigg{]}}_{mean}+\underbrace{ \frac{-\overline{u_{z}^{\prime\,2}}}{U_{\infty}^{2}}+\frac{1}{2}\frac{ \overline{u_{r}^{\prime\,2}}}{U_{\infty}^{2}}}_{unsteady}\Bigg{)}rdr \tag{7}\]
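As an illustration of how Eq. 7 is evaluated in practice, the sketch below integrates synthetic Gaussian wake profiles standing in for PIV data; the profile shapes and amplitudes are invented for demonstration only.

```python
import numpy as np

# Control-volume drag estimate of Eq. 7 from radial wake profiles at a fixed
# downstream station; the profiles are synthetic stand-ins for measured data.
U_inf, D = 15.0, 0.12
r = np.linspace(0, 1.5 * D, 200)                        # radial coordinate [m]
u_z = U_inf * (1 - 0.6 * np.exp(-(r / (0.5 * D))**2))   # mean axial velocity
uz2 = (0.15 * U_inf)**2 * np.exp(-(r / (0.6 * D))**2)   # axial velocity variance
ur2 = (0.10 * U_inf)**2 * np.exp(-(r / (0.6 * D))**2)   # radial velocity variance

integrand = (u_z / U_inf * (1 - u_z / U_inf)            # mean-deficit term
             - uz2 / U_inf**2 + 0.5 * ur2 / U_inf**2) * r   # unsteady terms
C_d = 16 / D**2 * np.trapz(integrand, r)
print(f"C_d = {C_d:.2f}")
```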
We perform two-dimensional particle image velocimetry (PIV) of the wake behind the membrane and a similarly-shaped rigid shell, measuring the axial, \(u_{z}\), and radial, \(u_{r}\), velocities. Comparing the velocity fields from these two cases, we find that the mean wake velocity profiles, \(\bar{u}_{z}\), are nearly identical (Fig. 4a), and hence the contribution to the drag from the steady term in Eq. 7 is comparable for the two cases. However, the turbulent kinetic energy, \(\text{TKE}\approx 3/4(u_{z}^{\prime 2}+u_{r}^{\prime 2})\), in the wake behind the membrane is significantly greater than for the rigid shell (Fig. 4b,c), and when one includes the unsteady velocity terms in the calculation of \(C_{d}\), we find excellent agreement between the force measurements and the PIV estimations for both the membranes and the rigid shells (Fig. 4d). Downstream in the wake, the small-scale fluctuations are expected to tend toward local isotropy, and the periodic signature of vortex shedding has nearly disappeared [44]. Remarkably, the increase in the wake TKE exceeds the energy density of the oscillating membrane by an order of magnitude, i.e. \(u_{z}^{\prime 2}/(w^{\prime}\omega_{s})^{2}\sim\mathcal{O}(10)\), and it is this energy that accounts for the increase in the mean drag coefficient. The weak correlation between TKE production and \(w^{\prime}\) can be rationalized by noting that it is the subtle skewness of oscillations that drives turbulence production. We performed a set of interface-resolved numerical simulations of a membrane oscillating within a fluid flow field. The membrane's rate of stretching as compared to its relaxation rate, i.e. the skewness of motion, dictates the degree of drag modulation (drag increase vs. drag reduction; see Supplemental Material [35]). A detailed exploration of this is part of an ongoing investigation.
In summary, we have conducted a systematic study of the aeroelastic response of an ultrasoft membrane disk in a uniform flow. We observe that the material deforms nonlinearly into parachute-like shapes. The time-averaged shape of the membrane can be accurately modeled using a hyperelastic Gent constitutive model [37] that depends on a single dimensionless parameter - the Aeroelastic number, \(Ae_{i}\). The unsteady membrane vibrations are driven by vortex shedding, and the fluctuations are accurately modeled using a simple spring-mass system that depends on the Aeroelastic number and the membrane mass parameter, \(R\). Through shape
Figure 4: Comparison of the (a) mean axial velocity profile \(\bar{u}_{z}(r)\) and (b) turbulent kinetic energy (TKE), at \(z/D=1.5\) downstream of the body with \(w_{0}/D=0.5\). The TKE field (c) downstream of a rigid shell and deformed membrane disk, with \(w_{0}/D=0.5\). The hatched region represents an area where velocity vectors were not available due to laser light reflections. (d) Comparison between the drag coefficient calculated independently from (i) direct force measurements (circles and squares) and (ii) a control volume analysis based on the velocity field (diamond symbols). The black and green symbols are for the rigid shells and the membranes, respectively.
morphing, the membrane adapts its natural frequency with the flow speed, resulting in a single physical membrane exhibiting (or avoiding) resonance over a broad range of flow conditions. We anticipate that triggering the nonlinear elastic response of materials within fluid flows may open up a number of opportunities for drag control using soft, stretchy materials.
We thank Anupam Pandey and Detlef Lohse for fruitful discussions. K.B. acknowledges funding from the U.S. Army/Soldier Systems Center, Natick, MA, and support from NSF Grant #2035002. D.L.N. acknowledges funding from the Kenneth and Joanne Langley Research Fund and the Simenas Fellowship.
V. M. and A. D contributed equally to this work and are joint first authors. Experiments by A. D. and V. M. Numerical simulations and theoretical work by V. M., D. L. N., A. D., and K. B. Data analysis and writing of the manuscript by V. M., K. B. and A. D. Project conception by K. B and V. M.
|
2309.03659 | Towards Comparable Knowledge Distillation in Semantic Image Segmentation | Knowledge Distillation (KD) is one proposed solution to large model sizes and
slow inference speed in semantic segmentation. In our research we identify 25
proposed distillation loss terms from 14 publications in the last 4 years.
Unfortunately, a comparison of terms based on published results is often
impossible, because of differences in training configurations. A good
illustration of this problem is the comparison of two publications from 2022.
Using the same models and dataset, Structural and Statistical Texture
Distillation (SSTKD) reports an increase of student mIoU of 4.54 and a final
performance of 29.19, while Adaptive Perspective Distillation (APD) only
improves student performance by 2.06 percentage points, but achieves a final
performance of 39.25. The reason for such extreme differences is often a
suboptimal choice of hyperparameters and a resulting underperformance of the
student model used as reference point. In our work, we reveal problems of
insufficient hyperparameter tuning by showing that distillation improvements of
two widely accepted frameworks, SKD and IFVD, vanish when hyperparameters are
optimized sufficiently. To improve comparability of future research in the
field, we establish a solid baseline for three datasets and two student models
and provide extensive information on hyperparameter tuning. We find that only
two out of eight techniques can compete with our simple baseline on the ADE20K
dataset. | Onno Niemann, Christopher Vox, Thorben Werner | 2023-09-07T11:56:23Z | http://arxiv.org/abs/2309.03659v1 | # Towards Comparable Knowledge Distillation in Semantic Image Segmentation
###### Abstract
Knowledge Distillation (KD) is one proposed solution to large model sizes and slow inference speed in semantic segmentation. In our research we identify 25 proposed distillation loss terms from 14 publications in the last 4 years. Unfortunately, a comparison of terms based on published results is often impossible, because of differences in training configurations. A good illustration of this problem is the comparison of two publications from 2022. Using the same models and dataset, Structural and Statistical Texture Distillation (SSTKD) [14] reports an increase of student mIoU of 4.54 and a final performance of 29.19, while Adaptive Perspective Distillation (APD) [30] only improves student performance by 2.06 percentage points, but achieves a final performance of 39.25. The reason for such extreme differences is often a suboptimal choice of hyperparameters and a resulting underperformance of the student model used as reference point.
In our work, we reveal problems of insufficient hyperparameter tuning by showing that distillation improvements of two widely accepted frameworks, SKD and IFVD, vanish when hyperparameters are optimized sufficiently. To improve comparability of future research in the field, we establish a solid baseline for three datasets and two student models and provide extensive information on hyperparameter tuning. We find that only two out of eight techniques can compete with our simple baseline on the ADE20K dataset.
Keywords:Knowledge Distillation Efficient Semantic Segmentation Model Compression.
## 1 Introduction
Advances in Deep Learning techniques for Semantic Image Segmentation brought major performance improvements to fields such as autonomous driving, medical image analysis, robotic perception and video surveillance. As these performance gains often came at the price of increased model complexity and required computational power, efficient deep learning techniques have become increasingly relevant [18].
Two techniques that start with a large model and make it more efficient are model Pruning and Quantization. Pruning shrinks a model by dropping less important nodes, and Quantization reduces the numerical precision of weights. Knowledge distillation takes a different approach and does not change the model's efficiency during training. Instead, it starts with a small model (student) and improves its performance by leveraging guidance from a more complex model (teacher). Teacher model weights are frozen, and knowledge is distilled into the student by adding a term to the student loss which penalizes differences between student and teacher output. Incentivizing the student to mimic the more complex behaviour of the teacher can significantly lift its performance.
KD was introduced in the image classification domain and originally the distillation loss was applied to student and teacher only at output-level. The extension of teacher guidance to intermediate layers and the transfer to semantic segmentation have been studied in various publications. Re-using the most basic, output-level distillation loss in a segmentation context is straightforward, as it can be applied on pixel instead of image-level in the student and teacher output. However, several papers criticize that this naive approach treats pixels in isolation and introduce more complex techniques. Most of them build upon the naive pixel-wise distillation by adding loss terms to it. A problem with these published methods is that a significant share focuses solely on performance improvement in their own training framework and fails to provide a good baseline for comparison against other literature.
The dimension of this comparability problem is illustrated by the overview of important publications and their results we provide in Table 1. The comparison clearly shows that the baseline performance of the student model varies strongly between publications, making it hard to compare the quality of the proposed techniques based on the lift they provide. To address this problem of comparability, we perform an extensive hyperparameter optimization of the most fundamental of KD frameworks for semantic segmentation and provide detailed information on optimal hyperparameters for training two student models on three datasets. As a byproduct of this optimization, we find that the temperature parameter used to "soften" student and teacher output in image classification can improve distillation in segmentation, although many publications in the field ignore it.
In summary, we point out challenges in comparing different methods and present a training procedure, which sets the ground for a fair comparison of KD techniques. We further put the performance of three commonly used loss terms into perspective by comparing them to our achieved results.
## 2 Related Work
**Semantic Segmentation.** Most earlier image segmentation techniques were based on Partial Differential Equations or Random Forest methods [10], before the advent of deep learning sparked a series of publications leveraging the powers of Convolutional Neural Networks (CNNs). The majority of this early CNN-based segmentation research focuses on improving model performance, which is achieved by adding skip connections [17] and decoder networks [24; 19; 2], or concatenating convolutions at different scales in the pyramid pooling module of Pyramid Scene Parsing Networks (PSPNets) [44]. More recent publications investigate the application of transformer-based models [28; 33; 45]. Another branch of segmentation research focuses more on model efficiency instead of performance. Real-time semantic segmentation aims for fast inference speed while maintaining performance [15; 21; 37; 43], but there is always a trade-off between performance and efficiency.
**Knowledge Distillation in Classification.** KD aims to increase the performance of a compact student model by leveraging guidance from a large teacher model during training. It is one of the most widely used model compression techniques due to its broad applicability [31]. Unlike Model Pruning or Quantization, KD is model agnostic [3], as no requirements are imposed on the teacher or student model. Additionally, KD allows leveraging unlabelled data, as the teacher can provide soft labels for student training [31].
Generally, the teacher model weights are frozen during student training and the student is encouraged to mimic the teacher's output logits by training on a weighted combination of standard cross entropy and KD loss [4; 13] (Eq. 1). The cross entropy term (Eq. 2) ensures performance on the labeled training data while the distillation term (Eq. 3) penalizes deviations from the teacher output. The weight of the distillation loss, \(\lambda\), is a hyperparameter of student training. Scaling output logits of both models by \(\tau>1\) is crucial for a successfull distillation of knowledge in image classification [13] and extending the level of student teacher matching to intermediate feature layers can give a further boost [23; 40].
\[L_{S}=L_{CE}+\lambda*L_{KD} \tag{1}\]
\[L_{CE}(z_{s},y)=-\sum_{c=1}^{C}y_{c}\log\sigma_{c}(z_{s}) \tag{2}\]
\[L_{KD}(z_{s},z_{t})=-\tau^{2}\sum_{c=1}^{C}\sigma_{c}(\frac{z_{t}}{\tau})\log \sigma_{c}(\frac{z_{s}}{\tau}) \tag{3}\]
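For concreteness, a minimal PyTorch sketch of Eqs. 1-3 is given below; the batch size, class count, and the \(\lambda\) and \(\tau\) values are illustrative placeholders rather than settings from any cited work.

```python
import torch
import torch.nn.functional as F

def kd_loss(z_s, z_t, y, lam=0.5, tau=4.0):
    """Eq. 1: weighted sum of cross entropy (Eq. 2) and distillation loss (Eq. 3)."""
    ce = F.cross_entropy(z_s, y)                           # Eq. 2
    p_t = F.softmax(z_t.detach() / tau, dim=1)             # softened teacher probabilities
    log_p_s = F.log_softmax(z_s / tau, dim=1)              # softened student log-probabilities
    kd = -(tau ** 2) * (p_t * log_p_s).sum(dim=1).mean()   # Eq. 3, averaged over the batch
    return ce + lam * kd                                   # Eq. 1

z_s = torch.randn(8, 10, requires_grad=True)   # student logits (B=8, C=10)
z_t = torch.randn(8, 10)                       # frozen teacher logits
loss = kd_loss(z_s, z_t, torch.randint(0, 10, (8,)))
loss.backward()
```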
Despite the simplicity of the classical KD framework, the underlying mechanisms are still not well-understood. Commonly, the success of KD is associated with the information contained in teacher predictions for wrong classes, referred to as "dark knowledge" [39]. However, phenomena such as students outperforming teachers when trained solely on teacher output and high disagreement between student and teacher predictions suggest that there are other explanations for the success of KD [27]. Two alternative explanations are that teacher guidance has a regularizing effect similar to label smoothing [39] and that teacher output provides a sample-wise importance weighting, making the student focus on samples of low teacher confidence [8].
**KD in Image Segmentation.** Applying the classical KD framework to semantic image segmentation problems is straightforward. The distillation loss in Equation 3 is usually applied on image-level, summing over all classes, but since image segmentation is essentially pixel-wise classification, it can be applied to each pixel in the image instead. This most basic distillation loss is referred to as pixel-wise distillation (\(L_{PI}\)) and is a useful baseline for more complex distillation schemes. Since pixel-wise distillation treats each pixel in isolation and ignores the fact that segmentation depends strongly on
contextual information, various alternative distillation techniques exist, many of which use pixel-wise distillation as one part of their framework.
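A hedged sketch of this pixel-wise variant: segmentation logits of shape (B, C, H, W) are flattened so that Eq. 3 is applied to each pixel as an independent C-way classification (the function name and default \(\tau\) are ours).

```python
import torch.nn.functional as F

def pixelwise_kd_loss(z_s, z_t, tau=1.0):
    """L_PI sketch: Eq. 3 averaged over all B*H*W pixel positions."""
    B, C, H, W = z_s.shape
    z_s = z_s.permute(0, 2, 3, 1).reshape(-1, C)   # (B*H*W, C) pixel-wise logits
    z_t = z_t.permute(0, 2, 3, 1).reshape(-1, C)
    p_t = F.softmax(z_t.detach() / tau, dim=1)
    log_p_s = F.log_softmax(z_s / tau, dim=1)
    return -(tau ** 2) * (p_t * log_p_s).sum(dim=1).mean()
```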
The earliest proposed technique investigates a "consistency" loss comparing regional differences in student and teacher output [34], aiming for a more contextual distillation of knowledge by matching the distance between the center pixel and an 8-neighborhood. Instead of directly matching student and teacher output, in Knowledge Adaptation (KA) [12] the teacher output is compressed to a denser latent space by an autoencoder before being compared to the student logits. Additionally, an affinity loss term is introduced to better capture long-range dependencies.
Structured Knowledge Distillation (SKD) [16] is the most cited publication in the field and uses a combination of three loss terms, one of them the basic pixel-wise loss. Again the other two loss terms are supposed to focus more on contextual information in both intermediate and output layers. The pair-wise loss (\(L_{PA}\)) is based on a pair-wise Markov random field framework and encourages students to mimic the teacher at intermediate layers. The holistic loss (\(L_{HO}\)) requires an additional discriminator model with the task to differentiate student and teacher output. Student and discriminator compete with each other following a similar training protocol as used to train Generative Adversarial Networks [9], ultimately leading to the student output being as similar as possible to the teacher output [16].
Building upon SKD, Intra-Class Feature Variation Distillation (IFVD) [32] also uses the pixel-wise and holistic loss terms, but replaces the pair-wise loss with the new IFV loss, \(L_{IFV}\). Per-class prototypes are calculated and the distance of intermediate features to the respective prototype is aligned between student and teacher. Another suggested approach, CSCACE [20], introduces a Channel and Spatial Correlation (CSC) and an Adaptive Cross Entropy (ACE) term. CSC calculates correlation matrices between intermediate features and can be understood as an extension of the pair-wise loss of SKD. ACE combines the classic pixel-wise KD loss with the ground truth label by only using the teacher output when the teacher prediction is correct.
As one of the few approaches that modify the pixel-wise distillation loss, Channel-Wise Distillation (CWD) [26] proposes a channel/class-wise normalization of student and teacher outputs before calculating the KL-divergence, a technique that is copied by several later methods. Double Similarity Distillation (DSD) [7] introduces a pixel-wise similarity loss, which matches intermediate student and teacher features by utilizing self-attention maps across multiple layers. A category-wise similarity distillation loss makes the student mimic the teacher at output level by minimizing the L2 distance between correlation matrices of student and teacher output. Masked Generative Distillation (MGD) [36] is a more general technique, which can also be used for knowledge distillation in image classification or object detection. The proposed method masks some parts of the input while still requiring the student to mimic the full teacher output.
Inter-Class Distance Distillation (IDD) [41] and Feature-Augmented Knowledge Distillation (FAKD) [38] use the pixel-wise distillation loss with channel-wise normalization as suggested in CWD [26]. IDD additionally includes all terms introduced by SKD and an inter-class feature distance loss following a similar reasoning as IFVD. FAKD does not introduce additional loss terms but proposes perturbing intermediate student features in multiple ways and training the student to mimic the teacher despite the applied perturbations.
Structural and Statistical Texture Knowledge Distillation (SSTKD) [14] and Adaptive Perspective Distillation (APD) [30] use the pixel-wise distillation loss. SSTKD additionally includes the holistic loss of SKD and two novel loss terms, which encourage the student to mimic low-level texture information of the teacher [14]. The authors of APD argue that segmentation networks learn to generalize and thus acquire a universal perception. Their introduced adaptive perspective includes calculating class-wise average features of individual images. According to the authors, this process distills contextual information more explicitly. APD is the only reviewed technique that updates part of the teacher model during student training.
Self-attention and Self-distillation [1] introduces a self-attention loss term to make the student learn contextual information from the teacher, and a layer-wise context distillation loss. The second term is different from other discussed techniques, as it is applied across student layers to ensure a consistent representation of contextual information in shallow layers. Cross-Image Relational Knowledge Distillation (CIRKD) [35] again uses the pixel-wise distillation loss together with three more loss terms. Unique to this method is the introduction of a pixel queue to make student models mimic the teacher's distance to the output for pixels of the same class from previous images.
## 3 Related Work Comparison
A comparison of the discussed knowledge distillation frameworks is rarely straightforward. A very common problem is differences in student and teacher architectures. Especially earlier methods use a variety of models, and their results cannot be compared to recent literature. Table 1 presents the results of different distillation frameworks that report performance on the most common choice of student and teacher model, PSPNets with ResNet18 and ResNet101 backbones, respectively. This choice of models has been the standard in the field since the publication of SKD [16], but Consistency [34], CSCACE [20], KA [12] and SALC [1] use other architectures and are excluded from our table.

Even though the techniques in Table 1 use the same model architectures, we can observe significant performance differences for both the student-only training ("S only") and the full KD framework ("+T"). While the differences in "+T" are to be expected for different KD algorithms, the inconsistent baseline performance of the student model makes it very hard to compare different algorithms.
Additionally, the listed techniques often propose a number of loss terms ("Ls"), but do not provide comprehensive ablation studies that isolate the effect of individual terms across different datasets. Finally, many results in recent literature are reported without standard deviations, with SKD being the only exception in Table 1. This introduces another complicating factor when comparing results of KD algorithms.
For a specific example, we look at APD vs. DSD: judging only by the final performance of 39.25, APD outperforms all other techniques on ADE20K. Considering its starting point of 37.19, however, student performance is only lifted by 2.06. Since DSD achieves a lift of 4.20 with one loss term fewer, but only reaches a final performance of 38.00, it is unclear which of the two techniques is superior.
## 4 Methodology
In this work, we present a series of experiments highlighting the importance of individual hyperparameter tuning for every combination of dataset and model. We grid search the initial learning rate \(\mu(0)\) and the weight decay (regularization) rate \(\gamma\) separately for the student with and without the teacher model to ensure the best possible performance in both settings. In this way, we guarantee a fair performance comparison when measuring the lift from KD later. In a second step, we optimize the temperature parameter \(\tau\) of the combined student and teacher training.
We also fine-tune the weight of the pixel-wise distillation loss, \(\lambda_{PI}\), and find optimal performance at \(\lambda_{PI}=10^{-1}\) across all experiments. Finally, we test the different proposed loss terms in isolation to measure their effect on student performance by optimizing their respective loss weights.
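For concreteness, the sketch below shows one common formulation of such a pixel-wise distillation term in PyTorch. The exact reduction and normalization used by SKD and its successors may differ, so this should be read as an illustration rather than a reproduction of any specific implementation.

```python
import torch
import torch.nn.functional as F

def pixelwise_distillation_loss(student_logits, teacher_logits, tau=1.0):
    """KL divergence between temperature-scaled per-pixel class distributions.

    student_logits, teacher_logits: tensors of shape (N, C, H, W).
    """
    log_p_s = F.log_softmax(student_logits / tau, dim=1)  # student log-probs
    p_t = F.softmax(teacher_logits / tau, dim=1)          # teacher probs
    # KL per pixel, summed over classes, then averaged over all pixels;
    # the tau**2 factor keeps gradient magnitudes comparable across temperatures.
    kl = (p_t * (p_t.clamp_min(1e-12).log() - log_p_s)).sum(dim=1)
    return kl.mean() * tau ** 2
```

In training, this term would be weighted by \(\lambda_{PI}\) and added to the cross-entropy loss on the ground-truth labels.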
## 5 Experiments
### Datasets and Evaluation Metric
We evaluate our approach on three public semantic segmentation benchmark datasets, PascalVOC [6], Cityscapes [5], and ADE20K [46]. All datasets have pixel-wise annotations.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{ADE20K} & \multicolumn{2}{c}{Cityscapes} \\ \cline{3-6} & Ls & S only & + T & S only & + T \\ \hline SKD [16]* & 3 & 33.82 & 36.55 & 69.10 & 72.67 \\ IFVD [32]* & 3 & - & - & 69.10 & 74.54 \\ CWD [26]\(\dagger\) & 1 & 24.65 & 26.80 & 70.09 & 75.90 \\ DSD [7] & 2 & 33.80 & 38.00 & 69.42 & 73.20 \\ MGD [36] & 1 & - & - & 69.85 & 74.10 \\ IDD [41]\(\dagger\) & 6 & 24.65 & 27.69 & 70.09 & 77.59 \\ FAKD [38]\(\dagger\) & 1 & 29.42 & 35.30 & 68.99 & 74.75 \\ SSTKD [14]* & 4 & 24.65 & 29.19 & 69.10 & 75.15 \\ APD [30]* & 3 & 37.19 & 39.25 & 74.15 & 75.68 \\ CIRKD [35] & 4 & - & - & 72.55 & 74.73 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of reported student Mean Intersection over Union (mIoU) of various techniques following the most common architectural setup. The "Ls" column refers to the number of loss terms in addition to the CE loss. "S only" shows the performance of the student model trained without a teacher. "+T" contains the best performance reported in the publication applying all proposed loss terms. Methods followed by * use \(L_{PI}\); the ones followed by \(\dagger\) additionally do channel-wise normalization before applying \(L_{PI}\).
**Cityscapes** is an urban street scene understanding dataset showing street scenes recorded in 50 cities. It contains 5,000 finely annotated images with labels from 19 classes. All images have dimension 2048x1024 and train, validation and test set contain 2,975, 500 and 1,525 images, respectively.
**ADE20K** is a complex scene understanding dataset containing 20K/2K/3K images of different sizes in train, validation and test set. Images show objects, parts of objects and stuff from varying context and pixels are assigned one of 150 object and stuff class labels.
**PascalVOC** contains 1,464 images for training, 1,449 images for validation and a private test set. It has 20 different object classes and images are of varying size.
**Mean Intersection over Union** is the metric used in all our experiments.
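As a reference for this metric, the sketch below computes mIoU from a confusion matrix. The ignore label of 255 is a common convention for these datasets and an assumption here, not something mandated by our setup.

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """pred, target: integer label arrays of equal shape."""
    mask = target != ignore_index
    pred, target = pred[mask], target[mask]
    # Confusion matrix via a single bincount over joint indices.
    cm = np.bincount(num_classes * target + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)
    # Average only over classes that actually occur.
    return iou[union > 0].mean()
```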
### Implementation Details
Like most other relevant methods (Table 1), we follow the architectural setup and training procedure of SKD [16]. To be precise, this means using PSPNets [44] with different backbones as student and teacher models. The teacher model has a ResNet101 [11] backbone, and for the student we test a ResNet18 [11] and an EfficientNet-B0 [29] backbone. For simplicity, we will refer to the two student models as the ResNet and EffNet students. The learning rate \(\mu(i)\) decays over training steps according to Eq. 4.
\[\mu(i)=\mu(0)\cdot\left(1-\frac{i}{\eta}\right)^{0.9},\qquad i\in\{1,\ldots,\eta\} \tag{4}\]
Here, \(\eta\) is the total number of training batches and \(i\) is the current batch index. Unless otherwise stated, the ResNet and EffNet student backbones are initialized with weights pre-trained on ImageNet. The pre-trained student weights were obtained from the torchvision package (v0.12) of the PyTorch library [22] for both backbones, and the teacher weights were taken from [16] for Cityscapes and from [42] for the other datasets.
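As a sketch, the schedule of Eq. 4 can be expressed as a multiplicative factor on \(\mu(0)\) and attached to a PyTorch optimizer via LambdaLR; model, mu0, and total_steps are assumed to be defined elsewhere.

```python
import torch

def poly_decay_factor(step, total_steps, power=0.9):
    # Eq. 4 as a factor multiplying the initial learning rate mu(0).
    return (1.0 - step / total_steps) ** power

# Example wiring (model, mu0, total_steps assumed to exist):
# optimizer = torch.optim.SGD(model.parameters(), lr=mu0)
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lambda i: poly_decay_factor(i, total_steps))
# scheduler.step() is then called once per training batch.
```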
All experiments are conducted with a batch size of 8 and students are trained with crops of size \(512\times 512\) on Cityscapes and \(473\times 473\) on ADE20K and PascalVOC.
All experimental results are calculated on the validation datasets of Cityscapes, PascalVOC and ADE20K.
### The Impact of Temperature
The authors of the original KD framework [13] strongly emphasize the importance of logit scaling by a temperature parameter \(\tau>1\) in image classification, the main reason for its success being the "softening" of teacher output class distributions [13]. To show that teacher output distributions are "hard" in image segmentation as well, we analyze the effect of different values of \(\tau\) on the Shannon entropy [25] of the teacher output. When all probability mass is assigned to one class, the Shannon entropy is zero; when the probability mass is distributed evenly over all classes, the Shannon entropy is maximal.
We generate teacher output for 800 randomly selected images from the Cityscapes dataset, resulting in probability distributions over the 19 classes for 209,715,200 pixels. These distributions are scaled with different temperature values according to Equation 3. The shares of Shannon entropies over all pixels for different temperatures are shown in Figure 1. Temperatures of 1, 2, 4, 8, and 16 were chosen, as [13] report successful distillation for values up to 20.
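The per-pixel entropy computation behind Figure 1 can be sketched as follows; natural logarithms are assumed, consistent with the maximum entropy of \(\ln 19\approx 2.94\) for 19 classes.

```python
import torch
import torch.nn.functional as F

def pixelwise_entropy(teacher_logits, tau=1.0):
    """Shannon entropy (in nats) of temperature-scaled class distributions.

    teacher_logits: tensor of shape (N, C, H, W); returns shape (N, H, W).
    """
    p = F.softmax(teacher_logits / tau, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)
```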
An important observation is that when the teacher output is not scaled (\(\tau=1\)), the entropy distribution spikes strongly at 0. More than 60% of pixels are classified with a confidence close to 1, suggesting that the teacher output might be too "hard" for efficient distillation of knowledge and that higher temperatures might help. On the other hand, for a value of \(\tau=16\), almost 100% of the distributions have an entropy of approximately 2.9, close to the maximum of \(\ln 19\approx 2.94\) for 19 classes, indicating an almost even distribution of probability mass over all classes.
### Hyperparameter Optimization
As described previously, we optimize hyperparameters separately for the student-only and the combined student and teacher training. The results of the grid search for initial learning rate \(\mu(0)\) and weight decay \(\gamma\) for both students on Cityscapes are presented in Table 2a; next to it, Table 2b lists the optimal hyperparameters for all datasets.
We use the same grid for the student and teacher training and set the temperature parameter to \(\tau=1\). The results are shown in Table 3, where again the left part (a) shows the detailed grid for Cityscapes and the right part (b) the optimal hyperparameters for all students and datasets.
In a second stage of hyperparameter optimization, we tune the temperature parameter \(\tau\). The tested values and their responses for all models and datasets can be found in Table 4. A choice of \(\tau=1\) appears to work well on PascalVOC and ADE20K, but greater values yield performance gains on Cityscapes.
Figure 1: Effect of logit scaling on Shannon Entropy of teacher output probability distributions over classes. The histograms are based on class probability distributions of randomly selected pixels from Cityscapes. Without scaling (\(\tau=1\)) over 60% of pixels are assigned one class with a probability close to 1.
Table 2: Grid search results for student-only training. a) shows the whole grid for Cityscapes, b) the best hyperparameters for all datasets. The best performance in a) is highlighted in bold.
Table 3: Grid search results for student and teacher training. a) shows the whole grid for Cityscapes, b) the best hyperparameters for all datasets. The best performance in a) is highlighted in bold.
The results of tuning the individual loss-term weights are presented in Table 5.
For the EffNet student the optimal values of loss weights are consistently smaller than for the ResNet student.
\begin{table}
\begin{tabular}{l|l l|l l|l l} \hline \hline & \multicolumn{2}{c}{PascalVOC} & \multicolumn{2}{c}{Cityscapes} & \multicolumn{2}{c}{ADE20K} \\ \(\tau\) & **EffNet** & **ResNet** & **EffNet** & **ResNet** & **EffNet** & **ResNet** \\ \hline
1 & **66.24 \(\pm\) 0.46** & **65.22 \(\pm\) 0.44** & 65.26 \(\pm\) 0.59 & 71.33 \(\pm\) 0.85 & **36.32** & **37.74** \\
2 & 66.08 \(\pm\) 0.16 & 64.37 \(\pm\) 0.08 & 64.62 \(\pm\) 0.77 & 72.15 \(\pm\) 0.10 & 35.51 & 36.85 \\
3 & 65.91 \(\pm\) 0.26 & 63.30 \(\pm\) 0.44 & 65.08 \(\pm\) 0.63 & **72.75 \(\pm\) 0.29** & 34.87 & 36.04 \\
4 & 65.02 \(\pm\) 0.82 & 63.42 \(\pm\) 0.54 & 65.26 \(\pm\) 0.43 & 72.44 \(\pm\) 0.31 & 35.02 & 36.34 \\
6 & 66.11 \(\pm\) 0.67 & 63.65 \(\pm\) 0.51 & 64.52 \(\pm\) 0.45 & 72.49 \(\pm\) 0.43 & 35.23 & 36.49 \\
8 & 66.11 \(\pm\) 0.13 & 64.18 \(\pm\) 0.36 & **65.58 \(\pm\) 0.03** & 72.09 \(\pm\) 0.50 & 34.91 & 36.51 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Impact of the temperature parameter \(\tau\) on segmentation performance. The best performances are highlighted in bold. Results within one standard deviation of the best result are underlined. For ADE20K only one run was computed.
Table 5: Impact of individual loss terms on segmentation performance on Cityscapes. The results for the three loss terms \(L_{PA}\), \(L_{HO}\) and \(L_{IFV}\) are presented in tables a), b) and c), respectively. The best mIoU is highlighted in bold for each student model.
### Final Performance Comparison
Table 6 shows the results of the additional loss experiments in comparison to student-only or simple pixel-wise distillation training. It is clear that adding the teacher model and the pixel-wise distillation loss, \(L_{PI}\), improves student performance, while none of the three tested additional loss terms provides a further lift. An exception is the PascalVOC dataset, where the conclusion is less clear.
Comparing our results to the related work in Table 1 shows that our student model with a performance of 37.74 clearly outperforms six out of eight methods on ADE20K, even though it was trained using only the most basic distillation loss. As the ADE20K dataset is the most complex dataset with 150 classes, this observation is surprising.
## 6 Ablation Study
Another design choice in KD is the initialization of the student weights. The effect of pre-training student backbones on ImageNet compared to random weight initialization has been studied with varying conclusions. The authors of SKD find KD to be more efficient when the student is initialized randomly [16]. CWD [26] contradicts this by stating that pre-trained weights help distillation, but calls the lift in student performance less significant than in the randomly initialized case; the provided reason is that the relative improvement of the student is smaller since the model is already better when trained without a teacher. [35] and [38] also report a higher absolute improvement for random initialization. [30] is ambiguous on the subject of weight initialization, stating in a sketch of the algorithm that student weights are initialized randomly, while the performance of student models trained without a teacher suggests initialization with pre-trained weights.
Our investigation of student weight initialization suggests that distillation of teacher knowledge by the simple pixel-wise distillation loss does not improve the performance of a randomly initialized student, while it does improve the pre-trained one. Table 7 compares our findings to the results of [16] and [26]. Our experiments find the validation mIoU of the student trained without a teacher to be 63.68, averaged over four runs. This
\begin{table}
\begin{tabular}{c|c|c|c||c c|c c|c c} \hline \hline \multirow{3}{*}{\(L_{PI}\)} & \multirow{3}{*}{\(L_{PA}\)} & \multirow{3}{*}{\(L_{HO}\)} & \multirow{3}{*}{\(L_{IFV}\)} & \multicolumn{2}{c}{Pascal Voc} & \multicolumn{2}{c}{Cityscapes} & \multicolumn{2}{c}{ADE20K} \\ \cline{5-10} & & & & **EffNet** & **ResNet** & **EffNet** & **ResNet** & **EffNet** & **ResNet** \\ \hline & & & & 65.43 \(\pm\) 0.38 & 64.50 \(\pm\) 0.34 & 64.42 \(\pm\) 0.46 & 70.45 \(\pm\) 0.10 & 34.10 & 35.23 \\ x & & & & 66.24 \(\pm\) 0.46 & 65.22 \(\pm\) 0.44 & **65.58 \(\pm\) 0.03** & **72.75 \(\pm\) 0.29** & **36.32** & **37.74** \\ x & x & & & 66.40 \(\pm\) 0.35 & **65.84 \(\pm\) 0.35** & 64.94 \(\pm\) 0.28 & 71.77 \(\pm\) 1.05 & 34.98 & 36.57 \\ x & & x & & **66.72 \(\pm\) 0.21** & 65.21 \(\pm\) 0.28 & 65.00 \(\pm\) 0.73 & 71.87 \(\pm\) 1.14 & 35.22 & 36.50 \\ x & & & x & 66.38 \(\pm\) 0.23 & 65.83 \(\pm\) 0.39 & 65.24 \(\pm\) 0.70 & 70.96 \(\pm\) 1.03 & 35.41 & 37.09 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Evaluation of more complex loss terms \(L_{PA}\), \(L_{HO}\), and \(L_{IFV}\) when added to \(L_{CE}+L_{PI}\) with optimal hyperparameters. Best results are highlighted in bold; results within one standard deviation of the best result are underlined.
result is in line with CWD, but 6.18 percentage points higher than what is reported by [16]. Also similar to CWD, we find no positive effect of distilling teacher knowledge by means of the investigated loss terms.
## 7 Environmental Impact
KD is an effective technique for reducing energy consumption at inference time, as the student model requires less energy than the larger teacher. On the other hand, energy consumption during student training is increased compared to training the student without a teacher. These two phenomena result in two trade-offs: the first is the performance loss relative to the teacher vs. the reduced energy consumption during inference; the second is the increased performance relative to student-only training vs. the increased energy consumption during training. The decision of when to use KD always depends on the exact use case. If a model is expected to be deployed on a large number of devices that all process images at a high rate, energy use during training might be negligible compared to inference, and KD might thus be extremely beneficial.
Using the codecarbon.io tool, we calculate the \(CO_{2}\) emissions of training and inference for our experiments with the ResNet student on Cityscapes and show them in Table 8.
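The measurement can be wrapped around training or inference roughly as follows; the offline tracker with the German country code matches the grid conditions mentioned below and is an assumption about the setup rather than part of the reported protocol.

```python
from codecarbon import OfflineEmissionsTracker

tracker = OfflineEmissionsTracker(country_iso_code="DEU")  # German grid assumed
tracker.start()
# ... run training or inference here ...
emissions_kg = tracker.stop()  # estimated emissions in kg CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.3f} kg CO2eq")
```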
Training the student on its own takes 0.841 kWh, which under German electricity conditions means 298 g of emitted \(CO_{2}\). As expected, the addition of the teacher in
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & SKD & CWD & Ours \\ \hline Teacher & 78.56 & 78.5 & 78.24 \\ \hline \(L_{CE}\) & 57.50 & 63.63 & **63.68** \\ \(L_{CE}+L_{PI}\) & 58.63 & - & 63.34 \\ \(L_{CE}+L_{PI}+L_{PA}+L_{HO}\) & 63.24 & 63.20 & - \\ \hline \hline \end{tabular}
\end{table}
Table 7: mIoU of randomly initialized ResNet student trained on Cityscapes compared to literature. The best mIoU is highlighted in bold.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{training} & \multicolumn{2}{c}{inference} \\ \cline{3-6} & mIoU & energy (kWh) & \(CO_{2}\) (kg) & energy (kWh) & \(CO_{2}\) (kg) \\ \hline S & 70.45 & 0.841 & 0.298 & 0.161 & 0.057 \\ S + KD & 72.75 & 2.52 & 0.891 & 0.161 & 0.057 \\ T & 78.24 & 3.01 & 1.06 & 0.413 & 0.146 \\ \hline \hline \end{tabular}
\end{table}
Table 8: \(CO_{2}\) emissions of different model combinations at training and inference on Cityscapes. S is the ResNet student, numbers for inference are per 10,000 images.
KD leaves energy consumption during inference unchanged, but increases energy consumption and emissions during training almost threefold. Comparing student and teacher reveals that the ResNet student emits only 57 g of \(CO_{2}\) per 10,000 images at inference, compared to 146 g for the teacher model. This means that, if we accept the decrease in performance, we can save over 60% of \(CO_{2}\) emissions at inference time.
## 8 Conclusion
In this work, we point out a significant comparability problem in the field of KD for semantic segmentation, which will grow in relevance as more techniques are published. We argue that comparability can be improved by thoroughly optimizing training hyperparameters and show that doing so eliminates the gains from two accepted techniques, SKD and IFVD, on the two more complex of the three investigated datasets. To facilitate easier comparisons in the future, we provide a detailed training protocol including optimal hyperparameters for two student models and three datasets. As part of the hyperparameter tuning, we investigate the temperature parameter \(\tau\), which most publications in the field simply set to 1. We analyze the entropy of the class probability distributions of the teacher output to visualize the softening effect of \(\tau>1\) and show that the temperature parameter can be beneficial to the distillation of teacher knowledge.
Following up on this work, and using the presented training with optimized hyperparameters, it would be useful to provide a fair comparison of the other loss terms in Table 1. Additionally, the understanding of KD in segmentation could be deepened by an analysis of the factors that determine when logit scaling is helpful and why it improves distillation on Cityscapes but not on the other datasets.
## Disclaimer
The results, opinions and conclusions expressed in this publication are not necessarily those of Volkswagen Aktiengesellschaft.
|
2310.00233 | CausalImages: An R Package for Causal Inference with Earth Observation,
Bio-medical, and Social Science Images | The causalimages R package enables causal inference with image and image
sequence data, providing new tools for integrating novel data sources like
satellite and bio-medical imagery into the study of cause and effect. One set
of functions enables image-based causal inference analyses. For example, one
key function decomposes treatment effect heterogeneity by images using an
interpretable Bayesian framework. This allows for determining which types of
images or image sequences are most responsive to interventions. A second
modeling function allows researchers to control for confounding using images.
The package also allows investigators to produce embeddings that serve as
vector summaries of the image or video content. Finally, infrastructural
functions are also provided, such as tools for writing large-scale image and
image sequence data as sequentialized byte strings for more rapid image
analysis. causalimages therefore opens new capabilities for causal inference in
R, letting researchers use informative imagery in substantive analyses in a
fast and accessible manner. | Connor T. Jerzak, Adel Daoud | 2023-09-30T02:52:49Z | http://arxiv.org/abs/2310.00233v3 | CausalImages: An R Package for Causal Inference with Earth Observation, Bio-medical, and Social Science Images
###### Abstract
The **causalimages** R package enables causal inference with image and image sequence data, providing new tools for integrating novel data sources like satellite and bio-medical imagery into the study of cause and effect. One set of functions enables image-based causal inference analyses. For example, one key function decomposes treatment effect heterogeneity by images using an interpretable Bayesian framework. This allows for determining which types of images or image sequences are most responsive to interventions. A second modeling function allows researchers to control for confounding using images. The package also allows investigators to produce embeddings that serve as vector summaries of the image or video content. Finally, infrastructural functions are also provided, such as tools for writing large-scale image and image sequence data as sequentialized byte strings for more rapid image analysis. **causalimages** therefore opens new capabilities for causal inference in **R**, letting researchers use informative imagery in substantive analyses in a fast and accessible manner.
Keywords:Causal inference, image analysis, image-sequence data, computer vision, machine learning, R.
## 1 Introduction: Causal Inference with Images
Satellite image data represents an emerging resource for research in global development and earth observation, yet until now no R package has existed to handle images for causal inference. By _causal inference_, we refer to the rich literature in statistics (Imbens and Rubin, 2016), computer science (Pearl, 2009), and beyond (Hernan and Robins, 2020). Satellites generate temporally rich worldwide coverage, capturing the entire Earth's surface at regular intervals, except when obscured by clouds (Burke _et al._, 2021). Historical archives date back to the 1970s. Unlike snapshots of political, economic, or educational systems at a single time point, satellites revisit each location at least every two weeks, providing approximately 26 temporal observations annually. This time-series information has proven valuable for studying phenomena like transportation network growth (Nagne and Gawali, 2013), urbanization (Schneider _et al._, 2009), health and living conditions (Daoud _et al._, 2023; Chi _et al._, 2022), living standards (Yeh _et al._, 2020; Pettersson _et al._, 2023), and neighborhood characteristics
(Sowmya and Trinder 2000). Thus, satellite data facilitates observational inference where ground-level data is lacking. Moreover, image quality and frequency continue improving as the satellite population proliferates from hundreds to thousands (Tatem _et al._ 2008), with sub-100 cm resolution now available (Hallas 2019). We see an example of this data source in Figure 1.
Methodological guidance has remained limited for causal estimation from satellite images (Daoud and Dubhashi 2023). To address that methodological gap, Jerzak _et al._ (2022) recently proposed methods to adjust for confounding, and Jerzak _et al._ (2023) developed methods for estimating effect heterogeneity in images. Our **causalimages** package encompasses these methods and will accommodate future ones. Our confounding method addresses observational causal inference amidst image-based confounding. Our heterogeneity method shows how researchers may use past satellite images to proxy geographical and historical processes important for moderating the treatment effect in both randomized experiments and observational studies.
Our causal inference work with images complements the growing use of visual data in climate science, sociology, economics, political science, and biomedical research (Kino _et al._ 2021; Daoud and Dubhashi 2023). Examples include qualitative photo analysis (Pauwels 2010; O'Hara and Higgins 2019), image similarity calculation (Zhang and Peng 2022), crowd size estimation (Cruz and Gonzalez-Villa 2021), and relating social outcomes to Street View scenes (Gebru _et al._ 2017). Recent extensions encompass video for investigating social processes like police violence (Nassauer and Legewie 2021). In large-data quantitative studies, algorithms
Figure 1: Landsat satellite imagery over Nigeria.
have been trained to identify objects of interest automatically (Torres and Cantu, 2022). Similarly, in the biomedical domain, Castro _et al._ (2020) show how a variety of image data--from X-rays to ultrasound pictures to MRI scans--can be used for causal inference. However, more research is needed to close the gap between foundational and applied research across those domains, not only for earth observation data. To contribute to closing that gap, we created **causalimages**. Although our examples focus on satellite images, researchers can use any image data, across the other domains where such data are available.
In concluding this introduction, we note that this package builds on the R ecosystem for data visualization and geospatial analysis. For instance, R packages like **tensorflow** and **keras** provide the backbone for deep learning functionalities with the TensorFlow backend. The **viridis** package enhances the visualization capabilities, while **animation** facilitates dynamic plots for results involving image sequence data. Integration with Python is streamlined through **reticulate**. For geospatial operations, we rely on **geosphere** and **raster**.
## 2 Models and Software
### Package Overview
At a high level, the causalimages package provides tools for causal inference with image data. The package contains several functions whose relations are summarized in Figure 2.
The AnalyzeImageConfounding and AnalyzeImageHeterogeneity functions run the main analysis models for image causality. They require observed treatment and outcome data, as well as a way to retrieve the image data associated with each observation.
Other functions center on working with image data in a more infrastructural sense. The GetAndSaveGeolocatedImages function helps retrieve image data referenced by geographic coordinates. WriteTfRecord writes image or image sequence data to a sequentialized TFRecord file for efficient retrieval. GetImageEmbeddings generates embeddings useful for other tasks where an efficient summary of the image information is required.
Together, these tools enable causal analyses that incorporate image data as confounders,
Figure 2: Overview of workflow in the causalimages package.
mediators, and moderators of treatment effects. The analyses can adjust for spatial dependencies and estimate heterogeneous treatment effects associated with geospatial imagery.
### Package Installation and Loading
The **causalimages** package is currently installed using the devtools package. For installation, users should run:
R> devtools::install_github(repo = "AIandGlobalDevelopmentLab/causalimages-software")
To load the package into a live R environment, use:
R> library(causalimages)
The **causalimages** package uses a **tensorflow** backend for image analysis and GPU utilization. Python version 3 or above is assumed. To install **tensorflow** into your default Python environment, try
R> library(reticulate) R> py_install("tensorflow")
You may need to do the same to install dependencies such as **tensorflow-probability** and **gc**. For more fine-grained user control over CPU and GPU use, we recommend installing the requisite backend into a conda environment.
### Tutorial Data
Once the package has been successfully initiated, users can access package data useful for tutorial purposes. The data are drawn from an anti-poverty experiment in Uganda (Blattman _et al._ 2020) and contain information on the treatment, the experimental outcome, approximate coordinates for each unit, as well as pre-treatment covariates and geo-referenced satellite images for each unit. To allow researchers to load all images into memory, we have cropped these images to a smaller-than-original size.
R> data(CausalImagesTutorialData)
The Blattman data are then structured as follows:
R> summary(obsW) # A vector portraying treatment information R> summary(obsY) # A vector with an experimental outcome R> summary(LongLat) # Geospatial coordinates for each observational unit R> summary(X) # Pre-treatment covariate information R> causalimages::image2(FullImageArray[1,,,1]) # image associated with unit 1 R> causalimages::image2(FullImageArray[3,,,2]) # image associated with unit 3
### Functions for Data Assimilation
We have several functions for helping users save geo-located images. For example, **GetAndSaveGeolocatedImages** finds the image slice associated with given longitude and latitude
values and saves images by band, given a pool of .tif files. For example, the .tif pool may contain dozens of large Landsat mosaics covering the continent of Africa, from which we want to extract a 500\(\times\)500 meter square image around a particular point. For the Blattman data, we have provided the Landsat images in the package. For other data, the user has to download imagery from USGS or Google Earth Engine.
In the following example, we have two .tif files saved in "./LargeTifs"; we can search across those images for matches to the associated long and lat inputs. When a match is found, a series of .csv files is written to encompass the data in an image_pixel_width square around the target geo-point in the target .tif. These image objects are saved in the save_folder as Key[key]_BAND[band].csv, where [key] refers to the appropriate entry from keys specifying the label for each image and [band] specifies the band.
R> MASTER_IMAGE_POOL_FULL_DIR <- c("./LargeTifs/tif1.tif","./LargeTifs/tif2.tif") R> GetAndSaveGeolocatedImages( + long = GeoKeyMat$geo_long, + lat = GeoKeyMat$geo_lat, + image_pixel_width = 500L, + keys = row.names(GeoKeyMat), + if_pool = MASTER_IMAGE_POOL_FULL_DIR, + save_folder = "./Data/Uganda2000_processed", + save_as = "csv", + lyrs = NULL)
### Functional Image Loading
One important part of the image analysis pipeline is writing a function that acquires the appropriate image data for each observation. This function will be fed into the acquireImageFxn argument of the package functions unless an approach using TFRecords is used instead. There are two ways that you can approach this: (1) you may store all images in R's memory (feasible only for problems involving few or small images), or you may (2) save images on your hard drive (e.g., using GetAndSaveGeolocatedImages) and read them in when needed. The second option will be more common for large images.
You will write your acquireImageFxn to take in one main argument--keys, which is fed a character or numeric vector. Each value of keys refers to a unique image object that will be read in. If each observation has a unique image associated with it, then perhaps imageKeysOfUnits = 1:nObs. If multiple observations map to the same image, then multiple observations will map to the same keys value. A key thus serves as an identifier for a particular image, which is then referenced back to the individuals associated with that image.
In practice, users should ensure that acquireImageFxn returns arrays with dimensions batch by height by width by channels in the case of images and batch by time by height by width by channels in the case of image sequences/videos.
#### When Loading All Images in Memory
We here provide an example of writing an acquireImageFxn function using the tutorial data wherein the images are already read into memory.
R> acquireImageFromMemory <- function(keys, training = F){ + # subset the in-memory array to the requested image keys + m_ <- FullImageArray[match(keys, KeysOfImages),,,] + if(length(keys) == 1){ + # a single key drops the batch dimension, so restore it + m_ <- array(m_, dim = c(1L, dim(m_)[1], dim(m_)[2], dim(m_)[3])) + } + return( m_ ) +} To run this acquireImageFxn in practice, we would take
R> ImageSet <- acquireImageFromMemory(KeysOfObservations[c(5,7)]) where ImageSet contains the images associated with observations 5 and 7 (note that both could have the same image if these units are co-located).
#### When Reading in Images from Disk
For most applications of large-scale causal image analysis, we won't be able to read the whole set of images into R's memory. Instead, we can specify a function that will read images from somewhere on your hard drive. You can also experiment with other methods--as long as you can specify a function that returns an image when given the appropriate imageKeysOfUnits value, you should be fine. Here's an example of an acquireImageFxn that reads images from disk:
R> acquireImageFromDisk <- function(keys, training = F){ + array_shell <- array(NA, dim = c(1L, imageHeight, imageWidth, NBANDS)) + array_ <- sapply(keys, function(key_){ + for(band_ in 1:NBANDS){ + # read the saved band matrix for this key from disk + array_shell[,,,band_] <- + (as.matrix(data.table::fread( + input = sprintf("./Data/Uganda2000_processed/Key%s_BAND%s.csv", key_, band_), header = F)[-1,] )) + } + return( array_shell ) + }, simplify = "array") + # drop the leading singleton dimension and reorder to batch x height x width x bands + array_ <- tf$squeeze(tf$constant(array_, dtype = tf$float32), 0L) + array_ <- tf$transpose(array_, c(3L, 0L, 1L, 2L)) + return( array_ ) +}
### TFRecords Integration for Fast Image Processing
We can use the aforementioned functions for acquiring images from disk to write the data corpus in an optimized format for fast reading-writing. This format is not required but is highly recommended to improve causal image analysis runtimes.
In particular, once we have acquired a pool of geo-referenced satellite images, **causalimages** also contains a function that writes the analysis data in TFRecord format, a binary storage format used by TensorFlow to store data efficiently and to enable its fast acquisition in serialized chunks--a process that speeds up loading images when the data are too large to fit into memory.
R> WriteTfRecord( + file = "./UgandaApp.tfrecord", + keys = KeysOfObservations, + acquireImageFxn = acquireImageFromMemory, + conda_env = "tensorflow_m1")
**WriteTfRecord** writes the entire image data stream to TFRecord format. As we discuss later, the same function can be used if the inputted acquireImageFxn function outputs image sequences.
### Image and Image Sequence Embeddings
The GetImageEmbeddings function offers a methodology for extracting image and video embeddings, particularly tailored for earth observation tasks that drive causal inference. Using the randomized convolutions approach of Rolf _et al._ (2021), the function generates vector representations of images and image sequences based on the similarity of these data to a large set of small image patterns (i.e., kernels). The parameters provided to the function allow fine-tuned control over the embedding process, especially in the kind of convolutional kernels used. This flexibility allows the embeddings to be adapted to the given dataset.
We note that the embeddings function works with both image and image sequence data. It can also be run in the de-confounding and heterogeneity decomposition functions we will analyze later.
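To convey the intuition behind these randomized-convolution embeddings, the following base-R sketch featurizes a single-band image with random kernels; it is purely illustrative and not the package's internal implementation, which runs on the TensorFlow backend.

R> # Illustrative sketch of the random-kernel idea (Rolf et al. 2021)
R> Conv2dValid <- function(img, kern){
+   kh <- nrow(kern); kw <- ncol(kern)
+   out <- matrix(0, nrow(img) - kh + 1, ncol(img) - kw + 1)
+   for(i in 1:nrow(out)){ for(j in 1:ncol(out)){
+     out[i, j] <- sum(img[i:(i + kh - 1), j:(j + kw - 1)] * kern)
+   } }
+   out
+ }
R> RandomKernelEmbed <- function(img, nEmbedDim = 100L, kernelSize = 3L){
+   sapply(1:nEmbedDim, function(k){
+     kern <- matrix(rnorm(kernelSize^2), kernelSize, kernelSize)
+     mean(pmax(Conv2dValid(img, kern), 0)) # ReLU + global average pooling
+   })
+ }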
To use the function, you can specify how to load images via the acquireImageFxn
R> MyImageEmbeddings <- GetImageEmbeddings( + imageKeysOfUnits = KeysOfObservations[ take_indices ], + acquireImageFxn = acquireImageFromMemory, + nEmbedDim = 100, + kernelSize = 3L, + conda_env = "tensorflow_m1", + conda_env_required = T ) Again, conda_env specifies a conda environment in which the desired version of the TensorFlow backend lives. If NULL, we search in the default Python environment for the backend.
Alternatively, you may use the tfrecords approach as follows:
R> MyImageEmbeddings <- GetImageEmbeddings( + file = "./UgandaApp.tfrecord", + nEmbedDim = 100,
+ kernelSize = 3L, + conda_env = "tensorflow_m1", + conda_env_required = T )
Finally, we can also obtain embeddings over image sequences. To do so, we first write a simple function creating image sequences given keys.
R> acquireVideoRepFromMemory <- function(keys, training = F){ + tmp <- acquireImageFromMemory(keys, training = training) + + if(length(keys) == 1){ + tmp <- array(tmp,dim = c(1L,dim(tmp)[1],dim(tmp)[2],dim(tmp)[3])) + } + tmp <- array(tmp,dim = c(dim(tmp)[1], + dim(tmp)[3], + dim(tmp)[4], + 1L)) + return( tmp ) +}
To obtain video embeddings, we take:
R> MyVideoEmbeddings <- GetImageEmbeddings( + imageKeysOfUnits = KeysOfObservations[ take_indices ], + acquireImageFxn = acquireVideoRepFromMemory, + temporalKernelSize = 2L, + kernelSize = 3L, + nEmbedDim = 100, + conda_env = "tensorflow_m1", + conda_env_required = T)
We can also write a TFRecord and use that in obtaining image sequence embeddings by specifying the file argument.
### Deconfounding with Image and Image Sequence
Using the **AnalyzeImageConfounding** function, causal effects are estimated using image-based or image-sequence-based confounders, although we add the option to include tabular confounders as well.
R> ImageConfoundingAnalysis <- AnalyzeImageConfounding( + obsW = obsW[ take_indices ], + obsY = obsY[ take_indices ], + X = X[ take_indices,apply(X[ take_indices,],2,sd)>0], + long = LongLat$geo_long[ take_indices ],
+ lat = LongLat$geo_lat[ take_indices ],
+ imageKeysOfUnits = KeysOfObservations[ take_indices ],
+ acquireImageFxn = acquireImageFromMemory,
+ batchSize = 4,
+ # modelClass = "cnn", # uses convolutional network (richer model class)
+ modelClass = "embeddings", # uses image embeddings (faster)
+ file = NULL,
+ plotBands = c(1,2,3),
+ dropoutRate = 0.1,
+ tagInFigures = T, figuresTag = "TutorialExample",
+ nBoot = 10,
+ nSGD = 10, # this should be more like 1000 in full analysis
+ figuresPath = "~/Downloads", # figures saved here
+ conda_env = "tensorflow_m1",
+ conda_env_required = T
+ )
**AnalyzeImageConfounding** returns a list containing an image-adjusted ATE estimate tauMat_propensityHajek (see Jerzak _et al._ (2022) for details) and an uncertainty estimate, tauMat_propensityHajek_se. A matrix of out-of-sample performance metrics (e.g., out-of-sample negative log-likelihood) is housed in ModelEvaluationMetrics. Users can specify whether they would like to use the faster option, modelClass = "embeddings", or the more computationally intensive but richer modeling approach using end-to-end convolutional neural network (CNN) training modelClass = "cnn". To speed up performance, we recommend letting acquireImageFxn = NULL and instead writing and then specifying a TFRecords file (e.g., set file to the path of the TFRecord saved via a call to WriteTfRecord).
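For intuition about what the Hajek estimator computes, the sketch below shows the standard self-normalized inverse-propensity-weighted ATE. Here, ehat denotes a vector of estimated propensity scores; this is an illustration of the estimator's form, not the package's internal code (see Jerzak _et al._ (2022) for the exact estimator).

R> HajekATE <- function(obsY, obsW, ehat){
+   # self-normalized inverse-propensity-weighted means per arm
+   mu1 <- sum(obsW * obsY / ehat) / sum(obsW / ehat)
+   mu0 <- sum((1 - obsW) * obsY / (1 - ehat)) / sum((1 - obsW) / (1 - ehat))
+   mu1 - mu0 # average treatment effect estimate
+ }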
In addition to providing these estimated quantities, the function also writes to disk summary PDF outputs containing salience maps, which quantify areas in the image or image sequence that, if changed, would lead to the largest change in predicted treatment probability. An example of such figures is found in Figure 3. Note that this figure can be made for either the embeddings or CNN image modeling backbone.
### Heterogeneity Analysis with Image and Image Sequence Data
Using the **AnalyzeImageHeterogeneity** function, Conditional Average Treatment Effects (CATEs) are estimated using image-based or image-sequence-based pre-treatment information, although we add the option to include tabular pre-treatment covariates as well if the X argument is fed a numeric matrix input. This function would be used, for example, if we wanted to learn about the kinds of geographies or developmental trajectories, proxied by satellite image data, that are most conducive to favorable responses to anti-poverty interventions. There are also possible applications in the biomedical domain, where image data could be associated with a high or low response to a drug.
The functionality of **AnalyzeImageHeterogeneity** works much like **AnalyzeImageConfounding**:
R> ImageHeterogeneityResults <- AnalyzeImageHeterogeneity( + # data inputs + obsW = UgandaDataProcessed$Wobs, + obsY = UgandaDataProcessed$Yobs, + imageKeysOfUnits = UgandaDataProcessed$geo_long_lat_key, + acquireImageFxn = acquireImageFromDisk, + conda_env = "tensorflow_m1", # change to your conda env + conda_env_required = T, + X = X, + plotBands = 1L, + lat = UgandaDataProcessed$geo_lat, # not required, deals with redundant locs + long = UgandaDataProcessed$geo_long, # not required, deals with redundant locs
Figure 3: The three panels on the left illustrate the unprocessed data for control units, relevance diagrams for the predicted treatment likelihood, and the outcome from the concluding spatially detailed layer of the CNN image framework. The trio of panels on the right shows comparable elements for the treated units.
+ # inputs to control where visual results are saved as PDF or PNGs + plotResults = T, + figuresPath = "~/Downloads/", + printDiagnostics = T, + figuresTag = "causalimagesTutorial", + + # optional arguments for generating transportability maps + transportabilityMat = NULL, + + # other modeling options + modelClass = "embeddings", # uses image/video embeddings model class + orthogonalize = F, + heterogeneityModelType = "variational_minimal", + kClust_est = 2, + nMonte_variational = 2L, + nSGD = 4L, + batchSize = 34L, + kernelSize = 3L, maxPoolSize = 2L, strides = 2L, + nDepthHidden_conv = 2L, + nFilters = 64L, + nDepthHidden_dense = 0L, + nDenseWidth = 32L, + nDimLowerDimConv = 3L +) The function performs the image heterogeneity decomposition analysis delineated in Jerzak _et al._ (2023). Given the treatment and outcome data, along with a function that loads images by reference keys for each unit, the function produces several outputs, all returned in a list. The clusterTaus_mean entry presents the estimated image effect cluster means, while clusterTaus_sd contains the estimated standard deviations of those effect clusters.
The function further produces the clusterProbs_mean, which contains the average probabilities of these image effect clusters. Additionally, it offers an estimate of the standard deviations of these cluster probabilities through the clusterProbs_sd. For a deeper dive into the probabilistic insights, the clusterProbs_lowerConf gives the estimated lower confidence bounds for the effect cluster probabilities.
Regarding treatment effects, the function computes the impliedATE, a derived average treatment effect, and individualTau_est, which contains estimated treatment effects for individual images. The transportabilityMat provides a matrix of cluster information, essential for analyses involving areas outside the original study locales. Lastly, to ensure data integrity, the whichNA_dropped output identifies observations that were excluded because of missing values.
In addition to providing these estimated quantities, the function also writes to disk summary PDF outputs containing salience maps, which quantify areas in the image or image sequence that, if changed, would lead to the largest change in predicted treatment effect cluster
probability. An example of such figures is found in Figure 4.
We note that the image (sequence) heterogeneity analysis can be made for either the embeddings or CNN image modeling backbone.
## 3 Conclusion and Future Development
As previously mentioned, **causalimages** closes the gap between foundational and applied research in using image data for causal inference (Jerzak _et al._, 2022, 2023). Besides the application cases discussed, we expect uptake in climate research, particularly research on natural disaster evaluation (Shiba _et al._, 2021, 2022; Kakooei _et al._, 2022; Daoud _et al._, 2016), armed conflict, and ecology. These research fields concern processes occurring on the surface of the earth, often with substantial impact that is measurable from space. In the age of data science, an increasing number of researchers are using image data (Daoud and
Figure 4: _Left, top 3 rows:_ High probability cluster 1 images. _Left, bottom 3 rows:_ High probability cluster 2 images. “Salience Magnitude” and “Direction” represent two ways of analyzing salience. See Jerzak _et al._ (2023) for details.
Dubhashi 2023). Although the package focuses on earth observation and global development research, it is usable across a variety of domains in economics (Hall 2010; Henderson _et al._ 2012), sociology (Daoud _et al._ 2023), public policy (Balgi _et al._ 2022), public health (Kino _et al._ 2021; Conklin _et al._ 2018), and biomedical applications (Castro _et al._ 2020).
Having discussed the current functionalities of **causalimages**, we now turn to its future development. We discuss five developments we will implement in the near future. First, we will modularize the image models that power **causalimages**, allowing the user to plug in their preferred image model or extend it with a different backbone (i.e., feature extractor). Currently, **causalimages** uses two types of image-modeling backbones: the randomized embeddings and the CNNs. While the CNN is a supervised procedure, the embedding backbone is unsupervised. However, there are many other foundational image-processing models or model architectures that the user might wish to consider--VGGs, ResNets, U-Nets, LSTMs, Inception, and Transformer-based models.
These models allow the user to calibrate their modeling approach to the data at hand. Additionally, several pre-trained models exist, trained on classical datasets such as ImageNet or CIFAR, or on earth observation data. For example, NASA and IBM recently trained such a model (Fraccaro _et al._ 2023). Using these models in combination with the principles of transfer learning will enable researchers to adapt their image models to the often small datasets encountered in the biomedical and social sciences. Thus, modularization will enable users to adapt **causalimages** to their data and modeling needs.
Second, we will create image simulation facilities for causal inference. Often, when researchers develop or use methods in observational studies, they wish to simulate data to gain a deeper understanding of a causal system of interest. However, because images are high-dimensional objects with a vast number of parameters, it can be challenging to simulate image data that associates in a desired way with tabular data. To enable such simulations, we will incorporate a set of generative models to simulate counterfactual image scenarios. These generative models will partly build on the deep geography literature (Zhao _et al._ 2021).
Third, we will incorporate existing models for causal discovery. There is considerable literature on discovery in high-dimensional data (cite Bernard group), and by connecting to that research, we foresee cross-fertilization between causal inference conducted in a deductive versus an inductive manner. Thus, we will incorporate methods that are able to detect causal signals in image data (Lopez-Paz _et al._ 2017).
Fourth, we will incorporate additional forms of uncertainty quantification. Currently, **causalimages** quantifies sampling uncertainty using bootstrapping or a Bayesian approach. However, both approaches can be computationally expensive. Thus, a future aim is to improve computational efficiency and keep updating the package to incorporate insights from the state of the art (Smith 2014; Abdar _et al._ 2021).
Fifth, we will likely need to develop a grammar for causal inference with image data, inspired by the vision of the grammar of graphics (Wilkinson _et al._ 2005; Tufte 2001). As we discuss in Jerzak _et al._ (2022, 2023), image data come with varying bands, resolutions, and revisit times; thus, they have varying data structures. That also implies that the extent to which image data provide a window for causally analyzing the phenomena of interest will vary with these data structures. To handle that variability, researchers will likely need different functions or arguments to work efficiently and precisely with these data. That grammar development entails both aligning **causalimages** with existing functions in
common geospatial packages and developing extension software.
## Acknowledgments
We thank the members of the AI and Global Development Lab: James Bailie, Cindy Conlin, Devdatt Dubhashi, Felipe Jordan, Mohammad Kakooei, Eagon Meng, Xiao-Li Meng, and Markus Pettersson for valuable feedback on this project. We also thank Xiaolong Yang. In particular, we would like to acknowledge Cindy Conlin for being the first user of the package and for providing excellent feedback.
|
2309.12754 | Exact coherent structures in two-dimensional turbulence identified with
convolutional autoencoders | Convolutional autoencoders are used to deconstruct the changing dynamics of
two-dimensional Kolmogorov flow as $Re$ is increased from weakly chaotic flow
at $Re=40$ to a chaotic state dominated by a domain-filling vortex pair at
$Re=400$. The highly accurate embeddings allow us to visualise the evolving
structure of state space and are interpretable using `latent Fourier analysis'
(Page {\em et. al.}, \emph{Phys. Rev. Fluids} \textbf{6}, 2021). Individual
latent Fourier modes decode into vortical structures with a streamwise
lengthscale controlled by the latent wavenumber, $l$, with only a small number
$l \lesssim 8$ required to accurately represent the flow. Latent Fourier
projections reveal a detached class of bursting events at $Re=40$ which merge
with the low-dissipation dynamics as $Re$ is increased to $100$. We use doubly-
($l=2$) or triply- ($l=3$) periodic latent Fourier modes to generate guesses
for UPOs (unstable periodic orbits) associated with high-dissipation events.
While the doubly-periodic UPOs are representative of the high-dissipation
dynamics at $Re=40$, the same class of UPOs move away from the attractor at
$Re=100$ -- where the associated bursting events typically involve larger-scale
($l=1$) structure too. At $Re=400$ an entirely different embedding structure is
formed within the network in which no distinct representations of small-scale
vortices are observed; instead the network embeds all snapshots based around a
large-scale template for the condensate. We use latent Fourier projections to
find an associated `large-scale' UPO which we believe to be a finite-$Re$
continuation of a solution to the Euler equations. | Jacob Page, Joe Holey, Michael P. Brenner, Rich R. Kerswell | 2023-09-22T09:53:54Z | http://arxiv.org/abs/2309.12754v1 | # Exact coherent structures in two-dimensional turbulence identified with convolutional autoencoders
###### Abstract
Convolutional autoencoders are used to deconstruct the changing dynamics of two-dimensional Kolmogorov flow as \(Re\) is increased from weakly chaotic flow at \(Re=40\) to a chaotic state dominated by a domain-filling vortex pair at \(Re=400\). The highly accurate embeddings allow us to visualise the evolving structure of state space and are interpretable using 'latent Fourier analysis' (Page _et. al._, _Phys. Rev. Fluids_**6**, 2021). Individual latent Fourier modes decode into vortical structures with a streamwise lengthscale controlled by the latent wavenumber, \(l\), with only a small number \(l\lesssim 8\) required to accurately represent the flow. Latent Fourier projections reveal a detached class of bursting events at \(Re=40\) which merge with the low-dissipation dynamics as \(Re\) is increased to 100. We use doubly- (\(l=2\)) or triply- (\(l=3\)) periodic latent Fourier modes to generate guesses for UPOs (unstable periodic orbits) associated with high-dissipation events. While the doubly-periodic UPOs are representative of the high-dissipation dynamics at \(Re=40\), the same class of UPOs move away from the attractor at \(Re=100\) -- where the associated bursting events typically involve larger-scale (\(l=1\)) structure too. At \(Re=400\) an entirely different embedding structure is formed within the network in which no distinct representations of small-scale vortices are observed; instead the network embeds all snapshots based around a large-scale template for the condensate. We use latent Fourier projections to find an associated 'large-scale' UPO which we believe to be a finite-\(Re\) continuation of a solution to the Euler equations.
## 1 Introduction
The dynamical systems view of turbulence (Hopf, 1948; Eckhardt _et al._, 2002; Kerswell, 2005; Eckhardt _et al._, 2007; Gibson _et al._, 2008; Cvitanovic & Gibson, 2010; Kawahara _et al._, 2012; Suri _et al._, 2020; Graham & Floryan, 2021; Crowley _et al._, 2022) has revolutionised our understanding of transitional and weakly turbulent shear flows. In this perspective, a realisation of a turbulent flow is considered as a trajectory in a very high-dimensional dynamical system, in which unstable periodic orbits (UPOs) and their stable and unstable manifolds serve as a skeleton for the chaotic dynamics (Hopf, 1948; Cvitanovic _et al._, 2016). However, progress with these ideas in multiscale turbulence at high Reynolds numbers |
2309.03881 | The simple $\mathscr{B}_ψ$-groups | In a finite group $ G $, $ \psi(G) $ denotes the sum of element orders of $ G
$. A finite group $ G $ is said to be a $\mathscr{B}_{\psi}$-group if $ \psi(H)
< |G| $ for any proper subgroup $ H $ of $ G $.
In \cite{Lazorec} Lazorec asked: "what can be said about the
$\mathscr{B}_{\psi}$ property of the finite simple groups $
\operatorname{PSL}(2, q) $?" In this paper, we answer this question for the
case of not only the finite simple groups $ \operatorname{PSL}(2, q) $ but also
all other finite simple groups. We show that if $ S $ is a finite simple group,
such that $ S \neq Alt(n) $ for any $ n \geq 14 $, then $S$ is a
$\mathscr{B}_{\psi}$-group. | Morteza Baniasad Azad | 2023-09-07T17:44:55Z | http://arxiv.org/abs/2309.03881v1 | # The simple \(\mathscr{B}_{\psi}\)-groups
###### Abstract.
In a finite group \(G\), \(\psi(G)\) denotes the sum of element orders of \(G\). A finite group \(G\) is said to be a \(\mathscr{B}_{\psi}\)-group if \(\psi(H)<|G|\) for any proper subgroup \(H\) of \(G\).
In [6] Lazorec asked: "what can be said about the \(\mathscr{B}_{\psi}\) property of the finite simple groups \(\mathrm{PSL}(2,q)\)?" In this paper, we answer this question for the case of not only the finite simple groups \(\mathrm{PSL}(2,q)\) but also all other finite simple groups. We show that if \(S\) is a finite simple group, such that \(S\neq Alt(n)\) for any \(n\geq 14\), then \(S\) is a \(\mathscr{B}_{\psi}\)-group.
Key words and phrases:Simple groups, sum of element order, finite group, \(\mathscr{B}_{\psi}\)-groups 2010 Mathematics Subject Classification: 20D05, 20D60, 20D06, 20D08
## 1. Introduction
Throughout this paper, all groups are finite. The cyclic group of order \(n\) is denoted by \(C_{n}\). The order of \(g\in G\) is denoted by \(o(g)\), and the sum of element orders of \(G\) is denoted by \(\psi(G)\). Some relations between the structure of the group \(G\) and \(\psi(G)\) are given in [1, 2]. A group \(G\) is called a \(\mathscr{B}_{\psi}\)-group if \(\psi(H)<|G|\) for all proper subgroups \(H\) of \(G\) [5].
For more details about \(\mathscr{B}_{\psi}\)-group, we refer the reader to [5, 6]; for example, the authors proved the following results:
**Theorem 1.1**.: _[_5_, Theorem 18]_ _Let \(G\) be a finite abelian group. Then \(G\) is a \(\mathscr{B}_{\psi}\)-group if and only if \(G\cong C_{p^{2}}\) or \(G\cong C_{p}^{n}\), where \(p\) is a prime and \(n\geq 1\)._
**Theorem 1.2**.: _[_6_, Theorem 2.5]_ _Let \(G\) be a finite nilpotent group. Then \(G\) is a \(\mathscr{B}_{\psi}\)-group if and only if \(G\cong C_{p^{2}}\) or \(exp(G)=p\), where \(p\) is a prime._
In [6], Lazorec put forward the following question:
**Question.** [6, Question 3] What can be said about the \(\mathscr{B}_{\psi}\) property of the finite simple groups \(\mathrm{PSL}(2,q)\)?
In this paper, we answer the above question not only for \(\mathrm{PSL}(2,q)\) but also for all simple groups:
**Main Theorem.** Let \(S\) be a finite simple group, such that \(S\neq Alt(n)\) for any \(n\geq 14\). Then \(S\) is a \(\mathscr{B}_{\psi}\)-group.
**Notation.** For a group \(G\) we denote by \(meo(G)\) the maximum order of an element of \(G\) and by \(m(G)\) the minimum index of a maximal subgroup of \(G\).
\[meo(G)=\max\{o(g)|g\in G\},\qquad m(G)=\min\{|G:M||M\text{ is proper of }G\}.\]
Also, we denote by \(m_{2}(G)\) the second minimum index of a maximal subgroup of \(G\).
## 2. **Main results**
**Definition 2.1**.: We say a group \(G\) is a \(meo\)-group if \(meo(G)\leq m(G)\).
**Remark 2.2**.: For any non-trivial group \(G\), we have \(\psi(G)<|G|\cdot meo(G)\).
**Lemma 2.3**.: _Every non-trivial \(meo\)-group is a \(\mathscr{B}_{\psi}\)-group._
Proof.: Let \(G\) be a non-trivial \(meo\)-group. Then for all proper subgroups \(M\) of \(G\), \(meo(G)\leq|G:M|\). If \(M=1\), then \(\psi(M)=1<|G|\). If \(M\neq 1\), then
\[\psi(M)<|M|\cdot meo(M)\leq|M|\cdot meo(G)\leq|M|\cdot|G:M|=|G|.\]
Therefore \(G\) is a \(\mathscr{B}_{\psi}\)-group.
**Lemma 2.4**.: _[_4_, Theorem 1.2]_ _For a finite non-abelian simple group \(S\), either \(meo(\operatorname{Aut}(S))<m(S)/4\) or \(S\) is listed in Table 1._
**Theorem 2.5**.: _Let \(n\geqslant 2\) and \((n,q)\neq(2,2),(2,3)\) and \(q\) is a power \(p^{a}\) of a prime \(p\). Then_
1. _the simple groups_ \(\operatorname{PSL}(n,q)\)_, where_ \((n,q)\neq(4,2)\)_, are_ \(meo\)_-groups._
2. _the simple groups_ \(\operatorname{PSL}(n,q)\) _are_ \(\mathscr{B}_{\psi}\)_-groups._
Proof.: (1) We show that \(meo(\operatorname{PSL}(n,q))\leq m(\operatorname{PSL}(n,q))\) for \((n,q)\neq(4,2)\). Using [7, Theorem 1], we have
\[m(\operatorname{PSL}(n,q))=\left\{\begin{array}{ll}8&(n,q)=(4,2)\\ 6&(n,q)=(2,9)\\ q&n=2,\ q\in\{5,7,11\}\\ \frac{q^{n}-1}{q-1}&\text{otherwise}\end{array}\right.\]
Using [8], we have \(meo(\operatorname{PSL}(2,5))=5\), \(meo(\operatorname{PSL}(2,7))=7\), \(meo(\operatorname{PSL}(2,11))=11\) and \(meo(\operatorname{PSL}(2,9))=5\) and so \(\operatorname{PSL}(2,5),\operatorname{PSL}(2,7),\operatorname{PSL}(2,11)\) and \(\operatorname{PSL}(2,9)\) are \(meo\)-groups. Therefore we assume that \((n,q)\notin\{(2,5),(2,7),(2,11),(2,9),(4,2)\}\). We know that \(\operatorname{PSL}(n,q)\) is a subgroup of \(\operatorname{PGL}(n,q)\). Therefore \(meo(\operatorname{PSL}(n,q))\leqslant meo(\operatorname{PGL}(n,q))\). By [4, Corollary 2.7], we see that \(meo(\operatorname{PGL}(n,q))=(q^{n}-1)/(q-1)\). Thus
\[meo(\operatorname{PSL}(n,q))\leqslant(q^{n}-1)/(q-1)=m(\operatorname{PSL}(n,q))\]
and so we get the result.
(2) First we assume that \((n,q)\neq(4,2)\). Using part (1) and Lemma 2.3, we have that the simple groups \(\operatorname{PSL}(n,q)\) are \(\mathscr{B}_{\psi}\)-groups. Now we assume that \((n,q)=(4,2)\) and show that \(\operatorname{PSL}(4,2)\) is a \(\mathscr{B}_{\psi}\)-group. We know that \(\operatorname{PSL}(4,2)\cong\operatorname{Alt}(8)\). Let \(M\) be a maximal subgroup of \(\operatorname{PSL}(4,2)\cong\operatorname{Alt}(8)\). If \(M\ncong\operatorname{Alt}(7)\), then by [3, page 22], we have \(|M|\leqslant 1344\) and therefore
\[\psi(M)<|M|\cdot meo(M)\leqslant 1344\cdot meo(\operatorname{PSL}(4,2))=1344\cdot 15=20160=|\operatorname{PSL}(4,2)|.\]
If \(M\cong\operatorname{Alt}(7)\), then by using GAP we have \(\psi(\operatorname{Alt}(7))=12601<20160=|\operatorname{PSL}(4,2)|\). Therefore \(\operatorname{PSL}(4,2)\) is a \(\mathscr{B}_{\psi}\)-group.
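Instead of GAP, the two numerical facts used above can also be verified with a short SymPy script (our illustration; it enumerates all group elements, so the \(\operatorname{Alt}(8)\) computation takes a little while):

```python
from sympy.combinatorics.named_groups import AlternatingGroup

A7, A8 = AlternatingGroup(7), AlternatingGroup(8)

psi_A7 = sum(g.order() for g in A7.elements)   # enumerates all 2520 elements
meo_A8 = max(g.order() for g in A8.elements)   # enumerates all 20160 elements

print(psi_A7)                 # 12601 < 20160 = |PSL(4,2)|
print(meo_A8, 1344 * meo_A8)  # 15, and 1344 * 15 = 20160, as in the bound above
```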
\begin{table}
\begin{tabular}{|c c|c|c|c|c|} \hline \(M_{11}\) & \(M_{23}\) & \(\operatorname{Alt}(n)\) & \(\operatorname{PSL}(n,q)\) & \(\operatorname{PSU}(3,3)\) & \(\operatorname{PSp}(6,2)\) \\ \(M_{12}\) & \(M_{24}\) & & & \(\operatorname{PSU}(3,5)\) & \(\operatorname{PSp}(8,2)\) \\ \(M_{22}\) & \(HS\) & & & \(\operatorname{PSU}(4,3)\) & \(\operatorname{PSp}(4,3)\) \\ \hline \end{tabular}
\end{table}
Table 1. Exceptions in Lemma 2.4
**Proof of the main theorem**. Let \(S\) be a simple group. If \(S\) is an abelian group, then \(S\cong C_{p}\), where \(p\) is prime. Therefore by Theorem 1.1, \(S\) is a \(\mathscr{B}_{\psi}\)-group. So we suppose that \(S\) is a non-abelian simple group. If \(S\) is a simple group other than the groups listed in Table 1, then by Lemma 2.4, we have \(meo(\operatorname{Aut}(S))<m(S)/4\). Since \(meo(S)\leq meo(\operatorname{Aut}(S))\) and \(m(S)/4<m(S)\), we have \(meo(S)<m(S)\); therefore \(S\) is a \(meo\)-group and, by Lemma 2.3, \(S\) is a \(\mathscr{B}_{\psi}\)-group. Now, we consider the following cases (listed in Table 1):
* Let \(S\in\{M_{11},M_{12},M_{22},M_{23},M_{24},HS\}\). Then by using [3] we have \[meo(M_{11})=11=m(M_{11}),\qquad meo(M_{12})=11<12=m(M_{12}),\] \[meo(M_{22})=11<22=m(M_{22}),\qquad meo(M_{23})=23=m(M_{23}),\] \[meo(M_{24})=23<24=m(M_{24}),\qquad meo(HS)=20<100=m(HS).\] Therefore \(S\) is a \(meo\)-group and so \(S\) is a \(\mathscr{B}_{\psi}\)-group.
* Let \(S=\operatorname{PSL}(n,q)\). Then by Theorem 2.5, we get the result.
* Let \(S\in\{\operatorname{PSU}(3,3),\operatorname{PSU}(3,5),\operatorname{PSU}(4,3)\}\cup\{\operatorname{PSp}(6,2),\operatorname{PSp}(8,2),\operatorname{PSp}(4,3)\}\). Using [4, Table 4], [3, 8], we have: \[meo(\operatorname{PSU}(3,3))=12<28=m(\operatorname{PSU}(3,3)),\] \[meo(\operatorname{PSU}(3,5))=10<50=m(\operatorname{PSU}(3,5)),\] \[meo(\operatorname{PSU}(4,3))=12<112=m(\operatorname{PSU}(4,3)),\] \[meo(\operatorname{PSp}(6,2))=15<28=m(\operatorname{PSp}(6,2)),\] \[meo(\operatorname{PSp}(8,2))=30<120=m(\operatorname{PSp}(8,2)),\] \[meo(\operatorname{PSp}(4,3))=12<27=m(\operatorname{PSp}(4,3)).\] Therefore \(S\) is a \(meo\)-group and hence \(S\) is a \(\mathscr{B}_{\psi}\)-group.
\(\bullet\) Let \(S=\operatorname{Alt}(n)\), where \(5\leq n\leq 13\), and let \(M\) be a maximal subgroup of \(\operatorname{Alt}(n)\). Using [3, 8], for these values of \(n\) one checks that \(meo(\operatorname{Alt}(n))\leq m_{2}(\operatorname{Alt}(n))\) and \(\psi(\operatorname{Alt}(n-1))<|\operatorname{Alt}(n)|\). First, let \(M\not\cong\operatorname{Alt}(n-1)\); then \(|\operatorname{Alt}(n):M|\geq m_{2}(\operatorname{Alt}(n))\) and we have
\[\psi(M)<|M|meo(M) \leq|M|meo(\operatorname{Alt}(n))\leq|M|m_{2}(\operatorname{Alt}(n))\] \[\leq|M||\operatorname{Alt}(n):M|=|\operatorname{Alt}(n)|.\]
\(\bullet\) Now let \(M\cong\operatorname{Alt}(n-1)\). Then, as noted above, \(\psi(\operatorname{Alt}(n-1))<|\operatorname{Alt}(n)|\). Therefore \(\operatorname{Alt}(n)\), where \(5\leq n\leq 13\), is a \(\mathscr{B}_{\psi}\)-group.
Thus we get the result.
**Corollary 2.6**.: If \(S\) is a simple group such that \(S\not\cong\operatorname{PSL}(4,2)\) and \(S\not\cong\operatorname{Alt}(n)\) for any \(n\geq 8\), then \(S\) is a \(meo\)-group.
**Corollary 2.7**.: Let \(G\) be a finite abelian group. Then \(G\) is a \(meo\)-group if and only if \(G\cong C_{p}^{n}\), where \(p\) is a prime.
**Corollary 2.8**.: Let \(G\) be a finite nilpotent group. Then \(G\) is a \(meo\)-group if and only if \(\exp(G)=p\), where \(p\) is a prime.
**Remark 2.9**.: We know that the group \(\operatorname{Alt}(n)\) has a subgroup \(M\cong\operatorname{Alt}(n-1)\), where \(n\geq 3\). On the other hand, \(\psi(\operatorname{Alt}(13))=46287964867\nless 43589145600=|\operatorname{Alt}(14)|\) and \(\psi(\operatorname{Alt}(14))=835826439631\nless 653837184000=|\operatorname{Alt}(15)|\). Therefore neither \(\operatorname{Alt}(14)\) nor \(\operatorname{Alt}(15)\) is a \(\mathscr{B}_{\psi}\)-group.
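The values quoted in Remark 2.9 are far too large for brute-force enumeration, but \(\psi(\operatorname{Alt}(n))\) can be computed by summing over cycle types: a permutation of cycle type \(\lambda\) has order \(\operatorname{lcm}(\lambda)\), lies in \(\operatorname{Alt}(n)\) if and only if \(n\) minus the number of cycles is even, and its class in \(\operatorname{Sym}(n)\) has size \(n!/\prod_{k}k^{m_{k}}m_{k}!\). A sketch of ours (Python 3.9 or later) that should reproduce the quoted values:

```python
from math import factorial, lcm, prod
from sympy.utilities.iterables import partitions

def psi_alt(n):
    # psi(Alt(n)) via cycle types: each partition of n is a conjugacy class
    # of Sym(n); keep it iff the permutation is even, i.e., n minus the
    # number of cycles is even.
    total = 0
    for part in partitions(n):            # part maps cycle length -> multiplicity
        cycles = sum(part.values())
        if (n - cycles) % 2:              # odd permutations: skip
            continue
        size = factorial(n) // prod(k**m * factorial(m) for k, m in part.items())
        total += size * lcm(*part.keys())
    return total

print(psi_alt(13), factorial(14) // 2)  # 46287964867 vs |Alt(14)| = 43589145600
print(psi_alt(14), factorial(15) // 2)  # 835826439631 vs |Alt(15)| = 653837184000
```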
|
2309.12032 | Human-in-the-Loop Causal Discovery under Latent Confounding using
Ancestral GFlowNets | Structure learning is the crux of causal inference. Notably, causal discovery
(CD) algorithms are brittle when data is scarce, possibly inferring imprecise
causal relations that contradict expert knowledge -- especially when
considering latent confounders. To aggravate the issue, most CD methods do not
provide uncertainty estimates, making it hard for users to interpret results
and improve the inference process. Surprisingly, while CD is a human-centered
affair, no works have focused on building methods that both 1) output
uncertainty estimates that can be verified by experts and 2) interact with
those experts to iteratively refine CD. To solve these issues, we start by
proposing to sample (causal) ancestral graphs proportionally to a belief
distribution based on a score function, such as the Bayesian information
criterion (BIC), using generative flow networks. Then, we leverage the
diversity in candidate graphs and introduce an optimal experimental design to
iteratively probe the expert about the relations among variables, effectively
reducing the uncertainty of our belief over ancestral graphs. Finally, we
update our samples to incorporate human feedback via importance sampling.
Importantly, our method does not require causal sufficiency (i.e., unobserved
confounders may exist). Experiments with synthetic observational data show that
our method can accurately sample from distributions over ancestral graphs and
that we can greatly improve inference quality with human aid. | Tiago da Silva, Eliezer Silva, Adèle Ribeiro, António Góis, Dominik Heider, Samuel Kaski, Diego Mesquita | 2023-09-21T12:53:45Z | http://arxiv.org/abs/2309.12032v1 | # Human-in-the-Loop Causal Discovery under Latent Confounding using Ancestral GFlowNets
###### Abstract
Structure learning is the crux of causal inference. Notably, causal discovery (CD) algorithms are brittle when data is scarce, possibly inferring imprecise causal relations that contradict expert knowledge -- especially when considering latent confounders. To aggravate the issue, most CD methods do not provide uncertainty estimates, making it hard for users to interpret results and improve the inference process. Surprisingly, while CD is a human-centered affair, no works have focused on building methods that both 1) output uncertainty estimates that can be verified by experts and 2) interact with those experts to iteratively refine CD. To solve these issues, we start by proposing to sample (causal) ancestral graphs proportionally to a belief distribution based on a score function, such as the Bayesian information criterion (BIC), using generative flow networks. Then, we leverage the diversity in candidate graphs and introduce an optimal experimental design to iteratively probe the expert about the relations among variables, effectively reducing the uncertainty of our belief over ancestral graphs. Finally, we update our samples to incorporate human feedback via importance sampling. Importantly, our method does not require causal sufficiency (i.e., unobserved confounders may exist). Experiments with synthetic observational data show that our method can accurately sample from distributions over ancestral graphs and that we can greatly improve inference quality with human aid.
## 1 Introduction
Drawing conclusions about cause-and-effect relationships presents a fundamental challenge in various scientific fields and significantly impacts decision-making across diverse domains Pearl (2000). The importance of having structural knowledge, often encoded as a causal diagram, for conducting causal inferences is widely recognized, a concept made prominent by Cartwright (1989)'s dictum: _"no causes in, no causes out"_. When there is no objective knowledge to fully specify a causal diagram, causal discovery (CD) tools are instrumental in partially uncovering causal relationships among variables from, for example, observational data. Formally, let \(\mathbf{V}=\{V_{1},V_{2},\ldots,V_{n}\}\) be a set of \(n\) observed variables and \(\mathcal{D}\) be a dataset containing \(|\mathcal{D}|=m\) samples for each \(V_{i}\in\mathbf{V}\). A CD algorithm takes \(\mathcal{D}\) as input and typically returns a single graph \(\mathcal{G}=(\mathbf{V},\mathbf{E})\) with well-defined causal semantics, in which each node in \(\mathbf{V}\) represents
a variable \(V_{i}\in\mathbf{V}\) and each edge in \(\mathbf{E}\) encodes the possible underlying (causal/confounding) mechanisms compatible with \(\mathcal{D}\).
This work focuses on recovering the structure of the underlying causal diagram when unobserved confounders may be at play. We propose to address this task by not only leveraging observational data but also by accounting for potentially noisy pieces of expert knowledge, otherwise unavailable as data. Throughout this work, we consider ancestral graphs (AGs) as surrogates for causal diagrams. AGs are particularly convenient since they encode latent confounding without explicitly invoking unobserved variables. Moreover, AGs capture all conditional independencies and ancestral relations among observed variables \(\mathbf{V}\), as entailed by a causal diagram (Richardson and Spirtes, 2002).
In the realm of CD from solely observational data, algorithms aim to construct a compact representation of the joint observational distribution \(P(\mathbf{V})\), which implies a factorization as a product of conditional probabilities. Notably, multiple models may entail the same conditional independencies; in such cases, they are denoted as Markov-equivalent. As a result, these algorithms can only reconstruct the class of Markov-equivalent models (AGs), denoted as the Markov Equivalence Class (MEC) and typically represented by a _Partial Ancestral Graph_ (PAG). Importantly, CD beyond the MEC by leveraging domain knowledge presents a critical challenge. Notably, there is no proper characterization of an equivalence class that accounts for knowledge stemming from both humans and data (Wang et al., 2022).
There is a variety of algorithms for CD from observational data, primarily categorized into constraint- and score-based methods. The former uses (in)dependence constraints derived via conditional independence tests to directly construct a PAG representing the MEC. The latter uses a goodness-of-fit score to navigate the space of AGs, selecting an optimum as a representative for the MEC.
Nonetheless, methods within both paradigms suffer from unreliability when data is scarce. Specifically, for the majority of the CD algorithms, formal assurances that the inferred MEC accurately represents the true causal model heavily rely on the so-called _faithfulness_ assumption, which posits that all conditional independencies satisfied by \(P(\mathbf{V})\) are entailed by the true causal model (Zhang and Spirtes, 2016). However, this presents a critical challenge in real-world scenarios, as violations of the faithfulness assumption become more prominent when relying on \(P(\mathbf{V})\) estimated from limited data Uhler et al. (2012); Andersen (2013); Marx et al. (2021). For constraint-based methods, hypothesis tests may lack the statistical power to detect conditional independencies accurately. These errors may propagate and trigger a chain reaction of erroneous orientations Zhang and Spirtes (2008); Zhalama et al. (2017); Ng et al. (2021).
Figure 1: **Human-in-the-loop probabilistic CD.** We first train an AGFN to fit a data-informed belief over AGs. Then, we iteratively refine it by 1) questioning (Q) experts on the relation between a highly informative pair of variables and 2) updating the belief given the potentially noisy answers (A). The histograms on top of the edges show marginals over edge types (green denotes ground truth). Notably, our belief increasingly concentrates on the true AG, \(1\to 2\leftrightarrow 3\).
For score-based methods, although score functions directly measure goodness-of-fit on observational data, small sample sizes can significantly skew the estimates for the population parameters. Consequently, structures deemed score-optimal may not necessarily align with the ground-truth MEC Ogarrio et al. (2016). A major concern is that the overwhelming majority of CD algorithms produce a single representation of the MEC as output, without quantifying the uncertainty that arises during the learning process Claassen and Heskes (2012); Jabbari et al. (2017). This poses a significant challenge for experts, as it hinders their ability to validate the algorithm's outcome or gain insights into potential avenues for improving inference quality.
To alleviate the lack of uncertainty quantification in CD, we propose sampling AGs from a distribution defined using a score function, which places best-scoring AGs around the mode by design. This effectively provides end-users with samples that reflect the epistemic uncertainty inherent in CD, thus allowing their propagation through downstream causal inferences. In particular, we sample from our belief using _Generative Flow Networks_ (GFlowNets; Bengio et al., 2021a,b), which are generative models known for sampling diverse modes while avoiding the mixing time problem of MCMC methods, requiring neither handcrafted proposals nor accept-reject steps (Bengio et al., 2021a).
Acknowledging the low-data regime as CD's primary challenge, we also propose actively integrating human feedback in the inferential process. This involves modeling user knowledge on the existence and nature (confounding/ancestral) of the relations and using it to weigh our beliefs over AGs. During our interactions with experts, we probe them about the relation of the pair of variables that optimizes a utility/acquisition function, for which we propose the negative expected cross-entropy between our prior and updated beliefs. Unlike prior strategies, our acquisition avoids the need to estimate the normalizing constant and predictive distribution of our updated belief, as needed for information gain and mutual information, respectively. Notably, we use importance sampling (Marshall, 1954; Geweke, 1989) to update our initial belief with the human feedback, which avoids retraining GFlowNets after each human interaction.
While incorporating expert knowledge into CD has been a long-standing goal (Meek, 1995a; Chen et al., 2016; Li and Beek, 2018; Wang et al., 2022), existing works either rely on strong assumptions (e.g., causal sufficiency) or assume the knowledge is noiseless, aligned with the ground truth (Andrews, 2020; Wang et al., 2022). Importantly, our work introduces the first iterative CD framework for AGs involving a human in the loop and accommodating potentially noisy feedback, as depicted in Figure 1.
To validate our approach, we conduct experiments using the BIC score, for linear Gaussian causal models. Specifically, we assess: i) our ability to sample from score-based beliefs over AGs, ii) how our samples compare to samples from bootstrapped versions of state-of-the-art (SOTA) methods, and iii) the efficacy of our active knowledge elicitation framework using simulated humans. We observe that our method, Ancestral GFlowNet (AGFN), i) accurately samples from our beliefs over AGs; ii) consistently includes AGs with low structural error among its top-scored samples; and iii) is able to greatly improve performance metrics (i.e., SHD and BIC) when incorporating human in the loop.
In summary, the **contributions** of our work are:
1. We leverage GFlowNets to introduce AGFN, the first CD algorithm for scenarios under latent confounding that employs fully-probabilistic inference on AGs;
2. We show AGFN accurately learns distributions over AGs, effectively capturing epistemic uncertainty.
3. We propose an experimental design to query potentially noisy expert insights on relationships among pairs of variables that lead to optimal uncertainty reduction.
4. We show how to incorporate expert feedback into AGFN without retraining them from scratch.
## 2 Background
This section introduces the relevant notation and concepts. We use uppercase letters \(V\) to represent a random variable or node in a graph, and boldface uppercase letters \(\mathbf{V}\) to represent matrices or sets of random variables or nodes.
**Ancestral graphs.** Under the assumption of no selection bias, an _ancestral graph_ (AG) \(\mathcal{G}\) over \(\mathbf{V}\) is a directed graph comprising directed (\(\rightarrow\)) and bidirected (\(\leftrightarrow\)) edges Richardson and Spirtes (2002), Zhang
[2007]. In any directed graph, if a sequence of directed edges, \(V_{i}\rightarrow\cdots\to V_{j}\), connects two nodes \(V_{i}\) and \(V_{j}\), we refer to this sequence as a directed path. In this case, we also say that \(V_{i}\) is an ancestor of \(V_{j}\) and denote this relation as \(V_{i}\in An(V_{j})\). By definition, any AG \(\mathcal{G}\) must further satisfy the following:
1. there is no directed cycle, i.e., if \(V_{i}\to V_{j}\) is in \(\mathcal{G}\), then \(V_{j}\not\in An(V_{i})\); and
2. there is no almost directed cycle, i.e., if \(V_{i}\leftrightarrow V_{j}\) is in \(\mathcal{G}\), then \(V_{j}\not\in An(V_{i})\) and \(V_{i}\not\in An(V_{j})\).
As a probabilistic model, the nodes in an AG represent random variables, directed edges represent ancestral (causal) relationships, and bidirected edges represent associations solely due to latent confounding. For a complete characterization of AGs, refer to Richardson and Spirtes [2002].
**Data generating model.** We assume that the data-generating model corresponds to a _linear Gaussian structural causal model_ (SCM) [Pearl, 2000] defined by a 4-tuple \(\mathcal{M}=\langle\mathbf{V},\mathbf{U},\mathcal{F},P(\mathbf{U})\rangle\), in which \(\mathbf{V}=\{V_{1},V_{2},\ldots,V_{n}\}\) is a set of \(n\) observed random variables and \(\mathbf{U}=\{U_{1},U_{2},\ldots,U_{n}\}\) is the set of unobserved random variables. Further, let \(Pa_{i}\subseteq\mathbf{V}\setminus\{V_{i}\}\) be the set of observed causes (parents) of \(V_{i}\), and \(U_{i}\) be the set of unobserved causes of \(V_{i}\). Then, each structural equation \(f_{i}\in\mathcal{F}\) is defined as:
\[V_{i}=\sum_{j:V_{j}\in Pa_{i}}\beta_{ij}V_{j}+U_{i} \tag{1}\]
with \(P(\mathbf{U})\) being a multivariate Gaussian distribution with zero mean and a not necessarily identity covariance matrix \(\mathbf{\Omega}=(\omega_{ij})_{1\leq i,j\leq n}\) -- the error terms \(\{U_{i}\}\) are not necessarily mutually independent, implying that the system can be semi-Markovian (i.e., latent confounding may be present).
Consider a lower triangular matrix of structure coefficients \(\mathbf{B}=(\beta_{ij})_{1\leq i,j\leq n}\) such that \((\mathbf{I}-\mathbf{B})\) is invertible, and \(\beta_{ij}\neq 0\) only if \(V_{j}\in Pa_{i}\). Then, the set of structural equations is given in matrix form by
\[\mathbf{V}=\mathbf{BV}+\mathbf{U}\implies\mathbf{V}=(\mathbf{I}-\mathbf{B})^ {-1}\mathbf{U}. \tag{2}\]
The class of all linear Gaussian SCMs parametrized as
\[\mathcal{N}_{\mathcal{M}}=\{\mathcal{N}(\mathbf{0},\mathbf{\Sigma})|\mathbf{ \Sigma}=(\mathbf{I}-\mathbf{B})^{-1}\mathbf{\Omega}(\mathbf{I}-\mathbf{B})^{- \top}\} \tag{3}\]
is represented by an AG in which, for every \(i\neq j\), there is a directed edge \(V_{j}\to V_{i}\) if \(\beta_{ij}\neq 0\) and a bidirected edge \(V_{j}\leftrightarrow V_{i}\) if \(\omega_{ij}\neq 0\)[Richardson and Spirtes, 2002].
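For concreteness, the following NumPy sketch (ours; the 3-node graph \(V_{0}\to V_{1}\leftrightarrow V_{2}\), the coefficients, and the seed are purely illustrative) samples from such a linear Gaussian SCM and evaluates the implied observational covariance of eq. (3):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 500                            # 3 observed variables, 500 samples

B = np.zeros((n, n))                     # lower-triangular structure coefficients
B[1, 0] = 0.8                            # V0 -> V1
Omega = np.eye(n)                        # covariance of the error terms U
Omega[1, 2] = Omega[2, 1] = 0.5          # correlated errors encode V1 <-> V2

U = rng.multivariate_normal(np.zeros(n), Omega, size=m)
V = U @ np.linalg.inv(np.eye(n) - B).T   # row-wise form of V = (I - B)^{-1} U

I_B_inv = np.linalg.inv(np.eye(n) - B)
Sigma = I_B_inv @ Omega @ I_B_inv.T      # implied covariance, Eq. (3)
print(np.allclose(np.cov(V.T), Sigma, atol=0.2))  # True, up to sampling noise
```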
**GFlowNets.** Generative Flow Networks [GFlowNet; Bengio et al., 2021a,b] are generative models designed to sample from a finite domain \(\mathcal{X}\) proportionally to some reward function \(R:\mathcal{X}\rightarrow\mathbb{R}_{+}\), which may be parametrized using neural networks. In this work, we define \(R\) as a strictly decreasing transformation of the BIC (more details in section 3). GFlowNets also assume there is a compositional nature to the elements \(x\in\mathcal{X}\), meaning that they can be built by iteratively acting to modify a base object (i.e., an _initial state_). For instance, graphs can be built by adding edges to a node skeleton [Deleu et al., 2022] or molecules by adding atoms to an initial structure [Bengio et al., 2021a].
The generative process follows a trajectory of states \(s\in\mathcal{S}\) guided by a transition probability \(\pi_{F}:\mathcal{S}^{2}\rightarrow[0,1]\). In turn, \(\pi_{F}\) is proportional to a _forward flow_ function \(F_{\theta}:\mathcal{S}^{2}\rightarrow\mathbb{R}_{+}\), which is parameterized
Figure 2: **Illustration of the generative process of AGs \(\{\mathcal{G}_{5},\mathcal{G}_{6},\mathcal{G}_{7}\}\)** using GFlowNets. Starting with an empty graph \(\mathcal{G}_{0}\), we add edges between variables \(\{X_{1},X_{2},X_{3}\}\) according to the action-policy \(\pi_{F}\). Solid edges trace trajectories leading to sampled graphs. Dashed lines represent non-realized transitions to terminal state \(\square\).
by a neural network \(\theta\). Let \(\mathrm{Pa}(s^{\prime})\) (\(\mathrm{Ch}(s^{\prime})\)) be the set of all states which can transition into (directly reached from) \(s^{\prime}\). Then, \(\pi_{F}\) is defined as
\[\pi_{F}(s^{\prime}|s)=\frac{F_{\theta}(s\to s^{\prime})}{\sum_{s^{\prime\prime}\in\mathrm{Ch}(s)}F_{\theta}(s\to s^{\prime\prime})}. \tag{4}\]
The support \(\mathcal{X}\) of \(R\) is contained within \(\mathcal{S}\). There are also two special states in \(\mathcal{S}\): an _initial state_\(s_{0}\) and a _terminal state_\(s_{f}\). We start with the initial state \(s_{0}\) and transform it to a new valid state \(s\) with probability \(\pi_{F}(s|s_{0};\theta)\). We keep iterating this procedure until reaching \(s_{f}\). States \(s\) valid as final samples (\(s\in\mathcal{X}\)) are known as _terminating states_ and have a positive probability for the transition \(s\to s_{f}\). Figure 2 illustrates this process with \(\mathcal{X}\) being the space of AGs. Crucially, the same parameterization \(\theta\) is used for all transition probabilities \(\pi_{F}(\cdot|s;\theta)\) given any departing state \(s\), allowing for generalization to states never visited during training.
As the GFlowNet framework requires that no sequence of actions leads to a loop, we represent the space of possible action sequences by a pointed Directed Acyclic Graph (DAG) (Bengio et al., 2021). The generation of any sample \(x\in\mathcal{X}\) follows a trajectory \(\tau=(s_{0},s_{1},\ldots,s_{T}=x,s_{f})\in\mathcal{S}^{T+2}\) for a \(T\geq 0\). Different trajectories may lead to the same sample \(x\). To ensure we sample proportionally to \(R\), we search for a GFlowNet that satisfies the _flow-matching condition_, i.e., \(\forall s^{\prime}\in\mathcal{S}\):
\[\sum_{s\in\mathrm{Pa}(s^{\prime})}F_{\theta}(s\to s^{\prime})=R(s^{\prime})+ \sum_{s^{\prime\prime}\in\mathrm{Ch}(s^{\prime})}F_{\theta}(s^{\prime}\to s ^{\prime\prime}). \tag{5}\]
Equation (5) implies the flow that enters \(s^{\prime}\) equals the flow leaving \(s^{\prime}\), except for some flow \(R(s^{\prime})\) leaking from \(s^{\prime}\) into \(s_{f}\). We let \(R(s)=0\) for \(s\notin\mathcal{X}\). It may happen that all states \(s\) are valid candidates, i.e., \(\mathcal{S}=\mathcal{X}\cup\{s_{f}\}\). If so, each of eq. (5)'s solutions satisfies a _detailed-balance condition_,
\[\frac{R(s)F_{\theta}(s\to s^{\prime})F_{\theta}(s^{\prime}\to s_{f})}{F_{ \theta}(s\to s_{f})}=R(s^{\prime})F_{B,\theta}(s^{\prime}\to s), \tag{6}\]
for a parametrized backward flow \(F_{B,\theta}\colon\mathcal{S}^{2}\to\mathbb{R}_{+}\)(Deleu et al., 2022). In practice, we enforce eq. (6) by minimizing
\[\mathcal{L}(\theta)\!=\!\mathop{\mathbb{E}}_{s\to s^{\prime}}\!\left[\!\left( \!\log\frac{R(s^{\prime})\pi_{F_{B}}(s|s^{\prime};\theta)\pi_{F}(s_{f}|s; \theta)}{R(s)\pi_{F}(s^{\prime}|s;\theta)\pi_{F}(s_{f}|s^{\prime};\theta)} \right)^{2}\!\!\right]. \tag{7}\]
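For illustration, the squared log-ratio inside eq. (7) can be evaluated for a single transition \(s\to s^{\prime}\) as follows (a minimal PyTorch sketch of ours; the argument names are our own and all inputs are log-rewards or log-probabilities):

```python
import torch

def detailed_balance_loss(log_R_s, log_R_sp, log_pf_sp_given_s,
                          log_pb_s_given_sp, log_pf_sf_given_s,
                          log_pf_sf_given_sp):
    # Squared log-ratio of Eq. (7) for one transition s -> s':
    #   numerator:   R(s') * pi_B(s | s') * pi_F(s_f | s)
    #   denominator: R(s)  * pi_F(s' | s) * pi_F(s_f | s')
    num = log_R_sp + log_pb_s_given_sp + log_pf_sf_given_s
    den = log_R_s + log_pf_sp_given_s + log_pf_sf_given_sp
    return (num - den) ** 2

# Illustrative values only:
loss = detailed_balance_loss(*[torch.tensor(v) for v in
                               (-1.0, -0.5, -0.7, -0.6, -2.0, -1.8)])
```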
## 3 Ancestral GFlowNets
We propose AGFN, a GFlowNet-based method for sampling AGs using a score function. Specifically, AGFN encompasses a GFlowNet with the following characteristics:
1. Each trajectory state is a valid AG \(\mathcal{G}_{t}\).
2. A terminating state's reward \(R(\mathcal{G}_{\mathcal{T}})\) is a score-based potential suitable for CD of AGs.
3. A well-trained AGFN samples AGs with frequencies proportional to their rewards and with the best-scoring AG being, by design, the mode.
The generation of a trajectory \(\tau=(\mathcal{G}_{0},\mathcal{G}_{1},\mathcal{G}_{2},\ldots,\mathcal{G}_{T})\) begins with the totally disconnected graph \(\mathcal{G}_{0}\) on the nodes \(\mathbf{V}\) and iteratively adds edges of types \(\{\leftarrow,\rightarrow,\leftrightarrow\}\) between pairs of variables. The following paragraphs describe AGFN. For further details, please refer to the Appendix.
**Action constraints.** To ensure AGFN only samples AGs, we mask out actions that would lead to paths forming directed cycles or almost directed cycles. To achieve this, we verify whether the resulting graph respects Bhattacharya et al. (2021)'s algebraic characterization of the space of AGs. More specifically, any AG \(\mathcal{G}\) is characterized by an adjacency matrix \(\mathbf{A}_{d}\in\mathbb{R}^{n\times n}\) for its directed edges and another adjacency matrix \(\mathbf{A}_{b}\in\mathbb{R}^{n\times n}\) for its bidirected edges, adhering to:
\[\mathrm{trace}(\exp\{\mathbf{A}_{d}\})-n+\mathbf{1}^{T}(\exp\{\mathbf{A}_{d} \}\odot\mathbf{A}_{b})\mathbf{1}=0, \tag{8}\]
in which \(\mathbf{1}\) is an \(n\)-dimensional unit vector and \(\odot\) denotes the Hadamard (elementwise) product of matrices.
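As an illustration, the constraint in eq. (8) can be checked numerically as follows (a NumPy/SciPy sketch of ours, adopting the convention that \(\mathbf{A}_{d}[i,j]=1\) encodes the edge \(i\to j\)):

```python
import numpy as np
from scipy.linalg import expm

def is_ancestral(A_d, A_b):
    # Eq. (8): the expression is zero iff there is no directed cycle
    # (trace term) and no almost directed cycle (Hadamard term).
    # A_b is the symmetric adjacency matrix of bidirected edges.
    n = A_d.shape[0]
    E = expm(A_d)
    ones = np.ones(n)
    return np.isclose(np.trace(E) - n + ones @ (E * A_b) @ ones, 0.0)

A_d = np.zeros((3, 3)); A_d[0, 1] = 1              # 0 -> 1
A_b = np.zeros((3, 3)); A_b[1, 2] = A_b[2, 1] = 1  # 1 <-> 2
print(is_ancestral(A_d, A_b))                      # True: 0 -> 1 <-> 2 is an AG

A_b_bad = np.zeros((3, 3)); A_b_bad[0, 1] = A_b_bad[1, 0] = 1
print(is_ancestral(A_d, A_b_bad))                  # False: almost directed cycle
```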
**Score-based belief.** We propose using a strictly decreasing transformation of a score function \(U\) as the reward \(R\) for AGFN. More precisely, we define \(R\) as
\[R(\mathcal{G})=\exp\left\{\frac{\mu-U(\mathcal{G})}{\sigma}\right\} \tag{9}\]
for given constants \(\mu\in\mathbb{R}\) and \(\sigma\in\mathbb{R}^{+}\) that ensure numerical stability (Zhang et al., 2023). In practice, we sample \(S\) AGs \(\{\mathcal{G}^{(s)}\}_{s=1}^{S}\) from an untrained AGFN, and set \(\mu=\nicefrac{{1}}{{S}}\sum_{s}U(\mathcal{G}^{(s)})\) and \(\sigma=\sqrt{\nicefrac{{1}}{{S}}\sum_{s}(U(\mathcal{G}^{(s)})-\mu)^{2}}\).
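In code, the standardization in eq. (9) amounts to the following (a sketch of ours; the input scores stand for the \(U(\mathcal{G}^{(s)})\) values of graphs sampled from the untrained AGFN, and the numbers are illustrative):

```python
import numpy as np

def reward(scores, mu, sigma):
    # Eq. (9): strictly decreasing in the score U(G), so the best-scoring
    # (lowest-score) graphs receive the largest rewards.
    return np.exp((mu - scores) / sigma)

init_scores = np.array([5480.0, 5495.5, 5502.3, 5488.1])  # illustrative U values
mu, sigma = init_scores.mean(), init_scores.std()
print(reward(init_scores, mu, sigma))
```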
**Score for linear Gaussian models.** Since we focus on linear Gaussian models, we choose the _extended Bayesian Information Criterion_ Foygel and Drton (2010) as our score function. Specifically, for any AG \(\mathcal{G}=(\mathbf{V},\mathbf{E})\):
\[U(\mathcal{G})=-2l_{N}(\mathbf{\hat{B}},\mathbf{\hat{\Omega}})+|\mathbf{E}|\log N+2|\mathbf{E}|\log|\mathbf{V}|, \tag{10}\]
in which \(N\) is the number of observed samples, \(l_{N}\) is the corresponding log-likelihood, and \((\mathbf{\hat{B}},\mathbf{\hat{\Omega}})\) is the MLE estimate of the model parameters (see eq. (3)) obtained using the _residual iterative conditional fitting_ algorithm Drton et al. (2009).
**Forward flow.** We use a Graph Isomorphism Network (GIN) \(\Phi\) Xu et al. (2019) to compute a \(d\)-dimensional representation for each node in the AG \(\mathcal{G}_{t}\) at the \(t\)-th step of the generative process and use sum pooling to get an embedding for \(\mathcal{G}_{t}\). Then, considering \(\mathcal{A}_{t}\) as the space of feasible actions at \(\mathcal{G}_{t}\) (i.e., those leading to an AG), we use an MLP \(\phi\colon\mathbb{R}^{d}\to\mathbb{R}^{|\mathcal{A}_{t}|}\) with a softmax activation at its last layer to map \(\mathcal{G}_{t}\)'s embedding to a distribution over \(\mathcal{A}_{t}\). More precisely, given \(\mathbf{H}^{(t)}=\Phi(\mathcal{G}_{t})\in\mathbb{R}^{|\mathbf{V}|\times d}\), we compute
\[\mathbf{p}=\phi\left(\sum_{v\in\mathbf{V}}\mathbf{H}_{v}^{(t)}\right) \tag{11}\]
as the probability distribution over the feasible actions at \(\mathcal{G}_{t}\).
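A minimal PyTorch sketch of this policy head (ours; the hidden width is an arbitrary choice, the GIN producing \(\mathbf{H}^{(t)}\) is assumed given, and infeasible actions are removed with a boolean mask, cf. the masking discussion in the Appendix):

```python
import torch
import torch.nn as nn

class PolicyHead(nn.Module):
    # Sketch of Eq. (11): sum-pool the node embeddings H in R^{|V| x d},
    # then map the pooled vector to a distribution over feasible actions.
    def __init__(self, d: int, n_actions: int):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, H, feasible):
        logits = self.phi(H.sum(dim=0))                        # sum pooling
        logits = logits.masked_fill(~feasible, float("-inf"))  # mask invalid actions
        return torch.softmax(logits, dim=-1)

# Usage with random embeddings for a 5-node graph and 10 candidate actions:
head = PolicyHead(d=16, n_actions=10)
p = head(torch.randn(5, 16), torch.ones(10, dtype=torch.bool))
```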
**Backward flow.** Backward actions correspond to removing edges. Following Shen et al. (2023), we parametrize the backward flow \(F_{B}\) with an MLP and alternate between updating \(\pi_{F}\) and \(\pi_{F_{B}}\), using gradient-based optimization.
## 4 Human-in-the-Loop Causal Discovery
Once training concludes, we leverage the AGFN-generated samples to design questions for the expert that optimally reduce the entropy of the distribution \(p_{\theta}(\mathcal{G})\) over the space of AGs. Then, we use the human feedback to update \(p_{\theta}(\mathcal{G})\), and iteratively repeat the process. The following paragraphs describe i) how we model human feedback, ii) how we update our belief over AGs given human responses, and iii) our experimental design strategy for expert inquiry.
**Modeling human feedback.**
We assume humans are capable of answering questions regarding the ancestral relationship between pairs of random variables. In this case, we model their prior knowledge on a relation \(r=\{U,V\}\) between nodes \(U,V\in\mathbf{V}\) as a categorical distribution over a random variable denoted \(\omega_{r}\). Fix an arbitrary total order \(<\) in \(\mathbf{V}\) and assume \(U<V\). By definition, \(\omega_{r}=1\) if there is no edge between \(U\) and \(V\); \(\omega_{r}=2\) if \(U\) is an ancestor of \(V\); \(\omega_{r}=3\) if \(V\) is an ancestor of \(U\); and \(\omega_{r}=4\) if there is a bidirected edge between \(U\) and \(V\). Since the human has access to our AGFN before being probed for the first time, we set \(\rho_{r,k}=p_{\theta}(\omega_{r}=k)\) as the prior probability of \(\omega_{r}=k\). Moreover, we consider that the expert's feedback \(f_{r}\in\{1,2,3,4\}\) on the relation \(r\) is a noisy realization of the true, unobserved value of the relation feature \(\omega_{r}\) under the expert's model. Putting all elements together results in a two-level Bayesian hierarchical scheme for categorical data:
\[\omega_{r} \sim\mathrm{Cat}(\mathbf{\rho}_{r}), \tag{12}\] \[f_{r}|\omega_{r} \sim\mathrm{Cat}\left(\delta_{\omega_{r}}\cdot\pi+(\mathbf{1}- \delta_{\omega_{r}})\cdot\left(\frac{1-\pi}{3}\right)\right), \tag{13}\]
in which \(\mathbf{\rho}_{r}=(\rho_{r,1},\rho_{r,2},\rho_{r,3},\rho_{r,4})\) represents our prior beliefs about the relations' features, \(\pi\in[0,1]\) reflects the reliability of the expert's feedback, and \(\delta_{k}\) is the \(k\)-th canonical basis of \(\mathbb{R}^{4}\). Conveniently, the posterior distribution of the relation feature \(\omega_{r}\) given the feedback \(f_{r}\) is a categorical distribution parametrized by
\[\frac{\mathbf{\rho}_{r}}{\eta_{r}}\odot\left(\pi\cdot\delta_{f_{r}}+\left(\frac{1- \pi}{3}\right)\cdot(\mathbf{1}-\delta_{f_{r}})\right), \tag{14}\]
with \(\eta_{r}=\rho_{r,f_{r}}\cdot\pi+\left(\frac{1-\pi}{3}\right)\cdot(1-\rho_{r,f _{r}})\).
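In practice, the update in eq. (14) is a one-liner; the sketch below (ours; relation features are 0-indexed here) computes the posterior over \(\omega_{r}\) from the prior \(\mathbf{\rho}_{r}\), a feedback \(f_{r}\), and the reliability \(\pi\):

```python
import numpy as np

def posterior_over_relation(rho, f, pi=0.9):
    # Eq. (14): posterior over the relation feature omega_r after observing
    # feedback f (0-indexed) from an expert with reliability pi.
    delta = np.eye(4)[f]
    likelihood = pi * delta + (1 - pi) / 3 * (1 - delta)  # Eq. (13)
    post = rho * likelihood
    return post / post.sum()                              # division by eta_r

print(posterior_over_relation(np.full(4, 0.25), f=2))
```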
#### Updating beliefs.
We update our AGFN by weighing it by our posterior over the expert's knowledge, described in the previous paragraph, similarly to a product-of-experts approach Hinton (2002). For this, let \(\mathbf{f}_{K}=(f_{r_{k}})_{1\leq k\leq K}\) be the sequence of \(K\) feedbacks issued by the expert and define our novel belief distribution \(q(\mathcal{G};\mathbf{f}_{K})\) over the space of AGs as
\[q(\mathcal{G};\mathbf{f}_{K})\propto p_{\theta}(\mathcal{G})\prod_{1\leq k \leq K}p(\omega_{r_{k}}|f_{r_{k}}). \tag{15}\]
Importantly, we use \(p_{\theta}\) as proposal distribution in importance (re-)sampling Gordon et al. (1993) to approximately sample from \(q(\mathcal{G};\mathbf{f}_{K})\) -- or to approximate the expected value of a test function. More precisely, we estimate the value of a function \(h\) over the space of AGs as:
\[\mathbb{E}_{q}[h(\mathcal{G})]\approx\sum_{t=1}^{T}c^{-1}\frac{q(\mathcal{G}^ {(t)};\mathbf{f}_{K})}{p_{\theta}(\mathcal{G}^{(t)})}h(\mathcal{G}^{(t)}) \tag{16}\]
with \((\mathcal{G}^{(t)})_{t=1}^{T}\sim p_{\theta}(\mathcal{G})\) and \(c=\sum_{t=1}^{T}q(\mathcal{G}^{(t)};\mathbf{f}_{K})/p_{\theta}(\mathcal{G}^{(t)})\).
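Equation (16) is the standard self-normalized importance sampling estimator; a small sketch of ours, working with log-quantities for numerical stability:

```python
import numpy as np

def snis_expectation(h_vals, log_q, log_p):
    # Self-normalized importance sampling estimate of E_q[h(G)], Eq. (16),
    # using AGFN samples as the proposal p_theta. Log-weights are shifted
    # by their maximum before exponentiation to avoid overflow.
    log_w = log_q - log_p
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return float(np.sum(w * h_vals))
```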
#### Active knowledge elicitation.
To make the most out of possibly costly human interactions, we query the human about the relation that maximally reduces the expected cross-entropy between our belief over AGs before and after human feedback. More precisely, we define an acquisition function \(a_{k}:\binom{\mathbf{V}}{2}\rightarrow\mathbb{R}\) for the \(k>1\)-th inquiry as:
\[a_{k}(r)=-\mathbb{E}_{f_{r}\sim p(\cdot|\mathbf{f}_{K})}\big{[}\mathbf{H} \left(q(\mathcal{G};\mathbf{f}_{K},f_{r}),q(\mathcal{G};\mathbf{f}_{K}) \right)\big{]} \tag{17}\]
in which \(p(f_{r}|\mathbf{f}_{K})\) is the posterior predictive distribution according to the user model, \(q_{0}\propto R\) and \(\mathbf{H}(\cdot,\cdot)\) is the cross-entropy. Then, we maximize this acquisition to select which relation \(\tilde{r}_{k}\) we will probe the expert, i.e.:
\[\tilde{r}_{k}=\underset{r\in\binom{\mathbf{V}}{2}}{\text{arg max}}\,a_{k}(r). \tag{18}\]
As aforementioned, we use importance sampling with \(q_{0}\) as a proposal to estimate the acquisition function \(a_{k}\). This allows us to leverage AGFN samples and effectively avoid the need for retraining them. It is worth mentioning that because \(\mathbf{H}(p,p^{\prime})\geq\mathbf{H}(p,p)\) for any two distributions \(p\) and \(p^{\prime}\) of the same support, our strategy is equivalent to minimizing an upper bound on the entropy of \(q_{k}\). Also, different from acquisitions based on information gain or mutual information, approximating Equation (17) via Monte Carlo does not require exhaustive integration over the space of AGs to yield asymptotically unbiased estimates -- see Appendix.
Figure 3: **Sampling quality. The reward-induced marginal distribution over graph features is adequately approximated by the marginal distribution learned by the GFlowNet.**
## 5 Experiments
Our experiments have three objectives. First, we validate that AGFN can accurately learn the target distribution over the space of AGs. Second, we show that AGFN performs competitively with alternative methods on three data sets. Third, we attest that our experimental design for incorporating the expert's feedback efficiently reduces the uncertainty over AGFN's distribution. We provide further experimental details in the Appendix. Code is available in the supplement.
### Distributional Assessment of AGFN
**Data.** Since violations of faithfulness are more likely in dense graphs Uhler et al. (2012), we create 20 5-node random graphs from a directed configuration model Newman (2010) whose in- and out-degrees are uniformly sampled from \(\{0,1,2,3,4\}\). We draw 500 independent samples from a structure-compatible linear Gaussian SCM with random parameters for each graph.
**Setup.** We train AGFN for each random graph using their respective samples. Then, we collect AGFN samples and use them to compute empirical distributions over the (i) edge features (i.e., \(p_{\theta}(U\to V)\), \(p_{\theta}(U\gets V)\), \(p_{\theta}(U\leftrightarrow V)\) and \(p_{\theta}(U\neq V)\) for each pair \((U,V)\)), (ii) BIC, and (iii) structural Hamming distance to the true causal diagram (SHD).
**Results.** Figure 3 shows that the AGFN adequately approximates the theoretical distribution induced by the reward in Equation (9). Furthermore, AGFNs induce distributions over BIC and SHD values that closely resemble those induced by \(p(\mathcal{G})\propto R(\mathcal{G})\). We also note an important improvement over the prior art on probabilistic CD (N-ADMG): we found that over 60% of its samples were non-ancestral and that this method was of little use for making inferences over AGs. Meanwhile, AGFN does not sample non-ancestral graphs.
### Comparison with SOTA CD algorithms
**Data.** We generate 10 datasets with 500 independent samples from the randomly parametrized linear Gaussian SCMs corresponding to the canonical causal diagram Richardson and Spirtes (2002) in each AG depicted in Figure 4. Unshielded colliders and discriminating paths are fundamental patterns in the detection of invariances by CD algorithms under latent confounding Spirtes and Richardson (1997), Zhang (2008b). Thus, we consider the following 4-node causal diagrams with increasingly difficult configurations: (i) chain4, a chain without latent confounders; (ii) collfork, a graph with triplets involving colliders and non-colliders under latent confounding, and (iii) IV, a structure with a discriminating path for \(Z\): \(W\to X\gets Z\to Y\).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} & \multicolumn{2}{c|}{chain4} & \multicolumn{2}{c|}{IV} & \multicolumn{2}{c}{collfork} \\ \hline & SHD & BIC & SHD & BIC & SHD & BIC \\ \hline FCI\({}^{\star}\) & \(3.03\pm 1.13\) & \(5481.33\pm 2.69\) & \(\mathbf{3.75}\pm 0.64\) & \(\mathbf{5426.18}\pm 1.74\) & \(6.26\pm 1.20\) & \(5433.80\pm 6.94\) \\ GFCI\({}^{\star}\) & \(\mathbf{2.24}\pm 0.64\) & \(\mathbf{5479.77}\pm 1.75\) & \(4.21\pm 0.96\) & \(5427.09\pm 2.85\) & \(\mathbf{5.23}\pm 1.08\) & \(\mathbf{5431.67}\pm 7.91\) \\ DCD\({}^{\star}\) & \(3.38\pm 1.30\) & \(5482.97\pm 5.16\) & \(5.22\pm 1.23\) & \(5429.51\pm 4.37\) & \(6.02\pm 1.22\) & \(5436.84\pm 9.41\) \\ N-ADMG & \(6.14\pm 1.49\) & \(5520.01\pm 75.34\) & \(\mathbf{8.50}\pm 1.44\) & \(5583.17\pm 79.47\) & \(7.16\pm 1.50\) & \(5491.86\pm 84.47\) \\ AGFN (ours) & \(\mathbf{6.04}\pm 2.12\) & \(\mathbf{5494.67}\pm 37.08\) & \(8.72\pm 2.04\) & \(\mathbf{5456.16}\pm 52.25\) & \(\mathbf{6.58}\pm 2.34\) & \(\mathbf{5478.01}\pm 40.36\) \\ \end{tabular}
\end{table}
Table 1: **Average SHD and BIC.** The \({}^{\star}\) denotes methods yielding point estimates. We use Bootstrap to report the mean and average standard deviation for these. For N-ADMG and AGFN, we estimate the quantities using 100k samples.
Figure 4: **Ancestral graphs** representing the data generating models for the three considered datasets in Table 1.
**Baselines.** We compare AGFN with four notable CD methods: FCI (Spirtes et al., 2001; Zhang, 2008b), GFCI (Ogarrio et al., 2016), DCD (Bhattacharya et al., 2021), and N-ADMG Ashman et al. (2023). The baselines span four broad classes of CD methods. FCI is a seminal constraint-based CD algorithm that learns a PAG consistent with conditional independencies entailed by statistical tests. GFCI is a hybrid CD algorithm that learns a PAG by first obtaining an approximate structure using FGS (Ramsey, 2015) (a BIC-score-based search algorithm for causally sufficient scenarios) and then by applying FCI to identify possible confounding and remove some edges added by FGS. DCD casts CD as continuous optimization with differentiable algebraic constraints defining the space of AGs and uses gradient-based algorithms to solve it. N-ADMG computes a variational approximation of the joint posterior distribution over the space of bow-free causal diagrams (Nowzohour et al., 2017) associated with non-linear SCMs with additive noise. While N-ADMG focuses on a more restricted setting compared to AGs, it offers some uncertainty quantification in the variational posterior, making it more closely comparable to our approach. We rigorously follow the experimental guidelines in the original works.
**Experimental setup.** We train AGFN on each dataset and use it to sample 100k graphs. We also apply FCI, GFCI, and DCD to 100 bootstrapped resamplings of each dataset to emulate _confidence_ distributions induced by these algorithms. To compare the algorithms' outputs, we compute the sample mean and standard deviation of the BIC and SHD at the PAG level. Specifically, we compute the SHD between the ground-truth PAG and each estimated PAG obtained by each method. If the output is a member of an equivalence class (as for DCD, N-ADMG, and AGFN), we use FCI to obtain the corresponding PAG, using these graphs as oracles for conditional independencies. Furthermore, we directly compute the BIC for the outputs, as all PAG members are asymptotically score-equivalent.
**Results.** Table 1 compares AGFN against baseline CD algorithms. Notably, our method consistently outperforms the only probabilistic baseline in the literature (N-ADMG) in terms of both SHD and BIC. As expected, however, the average BIC and SHD induced by AGFN are larger than those induced by the bootstrapped versions of the non-probabilistic algorithms, and the variances are greater; this is due to the inherent sampling diversity of our method and the resulting generation of possibly implausible samples. Indeed, Table 2 shows that the three most rewarding samples from AGFN are as good as (and sometimes better than) the other CD algorithms. Results for N-ADMG comprise the three most frequent samples from the variational distribution.
### Simulating humans in the loop
**Data.** We follow the procedure from Section 5.1 to generate graphs with 4, 6, 8 and 10 nodes. We draw 500 samples from a compatible linear Gaussian SCM and use them to train an AGFN. Then, we follow our active elicitation strategy from Section 4 to probe simulated humans, adhering to the generative model described in the same section, with \(\pi=0.9\).
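Concretely, each simulated answer can be drawn from the feedback model of eq. (13) as follows (a sketch of ours; relation features are 0-indexed here):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_expert(true_feature, pi=0.9):
    # Noisy answer from Eq. (13): report the true relation feature with
    # probability pi, and each wrong label with probability (1 - pi) / 3.
    probs = np.full(4, (1 - pi) / 3)
    probs[true_feature] = pi
    return rng.choice(4, p=probs)

print(simulated_expert(true_feature=1))
```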
**Setup.** Since we are the first to propose an optimal design for expert knowledge elicitation, there are no baselines to compare AGFN against. That being said, we aim to determine whether the inclusion of expert feedback enhances the concentration of the learned distribution around the true AG, and evaluate the effectiveness of our elicitation strategy. To do so, we measure SHD to the true AG and BIC as a function of the number of expert interactions.
**Results.** Figure 5 shows that incorporating expert feedback substantially decreases the expected SHD and BIC under our belief over AGs. On the one hand, the remarkable decrease in expected SHD shows that our belief becomes increasingly focused on the true AG as we iteratively request the expert's feedback,
\begin{table}
\begin{tabular}{c|c|c|c} & chain4 & IV & collfork \\ \hline FCI & 2.07\(\pm\)2.00 & 3.83\(\pm\)2.90 & 5.43\(\pm\)1.87 \\ GFCI & **1.50\(\pm\)**1.63 & 3.63\(\pm\)3.16 & 5.53\(\pm\)2.11 \\ DCD & 2.27\(\pm\)1.46 & 4.80\(\pm\)2.17 & 5.60\(\pm\)2.13 \\ N-ADMG (top 3) & 4.38\(\pm\)0.81 & 6.08\(\pm\)1.77 & 6.87\(\pm\)0.93 \\ AGFN (top 3) & 2.00\(\pm\)1.55 & **3.50\(\pm\)**3.29 & **4.90\(\pm\)**2.70 \\ \end{tabular}
\end{table}
Table 2: **SHD for point estimates.** The mean SHD of the top-3 AGFN draws is comparable to or better than baselines.
regardless of the querying strategy. On the other hand, the second row shows that our querying strategy results in a substantial decrease in the BIC, demonstrating a faster reduction than random queries. This validates the notion that some edges are more informative than others, and we should prioritize them when probing the expert.
## 6 Related Work
**CD under latent confounding.** Following the seminal works by Spirtes et al. (2001) and Zhang (2008b) introducing the complete FCI, a variety of works have emerged. Among them are algorithms designed for sparse scenarios, including RFCI (Colombo et al., 2012) and others (Silva, 2013; Claassen et al., 2013). Notably, Silva (2013)'s framework uses a Bayesian approach to CD of Gaussian causal diagrams based on sparse covariance matrices. Nonetheless, it requires sampling one edge at a time and relies on numerical heuristics that might effectively alter the posterior we are sampling from. Colombo et al. (2012) introduced the conservative FCI to handle conflicts arising from statistical errors in scenarios with limited data, even though it yields less informative results. Subsequent efforts to improve reliability led to the emergence of constraint-based CD algorithms based on Boolean satisfiability (Hyttinen et al., 2014; Magliacane et al., 2016), although they are known to scale poorly in \(|\mathbf{V}|\) (Lu et al., 2021). In another paradigm, score-based search algorithms rank MAGs according to goodness-of-fit measures, commonly using BIC for linear Gaussian SCMs (Triantafillou and Tsamardinos, 2016; Zhalama et al., 2017a; Rantanen et al., 2021). There are also hybrid approaches that combine constraint-based strategies to reduce the search space, such as GFCI (Ogarrio et al., 2016), M3HC (Tsirlis et al., 2018), BCCD (Claassen and Heskes, 2012), and GSPo (Bernstein et al., 2020). Continuous optimization has recently emerged as a novel approach to score-based CD, with methods such as DCD (Bhattacharya et al., 2021) and N-ADMG (Ashman et al., 2023).
**CD with expert knowledge.** Previous works on CD have explored various forms of background knowledge. This includes knowledge on edge existence/non-existence Meek (1995b), ancestral constraints Chen et al. (2016), variable grouping Parviainen and Kaski (2017), partial order Andrews (2020) and typing of variables (Brouillard et al., 2022). Incorporating expert knowledge is pivotal to reducing the search space and the size of the learned equivalence class. However, due to significant challenges, to date there are only a few works trying to integrate human knowledge into CD within the context of latent confounding (Andrews, 2020; Wang et al., 2022). These works operate under the assumption of perfect expert feedback. In contrast, our contribution is novel in that it confronts the challenges of real-world situations where expert input might be inaccurate.
## 7 Discussion
We presented AGFN, the first probabilistic CD method that accounts for latent confounding and incorporates potentially noisy human feedback in the loop. AGFN samples AGs according to a score function,
Figure 5: **CD with simulated human feedback. The top/bottom row shows the mean SHD/BIC of AGFN samples as a function of human interactions. Probing the expert about the edge that minimizes the mean cross-entropy leads to a faster decrease in BIC compared to a random strategy. The SHD decreases similarly in both cases. Results reflect the outcomes of 30 simulations.**
quantifying the uncertainty in the learning process. Furthermore, it can leverage human feedback in an optimal design strategy, efficiently reducing our uncertainty on the true data-generating model.
This work is focused on linear Gaussian models, using BIC as our score. However, the implementation of AGFNs is not restricted by this choice. In principle, we could replace the BIC with alternative score functions that are more appropriate for different types of variables, e.g., for discrete data (Drton and Richardson, 2008). It is also important to highlight that our framework does not require retraining the AGFN after we see human feedback. Moreover, AGFN is a GPU-powered algorithm, and while we used only one GPU in our experiments, it is possible to greatly accelerate AGFN by using cluster architectures with multiple GPUs.
By offering uncertainty-quantified CD together with a recipe for including humans in the loop, we expect AGFNs will significantly enhance the accuracy and reliability of CD, especially in real-world domains. Moreover, AGFNs bring a novel perspective to developing more comprehensive tools for downstream causal tasks Bareinboim and Pearl (2016), as the resulting distribution encodes knowledge from data and human feedback while accounting for epistemic uncertainty. For example, methods for causal reasoning that currently rely on a single AG Zhang (2008), Jaber et al. (2022) could exploit this distribution to incorporate a richer understanding of uncertainty and knowledge, thereby enhancing their robustness and reliability.
## Acknowledgments
Diego Mesquita acknowledges the support by the Silicon Valley Community Foundation (SVCF) through the Ripple impact fund, the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) through the _Jovem Cientista do Nosso Estado_ program, and the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) through the grant 2023/00815-6. António Góis acknowledges the support by Samsung Electronics Co., Ltd. Adèle Ribeiro and Dominik Heider were supported by the LOEWE program of the State of Hesse (Germany) in the Diffusible Signals research cluster and by the German Federal Ministry of Education and Research (BMBF) [031L0267A] (Deep Insight). Samuel Kaski was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI), EU Horizon 2020 (European Network of AI Excellence Centres ELISE, grant agreement 951847), UKRI Turing AI World-Leading Researcher Fellowship (EP/W002973/1). We also acknowledge the computational resources provided by the Aalto Science-IT Project from Computer Science IT.
## Appendix A Additional Related Works
**Generative Flow Networks**[GFlowNets; Bengio et al., 2021a,b] are generative models that sample discrete composite objects from an unnormalized reward function. They have been successfully used to sample various structures such as protein sequences [Jain et al., 2022] and schedules [Zhang et al., 2023]. They have also been used to train energy-based models [Zhang et al., 2022]. In the field of structure learning, they have been applied to Bayesian networks -- more specifically to sample a posterior over DAGs in linear Gaussian networks, although without accounting for unobserved confounding [Deleu et al., 2022]. Recently, Deleu et al. [2023] proposed an extension to jointly infer the structure and parameters, also grounded in the assumption of causal sufficiency. It is worth highlighting that training GFlowNets in these scenarios presents optimization challenges, resulting in the utilization of a variety of loss functions [Shen et al., 2023]. Moreover, Lahlou et al. [2023] proposed an extension of GFlowNets to continuous domains.
## Appendix B Cross-entropy acquisition
The expected _mutual information_ and the _information gain_ are the most widely used information-theoretic measures to actively interact with a human and choose the most informative data points to be labeled Ryan et al. [2015]. However, we instead use the negative expected cross-entropy between the current and updated beliefs as the acquisition function of our experimental design (see eq. (17)). As we show next, the approximation of both the mutual information and the information gain is intrinsically dependent upon the estimation of the log-partition of the updated beliefs over the space of ancestral graphs. Doing so is computationally intensive, and we would either need to use a Monte Carlo estimator of the integrals or use some posterior approximation -- in both cases, leading to asymptotically biased estimates of the acquisition. In contrast, we can easily leverage AGFN samples to compute asymptotically unbiased estimates of our acquisition function. The next paragraphs provide further details.
Mutual information.The _mutual information_ between two random variables \(X\) and \(Y\) with joint distribution \(p(X,Y)\) and marginal distributions \(p(X)\) and \(p(Y)\) is
\[I(X,Y)=\mathcal{D}_{KL}[p(X,Y)||p(X)\otimes p(Y)], \tag{19}\]
in which \(\mathcal{D}_{KL}\) is the Kullback-Leibler divergence. In this context, an alternative approach to our experimental design for active knowledge elicitation would consist in iteratively maximizing the expected mutual information between the observed samples, \(\mathcal{G}\), and the elicited feedback, \(f_{K}\), to select the relation about which the expert would provide feedback. More specifically, we could choose
\[r_{K+1}=\underset{r\in\binom{\mathbf{V}}{2}}{\arg\,\max}\mathbb{E}_{f_{r}\sim p(\cdot|\mathbf{f}_{K})}[I(\mathcal{G},f_{r})], \tag{20}\]
in which
\[I(\mathcal{G},f_{r})=\mathcal{D}_{KL}[q(\mathcal{G},f_{r}|\mathbf{f}_{K})||q( \mathcal{G}|\mathbf{f}_{K})\otimes p(f_{r}|\mathbf{f}_{K})], \tag{21}\]
at each interaction with the expert. Nonetheless, note that
\[q(\mathcal{G},f_{r}|\mathbf{f}_{K})=q(\mathcal{G}|\mathbf{f}_{K+ 1})p(f_{r}|\mathbf{f}_{K})\] \[=c_{K+1}(f_{r})p_{\theta}(\mathcal{G})\left(\prod_{1\leq k\leq K+ 1}p(\omega_{r_{k}}|f_{r_{k}})\right)\cdot p(f_{r}|\mathbf{f}_{K}),\]
with \(f_{r_{K+1}}=f_{r}\) and
\[c_{K+1}(f_{r})=\left(\hskip-1.422638pt\sum_{\mathcal{G}}p_{\theta}(\mathcal{G })\hskip-1.422638pt\left(\prod_{1\leq k\leq K+1}p(\omega_{r_{k}}|f_{r_{k}}) \hskip-1.422638pt\right)\hskip-1.422638pt\right)^{-1} \tag{22}\]
as the partition function of our updated beliefs. Note also that Equation (21) entails computing the entropy of \(q(\mathcal{G},f_{r}|\mathbf{f}_{K})\). Thus, the selection criterion in eq. (20) requires an accurate estimate of \(\log c_{K+1}(f_{r})\) -- which is well-known for being a difficult problem Ma et al. [2013] -- and the Monte Carlo estimator for the log-partition function is asymptotically biased.
Information gain.The expected _information gain_ of an elicitation is defined as the expected KL divergence between our updated and current beliefs over ancestral graphs. This approach is widely employed in Bayesian experimental design Ryan et al. (2015). In our framework, the information gain resulting from a feedback \(f_{r}\) is
\[\operatorname{IG}_{K}(f_{r})=\mathcal{D}_{KL}[q(\mathcal{G}|\mathbf{f}_{K}\cup f _{r})||q(\mathcal{G}|\mathbf{f}_{K})], \tag{23}\]
which yields the criterion
\[r_{K+1}=\operatorname*{arg\,max}_{r\in\binom{\mathbf{V}}{2}}\mathbb{E}_{f_{r}\sim p(\cdot|\mathbf{f}_{K})}\left[\operatorname{IG}_{K}(f_{r})\right]. \tag{24}\]
Nonetheless, eq. (24) suffers from the same problems of eq. (20): it requires approximating the logarithm of the partition function \(c_{K+1}(f_{r})\) of a distribution over the combinatorially large space of ancestral graphs, which is notably very challenging to estimate. Indeed, as
\[\mathcal{D}_{KL}[q(\mathcal{G}|\mathbf{f}_{K+1})||q(\mathcal{G} |\mathbf{f}_{K})]=\underset{\mathcal{G}\sim q(\cdot|\mathbf{f}_{K+1})}{\mathbb{E }}\left[\log\frac{q(\mathcal{G}|\mathbf{f}_{K+1})}{q(\mathcal{G}|\mathbf{f}_{ K})}\right]\] \[=\underset{\mathcal{G}\sim q(\cdot|\mathbf{f}_{K+1})}{\mathbb{E }}\left[\log p(f_{r}|\omega_{r})+\log c_{K+1}(f_{r})-\log c_{K}\right],\]
with \(f_{r_{K+1}}=f_{r}\), \(c_{K}\) as the partition function of \(q(\cdot|\mathbf{f}_{K})\) -- that does not depend upon \(f_{r}\) --, and \(c_{K+1}(f_{r})\) defined in eq. (22), the estimation of the information gain is inherently dependent upon the estimation of the log-partition function.
Cross-entropy.The cross-entropy between our updated and current beliefs is an intuitively plausible and practically useful strategy to interact with an expert efficiently. In fact, we have
\[\mathbf{H}[q(\cdot|\mathbf{f}_{K+1}),q(\cdot|\mathbf{f}_{K})]\] \[=\underset{\mathcal{G}\sim q(\cdot|\mathbf{f}_{K+1})}{\mathbb{E }}[-\log q(\mathcal{G}|\mathbf{f}_{K})]\] \[=\underset{\mathcal{G}\sim q(\cdot|\mathbf{f}_{K+1})}{\mathbb{E }}\left[-\log p_{\theta}(\mathcal{G})-\sum_{1\leq k\leq K}\log p(\omega_{r_{k }}|f_{r_{k}})-l_{K}\right],\]
in which \(l_{K}=\log c_{K}\) is the logarithm of the normalizing constant of the distribution \(q(\cdot|\mathbf{f}_{K})\); this term is the same for every candidate feedback and can therefore be ignored when maximizing the acquisition. Hence, the cross-entropy depends exclusively upon i) the logarithm of the samples' rewards, \(\log p_{\theta}(\mathcal{G})\), which is readily computed within AGFN's generative process, and ii) the posterior distribution over the relations' features \(\omega_{r}\) given the expert's feedbacks \(f_{r}\), which is available in closed form. Therefore, the previously mentioned expectation is unbiasedly and consistently estimated by our importance sampling scheme. Furthermore, our empirical findings in fig. 5 suggest that the cross-entropy yields good results and consistently outperforms a uniformly random strategy with respect to the BIC score.
## Appendix C Experimental details
We lay out the experimental and implementational details of our empirical analysis in the next subsections. In Appendix C.1, we describe the specific configurations of the CD algorithms that we compared with our method in table 2. Then, we consider in Appendix C.2 some practical guidelines and architectural specifications that enable us to train and make inferences with AGFN efficiently. Finally, we contemplate in Appendix C.3 the algorithmic details for simulating the expert's feedback according to our model for active knowledge elicitation.
### Baselines
Fci.For the results in table 1, we first estimated a PAG using the stable version of FCI, which produces a fully order-independent final skeleton (Colombo et al., 2014). To identify conditional independencies, we used Fisher's Z partial correlation test with a significance level of \(\alpha=0.05\). The BIC score associated with the PAG estimated by the FCI was computed as the BIC of a randomly selected maximal AG
(MAG) within the equivalence class characterized by such a PAG. The maximality of an AG depends on the absence of inducing paths between non-adjacent variables, i.e., paths on which every non-endpoint node is a collider and every collider is an ancestor of an endpoint (Rantanen et al., 2021). This ensures that in a MAG every non-adjacent pair of nodes is m-separated by some set of other variables. Importantly, Markov equivalent MAGs exhibit asymptotic equivalence in terms of BIC scores (Richardson and Spirtes, 2002). As a result, the choice of a random MAG does not disrupt the validity of our results.
Gfci.Similarly, we applied GFCI with an initial search algorithm (FGS) based on the BIC score and the subsequent application of the FCI with conditional independencies identified by the Fisher's Z partial correlation test with a significance level \(\alpha=0.05\). This was performed for all datasets listed in table 1. Also similar to the procedure adopted with the FCI, the BIC score associated with the estimated PAG was computed as the BIC of a randomly selected MAG within the equivalence class characterized by such PAG.
Dcd.We adhered to the instructions provided in the official repository1 to apply the DCD method on the datasets in table 1. The SHD was obtained between the ground-truth PAG and the PAG corresponding to the estimated ADMG (i.e., the one obtained via FCI by using the d-separations entailed by the estimated ADMG as an oracle for conditional independencies). On the other hand, the BIC was computed for the estimated ADMG directly.
N-Admg.To estimate the parameters of the variational distribution defined by N-ADMG, we executed the code provided at the official repository2. For fairness, we used the same hyperparameters and architectures reported in their original work Li et al. (2023); in particular, we trained the models for 30k epochs. After this, we sampled 100k graphs from the learned distribution. It is worth mentioning that the bow-free ADMG constraints are guaranteed in the N-ADMG samples only asymptotically. Thus, we manually removed any cyclic graphs from the learned distribution. Then, we proceeded exactly as with DCD to estimate both the average SHD and the average BIC under the variational distribution.
Footnote 1: Available online at [https://gitlab.com/rbhatta8/dcd](https://gitlab.com/rbhatta8/dcd).
Footnote 2: Available online at [https://github.com/microsoft/causica/releases/tag/v0.0.0](https://github.com/microsoft/causica/releases/tag/v0.0.0).
### Implementational details for AGFN
Masking.To ensure AGFN only samples ancestral graphs, we keep track of a binary mask \(\mathbf{m}_{t}\) that indicates which actions lead to a valid state at the iteration \(t\) of the generative process; this mask defines the support of the policy evaluated at the corresponding state. In more detail, let \(\mathbf{y}_{t}\) be the last layer embedding (prior to a softmax) at iteration \(t\) of the neural network used to parametrize the forward flow of AGFN. The probability distribution over the space of feasible actions is then
\[\mathbf{p}_{t}=\text{Softmax}\left(\mathbf{y}_{t}\odot\mathbf{m}_{t}+\epsilon \cdot(1-\mathbf{m}_{t})\right)\]
Figure 6: **Tempered rewards**. Training AGFN to sample from increasingly cold distributions (eq. (25)) enables us to increase the proportion of high-scoring graphs (i.e., with a low BIC-score) with the drawback of reducing the AGFN’s sampling diversity.
for a large and negative constant \(\epsilon\). We empirically verified that \(\epsilon=-10^{5}\) is sufficient to avoid the sampling of non-ancestral graphs.
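For concreteness, here is a minimal PyTorch sketch of this masked softmax; `masked_policy`, `logits`, and `mask` are illustrative names rather than identifiers from the AGFN implementation.

```python
import torch

EPS = -1e5  # the large negative constant epsilon from the text

def masked_policy(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Return a distribution over actions with invalid actions zeroed out.

    logits: (num_actions,) raw scores y_t from the forward-flow network.
    mask:   (num_actions,) float tensor, 1 for actions leading to valid
            (ancestral) states and 0 otherwise.
    """
    # Invalid actions receive a score of EPS, so their softmax probability
    # underflows to (numerically) zero.
    scores = logits * mask + EPS * (1.0 - mask)
    return torch.softmax(scores, dim=-1)

probs = masked_policy(torch.randn(6), torch.tensor([1., 1., 0., 1., 0., 1.]))
action = torch.multinomial(probs, num_samples=1)  # invalid actions never drawn
```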
Exploratory policy.During training, we must use an exploratory policy that (i) enables the exploration of yet unvisited states within the pointed DAG and (ii) exploits highly valuable, already visited states. To balance these two objectives, we also draw trajectories from a uniform policy, which is a widespread practice in the literature (Bengio et al., 2021; Deleu et al., 2022; Shen et al., 2023). More precisely, let \(\text{Ch}(\mathcal{G}_{t})\) be the set of states (i.e., ancestral graphs) directly reachable from \(\mathcal{G}_{t}\) and \(\alpha\in[0,1]\). At each iteration \(t\) of the generative process, we sample an action (either an edge to be appended to the graph or a signal to stop the process)
\[a_{t}\sim(1-\alpha)\cdot\mathcal{U}(\text{Ch}(\mathcal{G}_{t}))+\alpha\cdot \pi_{F}(\cdot|\mathcal{G}_{t})\]
and modify \(\mathcal{G}_{t}\) accordingly. The parameter \(\alpha\) quantifies the mean proportion of on-policy actions and represents a trade-off between choosing actions that lead to highly valuable states (\(\alpha=1\)) and actions that lead to unvisited states (\(\alpha=0\)). We fix \(\alpha=\frac{1}{2}\) throughout the experiments. During inference, we set \(\alpha=1\) to sample actions exclusively from the GFlowNet's learned policy.
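A minimal sketch of this mixture policy is shown below; sampling the mixture component first and then an action from that component is equivalent to sampling from the mixture distribution itself. All names are illustrative.

```python
import torch

def sample_action(policy_probs: torch.Tensor, mask: torch.Tensor,
                  alpha: float = 0.5) -> int:
    """policy_probs: probabilities from pi_F(.|G_t), already masked;
    mask: float tensor with 1 for valid actions (children of G_t);
    alpha: mean proportion of on-policy actions."""
    if torch.rand(()) < alpha:
        probs = policy_probs              # exploit: on-policy action
    else:
        probs = mask / mask.sum()         # explore: uniform over Ch(G_t)
    return int(torch.multinomial(probs, num_samples=1))

# alpha = 0.5 during training (as in the text); alpha = 1.0 at inference time.
```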
Detection of invalid states.We use the algebraic condition in eq. 8 to check whether a graph \(\mathcal{G}\) is ancestral. At each iteration of the generative process, we draw an action from the current exploratory policy and test the ancestrality of the updated graph; if it is not ancestral, we revert the sampled action and mask it. Importantly, this protocol guarantees that all graphs sampled from AGFN are ancestral.
Batch sampling.We exploit batch sampling to fully leverage the power of GPU-based computing in AGFN. As both the maximum-log-likelihood-based reward and the validation of the states are parallelizable operations, we are able to distribute them across multiple processing units and efficiently draw samples from the learned distribution. Crucially, this end-to-end parallelization substantially improves the computational feasibility of our algorithm and is a notable feature generally unavailable in prior works Zhang (2008); Ogarrio et al. (2016); Rantanen et al. (2021). We use a batch size of 256 for all the experiments -- independently of the graph size.
Figure 7: **Sensitivity of our active knowledge elicitation framework to the reliability of the expert.** Each column shows either the expected SHD (top) or the expected BIC (bottom) as a function of the number of feedbacks, for a given degree of confidence \(\pi\in[0,1]\) in the expert. As expected, the improvements entailed by the expert's feedback become increasingly effective as we increase the expert's reliability from 0.1 to 0.9. Results reflect the outcome of 30 scenarios simulated according to algorithm 1 with a random canonical diagram \(\mathcal{G}^{\star}\) with 5 nodes. We used our active knowledge elicitation scheme to select the query at each iteration.
Training hyperparameters.For AGFN's forward flow, we use a Graph Isomorphism Network (GIN, Xu et al. (2019)) with 2 layers to compute embeddings of dimension 256. Then, we project these embeddings onto a probability distribution using a three-layer MLP with leaky ReLUs (negative slope of \(0.01\)) as activation functions. Correspondingly, we use an equally configured three-layer MLP to parametrize AGFN's backward flow. For training, we use Adam to minimize the loss in eq. (7). Moreover, we trained the neural networks for 3000 epochs for the human-in-the-loop simulations (in which we considered graphs having up to 10 nodes) and for 500 epochs for both the assessment of the distributional quality of AGFN and the comparison of AGFN with alternative CD approaches.
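The sketch below illustrates one plausible parametrization of the forward flow matching the sizes above, using PyTorch Geometric; the pooling choice and the exact shape of the output head are our assumptions, not specifications from the paper.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, global_add_pool

def mlp(d_in: int, d_out: int) -> nn.Module:
    return nn.Sequential(nn.Linear(d_in, d_out), nn.LeakyReLU(0.01),
                         nn.Linear(d_out, d_out))

class ForwardFlow(nn.Module):
    """2-layer GIN producing 256-d embeddings, followed by a 3-layer MLP head."""
    def __init__(self, d_node: int, n_actions: int, d_hid: int = 256):
        super().__init__()
        self.conv1 = GINConv(mlp(d_node, d_hid))
        self.conv2 = GINConv(mlp(d_hid, d_hid))
        self.head = nn.Sequential(  # three-layer MLP head with leaky ReLUs
            nn.Linear(d_hid, d_hid), nn.LeakyReLU(0.01),
            nn.Linear(d_hid, d_hid), nn.LeakyReLU(0.01),
            nn.Linear(d_hid, n_actions))

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        g = global_add_pool(h, batch)  # one embedding per graph in the batch
        return self.head(g)            # raw scores y_t, to be masked as above
```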
Computational settings.We trained the AGFNs for the experiments in fig. 3, table 1, and fig. 5 for 500 epochs on machines equipped with NVIDIA V100 GPUs. All the experiments were executed on a cluster of NVIDIA V100 GPUs, and the algorithms were implemented using the machine learning framework PyTorch. To estimate the PAG corresponding to AGFN's samples and compute the SHDs reported in table 1, we used the FCI implementation of the pcalg package in R, using the d-separations entailed by these samples as an oracle for conditional independence.
### Human in the loop
Algorithmic details.We describe in algorithm 1 our procedure for simulating interactions with an expert. Initially, we estimate the marginal probabilities \(p(\omega_{r}=k)\) of a relation \(r\) displaying the feature \(k\in\{1,2,3,4\}\) under AGFN's learned distribution. This is our prior distribution. In algorithm 1, we denote \(\{1,2,3,4\}\) by [4]. Then, we iteratively select the relation that maximizes our acquisition function; the simulated human thus returns a feedback that equals the selected relation's true feature with probability \(\pi\) or is otherwise uniformly distributed among the incorrect alternatives. Importantly, this iterative mechanism can be interrupted at any iteration and the collected feedbacks can be used to compute the importance weights necessary for estimating expectations of functionals under our updated beliefs.
```
\(\{\mathcal{G}_{t}\}_{1\leq t\leq T}\) samples from AGFN, \(\mathcal{G}^{*}=(\mathbf{V},E)\) true ancestral graph, \(\pi\) reliability of the expert's feedback
\(p(\omega_{r}=k)\leftarrow\frac{1}{T}\sum_{1\leq t\leq T}1_{\{\omega_{r}=k\ \text{in}\ \mathcal{G}_{t}\}}\ \forall k\in[4],r\in\binom{\mathbf{V}}{2}\)
\(\mathbf{f}\leftarrow\{\}\) \(\triangleright\) Set of feedbacks (answers)
\(\mathbf{r}\leftarrow\{\}\) \(\triangleright\) Set of queries (questions)
\(K\gets 1\)
\(\omega_{r}^{*}\leftarrow\) relation \(r\)'s feature in \(\mathcal{G}^{*}\ \forall r\in\binom{\mathbf{V}}{2}\)
while \(\mathbf{r}\neq\binom{\mathbf{V}}{2}\) do
    \(r_{K}\leftarrow\underset{r\in\binom{\mathbf{V}}{2}\setminus\mathbf{r}}{\arg\max}\ \underset{f_{r}\sim p(\cdot)}{\mathbb{E}}[-\mathbf{H}(q(\mathcal{G};\mathbf{f}\cup\{f_{r}\}),q(\mathcal{G};\mathbf{f}))]\)
    \(\mathbf{r}\leftarrow\mathbf{r}\cup\{r_{K}\}\)
    \(f_{K}\sim\text{Cat}\left(\pi\cdot\delta_{\omega_{r_{K}}^{*}}+\left(\frac{1-\pi}{3}\right)\cdot(1-\delta_{\omega_{r_{K}}^{*}})\right)\)
    \(\mathbf{f}\leftarrow\mathbf{f}\cup\{f_{K}\}\)
    \(K\gets K+1\)
endwhile
```
**Algorithm 1** Simulating humans in the loop
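A minimal NumPy sketch of the simulated expert's response model in algorithm 1 (the function name and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
FEATURES = [1, 2, 3, 4]  # the four possible features of a relation

def expert_feedback(true_feature: int, pi: float) -> int:
    """Return the true feature with probability pi; otherwise a feedback
    drawn uniformly from the three incorrect alternatives."""
    if rng.random() < pi:
        return true_feature
    wrong = [k for k in FEATURES if k != true_feature]
    return int(rng.choice(wrong))
```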
## Appendix D Additional experiments
Trade-off between diversity and optimality in AGFN.We may use tempered rewards to increase the frequency of high-scoring samples and thereby reduce the diversity of AGFN's distribution. More precisely, we choose a temperature \(T\) and consider
\[R_{T}(\mathcal{G})=R(\mathcal{G})^{1/T}=\exp\left\{\frac{\mu-U(\mathcal{G})}{T \sigma}\right\} \tag{25}\]
as the reward upon which the GFlowNet is trained; if \(T\to 0\), the distribution \(p_{T}\propto R_{T}\) converges to a point mass at \(R(\mathcal{G})\)'s mode and, if \(T\to\infty\), \(p_{T}\) converges to a uniform distribution. This approach resembles the simulated tempering scheme commonly exploited in Monte Carlo methods Marinari and Parisi (1992) and was previously considered in the context of GFlowNets by Zhang et al. (2023). Figure 6 shows that progressively cold distributions (i.e., with \(T\to 0\)) lead to progressively concentrated and decreasingly diverse samples. Notably, the use of cold distributions may be adequate if we are highly confident in our score and are mostly interested in high-scoring samples (e.g., as in Rantanen et al., 2021).
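For reference, a one-line sketch of the tempered reward in eq. (25); `U`, `mu`, and `sigma` stand in for the score of a graph and its normalization constants.

```python
import numpy as np

def tempered_reward(U: float, mu: float, sigma: float, T: float) -> float:
    """R_T(G) = R(G)^(1/T) = exp((mu - U(G)) / (T * sigma)); T -> 0 sharpens
    the distribution toward the mode, T -> inf flattens it toward uniform."""
    return float(np.exp((mu - U) / (T * sigma)))
```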
Sensitivity analysis for different noise levels.Figure 7 displays the effect of the feedback of an increasingly reliable expert on the expectations of both the SHD and the BIC. Notably, the usefulness of these feedbacks increases as the feedback noise decreases. This is expected as, for example, a completely unreliable expert consistently rules out only one of four possibilities for the features of each relation;
Figure 8: **Architectural design of AGFN.** Top: The inductively biased parametrization of AGFN’s forward flow — based upon a GNN — enables the substantial reduction of the number of epochs required for training. Bottom: The use of a parametrized backward policy similarly enhances the training efficiency compared to a uniform policy. For both experiments, we considered \(\mathcal{L}(\theta)<0.1\) as the early stopping criterion to interrupt AGFN’s training.
Figure 9: **Human-aided AGFN significantly outperforms alternative CD algorithms.** Updating AGFN’s distribution according to the feedback of an oracle substantially improves AGFN’s capacity to correctly identify the true ancestral graph; indeed, a single feedback is sufficient to yield results better than (or indistinguishable from) alternative CD algorithms. We select the sampled AG with the highest posterior reward as a point estimate of AGFN and use the same datasets listed in table 2. The plots summarize the results of 30 HITL simulations using \(\pi=0.9\) and an oracle as an expert (see algorithm 1).
then, there remains great ambiguity, albeit not as much as there was prior to their feedback, about the true nature of the elicited causal relation. Moreover, this experiment highlights the potential to adjust the reliability parameter \(\pi\) to incorporate knowledge into AGFN's learned distribution regarding the non-existence of a particular relation, rather than its existence. More specifically, assume that the expert is certain that there is no directed edge from the variable \(U\) to the variable \(V\) in the underlying ancestral graph; for instance, a doctor may be certain that cancer (\(U\)) is not an ancestor (cause) of smoking (\(V\)), but may be uncertain about the definite relation between \(U\) and \(V\) (i.e., smoking may or may not cause cancer). To incorporate such knowledge into our model, one approach is to set a suitably small reliability parameter \(\pi\) (possibly \(\pi=0\)) along with the improbable relation \(U\to V\). This feedback will then be modeled as a relation unlikely to exist in the true ancestral graph. We emphasize that our model for the expert's responses is straightforwardly extensible to accommodate multiple feedbacks about the same causal relation under different reliability levels.
Ablation studies.Figure 8 shows the increase in training efficiency due to our architectural designs for parametrizing both the forward and backward flows of AGFN. Noticeably, the use of a two-layer graph isomorphism network (Xu et al., 2019) with a 256-dimensional embedding for the forward flow entailed a decrease of more than 10x in the number of epochs required to successfully train AGFN; this highlights the effectiveness of an inductively biased architectural design for the parametrization of GFlowNet's flows. Similarly, the use of a parametrized backward flow significantly enhances the training efficiency of AGFN and corroborates the inadequacy of a uniformly distributed backward policy pointed out in previous work Shen et al. [2023].
Human-aided AGFN versus alternative CD methods.Figure 9 shows the significant enhancement of AGFN's point estimates entailed by our HITL framework for CD. This underlines the usefulness of the elicited knowledge, which is simply incorporated into our model through a re-weighting of the reward function, enabling the identification of the true ancestral graph. In contrast, most alternative CD algorithms cannot be as easily adapted to include various forms of expert knowledge -- and such incorporation, when possible, usually precedes any inferential process Andrews [2020] or assumes the knowledge is perfect Wang et al. [2022].
|
2310.20685 | NeRF Revisited: Fixing Quadrature Instability in Volume Rendering | Neural radiance fields (NeRF) rely on volume rendering to synthesize novel
views. Volume rendering requires evaluating an integral along each ray, which
is numerically approximated with a finite sum that corresponds to the exact
integral along the ray under piecewise constant volume density. As a
consequence, the rendered result is unstable w.r.t. the choice of samples along
the ray, a phenomenon that we dub quadrature instability. We propose a
mathematically principled solution by reformulating the sample-based rendering
equation so that it corresponds to the exact integral under piecewise linear
volume density. This simultaneously resolves multiple issues: conflicts between
samples along different rays, imprecise hierarchical sampling, and
non-differentiability of quantiles of ray termination distances w.r.t. model
parameters. We demonstrate several benefits over the classical sample-based
rendering equation, such as sharper textures, better geometric reconstruction,
and stronger depth supervision. Our proposed formulation can be also be used as
a drop-in replacement to the volume rendering equation of existing NeRF-based
methods. Our project page can be found at pl-nerf.github.io. | Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, Ke Li | 2023-10-31T17:49:48Z | http://arxiv.org/abs/2310.20685v2 | # NeRF Revisited: Fixing Quadrature Instability in Volume Rendering
###### Abstract
Neural radiance fields (NeRF) rely on volume rendering to synthesize novel views. Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum that corresponds to the exact integral along the ray under piecewise constant volume density. As a consequence, the rendered result is unstable w.r.t. the choice of samples along the ray, a phenomenon that we dub _quadrature instability_. We propose a mathematically principled solution by reformulating the sample-based rendering equation so that it corresponds to the exact integral under piecewise linear volume density. This simultaneously resolves multiple issues: conflicts between samples along different rays, imprecise hierarchical sampling, and non-differentiability of quantiles of ray termination distances w.r.t. model parameters. We demonstrate several benefits over the classical sample-based rendering equation, such as sharper textures, better geometric reconstruction, and stronger depth supervision. Our proposed formulation can also be used as a drop-in replacement to the volume rendering equation for existing methods like NeRFs. Our project page can be found at pl-nerf.github.io.
## 1 Introduction
The advent of neural radiance fields (NeRF) [18] has sparked a flurry of work on neural rendering and has opened the way to many exciting applications [5; 11; 8; 20]. One of the key underpinnings of NeRF is volume rendering [14] - it is especially well-suited to end-to-end differentiable rendering [9], since the rendered image is a smooth function of the model parameters. This has made it possible to learn the 3D geometry and appearance solely from a 2D photometric loss on rendered images.
In volume rendering, the rendered colour \(\hat{y}\) for every pixel is an expectation of the colours along the ray cast through the pixel w.r.t. the distribution over ray termination distance \(s\)[6].
\[\hat{y}=\mathbb{E}_{s\sim p(s)}[c(s)]=\int_{0}^{\infty}p(s)c(s)\,\mathrm{d}s \tag{1}\]
where \(p(s)\) denotes the probability density function (PDF) of the distribution over ray termination distance \(s\) and \(c(s)\) denotes the colour as a function of different points along the ray.
In general, \(p(s)\) and \(c(s)\) can be of arbitrary forms, so evaluating this integral analytically is not possible. Therefore, in practice, \(\mathbb{E}_{s\sim p(s)}[c(s)]\) is approximated with quadrature. The quadrature formula that is most commonly used in the NeRF literature takes the following form:
\[\mathbb{E}_{s\sim p(s)}[c(s)]\approx\sum_{j=0}^{N}T_{j}\left(1-e^{-\tau_{j}(s_{j+1}-s_{j})}\right)c_{j}, \tag{2}\]
where \(T_{j}=\exp\Big{(}-\sum_{k=0}^{j-1}\tau_{k}(s_{k+1}-s_{k})\Big{)}\) and \(\tau_{k}\) is the opacity evaluated at a sample \(s_{k}\) along the ray.
This expression is derived from the exact integral under a piecewise constant assumption on the opacity and colour along the given ray [14].
However, this seemingly simple, innocuous assumption can result in the rendered image being sensitive to the choice of samples along the ray at which the opacity \(\tau(s)\) and colour \(c(s)\) are evaluated. While this does not necessarily cause a practical issue in classical rendering pipelines [14; 15; 10], it has surprising consequences when used in neural rendering. Specifically, because the opacity at all points within the interval between two samples is assumed to be the same, there is a band near the surface of the geometry where the opacity at all points within the band is as high as at points on the surface itself. Because different rays cast from different cameras can pass through this band at different angles and offsets, both the number and the positions of samples within this band can be very different across different rays (see Fig 1 and Fig 2 for two example scenarios). Hence, simultaneously supervising these rays to produce the same colour values as in the real image captures can give rise to conflicting supervisory signals, which can result in artifacts like fuzzy surfaces and blurry texture.
Moreover, because the piecewise constant opacity assumption gives rise to a closed form expression that is equivalent to an expectation w.r.t. a discrete random variable, it is common practice in the NeRF literature to draw samples along the ray from the discrete distribution [17]. This is commonly used to draw importance samples, and also to supervise the samples along the ray [28], for example in losses that penalize deviation of the samples from the true depth [6]. Sampling from the discrete distribution requires the definition of a continuous surrogate function to the cumulative distribution function (CDF) of the discrete random variable, which unfortunately yields imprecise samples. As a result, samples that are drawn may not be close to the surface even if the underlying probability density induced by the NeRF is concentrated at the surface. Additionally, individual supervision cannot be provided to each sample drawn from the surrogate, because the gradient of the loss w.r.t. each sample would be almost zero everywhere.
All these issues, i.e. conflicting supervision, imprecise samples and lack of supervision on the CDF from the samples, stem from the assumption that opacity is piecewise constant, causing the _sensitivity to the choice of samples_ both during rendering and sampling. We dub this problem _quadrature instability_. In this paper, we revisit the quadrature used to approximate volume rendering in NeRF and devise a different quadrature formula [14] based on a different approximation to the opacity. We first show that, interestingly, a closed-form expression can be derived under any piecewise polynomial approximation for opacity. When the polynomial degree is 0, it reduces to the piecewise constant opacity as in existing literature, and when the degree is 2 or more, we show that it leads to poor numerical conditioning.
Therefore, we further explore a degree of 1 (i.e., piecewise _linear_) and show that it both resolves quadrature instability and has good numerical conditioning. We derive the rendering equation under _piecewise linear opacity_ explicitly and show that it has a simple and intuitive form. This results in a new quadrature method for volume rendering, which can serve as a drop-in replacement for existing methods like NeRFs. We demonstrate that this reduces artifacts, improves rendering quality and results in better geometric reconstruction. We also devise a new way to sample directly from the distribution of samples along each ray induced by NeRF without going through the surrogate, which opens the way to a more refined importance sampling approach and a more effective method to supervise samples using depth.
## 2 Related Work
NeRFs.Neural Radiance Field (NeRF) is a powerful representation for novel-view synthesis [18] that represents a scene using the weights of an MLP that is rendered by volumetric rendering [14]. A key ingredient in the success of NeRF was the use of positional encoding [26; 29] to effectively increase the capacity of the MLPs that model the opacity and emitted color as a function of a 3D coordinate and viewing direction. Many works extend NeRF, e.g., to handle larger or unbounded scenes [40; 3; 37; 25], unconstrained photo collections [13], dynamic and deformable scenes [11; 19] and sparser input views [6; 39; 31; 28]. There are a number of papers that aim to improve the rendering quality of NeRF. Some do so by utilizing different kinds of supervision, such as NeRF in the Dark [16], while others tackle this by improving the model [2; 36; 4]. MipNeRF [2] changes the model input by introducing integrated positional encoding (IPE) to reduce the aliasing effect along the xy coordinates. DiVeR [36] predicts the rendered colour within a line interval directly from a trilinearly interpolated feature in a voxel-based representation. ZipNeRF [4] modifies the proposal network to a grid, enabling it to be used together with IPE. In contrast, our work focuses on changing the objective function by modifying the rendering equation from piecewise constant opacity to piecewise linear, while keeping the model and supervision fixed. Additionally, ZipNeRF [4] also
brings up a model-specific issue of z-aliasing, under which their model struggles. Similar to the z-aliasing observed by ZipNeRF, we consider the setting of having conflicting supervision when presented with training views at different distances from the scene. While the two may appear similar on the surface, the phenomenon we study is different in that it is general and independent of the model: conflicting ray supervision arising from camera views, e.g., different camera-to-scene distances and the grazing-angle setup.
Importance Sampling on NeRFs.Densely sampling and evaluating NeRF along multiple points in each camera ray is inefficient. Inspired by an early work on volume rendering [10], prior works typically use a coarse-to-fine hierarchical sampling strategy where the final samples are obtained by importance sampling of a coarse proposal distribution [18; 2; 3; 8]. These importance samples are drawn using inverse transform sampling, where a sample is obtained by taking the inverse of the cumulative distribution function (CDF) of the proposal ray distribution. However, prior NeRF works that assume piecewise constant opacity result in a non-invertible CDF, and instead introduce a surrogate invertible function derived from the CDF in order to perform importance sampling. In contrast, our work, which utilizes a piecewise linear opacity assumption, results in an invertible CDF and a closed-form solution to obtain samples with inverse transform sampling. Other works also attempt to alter sampling using neural networks or occupancy caching [31; 23; 24; 12]. These techniques are orthogonal to ours, as they propose changes to the model, whereas our importance sampling is derived from a given model.
Volume rendering.Volume rendering is an important technique in various computer graphics and vision applications, as explored in different classical works [10; 33; 7; 14]. These works include studying ray sampling efficiency [10] and data structures, e.g., octrees [22] and volume hierarchies [21], for coarse-to-fine hierarchical sampling. The crux of volume rendering is the integration of the weighted average of the color along the ray, where the weights are a function of the volume density (opacity). Max and Chen [14; 15] derive the volume rendering equation under the assumption of piecewise constant opacity and color, which NeRF [17] and its succeeding works use to learn their neural scene representations. However, the piecewise constant assumption results in rendering outputs that are sensitive to the choice of samples, as well as a non-invertible CDF, introducing drawbacks to NeRF training. Following up on [14], other works [34] derive the volume rendering equation under the assumption that both opacity and color are piecewise linear, which yields unwieldy expressions that lead to numerical issues and/or are expensive to compute. Some earlier works on rendering unstructured polygonal meshes attempt to use this model [35], but it is in general not commonly used in practice due to the mentioned issues and hence has yet to be adopted for learning neural scene representations. In this work, we address both sets of issues -- those arising from piecewise constant opacity and color and those from piecewise linear opacity and color -- by reformulating the volume rendering equation to assume piecewise linear opacity and piecewise constant color. Our derivation results in a simple, closed-form formulation of volume rendering, making it suitable for NeRFs.
## 3 Background
### Volume Rendering Review
Definitions.In the classical literature, volume rendering [15] maps a 3D field of optical properties to a 2D image, and the visual appearance is computed through the exact _integration_ of these optical properties along the viewing rays. In this optical model, each point in space is an infinitesimal particle with a certain _opacity_\(\tau\) that emits varying amounts of light, represented as a scalar _color_\(c\), in all viewing directions. The opacity \(\tau\) is the differential probability of a viewing ray hitting a particle -- that is, for a viewing ray \(\mathbf{r}(s)=\mathbf{o}+s\mathbf{d}\), where \(\mathbf{o}\) is the view origin and \(\mathbf{d}\) is the ray direction, the probability of ray \(\mathbf{r}\) hitting a particle along an infinitesimal interval \(\mathrm{d}s\) is \(\tau(\mathbf{r}(s))\mathrm{d}s\). Moreover, the transmittance \(T_{\mathbf{r}}(s)\) is defined as the probability that the viewing ray \(\mathbf{r}\) travels a distance \(s\) from the view origin without terminating, i.e. without hitting any particles.
Continuous probability distribution along the ray \(\mathbf{r}\).As illustrated by Max and Chen [15], since terminating at \(s\) requires hitting a particle at \(s\) and not hitting any particle before it, the probability of ray \(\mathbf{r}\) terminating at distance \(s\) is given by \(\tau(\mathbf{r}(s))T_{\mathbf{r}}(s)\), where \(T_{\mathbf{r}}(s)=\exp(-\int_{0}^{s}\tau(u)\mathrm{d}u)\). Hence the continuous probability density function (PDF) of ray \(\mathbf{r}(s)\), which describes the likelihood of a ray terminating and emitting at \(s\), is given by
\[p(s)=\tau(\mathbf{r}(s))T_{\mathbf{r}}(s), \tag{3}\]
where \(s\in[0,\infty]\) and \(\mathbf{r}(s)\) is a point on the ray \(\mathbf{r}\). For notational simplicity we omit \(\mathbf{r}\) and write it as \(p(s)=\tau(s)T(s)\).
Volume Rendering as a Continuous Integral.The observed color of the ray is then the expected value of the colors \(c(s)\) of all particles \(s\) along the ray weighted by the probability of hitting them. Mathematically, this results in the following continuous integral1:
Footnote 1: Practically the integral is taken with near \(s_{n}\) and far \(s_{f}\) bounds.
\[\mathbb{E}_{s\sim p(s)}[c(s)]=\int_{0}^{\infty}p(s)c(s)\,\mathrm{d}s=\int_{0}^ {\infty}\tau(s)T(s)c(s)\,\mathrm{d}s\,. \tag{4}\]
Quadrature under Piecewise Constant Opacity \(\tau\).Since this integral cannot in general be evaluated analytically, it is approximated with quadrature. Let \(s_{1},s_{2},...,s_{N}\) be \(N\) (ordered) samples on the ray that define the intervals, where \(I_{j}=[s_{j},s_{j+1}]\) is the \(j^{\text{th}}\) interval, and \(I_{0}=[0,s_{1}],I_{N}=[s_{N},\infty]\). The volume density for particles along the interval \(I_{j}\) is then approximated under the assumption that opacity is constant along each interval, making it _piecewise constant_ along the ray [15]. That is, for all \(j\) we have:
\[\forall s\in[s_{j},s_{j+1}],\tau(s)=\tau(s_{j}), \tag{5}\]
for brevity we denote \(\tau(s_{j})=\tau_{j}\), i.e. \(\tau_{j}\) is the opacity for sample \(s_{j}\). Under this piecewise constant opacity assumption, the volume rendering equation Eq. 4 then becomes as follows:
\[\mathbb{E}_{s\sim p(s)}[c(s)]=\sum_{j=0}^{N}P_{j}c_{j}=\sum_{j=0}^{N}\left( \int_{s_{j}}^{s_{j+1}}\tau(u)T(u)\,\mathrm{d}u\right)c_{j}=\sum_{j=0}^{N}T_{ j}\left(1-e^{-\tau_{j}(s_{j+1}-s_{j})}\right)c_{j}, \tag{6}\]
Figure 1: **Ray Conflicts: Grazing Angle.** (Left) Illustration of conflicting ray supervision at the grazing angle under piecewise constant opacity. For the constant setting, to render perpendicular rays (yellow) correctly, the model has to store the associated optical properties in a region in front of the surface, as a sample takes the values of the left bin boundary. A ray near the grazing angle will cross this region of high opacity (the gradient in front of the surface), associating it with conflicting opacity/color signals. (Middle) This results in fuzzier surfaces, as shown along the side of the microphone, since there is a conflict in ray supervision between the perpendicular and grazing-angle rays. Our piecewise linear opacity assumption alleviates this issue and results in a clearer rendered view. (Right) As shown, the resulting PDF is peakier and the CDF is sharper for our linear setting, where the plotted distributions correspond to the ray from the marked pixel in red.
where \(T_{j}=\exp\Big{(}-\sum_{k=0}^{j-1}\tau_{k}(s_{k+1}-s_{k})\Big{)}\). Here, color \(c_{j}\) is also approximated to be constant along each interval \(I_{j}\), and \(P_{j}\) is the probability of each interval. Now, let us define the discrete random variable \(\tilde{s}=\tilde{f}(s)\), where
\[\tilde{f}(x)=\begin{cases}s_{0}&x\leq s_{0}\\ s_{j}&s_{j}\leq x<s_{j+1}\text{ for all }j\in\{1,...,N-1\}\\ s_{N}&x>s_{N}\end{cases}\,, \tag{7}\]
which gives the corresponding probability mass function \(\tilde{P}(s)\). Observe that the analytical expression of the integral Eq. 6 turns out to be the same as taking the expectation w.r.t. the discrete random variable \(\tilde{s}\), i.e. \(\mathbb{E}_{s\sim p(s)}[c(s)]=\mathbb{E}_{\tilde{s}\sim\tilde{P}(s)}[c(\tilde{s})]\). This piecewise constant opacity assumption in the volume rendering equation is used in most, if not all, existing NeRF works. We refer the reader to [15] for more detailed derivations and to our supplementary for a more thorough walkthrough.
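For concreteness, the following PyTorch sketch mirrors how this piecewise constant quadrature is typically implemented in NeRF codebases; the function and variable names are illustrative, and the small constant guards against numerical underflow in the cumulative product.

```python
import torch

def pc_weights(tau: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """tau: (..., N) opacities at the left endpoints of the intervals;
    s: (..., N+1) sample positions defining the intervals.
    Returns P_j = T_j * (1 - exp(-tau_j * delta_j)) for each interval."""
    delta = s[..., 1:] - s[..., :-1]           # interval lengths
    alpha = 1.0 - torch.exp(-tau * delta)      # per-interval termination prob.
    # Since 1 - alpha_k = exp(-tau_k * delta_k), the shifted cumulative
    # product equals T_j = exp(-sum_{k<j} tau_k * delta_k) exactly.
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    return trans * alpha                       # interval weights P_j

# rendered colour: (pc_weights(tau, s)[..., None] * colours).sum(dim=-2)
```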
### Neural Radiance Fields.
Following Max and Chen [14], Mildenhall et al. [18] introduced neural radiance fields, a neural scene representation that uses the volume rendering equation under the piecewise constant opacity assumption for novel view synthesis. A neural radiance field (NeRF) is a coordinate-based neural scene representation, where opacity \(\tau^{\theta}:\mathbb{R}^{3}\rightarrow\mathbb{R}_{\geq 0}\) and color \(c^{\psi}:\mathbb{R}^{3}\times\mathbb{S}^{2}\rightarrow[0,255]^{3}\) are predicted at each continuous coordinate by parameterizing them as a neural network. To train the neural network, 2D images are used as supervision, where each viewing ray is associated with a ground truth color. Volume rendering allows the 3D coordinate outputs to be aggregated into an observed pixel color, enabling end-to-end training with 2D supervision. The supervision signal acts on the coordinates of the ray samples \(s_{1},...,s_{N}\), updating the corresponding output opacity and color at those samples. NeRF uses an importance sampling strategy, drawing samples from the ray distribution of a coarse network to generate better samples for rendering with the fine network. To sample from a distribution, inverse transform sampling is needed: one draws \(u\sim U(0,1)\) and passes it through the inverse of the cumulative distribution function (CDF), i.e. a sample \(x=F^{-1}(u)\), where \(F\) is the CDF of the distribution. Under the piecewise constant assumption, the CDF of the discrete random variable \(\tilde{s}\) is given by:
\[\tilde{F}(x)=\begin{cases}0&x\leq s_{0}\\ \sum_{k<j}\tilde{P}(s_{k})&s_{j}\leq x<s_{j+1}\text{ for all }j\in\{1,...,N-1\}\\ 1&x>s_{N}\end{cases}\,. \tag{8}\]
This CDF is, however, non-continuous and non-invertible. NeRF's approach to get around this is to define a surrogate invertible function \(G\) derived from its CDF, and then take \(x=G^{-1}(u)\). Concretely, \(G(y)=\frac{y-s_{j-1}}{s_{j}-s_{j-1}}\tilde{F}(s_{j})+\frac{s_{j}-y}{s_{j}-s_{j-1}}\tilde{F}(s_{j-1}),\) where \(y\in[s_{j-1},s_{j}]\). However, this does not necessarily yield samples from the actual ray distribution \(p(s)\) induced by the model.
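A minimal sketch of inverse transform sampling through this surrogate \(G\) is given below, mirroring the standard hierarchical-sampling routine in NeRF implementations; note that samples are spread uniformly within each bin, which is exactly the imprecision discussed in Section 4. All names are illustrative.

```python
import torch

def sample_surrogate(bins: torch.Tensor, weights: torch.Tensor, n: int):
    """bins: (N+1,) interval boundaries s_j; weights: (N,) interval probs P_j.
    Draws n samples via the piecewise-linear surrogate G of the discrete CDF."""
    pdf = weights / weights.sum()
    cdf = torch.cat([torch.zeros(1), torch.cumsum(pdf, dim=0)])  # (N+1,)
    u = torch.rand(n)
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, len(bins) - 1)
    c0, c1 = cdf[idx - 1], cdf[idx]
    b0, b1 = bins[idx - 1], bins[idx]
    t = (u - c0) / (c1 - c0).clamp_min(1e-10)  # linear interpolation in bin
    return b0 + t * (b1 - b0)  # samples are uniform *within* each bin
```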
## 4 Drawbacks of Piecewise Constant Opacity \(\tau\) in NeRFs
Unfortunately, there are properties associated with the piecewise constant opacity formulation that may not be desirable in the context of NeRFs. First, it is sensitive to the choice of samples, i.e. sample positions \(s\), along the ray, a phenomenon we dub _quadrature instability_. This instability stems from the assumption that all points in an interval take the opacity of the left bin (Eq 5), making the result sensitive to sample positions. As illustrated in Figure 1, this leads to _ray conflicts_ when optimizing a NeRF with rays that directly face, i.e. are perpendicular to, the surface (yellow rays) and rays that are close to the grazing angle (red ray), i.e. nearly parallel to the object. To render the perpendicular rays correctly, vanilla NeRF has to store the optical properties (opacity and color) associated with the perpendicular rays at a point before their intersection with the surface. This creates inaccurate signals for the optimization process when NeRF renders the ray at a grazing angle, as it will cross multiple conflicting opacities/colors (illustrated by the blue gradient). The sample sensitivity issue also arises when cameras are at different distances from the object, as illustrated in Fig. 2, since this leads to shifted sets of samples, causing inconsistencies when rendering at different camera-to-object distances. Notice that the noise on the texture of the chair differs across viewing distances, where the middle view has fewer artifacts compared to the closer and further views.
## 5 Generalized Form for \(P_{j}\)
We first show a generalized derivation for the probability \(P_{j}\) of each interval \(I_{j}\), which we use to formulate our approach that alleviates the problems described above. From \(T(s)=\exp{(-\int_{0}^{s}\tau(u)\mathrm{d}u)}\), we first notice that:
\[\frac{\mathrm{d}T}{\mathrm{d}s} =-\exp{(-\int_{0}^{s}\tau(u)\mathrm{d}u)}\tau(s)=-T(s)\tau(s)\] \[T^{\prime}(s) =-T(s)\tau(s).\]
This results in the probability of each interval \(I_{j}\) given as follows:
\[P_{j}=\int_{s_{j}}^{s_{j+1}}\tau(s)T(s)\,\mathrm{d}s=-\int_{s_{j}}^{s_{j+1}}T^ {\prime}(s)\,\mathrm{d}s=T(s_{j})-T(s_{j+1}). \tag{9}\]
Since the \(s_{j}\)'s are arbitrarily sampled, \(P_{j}\) can be evaluated exactly in closed form if and only if \(T(\cdot)\) is available in closed form.
## 6 Our PL-NeRF
We observe from Eq. 9 that we can obtain a closed-form expression for \(P_{j}\) for any piecewise polynomial function in \(\tau\), which can be of any degree \(d=0,1,2,...,n\). Commonly used in the existing NeRF literature is \(d=0\), i.e. piecewise constant, which is unstable w.r.t. the choice of samples, as highlighted in the previous section. Interestingly, we also observe and show that the problem becomes numerically ill-conditioned for \(d\geq 2\), making it difficult and unstable to optimize. Please see supplementary for the full proof. Hence, we propose to make opacity piecewise linear (\(d=1\)), which we call **PL-NeRF**, leading to a _simple_ and _closed-form_ expression for the volume rendering integral that is numerically stable and is a drop-in replacement to existing NeRF-based methods. We show both theoretically and experimentally that the piecewise linear assumption is sufficient and alleviates the problems caused by quadrature instability under the piecewise constant assumption.
Figure 2: **Ray Conflicts: Different Camera-to-Scene Distances.** (Left) Rendered views from cameras at different distances from the object. At all distances, the rendered outputs for linear have sharper texture than constant because of the latter’s sensitivity to the choice of samples. We also highlight the instability of the constant model, as shown by the noisier texture of the middle view compared to the closer and further views. (Right) An illustration that moving the camera to different distances from the object results in different samples that lead to conflicts.
The second issue comes from the CDF \(\tilde{F}\) being piecewise constant (Eq 8). This leads to two consequences. First, the piecewise constant assumption makes \(\tilde{F}\) non-invertible; hence, as mentioned in the previous section, importance sampling needs to be performed via a surrogate function \(G\). This results in uniformity across the samples within a bin -- samples within a bin are assigned equal probability -- leading to imprecise importance samples. The second consequence comes from the fact that \(\tilde{F}\) is not continuous, which causes an issue when training a NeRF with a loss based on its samples: the gradient of the loss w.r.t. the samples vanishes almost everywhere. One example of such a sample-based loss used for NeRFs is depth [28].
Volume Rendering with Piecewise Linear Opacity.We propose an elegant reformulation to the sample-based rendering equation that corresponds to the exact integral under **piecewise linear** opacity while keeping piecewise constant color leading to a simple and closed-form expression for the integral. That is, instead of piecewise constant opacity as in Eq 5, we assume a linear opacity for each interval \(I_{j}\). Concretely, for \(s\in[s_{j},s_{j+1}]\), where \(\tau_{j}=\tau(s_{j}),\tau_{j+1}=\tau(s_{j+1})\), we have
\[\tau(s)=\left(\frac{s_{j+1}-s}{s_{j+1}-s_{j}}\right)\tau_{j}+\left(\frac{s-s_{ j}}{s_{j+1}-s_{j}}\right)\tau_{j+1}. \tag{10}\]
which is linear w.r.t. \(s\in[s_{j},s_{j+1}]\) as illustrated in Fig. 3.
Now, under the piecewise linear opacity assumption, transmittance is derived as the following closed-form expression:
\[T(s_{j})=\exp\left[-\int_{0}^{s_{j}}\tau(u)\,\mathrm{d}u\right]= \prod_{k=1}^{j}\exp\left[-\int_{s_{k-1}}^{s_{k}}\tau(u)\,\mathrm{d}u\right],\] \[\boxed{T(s_{j})=\prod_{k=1}^{j}\exp\left[-\frac{(\tau_{k}+\tau_{k -1})(s_{k}-s_{k-1})}{2}\right].} \tag{11}\]
Together with Eq. 9, this leads to the following simple and closed-form expression for \(P_{j}\), corresponding to the exact integral under the piecewise linear opacity assumption:
\[\boxed{P_{j}=T(s_{j})\cdot\left(1-\exp\left[-\frac{(\tau_{j+1}+\tau_{j})(s_{j+ 1}-s_{j})}{2}\right]\right).} \tag{12}\]
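A hedged PyTorch sketch of Eqs. 11 and 12 follows; the only change relative to the piecewise constant weights is the trapezoidal per-interval optical depth. Function and variable names are ours, not from the released code.

```python
import torch

def pl_weights(tau: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """tau: (..., N+1) opacities at the samples; s: (..., N+1) positions.
    Returns P_j = T(s_j) * (1 - exp(-(tau_j + tau_{j+1}) * delta_j / 2))."""
    delta = s[..., 1:] - s[..., :-1]
    depth = 0.5 * (tau[..., :-1] + tau[..., 1:]) * delta   # trapezoid rule
    alpha = 1.0 - torch.exp(-depth)
    # Shifted cumulative product of exp(-depth_k) gives T(s_j) from Eq. 11.
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    return trans * alpha
```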
Precision Importance Sampling.Moreover, it turns out that with our piecewise linear opacity assumption, we are able to derive an exact closed-form solution for inverse transform sampling. Recall that in Sec 4, we pointed out a drawback of the CDF \(\tilde{F}\) being non-invertible and discontinuous under piecewise constant opacity. We show that this is alleviated in our piecewise linear setting. Concretely, given samples \(s_{1},...,s_{N}\) resulting in interval probabilities \(P_{1},...,P_{N}\)2 from our derivation (Eq 12), the CDF for the _continuous_ random variable \(t\) is then given as
Footnote 2: We note that DS-NeRF [6] shows that this will sum to 1 assuming an opaque far plane. \(s_{N+1}\) would correspond to the far plane.
\[F(t)=\int_{0}^{t}p(s)\,\mathrm{d}s=\sum_{s_{j}<t}P_{j}+\int_{s_{j}}^{t}p(s)\, \mathrm{d}s=\sum_{s_{j}<t}P_{j}+\int_{s_{j}}^{t}\tau(s)T(s)\,\mathrm{d}s. \tag{13}\]
Note that unlike in piecewise constant opacity, we do not convert the continuous random variable \(s\) to a discrete random variable \(\tilde{s}\); thus, the resulting CDF \(F\) is continuous. Now, assuming that opacity \(\tau\geq 0\) everywhere, from Eq. 13 we see that \(F\) is strictly increasing. Since \(F\) is continuous and strictly increasing, it is invertible.
Finally, we arrive at our precision importance sampling: by inverse transform sampling, we can solve for the exact sample \(x=F^{-1}(u)\) for \(u\sim U(0,1)\) from the given ray distribution \(p(s)\), since the CDF \(F\) is invertible under piecewise linear opacity. Without loss of generality, let sample \(u\sim U(0,1)\) fall into the bin \(u\in[C_{k},C_{k+1}]\), where \(C_{k}=\sum_{j<k}P_{j}\), which is equivalent to solving for \(x\in[s_{k},s_{k+1}]\). Reparameterizing \(x=s_{k}+t\), where \(t\in[0,s_{k+1}-s_{k}]\), the exact solution for sample \(u\) is given by
\[\boxed{t=\frac{s_{k+1}-s_{k}}{\tau_{k+1}-\tau_{k}}\left[-\tau_{k}+\sqrt{\tau_ {k}^{2}+\frac{2(\tau_{k+1}-\tau_{k})\left(-\ln\frac{1-u}{T(s_{k})}\right)}{(s_ {k+1}-s_{k})}}\right].} \tag{14}\]
Please see supplementary for the full derivation. This enables sampling precisely from the ray distribution \(p(s)\), resulting in better importance sampling and stronger depth supervision.
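For illustration, the sketch below inverts Eq. 14 for a single uniform draw falling in bin \([s_{k},s_{k+1}]\), assuming \(\tau_{k+1}\neq\tau_{k}\) (when \(\tau_{k+1}=\tau_{k}\), the bin has constant opacity and the standard exponential inverse applies). A batched implementation would first locate the bin with a `searchsorted` over the CDF; the helper name is ours.

```python
import torch

def invert_bin(u, s_k, s_k1, tau_k, tau_k1, T_k):
    """Exact inverse of the piecewise-linear CDF inside one bin (Eq. 14).
    u in [C_k, C_{k+1}); T_k = T(s_k); returns the sample x in [s_k, s_{k+1}]."""
    a = -torch.log((1.0 - u) / T_k)   # optical depth accumulated within the bin
    disc = tau_k ** 2 + 2.0 * (tau_k1 - tau_k) * a / (s_k1 - s_k)
    t = (s_k1 - s_k) / (tau_k1 - tau_k) * (-tau_k + torch.sqrt(disc))
    return s_k + t
```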
Figure 3: Illustration of opacities \(\tau\) along a ray under the piecewise constant (green) and piecewise linear (orange) assumptions.
## 7 Results
In this section, we present our experimental evaluations to demonstrate the advantages our piecewise linear opacity formulation for volume rendering, which we call **PL-NeRF**.
### Datasets, Evaluation Metrics and Implementation Details.
**Datasets and Evaluation Metrics.** We evaluate our method on the standard datasets: the Blender and Real Forward Facing (LLFF) datasets as used in [18]. We use the released training and test splits for each. See supplementary for more details. For quantitative comparison, we follow the standard evaluation metrics and report PSNR, SSIM [32] and LPIPS [41] on unseen test views. We also report the root-mean-squared error (RMSE) on the expected ray termination in our depth experiments.
**Implementation Details.** **PL-NeRF** is implemented on top of NeRF-Pytorch [38], a reproducible Pytorch implementation of the constant (vanilla) NeRF, where we simply change the volume rendering to our formulation under piecewise linear opacity and utilize our exact importance sampling derivation. Similar to [18], we optimize separate networks for the coarse and fine models, jointly trained with the MSE loss on ground-truth images. We use a batch size of 1024 rays and a learning rate of \(5\times 10^{-4}\) that decays exponentially to \(5\times 10^{-5}\) throughout the course of optimization. We train each scene for 500k iterations, which takes \(\sim 21\) hours on a single Nvidia V100 GPU 3. Our precision importance sampling enables us to use fewer samples for the fine network; hence, keeping the total number of rendering samples the same, we use 128 coarse samples and 64 fine samples to train and test our method.
Footnote 3: We rerun and train the vanilla (constant) model using the released reproducible configs from [38].
### Experiments on Blender and LLFF Datasets
We first evaluate our **PL-NeRF** on the standard Blender and Real Forward Facing datasets. Table 1 shows that our **PL-NeRF** (linear) outperforms the vanilla [18] (constant) model, which assumes piecewise constant opacity, in all metrics for both the synthetic Blender and Real Forward Facing datasets. Figure 1 and Figure 4 show qualitative results. As shown, our **PL-NeRF** is able to achieve sharper
\begin{table}
\begin{tabular}{c c|c|c c c c c c c c}
 & **Blender** & Avg. & Chair & Drums & Ficus & Hotdog & Lego & Mat. & Mic & Ship \\ \hline
\multirow{2}{*}{PSNR\(\uparrow\)} & Const. (Vanilla) & 30.61 & 32.54 & 24.79 & 29.63 & 36.08 & 32.01 & 29.31 & 32.55 & 27.95 \\
 & Linear (Ours) & **31.10** & **32.92** & **25.07** & **30.18** & **36.46** & **32.90** & **29.52** & **33.08** & **28.71** \\ \hline
\multirow{2}{*}{SSIM\(\uparrow\)} & Const. (Vanilla) & 0.943 & 0.966 & 0.918 & 0.960 & 0.975 & 0.959 & 0.943 & 0.978 & 0.846 \\
 & Linear (Ours) & **0.948** & **0.969** & **0.923** & **0.965** & **0.977** & **0.966** & **0.948** & **0.981** & **0.857** \\ \hline
\multirow{2}{*}{LPIPS\(\downarrow\)} & Const. (Vanilla) & 5.17 & 3.19 & 7.97 & 4.14 & 2.48 & 2.33 & 4.32 & 2.16 & 14.8 \\
 & Linear (Ours) & **4.39** & **2.85** & **7.10** & **3.03** & **2.28** & **1.81** & **3.21** & **1.73** & **13.1** \\ \hline \hline
 & **LLFF** & Avg. & Fern & Flower & Fortress & Horns & Leaves & Orchid & Room & Trex \\ \hline
\multirow{2}{*}{PSNR\(\uparrow\)} & Const. (Vanilla) & 27.53 & 26.79 & 28.23 & 32.53 & 28.54 & 22.35 & 21.20 & 33.03 & 27.58 \\
 & Linear (Ours) & **28.05** & **26.85** & **28.71** & **32.95** & **29.38** & **22.51** & **21.25** & **33.99** & **28.79** \\ \hline
\multirow{2}{*}{SSIM\(\uparrow\)} & Const. (Vanilla) & 0.874 & 0.746 & 0.886 & 0.925 & 0.893 & 0.816 & 0.746 & 0.956 & 0.916 \\
 & Linear (Ours) & **0.885** & **0.863** & **0.902** & **0.932** & **0.911** & **0.826** & **0.754** & **0.961** & **0.933** \\ \hline
\multirow{2}{*}{LPIPS\(\downarrow\)} & Const. (Vanilla) & 7.37 & 9.67 & 6.34 & 2.92 & 7.26 & 11.0 & 11.8 & 4.33 & 5.66 \\
 & Linear (Ours) & **6.06** & **7.92** & **4.93** & **2.46** & **5.51** & **9.59** & **10.2** & **3.54** & **4.38** \\ \hline
\end{tabular}
\end{table}
Table 1: **Quantitative Results on Blender and LLFF Datasets. LPIPS scores \(\times 10^{2}\).**
Figure 4: **Qualitative Results for Blender and Real Forward Facing.**
textures, as shown in the Lego's scooper and the bread's surface in the hotdog. Moreover, our approach is also able to recover less fuzzy surfaces, as shown in the microphone scene (Figure 1), where training views are close to the grazing angle of its head. As illustrated, the resulting probability density of the ray corresponding to the marked pixel is peakier than for constant, as our precision importance sampling allows us to obtain better samples closer to the surface. We also see clearer ropes in the ship, a less cloudy interior of the drum, and more solid surfaces, such as a cleaner leg of the swivel chair in the room scene.
### Geometric Extraction
We also show that **PL-NeRF** improves geometric reconstruction. We extract the geometry from the learned density fields of the trained PL-NeRF and Vanilla NeRF models using marching cubes with a threshold of 25, following [30]. Figure 5 shows qualitative results on the reconstruction of our piecewise linear vs. the original piecewise constant formulation. As shown, we are able to better recover the holes on the body and wheels of the Lego scene as well as the interior structure inside the Mic. Moreover, interestingly, the surface of the drum is reconstructed to be transparent, as visually depicted in the images, as opposed to the opaque ground truth.
### Effectiveness of our formulation on other Radiance Field Methods
We also demonstrate our formulation's effectiveness on other radiance field methods and show that our approach can be used as a drop-in replacement for existing NeRF-based methods. We integrate our piecewise linear opacity formulation of the volume rendering integral into Mip-NeRF (**PL-MipNeRF**). Table 2 shows quantitative results demonstrating consistent improvement across all scenes in the original hemisphere Blender dataset. Figure 6 shows qualitative examples where, under difficult scenarios such as ray conflicts arising in the fine details of the Chair and grazing angle views of the Mic, our PL-MipNeRF shows significant improvement over the baseline. Our results show that our piecewise linear opacity and piecewise constant color formulation scales well to Mip-NeRF. See supplementary for implementation details. We also plug our
\begin{table}
\begin{tabular}{l l|c|c c c c c c c c} \hline \hline
 & **Blender** & Avg. & Chair & Drums & Ficus & Hotdog & Lego & Mat. & Mic & Ship \\ \hline
\multirow{2}{*}{PSNR\(\uparrow\)} & Mip-NeRF & 31.76 & 33.95 & 24.39 & 31.20 & 36.12 & 33.84 & 30.55 & 34.63 & 29.41 \\
 & PL-MipNeRF & **32.48** & **35.11** & **24.92** & **32.25** & **36.51** & **35.15** & **30.69** & **35.22** & **30.00** \\ \hline
\multirow{2}{*}{SSIM\(\uparrow\)} & Mip-NeRF & 0.955 & 0.975 & 0.921 & 0.971 & 0.978 & 0.971 & 0.957 & 0.987 & 0.876 \\
 & PL-MipNeRF & **0.959** & **0.981** & **0.928** & **0.977** & **0.980** & **0.976** & **0.959** & **0.989** & **0.882** \\ \hline
\multirow{2}{*}{LPIPS\(\downarrow\)} & Mip-NeRF & 3.64 & 1.80 & 6.82 & 2.35 & 1.97 & 1.44 & 2.39 & 0.973 & 11.4 \\
 & PL-MipNeRF & **3.09** & **1.32** & **5.78** & **1.66** & **1.67** & **1.07** & **2.09** & **0.788** & **10.3** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: **Quantitative Results of Mip-NeRF vs. PL-MipNeRF.** LPIPS scores \(\times 10^{2}\).
Figure 5: Geometry Extraction Qualitative Examples.
Figure 6: **Qualitative Results for Mip-NeRF vs PL-MipNeRF.** We see that under difficult scenarios such as in fine texture details of the chair and grazing angle views on the mic, PL-MipNeRF visually shows significant improvement.
piecewise linear opacity formulation into DIVeR [36], a voxel-based NeRF model, and show that our formulation is also an effective drop-in replacement outperforming the original DIVeR [36]. Please see supplementary for experiment results and implementation details.
### Experiments on Close-up Views
We further consider the challenging setting of testing on cameras closer to the objects. Table 3 (top) shows quantitative results when training on the original hemisphere dataset and testing on different close-up views. As shown by the drop in metrics, the difficulty increases as the camera moves closer to the object (0.75x to 0.5x to 0.25x of the original radius), where details become more apparent. Our **PL-NeRF** outperforms the vanilla piecewise constant model in all settings, and the gap (SSIM and LPIPS) between ours and the constant assumption increases as the setting becomes harder, highlighting the importance of recovering sharper texture and less fuzzy surfaces.
We also consider the set-up of training with cameras at different distances to the object, which results in different sets of ray samples and causes conflicts. We generate training views following the data processing pipeline from [18] with a random distance scaling factor sampled from \(U(0.5,1.0)\). As shown in Table 3 (bottom), our **PL-NeRF** outperforms the vanilla constant baseline in all metrics across different camera distances, where the gap (LPIPS and SSIM) is also larger the closer the camera is to the object, where details are more apparent. The difficulty for the constant (vanilla) case under multiple camera distances is its sensitivity to the choice of samples along the ray. Figure 2 shows that the conflicting rays cause quadrature instability under piecewise constant opacity, leading to unstable outputs, as shown by the noisy texture on the chair. For the constant model, the level of noise (gold specks) and blurriness vary at different camera distances, whereas our **PL-NeRF** renders crisper and more consistent outputs even as the camera is moved closer to or further from the object.
### Experiments with Depth Supervision
Finally, we also show that our **PL-NeRF** enables stronger depth supervision under our piecewise linear opacity assumption, due to our precision importance sampling that allows gradients to flow to these more refined samples, resulting in more accurate depth. As in previous works [28], we use a sample-based loss to incorporate depth supervision4. Table 4 shows quantitative results when training and testing on the less constrained Blender dataset with cameras at random distances from the object, as described in the previous section, with depth supervision. As shown, our **PL-NeRF** outperforms the vanilla constant baseline on all metrics including depth RMSE, demonstrating that our approach allows for stronger depth supervision.
Footnote 4: We use the original hyperparameters from [28] in this experiment.
## 8 Conclusion
We proposed a new way to approximate the volume rendering integral that avoids quadrature instability, by considering a piecewise linear approximation to opacity and a piecewise constant approximation to color. We showed that this results in a simple closed-form expression for the integral that is easy to evaluate. We turned this into a new objective for training NeRFs that is a drop-in replacement to existing methods and demonstrated improved rendering quality and geometric reconstruction, more accurate importance sampling and stronger depth supervision.
Table 4: **Depth Supervision.** Reported LPIPS score is multiplied by \(10^{2}\).

| Method | PSNR\(\uparrow\) | SSIM\(\uparrow\) | LPIPS\(\downarrow\) | RMSE\(\downarrow\) |
| --- | --- | --- | --- | --- |
| Const. (Vanilla) | 29.20 | 0.898 | 11.2 | 0.178 |
| **Linear (Ours)** | **29.54** | **0.905** | **10.4** | **0.147** |
Table 3: **Testing on close-up views.** _Hemisphere_: training cameras located on the original hemisphere. _Multi Dist._: training cameras at random distances across a depth scale range of \(0.5\)–\(1.0\) of the original hemisphere. Reported LPIPS score is multiplied by \(10^{2}\).

| Train Set | Method | Dist 0.25x PSNR\(\uparrow\) / SSIM\(\uparrow\) / LPIPS\(\downarrow\) | Dist 0.5x PSNR\(\uparrow\) / SSIM\(\uparrow\) / LPIPS\(\downarrow\) | Dist 0.75x PSNR\(\uparrow\) / SSIM\(\uparrow\) / LPIPS\(\downarrow\) |
| --- | --- | --- | --- | --- |
| Hemisphere | Const. (Vanilla) | 20.18 / 0.612 / 54.1 | 22.80 / 0.753 / 30.4 | 25.97 / 0.867 / 14.1 |
| Hemisphere | **Linear (Ours)** | **20.34** / **0.637** / **50.0** | **23.00** / **0.767** / **27.5** | **26.28** / **0.876** / **12.6** |
| Multi Dist. | Const. (Vanilla) | 22.30 / 0.677 / 45.7 | 25.51 / 0.811 / 23.1 | 28.02 / 0.891 / 11.3 |
| Multi Dist. | **Linear (Ours)** | **22.66** / **0.705** / **41.1** | **26.04** / **0.828** / **20.3** | **28.55** / **0.900** / **9.90** |
Acknowledgements. This work is supported by an Apple Scholars in AI/ML PhD Fellowship, a Snap Research Fellowship, a Vannevar Bush Faculty Fellowship, ARL grant W911NF-21-2-0104, a gift from the Adobe corporation, the Natural Sciences and Engineering Research Council of Canada (NSERC), the BC DRI Group and the Digital Research Alliance of Canada.
|
2309.04888 | Semi-supervised Instance Segmentation with a Learned Shape Prior | To date, most instance segmentation approaches are based on supervised
learning that requires a considerable amount of annotated object contours as
training ground truth. Here, we propose a framework that searches for the
target object based on a shape prior. The shape prior model is learned with a
variational autoencoder that requires only a very limited amount of training
data: In our experiments, a few dozen object shape patches from the target
dataset, as well as purely synthetic shapes, were sufficient to achieve results
on par with supervised methods with full access to training data on two out of
three cell segmentation datasets. Our method with a synthetic shape prior was
superior to pre-trained supervised models with access to limited
domain-specific training data on all three datasets. Since the learning of
prior models requires shape patches, whether real or synthetic data, we call
this framework semi-supervised learning. | Long Chen, Weiwen Zhang, Yuli Wu, Martin Strauch, Dorit Merhof | 2023-09-09T22:55:25Z | http://arxiv.org/abs/2309.04888v1 | # Semi-supervised Instance Segmentation with a Learned Shape Prior +
###### Abstract
To date, most instance segmentation approaches are based on supervised learning that requires a considerable amount of annotated object contours as training ground truth. Here, we propose a framework that searches for the target object based on a shape prior. The shape prior model is learned with a variational autoencoder that requires only a very limited amount of training data: In our experiments, a few dozen object shape patches from the target dataset, as well as purely synthetic shapes, were sufficient to achieve results on par with supervised methods with full access to training data on two out of three cell segmentation datasets. Our method with a synthetic shape prior was superior to pre-trained supervised models with access to limited domain-specific training data on all three datasets. Since the learning of prior models requires shape patches, whether real or synthetic data, we call this framework semi-supervised learning. The code is available to the public1.
Footnote 1: [https://github.com/looooongChen/shape_prior_seg](https://github.com/looooongChen/shape_prior_seg)
Keywords:Semi-supervised Instance segmentation Shape prior Variational autoencoder Edge loss
## 1 Introduction
Instance segmentation, where many instances of an object have to be segmented in one image, is the basis of several practically relevant applications of computer vision, such as cell tracking [1]. Many approaches [2, 3, 4] have been proposed for instance segmentation, the majority of which are based on supervised learning. The practical applicability of these methods is often limited by the lack of a large training dataset with manually outlined objects. Here, we introduce an instance segmentation approach that only relies on a shape prior which can be learned from a considerably smaller number of training samples or even synthetic data.
The shape is one of the most informative cues in object segmentation and detection tasks. Anatomically constrained neural networks (ACNNs) [5] improve segmentation results by including a shape prior for model regularization. For
segmentation refinement, a shape prior has been used by [6] as a separate post-processing step. Segmentations generated by the shape prior model are reconstructed to the original MRI images through several convolutional layers in [7]. By minimizing the reconstruction error, the segmentation model can be trained in an unsupervised fashion. All these works report promising results, but are limited to cases where object position and extent are roughly the same in all images, such as for the cardiac images in [5], the lung X-ray images in [6] and the brain MRI scans in [7]. To our knowledge, this is the first work considering instance segmentation based on a shape prior, i.e. we detect and segment multiple, scattered object instances. Similar to [8], we use the spatial transformer [9] to localize objects. The main advantage of using the spatial transformer lies in its differentiability, making the whole framework end-to-end trainable.
The main contributions of this work are: We propose (1) a semi-supervised instance segmentation approach that searches for target objects based on a shape prior, and (2) a novel loss computing the difference between two gradient maps. This framework provides a way to achieve instance segmentation with a small amount of manual annotations, or by utilizing unpaired annotations (where the correspondence between annotations and images is unknown). We compared our approach to the state-of-the-art supervised method, Mask R-CNN [2], in different training scenarios. On three experimental datasets, our approach is shown to be on par with a Mask R-CNN with full access to training data, while it outperforms a pre-trained Mask R-CNN with limited access to domain-specific training data.
## 2 Approach
As shown in Figure 1, our framework consists of three main parts: 1) the localization network, 2) the spatial transformer [9], and 3) the patch segmentation network. Based on the localization prediction, the spatial transformer crops local patches and feeds them to the patch segmentation network. The gradient maps of segmented patches are then stitched together. The entire model is trained by minimizing the reconstruction error of the gradient map.
During training, the model learns to predict the object position and to find the correspondence between the image patch and the segmentation. The shape prior model (gray part in Fig. 1; fixed during training) is guaranteed to output a plausible shape, but the correspondence has to be learned by the model itself.
### Localization network
The localization network consists of 8 convolutional layers, with a max pooling layer after every 2 convolutional layers (4 pooling layers in total). Given an image of size \((H_{img},W_{img})\), the localization network spatially divides the image into an \((H_{img}/S_{cell},W_{img}/S_{cell})\) grid of cells, where \(S_{cell}\) is the cell size and also the downsampling rate. Since 4 pooling layers with stride 2 are used, we have \(S_{cell}=16\).
Each cell is responsible for predicting the presence of an object \(L_{presence}\in[0,1]\), its range described by the bounding box size \((H_{obj},W_{obj})\), and the offset with respect to the cell center \((O_{x},O_{y})\) (Figure 2(a)), with the implementation:
\[\begin{aligned} L_{presence} &= \mathrm{sigmoid}(f_{presence})\\ L_{scale} &= \mathrm{sigmoid}(f_{scale})\cdot(S_{max}-S_{min})+S_{min}\\ L_{ratio} &= \exp\big(\tanh(f_{ratio})\cdot\log(R_{max})\big)\\ (L_{x},L_{y}) &= \big(0.5\cdot\tanh(f_{x}),\;0.5\cdot\tanh(f_{y})\big) \end{aligned}\]
where \(f_{[\cdot]}\) is the corresponding input feature map. \(sigmoid(\cdot)\) and \(tanh(\cdot)\) denote the sigmoid and tanh activation function. \(S_{min}\), \(S_{max}\) and \(R_{max}\) are hyperparameters, which are the minimal scale, the maximal scale and the maximal aspect ratio, respectively. The position is parameterized according to:
\[\begin{aligned} (H_{obj},W_{obj}) &= \big(L_{scale}\cdot S_{cell}/\sqrt{L_{ratio}},\; L_{scale}\cdot S_{cell}\cdot\sqrt{L_{ratio}}\big)\\ (O_{x},O_{y}) &= \big(L_{x}\cdot S_{cell},\; L_{y}\cdot S_{cell}\big) \end{aligned}\]
It is worth mentioning that the maximal offset is \(0.5\cdot S_{cell}\), which means that an object will be detected by the cell in which its center lies.
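To make the parameterization concrete, the following PyTorch sketch maps the raw feature maps to the localization outputs. The function name, tensor shapes, and the default hyperparameters (here the FLUO settings \(S_{min}=1\), \(S_{max}=2\), \(R_{max}=1.5\) from Section 3) are our own illustrative assumptions, not the authors' code:

```python
import math
import torch

def localization_head(f_presence, f_scale, f_ratio, f_x, f_y,
                      s_cell=16, s_min=1.0, s_max=2.0, r_max=1.5):
    # Each f_* is a (B, H/16, W/16) feature map from the localization network.
    l_presence = torch.sigmoid(f_presence)
    l_scale = torch.sigmoid(f_scale) * (s_max - s_min) + s_min
    l_ratio = torch.exp(torch.tanh(f_ratio) * math.log(r_max))
    l_x, l_y = 0.5 * torch.tanh(f_x), 0.5 * torch.tanh(f_y)

    # Bounding box size and offsets; the offsets are bounded by 0.5 * s_cell,
    # so an object is detected by the cell containing its center.
    h_obj = l_scale * s_cell / torch.sqrt(l_ratio)
    w_obj = l_scale * s_cell * torch.sqrt(l_ratio)
    o_x, o_y = l_x * s_cell, l_y * s_cell
    return l_presence, (h_obj, w_obj), (o_x, o_y)
```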
### Patch crop and stitch
Given the location parameters obtained from the localization network, we use a spatial transformer to crop local patches. The spatial transformer implements the crop by sampling transformed grid points, which is differentiable, enabling end-to-end training. The patch crop of the \(i\)-th cell can be described by the transform:
\[T^{i}_{crop}=\begin{bmatrix}W_{img}/W^{i}_{obj}&0&W_{img}\cdot(X^{i}_{cell}+O^ {i}_{y})/W^{i}_{obj}\\ 0&H_{img}/H^{i}_{obj}&H_{img}\cdot(Y^{i}_{cell}+O^{i}_{x})/H^{i}_{obj}\\ 0&0&1\end{bmatrix}\]
Figure 1: Architecture of our framework: the localization network predicts the object position and a presence score, based on which object patches are cropped by a spatial transformer. A variational autoencoder with the decoder part fixed (shape prior) is responsible for the patch segmentation. At last, the gradient maps of segmented patches are stitched together. The model is trained by minimizing the reconstruction loss of the gradient map with the KL-divergence loss as regularization.
where \((X^{i}_{cell},Y^{i}_{cell})\) is the cell center. \((O^{i}_{x},O^{i}_{y})\) and \((H^{i}_{obj},W^{i}_{obj})\) are the predicted offset and size of the object. All cropped patches are rescaled to size \(S_{patch}\times S_{patch}\) (\(S_{patch}=32\) in this work) and segmented by the patch segmentation network, as described in Section 2.3. After that, the gradient maps of the segmented objects are stitched together by adding up the back-transformed patches through:
\[T^{i}_{stitch}=\begin{bmatrix}W^{i}_{obj}/S_{patch}&0&X^{i}_{cell}+O^{i}_{y}\\ 0&H^{i}_{obj}/S_{patch}&Y^{i}_{cell}+O^{i}_{x}\\ 0&0&1\end{bmatrix}\]
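For clarity, the two affine transforms above can be assembled as plain \(3\times 3\) matrices. This sketch mirrors the formulas verbatim (including the pairing of \(O_{y}\) with the horizontal axis, as written in the paper) and is an illustration only; in practice these matrices would parameterize the spatial transformer's sampling grid:

```python
import numpy as np

def crop_transform(x_cell, y_cell, o_x, o_y, h_obj, w_obj, h_img, w_img):
    """T_crop for the i-th cell, following the formula above."""
    return np.array([
        [w_img / w_obj, 0.0, w_img * (x_cell + o_y) / w_obj],
        [0.0, h_img / h_obj, h_img * (y_cell + o_x) / h_obj],
        [0.0, 0.0, 1.0],
    ])

def stitch_transform(x_cell, y_cell, o_x, o_y, h_obj, w_obj, s_patch=32):
    """T_stitch mapping a segmented s_patch x s_patch patch back to the image."""
    return np.array([
        [w_obj / s_patch, 0.0, x_cell + o_y],
        [0.0, h_obj / s_patch, y_cell + o_x],
        [0.0, 0.0, 1.0],
    ])
```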
The gradient map is computed by applying the x- and y-directional Sobel filters to the image and taking the square root of the sum of squares. The gradient map is normalized to the range 0 to 1. In this work, we use an input size of \(256\times 256\) for all experiments. Considering \(S_{cell}=16\), 256 patches are cropped in total.
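A minimal sketch of this gradient-map computation, assuming single-channel image batches in PyTorch (the per-image max normalization is one straightforward reading of "normalized to the range 0 to 1"):

```python
import torch
import torch.nn.functional as F

def gradient_map(img):
    """Sobel gradient magnitude, normalized per image (img: B x 1 x H x W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # y-directional Sobel is the transpose
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    g = torch.sqrt(gx ** 2 + gy ** 2)
    return g / (g.amax(dim=(2, 3), keepdim=True) + 1e-8)
```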
### Shape prior and patch segmentation network
Similar to [5, 6, 7], we employ a variational autoencoder (VAE) as our shape model. As shown in Figure 2(b), the model is trained to reconstruct plausible patch segmentation masks with the KL-divergence loss as regularization. Compared to a standard autoencoder, a VAE learns a more continuous latent space, which is expected to generate plausible new shapes that do not appear in the training data.
In this work, the VAE is trained with \(32\times 32\) patches. The encoder and decoder consist of 6 convolutional layers and 3 pooling/upsampling layers, respectively. Based on our experiments, model training requires only a small amount of data, especially when the shape variation is small. We train the shape prior with either annotations from a single image or synthetic data (Section 3).
After training, the decoder part will be used as the shape prior in the detector (Figure 1). Its parameters will be fixed during the detector training. The encoder will be reinitialized and trained together with the localization network.
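The following compact PyTorch sketch illustrates the VAE structure for \(32\times 32\) patches. The paper's encoder and decoder have 6 convolutional layers each, so the layer counts, channel widths, and latent size here are simplified assumptions for illustration:

```python
import torch
import torch.nn as nn

class ShapeVAE(nn.Module):
    """Simplified shape-prior VAE for 32x32 binary shape patches."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Flatten())
        self.mu = nn.Linear(32 * 8 * 8, z_dim)
        self.logvar = nn.Linear(32 * 8 * 8, z_dim)
        # The decoder is what gets frozen and reused as the shape prior.
        self.dec = nn.Sequential(
            nn.Linear(z_dim, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(z)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl
```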
### Training
The model is trained end-to-end by minimizing the gradient map reconstruction error with the KL-divergence loss as regularization.
Figure 2: (a) Demonstration of parameters of a bounding box. (b) Architecture of the patch segmentation network, which is firstly trained with shape patches. During the detector training, the decoder part is fixed and plays the role of shape prior.
In initial experiments, we found the mean absolute/squared error (MAE/MSE) to be very unstable during training: the shape prior model tends to generate distorted shapes or degenerates into empty output. Thus, we propose the following novel loss:
\[L_{edge}=1-\frac{\frac{1}{N}\sum_{i}\min^{2}\big(G^{i}_{image},G^{i}_{reconstruction}\big)}{\frac{1}{N}\sum_{i}G^{i}_{reconstruction}+\alpha} \tag{1}\]
where \(G_{image}\) and \(G_{reconstruction}\) denote the gradient map of the image and the reconstructed gradient map, respectively. \(N\) is the number of pixels. The \(\min(\cdot)\) operation is conducted pixelwise. The parameter \(\alpha\) prevents the model from pushing \(G_{reconstruction}\) to zero and is set to 0.01 empirically.
Instead of optimizing the value of each pixel, as MSE and MAE do, this loss maximizes the proportion of the reconstructed gradient map lying under the image gradient map. In addition, the square operator in the numerator proved to be crucial for stable training in our experiments. Our interpretation is that the square operator modulates the back-propagated gradient with the reconstructed gradient map, giving more emphasis to positions around edges.
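Eq. (1) translates directly into a few lines of PyTorch. This sketch assumes the two gradient maps are tensors of the same shape, with values already normalized to [0, 1]:

```python
import torch

def edge_loss(g_image, g_recon, alpha=0.01):
    """Eq. (1): maximizes the fraction of the reconstructed gradient map
    under the image gradient map; squaring the min emphasizes edge pixels."""
    num = torch.minimum(g_image, g_recon).pow(2).mean()  # (1/N) sum min^2
    den = g_recon.mean() + alpha                         # (1/N) sum G_recon + alpha
    return 1.0 - num / den
```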
### Pre- and post-processing
To reduce the influence of extreme values on the loss, we equalized the image and the gradient map by clipping and stretching. For all datasets, we truncated the gradient map at 0.8 times the maximum and normalized the values to the range 0 to 1. In addition, we also performed image equalization for the Fluo-N2DH-SIM+ dataset due to the bright spots inside the cells (Figure 3). The clip value was set to 1.2 times the image mean.
As post-processing, we first filtered out predictions with \(L_{presence}\) smaller than 0.1. Non-max suppression is then performed to eliminate duplicate predictions: an instance mask is compared with another mask when the overlapping area is larger than \(p_{non\_max}=0.1\) with respect to its own area. A mask is only retained if its score is the highest in all comparisons.
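A sketch of this post-processing, assuming boolean NumPy instance masks and presence scores as plain lists (the thresholds are the ones stated above; the quadratic pairwise loop is a simplification):

```python
def mask_nms(masks, scores, p_presence=0.1, p_non_max=0.1):
    """Presence filtering followed by mask-level non-max suppression."""
    keep = [i for i, s in enumerate(scores) if s >= p_presence]
    kept = []
    for i in keep:
        best = True
        for j in keep:
            if i == j:
                continue
            # Overlap measured with respect to mask i's own area.
            overlap = (masks[i] & masks[j]).sum() / max(masks[i].sum(), 1)
            if overlap > p_non_max and scores[j] > scores[i]:
                best = False  # a higher-scoring overlapping mask exists
                break
        if best:
            kept.append(i)
    return kept
```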
## 3 Experiments and results
### Datasets and experiments
We evaluate our approach on three datasets: the BBBC006 dataset2 and two datasets Fluo-N2DH-SIM+ and PhC-C2DL-PSC from the cell tracking challenge [1]. In the following, we use BBBC, FLUO and PHC as abbreviations. The BBBC dataset contains 768 microscopic images of human U2OS cells, while the FLUO (HL60 cells with Hoechst staining) and PHC (pancreatic stem cells on a polystyrene substrate) datasets are smaller with 215 and 202 annotated images.
Footnote 2: [https://data.broadinstitute.org/bbbc](https://data.broadinstitute.org/bbbc)
For comparison, we also report the performance of the supervised method Mask R-CNN. The following experiments are performed:
**Ours-annotation:** We first evaluate our approach with the shape prior learned from manual annotations. We only took segmentation patches from one image. Specifically, 67, 8 and 138 object patch masks were used for the BBBC, FLUO and PHC shape model training, respectively. To model small shape changes and object rotation, we performed rotation (in steps of 30 degrees) and elastic deformation [11] to augment the training set. The scale range and maximal aspect ratio were set to 2-3/3, 1-2/1.5 and 1-2/3, respectively.
**Ours-synthetic:** Since the objects are approximately circular, especially for the BBBC and FLUO datasets, we could train the shape prior model with synthetic data consisting of elastically deformed ellipses [11] with random angle and major-minor axis ratio. The maximal major-minor axis ratio was 2, 1.5 and 3 for the BBBC, FLUO and PHC dataset, respectively.
**MRCNN-scratch-one/full:** We trained a Mask R-CNN from scratch using a ResNet-50 backbone. The anchor box scale, aspect ratio and non-maximum suppression (NMS) threshold were set to values equivalent to those used in our approach. Since the Ours-annotation scenario can be considered as one-image training, we also trained a Mask R-CNN with one image for comparison.
**MRCNN-finetune-one/full:** Since the datasets in our experiments are small, especially FLUO and PHC, we pretrained the Mask R-CNN on the MS COCO dataset3. Afterwards, we finetuned the model, with only the head layers trainable, on the actual target dataset.
Footnote 3: [https://cocodataset.org/](https://cocodataset.org/)
For the BBBC and PHC dataset, we cropped images to \(256\times 256\) and \(128\times 128\) for training and test. All images were resized to \(256\times 256\) for the network input. For the scenarios using one training image (Ours-annotation, MRCNN-scratch-one, MRCNN-finetune-one), the images a01_s1, 02/t000, 02/t150 were used for BBBC, FLUO and PHC, respectively. MRCNN-scratch-full and MRCNN-finetune-full used a01_s1-b24_s2, 02/t000-t149, 02/t150-t250 for training. Ours-synthetic requires no manual annotations. All remaining images were kept for testing.
### Results and discussion
We report the _average precision4_ (AP) over a range of IoU (intersection over union) thresholds from 0.3 to 0.9 as the evaluation score (Table 1). Our approach, including the evaluation scenarios where the shape prior is learned from one image annotation and synthetic data, outperforms the Mask R-CNN trained or finetuned with one image, which shows the advantage of our approach in cases where few or no annotations are available. Furthermore, our approach achieves comparable results with the Mask R-CNN trained/finetuned with the full training set on the BBBC and FLUO dataset, while the performance gap is apparent for the PHC dataset.
Footnote 4: [https://www.kaggle.com/c/data-science-bowl-2018](https://www.kaggle.com/c/data-science-bowl-2018)
While Mask R-CNN achieved the best mean AP (mAP) on the BBBC dataset, our approach outperformed Mask R-CNN on the FLUO dataset by a relatively large margin. The main reason is that the FLUO dataset is indeed a very small
one for Mask R-CNN training, even with finetuning. This again illustrates the advantage of our method on small datasets.
On the PHC dataset, neither method performed particularly well. Both methods tended to detect nearby objects as one if there was no clearly visible edge between them. The average precision of our method in the low IoU range was close to or better than that of Mask R-CNN. Figure 3 shows that our method could detect most objects as well as the Mask R-CNN. However, our method has been designed to rely heavily on the edge cue, so that the segmentation will converge to strong edges. For the PHC dataset, the object boundaries do not generally correspond to the strongest edges. This explains why objects were undersegmented by our approach (Figure 3) and why the average precision decreased rapidly with increasing IoU (Table 1).
The performance improvement through training the shape prior with manually outlined shapes depends on the nature of the shape. On the FLUO dataset, annotated data and synthetic data shape priors performed almost equally well, while training with manual annotations was superior on the other two datasets, even though only a few dozen shapes were used.
## 4 Conclusion and outlook
We have proposed an instance segmentation framework which searches for target objects in images based on a shape prior model. In practice, this allows segmenting instances with a very limited amount of annotations, segmenting synthesizable shapes without any annotation, as well as reusing object annotations from other datasets.
Figure 3: Qualitative results: from top to bottom, the rows show the results on the BBBC006, Fluo-N2DH-SIM+ and PhC-C2DL-PSC datasets, respectively.
The main limitation of our approach lies in its dependency on edge cues. Images should have a relatively clear background, which is, however, the case for many biomedical datasets [4]. Future work will focus on including area-based information, which will make our approach applicable to further datasets, e.g. in cases where edges and object boundaries do not always coincide.
|
2309.16254 | On the Challenges of Fully Incremental Neural Dependency Parsing | Since the popularization of BiLSTMs and Transformer-based bidirectional
encoders, state-of-the-art syntactic parsers have lacked incrementality,
requiring access to the whole sentence and deviating from human language
processing. This paper explores whether fully incremental dependency parsing
with modern architectures can be competitive. We build parsers combining
strictly left-to-right neural encoders with fully incremental sequence-labeling
and transition-based decoders. The results show that fully incremental parsing
with modern architectures considerably lags behind bidirectional parsing,
noting the challenges of psycholinguistically plausible parsing. | Ana Ezquerro, Carlos Gómez-Rodríguez, David Vilares | 2023-09-28T08:44:08Z | http://arxiv.org/abs/2309.16254v1 | # On the Challenges of Fully Incremental Neural Dependency Parsing
###### Abstract
Since the popularization of BiLSTMs and Transformer-based bidirectional encoders, state-of-the-art syntactic parsers have lacked incrementality, requiring access to the whole sentence and deviating from human language processing. This paper explores whether fully incremental dependency parsing with modern architectures can be competitive. We build parsers combining strictly left-to-right neural encoders with fully incremental sequence-labeling and transition-based decoders. The results show that fully incremental parsing with modern architectures considerably lags behind bidirectional parsing, noting the challenges of psycholinguistically plausible parsing.
## 1 Introduction
Human understanding of natural language is widely agreed to be _incremental_: humans do not need to read a complete sentence to start understanding it. Instead, we update partial interpretations as we receive more input (Marslen-Wilson, 1985).
While the exact way in which this incrementality works is still unclear (Kitaev et al., 2022), its presence implies that some form of incrementality is an obvious necessary condition for a parser to be psycholinguistically plausible as a model of human processing (Miller and Schuler, 2010). Since human processing is the gold standard for automatic parsing, we know that it should be possible to achieve accurate parsing with incremental systems. Yet, in recent years, none of the competitive syntactic parsers that have been proposed for either of the main syntactic formalisms can be said to be incremental, even under the loosest possible definitions of the term. This poses challenges at the intersection between syntax and computational psycholinguistics, e.g., use cases both for modeling of human parsing and for real-time settings where one wants partial results without waiting for a sentence to end. Currently, most parsers use bidirectional encoders, such as BiLSTMs (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) or Transformers (Zhou and Zhao, 2019; Mrini et al., 2020; Yang and Deng, 2020), so the whole sentence is being used before even processing the first word. An exception is the constituent parser by Kitaev et al. (2022), who use a fully incremental encoder, but the rest of the model is bidirectional, as it uses Transformer layers and a CYK-like, non-left-to-right span-based decoder (Stern et al., 2017).
This paper explores the viability of _fully incremental_ dependency parsing, i.e., parsers where all the components (from the encoder to the decoder) work _strictly_ from left to right. To our knowledge, this is the first attempt to build fully incremental dependency parsers with modern deep learning architectures.
## 2 Incrementality in Parsing
**In transition-based parsing.** Transition-based parsing has traditionally been linked to incrementality (Nivre, 2008), as it works from left to right and builds partial outputs. Some authors consider that transition-based parsers as a whole are incremental, as they have internal states with partial outputs (Eisape et al., 2022). We will call this criterion _weak incrementality_. Others exclude algorithms like the arc-standard dependency parser, where dependencies are not built in left-to-right order and input arbitrarily far in the future might be needed to build right-branching dependencies (Christiansen and Chater, 2016). We will call this stricter view _strong incrementality_, and formalize it as follows: given a monotonic parser (i.e., one where each partial parse is a superset of the previous), we say that it is _strongly incremental with delay \(k\)_ if every possible partial parse for a prefix \(w_{1}\ldots w_{i-k}\) can be built upon reading the prefix \(w_{1}\ldots w_{i}\), without the parser having accessed the rest of the input. Analogous considerations about the limitations with right-branching in weak incrementality, and parsers that try to avoid it to various extents, have been studied in the CCG literature (Ambati et al., 2015; Stanojevic and Steedman, 2019, 2020; Stanojevic et al., 2021). Contrary to arc-standard, other transition-based dependency parsers are strongly incremental: the arc-eager (Nivre, 2003), Covington (Covington, 2001) or multiplanar (Gomez-Rodriguez and Nivre, 2013) parsers all fit our definition above.
Classic implementations of strongly incremental parsers typically have positive delay Beuck et al. (2011) between input and output due to lookahead. Some approaches have considered zero delay, albeit with weaker performance Kohn and Menzel (2013). Solutions are also available for speculativity in incremental parsing Kitaev et al. (2022), by introducing non-monotonicity Honnibal et al. (2013); Fernandez-Gonzalez and Gomez-Rodriguez (2017).
Thus, the paradigm supports incrementality, and many implementations of these parsers from the pre-deep-learning era, which did not use contextualized encoders, were strongly incremental, leading to the observation by Gomez-Rodriguez (2016) that, at that point, some state-of-the-art parsing models were converging with psycholinguistically plausible models. However, in recent years bidirectional encoders have become ubiquitous, ruling out even weak incrementality from recent implementations of transition-based parsers, be it for dependency or other grammatical formalisms (Kiperwasser and Goldberg, 2016; Stanojevic and Steedman, 2019; Fernandez Astudillo et al., 2020; Fernandez-Gonzalez and Gomez-Rodriguez, 2023). In this respect, it is worth mentioning that the approach by Yang and Deng (2020) is described as "strongly incremental constituency parsing", but this refers to the decoder, as they use a bidirectional encoder. The only recent proposal we are aware of that aims for incrementality in the whole system is the CCG parser by Stanojevic and Steedman (2020), also a constituency parser, but its labelled F-score is over 7 points lower than a non-incremental baseline in an English-only evaluation.
**In label-based parsing.** Other parsing paradigms that lend themselves to incrementality, as they could work from left to right, are seq2seq parsing (Vinyals et al., 2015) and sequence-labeling parsing (Gomez-Rodriguez and Vilares, 2018; Strzyz et al., 2019). However, for the former, we are not aware of any implementation without bidirectional encoders. For the latter, while there are strongly incremental sequence-labeling decoders for both constituency (Gomez-Rodriguez and Vilares, 2018) and dependency (Strzyz et al., 2020), most implementations use bidirectional encoders as well. The exception are some experiments with feed-forward encoders in Gomez-Rodriguez and Vilares (2018), using a sliding window to model near-future context (and thus, with delay). Yet, their F-score is 14 points below their non-incremental counterparts in the same paper, and almost 20 below the overall state of the art.
## 3 Incremental models
The research question arises whether it is possible to have competitive incremental dependency parsers in the neural era. We take the first step and test how mainstream approaches would work in a setting of strong incrementality. In our work, we will focus on models with _strictly_ zero delay, but we also evaluate less strict setups, in particular with delays 1 and 2. To do so, we will rely on modern encoder-decoder models. All source code is available on GitHub ([https://github.com/anaezquerro/incpar](https://github.com/anaezquerro/incpar)).
### Incremental encoders
Let \(w=[w_{1},w_{2},...w_{|w|}]\) (with \(w_{i}\in\mathcal{V}\)) be an input sentence. An encoder can be seen as a parameterized function \(\Omega_{\theta,|w|}:\mathcal{V}^{|w|}\rightarrow\mathcal{H}^{|w|}\), where \(\mathcal{V}\) is the input vocabulary space, and \(\mathcal{H}\subseteq\mathbb{R}^{N}\) is the hidden representational space where each \(w_{i}\) is projected. In this work we are particularly interested in incremental encoders, i.e., those where, given a token \(w_{i}\), the computation of its projected representation \(h_{i}\) only needs the sub-sequence \(w_{[1:i]}\). We consider different encoders for this purpose: (i) **4 stacked left-to-right LSTMs** (Hochreiter and Schmidhuber, 1997), where the input is a concatenation of a word vector, a PoS tag vector (random init) and the output of a char-level unidirectional LSTM; (ii) **BLOOM** (Scao et al., 2022) (due to resource constraints, we run the smallest version with 560M parameters); and (iii) **mGPT** (Shliazhko et al., 2022).
As _control_ encoders (upper bound baselines), we use non-incremental encoders: (i) bidirectional
LSTMs (same setup as for left-to-right LSTMs), and (ii) XLM-RoBERTa (Conneau et al., 2020).
### Incremental decoders
We consider incremental (i) sequence labeling parsing, and (ii) transition-based parsing decoders.
#### 3.2.1 Sequence labeling decoders
A sequence labeling decoder is a parametrized function \(\Phi_{\theta,|w|}\): \(\mathcal{H}^{|w|}\rightarrow\mathcal{L}^{|w|}\), which maps each hidden vector (\(h_{i}\in\mathcal{H}\)) outputted by a generic encoder into an output label \(l_{i}\in\mathcal{L}\) that represents a part of the output parse. As the decoder, we use a 1-layered feed-forward network and a softmax. As for label encodings, we select representatives from two encoding families (Strzyz et al., 2019, 2020):
**Head-based** We study three variants, all of them supporting non-projective trees. First, the absolute-indexing encoding (abs-idx), where each token's label is the index of its head. Second, the relative-indexing encoding (rel-idx), where the label is the difference between the head and dependent indexes. Third, the PoS-tag-based encoding (PoS-idx), where each label is encoded as an offset indicating that the \(n\)th word to its left/right with a given PoS tag is the head.2
Footnote 2: The PoS-tag based encoding needs PoS tags for decoding the sequence of labels to a tree. Instead of introducing PoS-tag information in those models, our PoS-tag-based decoders predict in multitask learning both the syntactic label and PoS tag associated to each word, in order to remove bias with respect to other encodings.
**Strings of brackets** First, we consider the 1-planar bracketing encoding (1p), where the label for each token is represented using a string of brackets, with each arc represented by a bracket pair. This encoding can only model crossing arcs in opposite directions. To tackle this, there is a 2-planar variant (2p), analogous, but defining a second plane of brackets.
In the context of full incrementality, we will say that an encoding is _forward-looking_ if a label for a token \(w_{i}\) can refer to some token to the right of \(w_{i}\). The abs-idx, rel-idx and PoS-idx encodings are forward-looking (e.g., with abs-idx, the word \(w_{2}\) could have \(4\) as its label, meaning that its head is \(w_{4}\), which has not been read yet); while the bracketing encodings are _not_ forward-looking. Forward-lookingness does not break incrementality: all the considered encodings are still strongly incremental with delay 0 (all dependencies involving \(w_{1}\ldots w_{i}\) can be retrieved from the labels \(l_{1}\ldots l_{i}\)). However, one could expect forward-looking encodings to suffer more from using incremental encoders, due to needing to make decisions involving future words that the system cannot yet access.
In our implementation, for models with delay zero, the \(i\)th label is predicted directly from \(h_{i}\). For models with delay \(k>0\), labels are predicted from a concatenated representation \(h_{i}\cdot...\cdot h_{i+k-1}\cdot h_{i+k}\).
It is also worth noting that obtaining the tree encoded by a sequence labeling encoding can require simple postprocessing heuristics (e.g. to remove cycles in head-selection encodings). This does not break incrementality, as these heuristics are applicable to partial outputs as well.
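To make the head-based encodings concrete, a minimal sketch of abs-idx and rel-idx label extraction from a head array (index 0 denotes the artificial root; the example sentence is our own illustration):

```python
def absolute_labels(heads):
    """abs-idx: the label of each token is the index of its head."""
    return [str(h) for h in heads]

def relative_labels(heads):
    """rel-idx: the label is the signed offset from dependent to head."""
    return [str(h - i) for i, h in enumerate(heads, start=1)]

# Example: "She read a book", with "read" as the root.
heads = [2, 0, 4, 2]
print(absolute_labels(heads))  # ['2', '0', '4', '2']
print(relative_labels(heads))  # ['1', '-2', '1', '-2']
```

Note how the label '1' for "a" refers to a word not yet read, which is exactly what makes these encodings forward-looking.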
#### 3.2.2 Transition-based decoders
A transition-based decoder is defined as a tuple (\(C\), \(T\), \(c_{s}\), \(C_{t}\)), where \(C\) is a set of configurations (or parsing states) with associated partial parses, \(T\) a set of transitions between states, and \(c_{s}\) and \(C_{t}\) are the initial state and the set of valid final states, respectively. In the case of the arc-eager parser (Nivre, 2008), states are triplets of the form (\(\sigma\),\(\beta\),\(A\)) where \(\sigma\) is a stack of partially processed words, \(\beta\) a buffer of remaining words3 which always takes the form \(\beta_{i}=w_{i}\ldots w_{|w|}\) for some \(i\), and \(A\) is the partial parse at that state. This parser is strongly incremental, as the way in which the algorithm constructs dependencies (in a strictly left-to-right manner) means that a configuration with buffer \(\beta_{i}\) can hold every possible partial parse for the prefix \(w_{1}\ldots w_{i}\). The parser's delay depends on the number of buffer words used as lookahead features in the implementation. In our case, this is only one (we only use the first stack word and the first buffer word), so we can obtain partial parses for \(w_{1}\ldots w_{i}\) accessing only \(w_{1}\ldots w_{i}\); hence the delay is 0. As with sequence-labeling decoders, for models with delay \(k>0\), we access a concatenated vector of the form \(w_{i}\cdot...\cdot w_{i+k-1}\cdot w_{i+k}\). For the prediction of transitions, we again use a 1-layered feed-forward network.
Footnote 3: Buffer words are often described as “unread” words when describing the algorithm, but for incrementality purposes we need to count them as “accessed” if they are used as features, as the parser implementation is using them for prediction.
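A skeleton of the arc-eager transition system is shown below, with the transition choice delegated to an external classifier. Transition preconditions (e.g., that the stack top already has a head before REDUCE) are omitted for brevity, so this is an illustration of the control flow rather than a full implementation:

```python
def arc_eager_parse(n_words, next_transition):
    """Arc-eager decoding (Nivre, 2008). Delay is 0 when next_transition
    only inspects the stack top and the first buffer word.

    next_transition(stack, buffer, arcs) must return one of
    'SHIFT', 'LEFT-ARC', 'RIGHT-ARC', 'REDUCE'.
    """
    stack, buffer, arcs = [0], list(range(1, n_words + 1)), set()
    while buffer:
        t = next_transition(stack, buffer, arcs)
        if t == 'SHIFT':
            stack.append(buffer.pop(0))
        elif t == 'LEFT-ARC':            # head = buffer front, dep = stack top
            arcs.add((buffer[0], stack.pop()))
        elif t == 'RIGHT-ARC':           # head = stack top, dep = buffer front
            arcs.add((stack[-1], buffer[0]))
            stack.append(buffer.pop(0))
        else:                            # REDUCE: pop a word that has its head
            stack.pop()
    return arcs
```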
## 4 Experiments
We choose 12 diverse treebanks from UD 2.11 (Nivre et al., 2020), supported by the tested LLMs. We test all possible combinations of encoders and decoders. As a well-known baseline, we use the biaffine (DM; Dozat and Manning, 2017) parser
in SuPar. We use unlabelled attachment score (UAS) for evaluation. Labelled (LAS) results and individual treebank results are in Appendix A.
Table 1 shows an aggregated summary of the results with strict delay zero. It shows that fully incremental models considerably lag behind their counterparts with bidirectional (control) encoders: the best fully incremental model for each language is 11.2 UAS points behind on average (sd: 5.0) with respect to the corresponding best control model. There is large inter-linguistic variability, Telugu (teMTG) being especially amenable to incremental processing, 5.3 UAS points behind, and the opposite for Chinese (zhGSD), 23.1 points behind. Our incremental-decoder-only models with LLMs as encoders are competitive against the BiLSTM-based version of the baseline (BiLSTM encoder, biaffine decoder), surpassing it on 7 out of 12 languages. However, they are a few points behind with respect to a version of the biaffine parser using RoBERTa encodings (which can be taken as a state-of-the-art system), consistent with existing comparisons of sequence-labeling parsers and biaffine parsers Anderson and Gomez-Rodriguez (2021). Put together, this seems to suggest that the challenge of incrementality falls mostly on the encoding side. If we focus on comparing different strongly incremental models we see that, as expected, forward-looking encodings suffer greatly from incremental encoders.
Table 2 compares the results from Table 1 against the corresponding models using delays 1 and 2. Improvements are consistent across the board. Interestingly, moving from delay 0 to 1 already shows a clear and large increase in robustness, especially for forward-looking encodings: the average gap between these and non-forward-looking encodings goes from over 10 points with delay 0 to nonexistent with delay 1, although considerable gaps remain in some languages like Chinese (zhGSD) or English (enEWT).
Finally, Figure 1 complements Table 2 with an analysis of the F-score with respect to dependency displacement (signed distance) for English and Chinese, chosen because they yielded the largest improvements when using positive delay. In particular, the figure shows that the lower performance of delay zero models is mainly due to poor performance of forward-looking encodings on leftward dependencies (right half of figure), and that a small positive delay already translates into clear improvements, even for long-distance dependencies.
## 5 Conclusion
We evaluated modern neural NLP architectures for incremental dependency parsing across multiple languages, using various encoders and decoders. We have found that said architectures are not adequate to model incrementality, at least in the absence of specific adaptations. Strongly incremental models with no delay yield accuracies about 10 points below competitive non-incremental baselines. While this gap narrows when adding a 2-word lookahead, it is still about 5 points, contrasting with the situation in pre-deep-learning times, when incremental parsers were competitive (cf. Zhang and Nivre, 2011). The results suggest that much of the accuracy improvements in parsing obtained in recent years hinge on bidirectionality, deviating from human processing.

[Table 1: aggregated UAS summary over the 12 treebanks, comparing fully incremental models and incremental-decoder-only models (columns **f**: forward-looking, **non-f**: non-forward-looking, **tb**: transition-based decoders) against the DM baseline; the tabular content was lost in extraction.]
Accurate incremental parsing should in theory be possible (as the human example shows). Incremental processing is useful both for practical applications (Kohn, 2019), especially those involving real-time speech (Coman et al., 2019; Ekstedt and Skantze, 2021), as well as for cognitive modeling (Demberg and Keller, 2019; Stanojevic et al., 2021). Thus, we believe that designing architectures that work well in a strongly incremental setting is an important open challenge in NLP. In this respect, techniques like using tentative predictions of future words made by autoregressive language models as a substitute for delay (Madureira and Schlangen, 2020) might be helpful. It is also conceivable that accuracy losses might not be solvable by better unidirectional scoring systems, and thus alternatives such as better search or methods that revise earlier decisions are also worth exploring.
### Limitations
**Limited physical resources.** We have no access to large computing infrastructures or a budget to scale services in the cloud. We had access to a few internal servers, for a total of 6 NVIDIA GeForce RTX 3090 GPUs (24GB each), and we temporarily also obtained access to an NVIDIA A100 GPU (80GB), as well as a workstation for quick debugging. This restricts the number and size of models that we can try. In particular, we could train in reasonable amounts of time the smallest BLOOM language model (560M parameters). It was possible for us to fit up to the 3B version on the A100 GPU with a minimal batch size, but the amount of time that it took to train a model made it unfeasible to carry out a multilingual study like the one proposed in this work. Still, preliminary results showed that these larger BLOOM models did not contribute to significantly improving the performance. In this respect, we know that scaling _a lot_ can play an important role, and that the standard BLOOM model is the 176B version. However, a model of that size is completely out of our economic and computing resources. Yet, we feel our study with smaller models is equally, or even more, relevant, since it represents effectively the resources at hand for most companies and academic institutions.
**Delay parameter.** Incremental parsers have a _delay_ parameter that models how far beyond word \(i\) the parser can access to generate a partial parse for \(w_{1}\dots w_{i}\). For our main experiment we set the delay to 0, although we also provide results with delays \(1\) and \(2\). If we aim for psycholinguistic plausibility, there cannot be a single one-size-fits-all value for the delay, as the time taken by humans to parse linguistic input can be influenced by various factors like language, word length, reading/speaking speed, language proficiency, etc.; so any choice of delay is necessarily a simplification. However, evidence seems to point to human parsing generally being very fast, with latencies in the range of 100-250 ms (see for example Pulvermuller et al. (2009); Bemis and Pylkkanen (2011)). Hence our choice of delay 0 as the safest option, and we also present experiments with \(1\) and \(2\) to show what happens when a small lag between the input and the parse is accounted for.
**Scope of definition.** Our definition of strong incrementality only applies to monotonic parsers. This is a deliberate choice: if we allowed
Figure 1: Displacement performance (English above, Chinese below) for the (fully incremental) models specified in Table 1 with delay zero (solid lines) and two (dashed lines). Different symbols and colors denote forward-looking, non-forward-looking, and transition-based decoders. DM\({}^{\otimes}\) performance is included with gray dotted lines.
non-monotonicity (i.e., removing or modifying dependencies from previous partial parses), then the definition would allow for a hypothetical parser that removes all partial output upon reading the last word and replaces it with a brand new parse generated with access to the whole sentence, which would be incremental in name only and render any comparison between incremental and non-incremental parsers moot.
While there might be alternative ways to restrict the definition to avoid this problem (e.g. restrict each step to be \(O(1)\)), these would come with their own limitations (e.g., excluding neural architectures where obtaining each word's vector representation is \(O(n)\), or transition-based parsers with quadratic complexities). Thus, we believe that our definition is a good compromise for our purposes, as it is simple, unambiguous and implementation-independent within the realm of monotonic parsing.
Comparing non-monotonic parsers is a different undertaking as it not only would require a different definition of incrementality, but also evaluation metrics focused on partial parse accuracy rather than final LAS/UAS. But that is orthogonal to comparing incremental to non-incremental parsers (as partial parse accuracy is not even well-defined for some non-incremental parsers that do not have intermediate states) and lies outside the scope of this paper.
**Differences in incremental processing between humans and machines.** Currently, despite research efforts, a comprehensive understanding of why humans excel at incremental processing compared to machines remains elusive. This issue also constrains our options for analysis. In this regard, the proficiency of humans at incremental language processing likely stems from adaptation in the context of cognitive constraints, having to understand real-time input with limited working memory, which forces eager processing (see e.g. Christiansen and Chater 2016). From a different perspective, Wilcox et al. (2021) showed that both humans and models exhibit increased processing difficulty in ungrammatical sentences. However, language models consistently underestimate the magnitude of this difficulty compared to humans, particularly in predicting longer reaction times for syntactic violations.
## Acknowledgments
We acknowledge the European Research Council (ERC), which has funded this research under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), Catedra CICAS (Singular, University of A Coruna), and Centro de Investigacion de Galicia "CITIC", funded by the Xunta de Galicia through the collaboration agreement between the Conselleria de Cultura, Educacion, Formacion Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS).
|
2309.06047 | Real-Time Semantic Segmentation: A Brief Survey & Comparative Study in
Remote Sensing | Real-time semantic segmentation of remote sensing imagery is a challenging
task that requires a trade-off between effectiveness and efficiency. It has
many applications including tracking forest fires, detecting changes in land
use and land cover, crop health monitoring, and so on. With the success of
efficient deep learning methods (i.e., efficient deep neural networks) for
real-time semantic segmentation in computer vision, researchers have adopted
these efficient deep neural networks in remote sensing image analysis. This
paper begins with a summary of the fundamental compression methods for
designing efficient deep neural networks and provides a brief but comprehensive
survey, outlining the recent developments in real-time semantic segmentation of
remote sensing imagery. We examine several seminal efficient deep learning
methods, placing them in a taxonomy based on the network architecture design
approach. Furthermore, we evaluate the quality and efficiency of some existing
efficient deep neural networks on a publicly available remote sensing semantic
segmentation benchmark dataset, the OpenEarthMap. The experimental results of
an extensive comparative study demonstrate that most of the existing efficient
deep neural networks have good segmentation quality, but they suffer low
inference speed (i.e., high latency rate), which may limit their capability of
deployment in real-time applications of remote sensing image segmentation. We
provide some insights into the current trend and future research directions for
real-time semantic segmentation of remote sensing imagery. | Clifford Broni-Bediako, Junshi Xia, Naoto Yokoya | 2023-09-12T08:30:48Z | http://arxiv.org/abs/2309.06047v1 | # Real-Time Semantic Segmentation: A Brief Survey & Comparative Study in Remote Sensing
###### Abstract
Real-time semantic segmentation of remote sensing imagery is a challenging task that requires a trade-off between effectiveness and efficiency. It has many applications including tracking forest fires, detecting changes in land use and land cover, crop health monitoring, and so on. With the success of efficient deep learning methods (i.e., efficient deep neural networks) for real-time semantic segmentation in computer vision, researchers have adopted these efficient deep neural networks in remote sensing image analysis. This paper begins with a summary of the fundamental compression methods for designing efficient deep neural networks and provides a brief but comprehensive survey, outlining the recent developments in real-time semantic segmentation of remote sensing imagery. We examine several seminal efficient deep learning methods, placing them in a taxonomy based on the network architecture design approach. Furthermore, we evaluate the quality and efficiency of some existing efficient deep neural networks on a publicly available remote sensing semantic segmentation benchmark dataset, the OpenEarthMap. The experimental results of an extensive comparative study demonstrate that most of the existing efficient deep neural networks have good segmentation quality, but they suffer low inference speed (i.e., high latency rate), which may limit their capability of deployment in real-time applications of remote sensing image segmentation. We provide some insights into the current trend and future research directions for real-time semantic segmentation of remote sensing imagery.
real-time semantic segmentation, efficient deep neural networks, remote sensing image analysis
## I Introduction
Semantic segmentation is a problem of labelling each pixel in an image with a class label to partition the image into semantically meaningful segments. It is a pixel-level classification problem, as compared to image classification, which assigns a class label to the entire image [1]. In recent years, efficient deep neural networks (DNNs) for real-time semantic segmentation have gained a prominent position in the computer vision community [2, 3, 4, 5, 6]. Efficient DNNs are neural networks with low computational footprints and inference time [7]. The success of these methods has made a considerable impact on real-time applications in the fields of autonomous vehicles [8, 9], robot vision [10, 11], and medical image analysis [12, 13], which require high-end computer vision systems. This promising progress of efficient DNNs in real-time applications of image semantic segmentation has sparked an interest in this topic in the remote sensing community [14, 15], because many real-world applications of remote sensing image semantic segmentation, such as flood detection [16], burned area detection [17], monitoring of weeds in farmlands [18], and so on, are required to operate in real time and, more appropriately, on resource-constrained devices. Nevertheless, most of the state-of-the-art DNNs for semantic segmentation of remote sensing imagery require high-powered general-purpose machines (e.g., GPUs) [19, 20], which greatly limits their applications in real time in a resource-constrained environment. Thus, it is important to improve the design of DNNs to achieve efficient networks that can enable the development of remote sensing image semantic segmentation methods for real-time applications.
In the literature, most of the surveys and reviews published on image semantic segmentation in the remote sensing community [19, 20, 21] (or in computer vision [22, 23, 24, 25]) are about deep learning methods in general. Yuan _et al._ [19] and Jiang _et al._ [20] reviewed deep learning methods (such as fully convolutional networks [26, 27], feature pyramid networks [28], encoder-decoder networks [29, 30], etc.) for semantic segmentation of remote sensing imagery. Neupane _et al._ [31] performed a meta-analysis to review and analyze DNN-based semantic segmentation methods in remote sensing applications.
Fig. 1: The performance comparison of some existing efficient deep neural networks for real-time semantic segmentation on the OpenEarthMap benchmark. The bubble size denotes the number of parameters (Param). The FLOPs and FPS (frames per second) were computed on 1024x1024 RGB input data. The inference speed was computed on a single NVIDIA Tesla P100 (DGX-1) with 16 GB memory based on an average runtime of 300 iterations with 10 iterations warm-up.
In addition, Guo _et al._ [32] summarized DNN methods for semantic segmentation of remote sensing optical imagery and categorized them into region-based, fully convolutional-based, and weakly supervised semantic segmentation methods. Recently, Gama _et al._ [33] and Catalano _et al._ [34] have reviewed meta-learning and few-shot learning methods [35, 36, 37, 38] for semantic segmentation. Very little work has sought to examine efficient DNN methods for real-time semantic segmentation [9, 39, 40]. The work in [39] and [40] reviewed real-time semantic segmentation in computer vision in general, while [9] examined the design of efficient DNNs for real-time semantic segmentation in the field of autonomous vehicles. Remote sensing applications require specialized algorithms and techniques to process large volumes of data collected by sensors capturing the earth's surface images at different wavelengths. Variability in environmental conditions poses a challenge for extracting meaningful information from the data. Integrating data from multiple sources can also be challenging, making remote sensing applications unique compared to other fields like autonomous driving. To the best of our knowledge, no work in the literature has examined efficient DNN methods for real-time semantic segmentation towards real-time applications in remote sensing image understanding. To this end, this paper aims to address this gap by summarising most of the state-of-the-art efficient DNN methods in the literature that have been instrumental in real-time semantic segmentation of remote sensing imagery. Furthermore, to enable practitioners and researchers in the remote sensing community to adopt the best efficient DNNs for real-time applications of remote sensing image semantic segmentation, we present a comparative study of several established efficient deep neural networks for real-time semantic segmentation on the OpenEarthMap [41] remote sensing image semantic segmentation benchmark. Fig. 1 shows the performance in terms of accuracy (mIoU), inference speed (FPS), the number of floating-point operations (FLOPs), and the number of parameters (Params) of the models used in the comparative study on the OpenEarthMap benchmark.
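A sketch of the inference-speed measurement protocol stated in the caption of Fig. 1 (an average over 300 timed iterations after a 10-iteration warm-up on a 1024×1024 RGB input) is shown below; the timing utilities and function name are our own illustrative choices, assuming a CUDA device:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, iters=300, warmup=10, size=(1, 3, 1024, 1024)):
    """FPS = timed iterations / elapsed seconds, batch size 1."""
    x = torch.randn(size, device='cuda')
    model = model.cuda().eval()
    for _ in range(warmup):           # warm-up iterations (not timed)
        model(x)
    torch.cuda.synchronize()          # wait for queued GPU work
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    return iters / (time.time() - start)
```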
The closely related work is the comparative study conducted by Safavi and Rahnemoonfar [16], which evaluated several efficient DNNs methods for real-time semantic segmentation on the FloodNet dataset [42] of aerial imagery. The authors provided a detailed analysis of the segmentation accuracy and efficiency of the methods in a real-time post-disaster assessment of remote sensing. The contribution of this paper is different from [16] in the following perspectives:
1. This paper presents a comprehensive summary of compression techniques and efficiency metrics that are commonly used in designing efficient deep neural networks for real-time semantic segmentation.
2. It summarizes the most seminal literature on efficient deep learning methods for real-time semantic segmentation of remote sensing imagery.
3. Also, it provides a comparative study (in terms of quality-cost trade-off) of some of the existing efficient deep neural networks (not only handcrafted ones as in [16], but also automated architecture-searched networks) for real-time semantic segmentation.
4. The study was also extended to investigate continent-wise domain generalisation in semantic segmentation of the networks, using the continent-wise domain adaptation settings in [41]. Here, both handcrafted and automated architecture-searched networks are compared as well.
5. Finally, it presents an insightful discussion on challenges in real-time semantic segmentation of remote sensing imagery.
The rest of this paper is organized as follows. Section II briefly summarizes the design approaches of efficient DNNs. The most seminal models are summarized in Section III. In Section IV, we present the settings for the comparative study of representative models on the OpenEarthMap benchmark. Then, the discussion of the findings of the study is presented in Section IV-D. Conclusions and some open challenges for future investigation close the paper.
## II Efficient DNNs Design Approach
There has been a great deal of research in recent years on efficient DNNs methods for real-time applications and resource-constrained platforms [43, 44, 45]. Here, we provide a brief but comprehensive overview of model compression techniques and efficiency metrics that have been proposed and extensively used in developing real-time semantic segmentation networks.
### _Compression Techniques_
As with any task a deep learning algorithm is employed for, once the algorithm is trained, it is used to perform the task by applying the weights learned during training to compute the output. This is referred to as _inference_. In the particular case of semantic segmentation tasks in real time, besides the output quality, the primary concern is efficient inference, which involves optimising the architecture of a model by reducing its computational complexity and memory footprint to speed up inference [46]. Several compression techniques [47] have been proposed to optimize a model's architecture for one or more of the aforementioned efficiency metrics, i.e., number of parameters, memory consumption, and latency, to achieve efficient inference with a trade-off in the model's quality [46]. Here, we briefly summarize the commonly used model compression techniques and refer readers to [48, 49, 50] for a more detailed discussion on this subject.
**Compact architecture:** The fundamental approach to optimizing a model for efficient inference is to build it from blocks that allow for greater efficiency in computation and memory footprint [46]. Convolutional layers are the most widely used building blocks [42, 53, 59, 60]; thus, convolution operations contribute most of the computation and memory footprint of a model [61]. Reducing the number of FLOPs in the convolution operations of a model can therefore significantly improve its efficiency [61]. Several handcrafted convolutional building blocks have been proposed to reduce the number of FLOPs in convolution operations and are widely used to develop models for efficient inference [48, 62]. They include depthwise separable convolution [54], bottleneck convolution [51], inverted bottleneck convolution [53], dilated convolution
Fig. 2: Illustration of the commonly used model compression techniques. (a)–(d) Some widely used handcrafted convolutional blocks: (a) Bottleneck convolution [51], (b) Grouped convolution [52], (c) Inverted bottleneck convolution [53], and (d) Depthwise separable convolution [54] (**FM** is feature map). (e)–(f) Some popular automatically learned cells: (e) NASNet [55] and (f) Auto-DeepLab [56]. (g)–(h) Pruning and sparsification techniques: (g) Weight pruning and (h) Neuron pruning. (i) Knowledge distillation technique [57]. (j)–(m) Quantisation and binarisation techniques: (j) Weights matrix, (k) Binary quantisation, (l) K-means-based quantisation, and (m) Linear quantisation. (n)–(o) Low-rank approximation techniques: (n) Low-rank matrix decomposition with \(k\times k\) kernel and (o) Low-rank tensor decomposition with \(k\times k\times k\) kernel. (Figures are adapted from [46, 48, 55, 56, 58]).
[63], grouped convolution [64, 52], and asymmetric convolution [65]. The handcrafted designs have achieved remarkable success; however, handcrafted engineering is laborious and depends heavily on human expertise [66]. Hence, recent years have seen a growing trend towards automated machine learning (AutoML) and neural architecture search (NAS) [67, 68] for automatically searching for efficient cells (i.e., directed acyclic graphs of convolutional layers) to develop efficient models [62, 69]. Fig. 2(a)-(d) illustrates some examples of handcrafted convolutional blocks and Fig. 2(e)-(f) shows some automatically learned cells commonly used to develop efficient deep neural networks to reduce the models' number of parameters and FLOPs.
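To make the FLOP savings of such blocks concrete, the following is a minimal sketch (PyTorch assumed, with illustrative layer sizes not taken from any model in this survey) comparing a standard 3\(\times\)3 convolution with a depthwise separable convolution, the building block popularized by MobileNet [54]:

```python
import torch
import torch.nn as nn

c_in, c_out, h, w = 64, 128, 56, 56  # illustrative channel and feature-map sizes

standard = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)

depthwise_separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in, bias=False),  # depthwise
    nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),                         # pointwise
)

def num_params(m):
    return sum(p.numel() for p in m.parameters())

# MACs follow Table I: c_o * c_i * k_h * k_w * h_o * w_o (divided by the groups).
macs_standard = c_out * c_in * 3 * 3 * h * w
macs_dws = c_in * 3 * 3 * h * w + c_out * c_in * h * w

print(num_params(standard), num_params(depthwise_separable))  # 73728 vs. 8768
print(macs_standard / macs_dws)                               # roughly 8x fewer MACs
```

Both layers map a \(64\times 56\times 56\) feature map to \(128\times 56\times 56\), but the separable variant needs roughly an order of magnitude fewer parameters and MACs, which is why such blocks dominate compact backbones.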
**Pruning & sparsification:** These techniques remove "unimportant" weights and neurons of a model to reduce its number of arithmetic operations and make it smaller [70, 71, 72, 73] (see Fig. 2(g)-(h)). This results in a reduction of memory access, thus accelerating inference. Pruning algorithms have been applied at different granularities of sparsity, from fine-grained (unstructured) to coarse-grained (structured) [74]. In fine-grained pruning, individual weights or neurons of a model are removed [75, 76, 77], whereas coarse-grained pruning removes entire filters or channels [78, 79]. Unlike fine-grained methods, coarse-grained pruning methods are easier to implement efficiently on hardware but can degrade the model quality [80]; however, the pruned model can be retrained to compensate for the lost quality [81, 82]. AutoML has also been employed to automatically prune the weights and neurons of a model [83].
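As a minimal sketch of the two granularities (PyTorch's `torch.nn.utils.prune` utilities assumed, on a toy layer), fine-grained magnitude pruning and coarse-grained filter pruning can be applied as follows:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)

# Fine-grained (unstructured): zero out the 50% of weights with the smallest L1 magnitude.
prune.l1_unstructured(conv, name="weight", amount=0.5)

# Coarse-grained (structured): remove 25% of the output filters (dim=0) by L2 norm.
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)

# Fold the pruning masks into the weight tensor permanently.
prune.remove(conv, "weight")

sparsity = (conv.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2%}")  # more than half of the weights are now zero
```

Note that the zeros only translate into faster inference when the deployment runtime or hardware can exploit the resulting sparsity, which is why structured pruning is usually preferred in practice.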
**Knowledge distillation:** The main idea is to compress a large model by transferring the knowledge of the large model, often called the teacher model, into a small model, also known as the student model [84, 57] (see Fig. 2(i)). Basically, a small model is trained using a large model as a supervisor, enabling the small model, which has smaller memory and energy footprints, to mimic the quality of the large model for efficient inference [85, 86, 87]. The learning algorithms of knowledge distillation techniques in the literature [88, 89] can be grouped into self-distillation [90], online distillation [91], and offline distillation [92]. Recently, NAS has been adopted in knowledge distillation [93, 94]. We refer readers to [95] for a detailed discussion on this subject.
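A minimal sketch of the classic offline distillation loss of Hinton _et al._[57] is shown below (PyTorch assumed; the temperature `T` and weight `alpha` are illustrative hyperparameters, and the per-pixel logits mimic a segmentation setting):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # KL divergence between temperature-softened distributions; T*T rescales gradients.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example: batch of 2, 8 classes, 64x64 logit maps (sizes are illustrative).
s, t = torch.randn(2, 8, 64, 64), torch.randn(2, 8, 64, 64)
y = torch.randint(0, 8, (2, 64, 64))
print(distillation_loss(s, t, y))
```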
**Quantisation & binarisation:** While pruning reduces the number of weights and neurons in a model, quantisation methods aim to reduce the bit-width of the values of weights and neurons [48] (see Fig. 2(j)-(m)). This normally renders a model with low-bit precision operands [96, 97]. Quantisation methods do not reduce the number of arithmetic operations as pruning does, but simplify the operations, which can shrink the model size to reduce memory and energy footprints and speed up inference, at some sacrifice of model quality [98]. The commonly used methods in the literature [99, 100, 101] include linear quantisation methods [102], K-means methods [103], and binary/ternary methods [104, 105]. Besides these conventional methods, automated quantisation methods (i.e., quantisation with AutoML) have been proposed as well [106, 107]. Quantisation is not easy to implement in practice because it requires a good understanding of bitwise algorithms and hardware architecture [7].
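The scale/zero-point arithmetic behind the linear quantisation of Fig. 2(m) can be sketched as follows (PyTorch assumed; production toolchains handle calibration, per-channel scales, and quantised kernels end to end, so this only illustrates the basic affine mapping):

```python
import torch

def linear_quantize(w, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = qmin - torch.round(w.min() / scale)
    q = torch.clamp(torch.round(w / scale + zero_point), qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.float() - zero_point)

w = torch.randn(64, 64)
q, s, z = linear_quantize(w)
# Storage shrinks 4x (32-bit floats to 8-bit integers plus a scale and zero point),
# at the cost of a quantisation error of roughly scale/2 per weight.
print((w - dequantize(q, s, z)).abs().max())
```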
**Low-rank approximation:** Here, the core idea is to reduce the computational complexity of a model by approximating the redundant weight matrices or tensors of a convolutional or fully connected layer using a linear combination of fewer weights [108, 109]. Compressing a model in this fashion results in a significant reduction in the model size, the model footprint, and the inference latency [110] (see Fig. 2(n)-(o)). Various methods adopted in the literature [111, 7, 112] include singular value decomposition (SVD) [113], and tensor decomposition methods such as Tucker decomposition [114, 115] and canonical polyadic (CP) decomposition [116].
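A minimal sketch of SVD-based low-rank approximation (PyTorch assumed, with an illustrative layer size and target rank) replaces a linear layer's \(m\times n\) weight matrix with two thinner factors of rank \(r\), shrinking the parameter count from \(m\cdot n\) to \(r\cdot(m+n)\):

```python
import torch
import torch.nn as nn

layer = nn.Linear(512, 512, bias=False)
r = 64  # illustrative target rank

# W ~ U[:, :r] @ diag(S[:r]) @ Vh[:r, :]
U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
W1 = S[:r].unsqueeze(1) * Vh[:r, :]  # r x n
W2 = U[:, :r]                        # m x r

low_rank = nn.Sequential(nn.Linear(512, r, bias=False), nn.Linear(r, 512, bias=False))
low_rank[0].weight.data.copy_(W1)
low_rank[1].weight.data.copy_(W2)

x = torch.randn(4, 512)
rel_err = (layer(x) - low_rank(x)).norm() / layer(x).norm()
print(rel_err)  # 262144 -> 65536 parameters (4x fewer) at this approximation error
```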
### _Efficiency Metrics_
Other than the quality of the segmentation results, an efficient model has three core characteristics: _smaller_, _faster_, and _greener_, which are gauged on _storage_, _inference speed_, and _energy_, respectively [46, 117]. When designing or choosing a deep neural network architecture for semantic segmentation in real-time applications, one major consideration is the quality-cost trade-off [118]. Accuracy is used to measure the quality of model results. The most commonly used accuracy evaluation metric in semantic segmentation is the intersection over union (IoU), also known as the Jaccard index, and its variants mean IoU (mIoU) and frequency-weighted IoU (FwIoU) [25, 40] (see Equations 1, 2, and 3). The model footprint (i.e., computational cost), typically associated with training and deploying a deep learning model, is a very important factor when considering a method for real-time semantic segmentation [46]. Most often, the larger the computational budget a model is given, the higher the quality of its segmentation results, and vice versa [119].
\[IoU=\frac{n_{jj}}{n_{jj}+\sum_{i\neq j}n_{ij}+\sum_{i\neq j}n_{ji}} \tag{1}\]

\[mIoU=\frac{1}{k}\sum_{j=1}^{k}\frac{n_{jj}}{n_{jj}+\sum_{i\neq j}n_{ij}+\sum_{i\neq j}n_{ji}} \tag{2}\]

\[FwIoU=\frac{1}{\sum_{j=1}^{k}t_{j}}\sum_{j=1}^{k}t_{j}\,\frac{n_{jj}}{n_{jj}+\sum_{i\neq j}n_{ij}+\sum_{i\neq j}n_{ji}} \tag{3}\]
where \(n_{jj}\) is the number of pixels labelled as class \(j\) and classified as class \(j\), \(n_{ij}\) is the number of pixels labelled as class \(i\) but classified as class \(j\) (false positive), and \(n_{ji}\) is the number of pixels labelled as class \(j\) but classified as class \(i\) (false negative). The FwIoU is an improved version of mIoU which weighs the importance of each class with \(t_{j}\) depending on its appearance frequency.
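A minimal sketch of Equations 1-3 computed from a confusion matrix is given below (NumPy assumed; `C[i, j]` counts pixels labelled as class \(i\) and classified as class \(j\), and the toy matrix is illustrative):

```python
import numpy as np

def iou_metrics(conf):
    tp = np.diag(conf)                    # n_jj
    fp = conf.sum(axis=0) - tp            # sum of n_ij over i != j
    fn = conf.sum(axis=1) - tp            # sum of n_ji over i != j
    iou = tp / (tp + fp + fn)             # Equation 1, per class
    miou = iou.mean()                     # Equation 2
    freq = conf.sum(axis=1) / conf.sum()  # normalised class frequencies t_j
    fwiou = (freq * iou).sum()            # Equation 3
    return iou, miou, fwiou

C = np.array([[50, 2, 3],
              [4, 40, 1],
              [2, 5, 60]])  # toy 3-class confusion matrix
iou, miou, fwiou = iou_metrics(C)
print(iou, miou, fwiou)
```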
To achieve a better quality-cost trade-off, several efficiency metrics have been adopted in the literature of efficient deep learning computing and its applications such as real-time semantic segmentation for measuring model footprint. The efficiency metrics view the computational cost from different complexity perspectives, that is, _memory_ and _computation_. The memory-related efficiency metrics include the number of parameters, the model size, and the size of activations, which are normally used to measure storage [42, 46, 119, 120, 121]. The
number of parameters metric is the parameter count, that is, the number of elements in the weight tensors of the given model (see Table I). The model size metric is used to measure the storage for the weight tensors of a model. Generally, if the weight tensors of a model have the same data type (e.g., floating point), the model size metric is expressed as:
\[ModelSize=\#parameters\times bitwidth. \tag{4}\]
The common measurement units are kilobytes (KB) and megabytes (MB). For example, a model with 60M parameters stored in 32-bit, has a total storage, i.e., model size, of 240MB (60M \(\times\) 4 Bytes). The activations size metric measures the memory requirement for the feature maps or activations of a model during a forward pass (i.e., inference). Like the model size metric, the activations size metric is measured in KB or MB. With a batch size of 1, the size of activations of a model can be expressed as:
\[ActSize=[\textit{input-size}+\sum_{l=1}^{L}(\#\textit{neurons})^{l}]\times bitwidth, \tag{5}\]
where \(\#neurons\) is the number of neurons of layer \(l\) and \(L\) is the number of layers in the model. The \(\#neurons\) depends on the type of layer \(l\). For example, if \(l\) is a convolutional layer, the \(\#neurons\) is \(c_{o}\times\ h_{o}\times w_{o}\), where \(c_{o}\) is the output channels, \(h_{o}\) the output height, and \(w_{o}\) is the output width. And for a linear layer, \(\#neurons\) is simply the output channels \(c_{o}\).
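The following is a minimal sketch of Equations 4 and 5 for a toy sequential CNN (PyTorch assumed; forward hooks record each layer's output elements as its `#neurons`, and sizes assume 32-bit values):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 8),
)

n_params = sum(p.numel() for p in model.parameters())
model_size_mb = n_params * 4 / 1e6  # Equation 4 with bitwidth = 32 bit = 4 bytes

acts = []
hooks = [m.register_forward_hook(lambda mod, inp, out: acts.append(out.numel()))
         for m in model if isinstance(m, (nn.Conv2d, nn.Linear))]
x = torch.randn(1, 3, 32, 32)  # batch size 1, as in Equation 5
model(x)
for h in hooks:
    h.remove()

act_size_mb = (x.numel() + sum(acts)) * 4 / 1e6  # Equation 5
print(f"{n_params} params, {model_size_mb:.2f} MB model, {act_size_mb:.3f} MB activations")
```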
Computation-related metrics are the number of multiply-accumulate (MAC) operations and the number of floating-point operations (FLOPs) [46, 119]. An operation that computes the product of two numbers and adds the result to an accumulator is considered one MAC operation. For example, the MACs of a matrix-vector product \(Ax\) is expressed as:
\[MACs=m\cdot n,\quad A\in\mathcal{R}^{m\times n},\quad x\in\mathcal{R}^{n}. \tag{6}\]
For matrix-matrix product \(AB\), the MACs is expressed as:
\[MACs=m\cdot n\cdot p,\quad A\in\mathcal{R}^{m\times n},\quad B\in\mathcal{R}^{ n\times p}. \tag{7}\]
See Table I for the MACs count of various neural network layers. In FLOPs, a multiply is one operation and an addition is also one operation; hence, one MAC operation is two FLOPs. For example, a model with 724M MACs has 1.4G FLOPs (724M \(\times\) 2). Furthermore, FLOPs per second (FLOPS) can be expressed as:
\[FLOPS=\frac{FLOPs}{second}. \tag{8}\]
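The entries of Table I can be turned directly into a small MACs calculator; the following sketch (plain Python, illustrative layer sizes) also shows the FLOPs = 2 \(\times\) MACs conversion used above:

```python
def linear_macs(c_i, c_o):
    # Table I: linear layer, batch size 1.
    return c_o * c_i

def conv_macs(c_i, c_o, k_h, k_w, h_o, w_o, groups=1):
    # Table I: (grouped) convolution; groups = c_i recovers the depthwise case.
    return c_o * c_i * k_h * k_w * h_o * w_o // groups

# 3x3 convolution, 64 -> 128 channels, 56x56 output:
macs = conv_macs(64, 128, 3, 3, 56, 56)
print(macs, 2 * macs)  # MACs and the corresponding FLOPs

# The same spatial setting as a depthwise convolution (groups = channels):
print(conv_macs(64, 64, 3, 3, 56, 56, groups=64))
```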
The inference speed and energy indicators are difficult to estimate compared to the storage indicators because they depend on both the model's architecture and the hardware platform the model is deployed on [7]. In many instances, the inference speed of a model is reported as throughput [7] and latency [142]. Throughput measures how much data can be processed or how many executions of a task can be completed in a given amount of time (i.e., images processed per second), whereas latency indicates the period of time between when input data arrives and when the result is generated by a model. Achieving high throughput and low latency is critical for real-time semantic segmentation applications. However, in some cases, it may be difficult to achieve high throughput and low latency simultaneously, e.g., increasing throughput via batching of multiple images will also increase the latency of a model [7]. From queuing theory [143], the relationship between throughput and latency can be expressed as:
\[Throughput=\frac{\#tasks}{Latency}, \tag{9}\]
where \(\#tasks\) is the number of input images in a batch and \(Latency\) is the latency of the model which is determined by:
\[Latency=max(\mathcal{T}_{computation},\mathcal{T}_{memory}), \tag{10}\]
\begin{table}
\begin{tabular}{l c c} \hline \hline Layer & \#Parameters & MACs \\ & (bias is ignored) & (batch size \(n=1\)) \\ \hline Linear & \(c_{o}\cdot c_{i}\) & \(c_{o}\cdot c_{i}\) \\ Convolution & \(c_{o}\cdot c_{i}\cdot k_{h}\cdot k_{w}\) & \(c_{o}\cdot c_{i}\cdot k_{h}\cdot k_{w}\cdot h_{o}\cdot w_{o}\) \\ Grouped Convolution & \(c_{o}\cdot c_{i}\cdot k_{h}\cdot k_{w}/g\) & \(c_{o}\cdot c_{i}\cdot k_{h}\cdot k_{w}\cdot h_{o}\cdot w_{o}/g\) \\ Depthwise Convolution & \(c_{o}\cdot k_{h}\cdot k_{w}\) & \(c_{o}\cdot k_{h}\cdot k_{w}\cdot h_{o}\cdot w_{o}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: The computation of the number of parameters and multiply-accumulate (MAC) operations of a neural network layer. The \(c_{i}\) is input channels, \(c_{o}\) is output channels, \(h_{o}\) is output height, \(w_{o}\) is output width, \(k_{h}\) is kernel height, \(k_{w}\) is kernel width, and \(g\) is split groups.
Fig. 4: The memory cost comparison under batch size 16 of ResNet-50 (larger model) and MobileNetV2-1.4 (compact model). A reduction in the number of parameters (model size) does not reduce the activation size, which is the main memory bottleneck. (Figure adapted from [122]).
Fig. 3: The energy cost for various arithmetic and memory access operations in a 45 nanometer (nm) process. Data movement consumes significantly higher energy than arithmetic operations. (Figure adapted from [58]).
where \(\mathcal{T}_{computation}\) is the number of operations in a model divided by the number of operations that a machine (e.g., GPU or CPU processor) can process per second, and \(\mathcal{T}_{memory}\) is the total time to move activations and weights into memory, which depends on the memory bandwidth of the processor. In Ma _et al._[59], the number of memory accesses is reported as a surrogate measure for model inference speed. On the other hand, power consumption [144] and carbon emission footprint [117] have been used to measure the energy efficiency of a model. Power consumption is used to determine the amount of energy a model consumes per unit of time. More data movement implies more memory accesses, which leads to higher energy consumption [145] (see Fig. 3). Real-time applications in a resource-constrained environment with limited power capacity require a model with low energy consumption (i.e., high energy efficiency). For inference, energy efficiency is typically reported as inferences per joule [7].
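A minimal sketch of measuring latency and throughput (Equation 9) is shown below (PyTorch and torchvision assumed, with a stock MobileNetV2 on CPU as a stand-in model; iteration counts are kept small for illustration). The short warm-up avoids one-off costs skewing the timings, and comparing batch sizes illustrates how batching raises throughput while also raising latency:

```python
import time
import torch
import torchvision  # assumption: torchvision is available for a stock model

model = torchvision.models.mobilenet_v2().eval()

for batch in (1, 8):
    x = torch.randn(batch, 3, 224, 224)
    with torch.no_grad():
        for _ in range(3):  # warm-up iterations
            model(x)
        t0 = time.perf_counter()
        for _ in range(10):  # timed iterations
            model(x)
        latency = (time.perf_counter() - t0) / 10  # seconds per forward pass
    throughput = batch / latency                   # Equation 9: images per second
    print(f"batch={batch}: latency={latency * 1000:.1f} ms, "
          f"throughput={throughput:.1f} img/s")
```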
In the literature, there is a common misconception that the computational cost (efficiency) indicators are correlated, for example, that fewer model parameters mean a smaller memory footprint. Cai _et al._[122] demonstrated that the number of activations, not the number of parameters, is the memory bottleneck during training and inference (see Fig. 4). Also, a reduction in FLOPs does not necessarily translate into a reduction in latency [146]. Thus, to devise or adopt a compact model for real-time semantic segmentation, it is advisable to evaluate efficiency using several computational cost indicators, because no single efficiency metric is sufficient [119]. We refer to [119, 120, 121] for a detailed discussion on this topic.
## III Real-Time Segmentation Models
This section summarizes the state-of-the-art efficient deep neural networks for real-time semantic segmentation of remote sensing imagery (see Table II). Most of the real-time semantic segmentation models adapt one of the widely used compact backbone networks, including MobileNet [53], SqueezeNet [147], ShuffleNet [52], and EfficientNet [148], which were designed for image classification tasks. Large-scale architectures such as ResNet [51], U-Net [149], VGG [150], and ViT [151] have also been compressed and adapted for real-time semantic segmentation. The segmentation tasks include building extraction [128], burned area detection [17], weed mapping [124], cloud detection [127], and others [131, 125]. Most of the works were developed for unmanned aerial vehicle (UAV) platforms [18, 123]. Here, the presentation is grouped into models built with handcrafted architectures and those developed via automated NAS (AutoML).
### _Handcrafted Models_
Deng _et al._[18] introduced lightweight weed mapping via real-time image processing onboard a UAV, based on AlexNet [64]. The authors established a hardware environment for real-time image processing that integrates map visualization, flight control, and image collection onboard a UAV for time-efficient weed mapping. The BiSeNetV2 [152] and U-Net were employed as backbone networks in Lan _et al._[124] to build two identification models, MobileNetV2-UNet and FFB-BiSeNetV2, for real-time analysis of UAV low-altitude imagery for timely monitoring of rice weeds in farmland. BASNet [17] proposed an efficient burned area segmentation network with ResNet as a backbone network to improve the performance of UAV high-resolution image segmentation. In LWCDnet [127], a backbone of a ResNet-like style module was used to build a lightweight auto-encoder model for cloud detection using Landsat 8 and MODIS datasets.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Year & Method & Datasets & Application & Reference \\ \hline \multicolumn{5}{c}{_Handcrafted models_} \\ \hline
2019 & SegRBM-Net & UAV & Ground vehicle segmentation & [123] \\
2020 & AlexNet & UAV & Weed mapping & [18] \\
2021 & MobileNetV2-UNet/FFB-BiSeNetV2 & UAV & Weed monitoring & [124] \\
2021 & Context aggregation network & UAV & Semantic segmentation & [125] \\
2021 & ABCNet & Aerial images & Semantic segmentation & [126] \\
2022 & BASNet & UAV & Burned area segmentation & [17] \\
2022 & LWCDnet & Landsat/MODIS & Cloud detection & [127] \\
2022 & RSR-Net & Aerial images & Building segmentation & [128] \\
2022 & MSL-Net & Aerial images & Building segmentation & [129] \\
2022 & LightFGCNet & Aerial images & Building/Semantic segmentation & [130] \\
2022 & CF-Net & Aerial images & Semantic segmentation & [131] \\
2022 & SGBNet & Google earth images & Semantic segmentation & [132] \\
2022 & U-NetFormer & Aerial images & Semantic segmentation & [133] \\
2022 & LPASS-Net & Aerial images & Semantic segmentation & [14] \\
2022 & DSANet & Aerial images & Semantic segmentation & [134] \\
2022 & SegFormer & Aerial images & Semantic segmentation & [135] \\
2023 & LRAD-Net & Aerial images & Building segmentation & [136] \\ \hline \multicolumn{5}{c}{_NAS models_} \\ \hline
2020 & NAS-HRIS & Gaofen multispectral/Aerial & Building/Semantic segmentation & [137] \\
2021 & SFPN & 3D point clouds & Semantic segmentation & [138] \\
2022 & DNAS & Gaofen multispectral & Semantic segmentation & [139] \\
2022 & U-Net-like search space & WorldView-2 & Semantic segmentation & [140] \\
2022 & Evolutionary NAS & Aerial images & Semantic segmentation & [141] \\ \hline \hline \end{tabular}
\end{table} TABLE II: List of efficient deep neural networks adopted in real-time semantic segmentation of remote sensing imagery. The networks are categorized into handcrafted designs and automated neural architecture search (NAS) of efficient deep neural networks in remote sensing.
Huang _et al._[128] introduced RSR-Net, a lightweight network based on U-Net and SqueezeNet, to extract buildings from remote sensing images of the WHU building dataset [153]. In MSL-Net [129], the MobileNet architectural module (depthwise separable convolution) with atrous spatial pyramid pooling (ASPP) [154] as a multiscale feature extractor was used to alleviate network performance degradation in building extraction. LRAD-Net [136] employed depthwise separable convolutions with ASPP and a self-attention mechanism [155] to achieve state-of-the-art performance on the WHU building datasets. Yang _et al._[125] introduced a context aggregation network, a dual-branch convolutional neural network based on MobileNet, which has the potential to capture both global aggregation and local contextual dependencies that are required for accurate semantic segmentation at low computational overheads. Chen _et al._[130] proposed a lightweight global context semantic segmentation network called LightFGCNet. The authors employed a U-Net-like framework to fully utilize the global context data and adapted spatial pyramid pooling (SPP) [156] as a strategy to reduce the number of network parameters. CF-Net [131] adopted VGG and ResNet to develop a cross-fusion network for fast and effective extraction of multiscale semantic information, especially small-scale semantic information. In RT-SegRBM-Net [123], for semantic segmentation of ground vehicles from UAV-based thermal infrared imagery, the performance of a combined deep learning model based on SegNet [157] was improved using the Gaussian-Bernoulli restricted Boltzmann machine [158]. In Liu _et al._[159], EfficientNet was used as a backbone to build a lightweight model with fewer parameters for semantic segmentation of UAV remote sensing imagery. For real-time segmentation of land cover, SGBNet [132] employed a semantics-guided strategy with a bottleneck network to balance accuracy and inference speed.
Besides the aforementioned conventional convolutional neural networks, generative adversarial networks (GAN) [160] and Transformers [151] have been adapted for real-time semantic segmentation of remote sensing imagery. In [15], the challenge of post-processing semantic segmentation predictions of road surface area was tackled using a conditional GAN based on pix2pix [161], obtaining state-of-the-art performance. Wang _et al._[133] introduced a Transformer-based network constructed in a U-Net-like fashion (U-NetFormer) for segmenting urban scenes in real time. ABCNet [126] adopted the attention mechanism and followed the design concept of BiSeNet to build a lightweight model that retains rich spatial details and captures global contextual information. In [14], the authors proposed an end-to-end lightweight progressive attention semantic segmentation network (LPASS-Net) based on MobileNet with an attentional feature fusion network. LPASS-Net aims to reduce computational cost without sacrificing segmentation accuracy. DSANet [134] introduced an effective deep supervision-based attention network with spatial and enhancement loss functions for real-time semantic segmentation. Yan _et al._[135] adapted SegFormer [162] to develop an efficient depth fusion transformer network, which downsamples the input with a patch-merging strategy and utilizes a depth-aware self-attention module for effective aerial image segmentation.
### _Neural Architecture Search Models_
The commonly used search strategies in the automated NAS literature [67] include reinforcement learning [66], differentiable search (gradient-based algorithms) [163], evolutionary algorithms [164], and Bayesian optimisation [165]. Most of the proposed NAS methods in remote sensing adopt the differentiable search strategy because it is faster by orders of magnitude and uses less computational search time than reinforcement learning or evolutionary algorithms [166]. In [139], a hierarchical search space of three levels (path-level, connection-level, and cell-level) was designed to build a supernet space, and a differentiable search strategy was designed to automatically search for lightweight networks for high-resolution remote sensing image semantic segmentation. de Paulo _et al._[140] used the algorithm in [139] to improve the design of a U-Net-like network search space. The authors replaced 3\(\times\)3 convolution layers with parallel layers of different kernel sizes and then pruned the network with a scaled sigmoid strategy [167] for multispectral satellite image semantic segmentation. SFPN [138] also used a differentiable search strategy on a discrete search space to generate a feature pyramid supernet from which a feature pyramid network module is searched for 3D point cloud semantic segmentation. NAS-HRIS [137] embeds a novel directed acyclic graph into the DARTS [163] search space to learn end-to-end cell modules, normal and reduction cells, using gradient descent optimization. The searched cells are used to design a semantic segmentation network for high-resolution satellite images. In [141], an evolutionary NAS method was introduced for semantic segmentation of high-resolution aerial images. The authors leverage the complementary strengths of gene expression programming and cellular encoding to build a search space and evolve a modularized encoder-decoder network via an evolutionary process.
## IV Comparative Study on OpenEarthMap
### _OpenEarthMap Description_
The OpenEarthMap [41] is a sub-meter dataset for high-resolution land cover mapping at a worldwide scale. The dataset is made up of 5000 aerial and satellite images at a ground sample distance of 0.25-0.5m with hand-annotated 8-class land cover labels of 2.2 million segments. It covers 97 areas that are spread out across 44 countries on 6 continents.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{Class} & \multicolumn{2}{c}{Pixels} & Segments \\ \cline{2-3} & Count (M) & (\%) & (K) \\ \hline Bareland & 74 & 1.5 & 6.3 \\ Rangeland & 1130 & 22.9 & 459.4 \\ Developed space & 798 & 16.1 & 382.7 \\ Road & 331 & 6.7 & 27.9 \\ Tree & 996 & 20.2 & 902.9 \\ Water & 161 & 3.3 & 18.7 \\ Agriculture land & 680 & 13.7 & 18.2 \\ Building & 770 & 15.6 & 389.3 \\ \hline \hline \end{tabular}
\end{table} TABLE III: The number and proportion of pixels and the number of segments of the eight classes [41].
OpenEarthMap shows the diversity and intricacy of satellite and UAV image segmentation. Table III displays the quantity and ratio of pixels labelled for each class and the number of segments identified for each category. It also reveals semantic segmentation feasibility issues under challenging scenarios. Following the experimental settings in [41], we evaluate representative efficient DNN models on semantic segmentation and continent-wise domain generalisation tasks. The 5000 images were randomly divided into training, validation, and test sets with a ratio of 6:1:3 for each region, yielding 3000, 500, and 1500 images for the semantic segmentation task. For the continent-wise domain generalisation, the data for each continent is treated as a source domain, while the other continents are considered target domains.
### _Compared Methods_
This section details the efficient neural networks, as shown in Fig. 5, which we evaluate on the OpenEarthMap as part of this work. We consider both handcrafted and automated architecture-searched methods for real-time semantic segmentation. The following are the handcrafted networks we employ for the study: U-Net series, U-NetFormer series, FastSCNN, SegFormer, Segmenter, HRNet, STDC, and CGNet. On the side of automated architecture-searched methods, the following six networks are used: SqueezeNAS, BiX-NAS, MRF-UNets, DNAS, SparseMask, and FasterSeg. A brief introduction of the handcrafted networks is as follows:
**U-Net**[149] series: We develop U-Net baseline architectures identical to the original U-Net architecture. It consists of lightweight encoder blocks for downsampling, decoder blocks for upsampling, concatenation blocks that fuse low- and high-level features, and skip connections for improving accuracy. The lightweight encoder blocks also reduce computation cost and improve inference speed. Our main contribution is to benchmark the accuracy and efficiency of U-Net using various lightweight encoders. We design asymmetric U-Net baselines with lightweight encoders of MobileNet [53], MobileOne_S1[178], and EfficientNet-B0 and B1 [148].
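As a minimal sketch of such an asymmetric U-Net baseline (the `segmentation_models_pytorch` package of Yakubovskiy [182] assumed; encoder names follow that package's conventions), a lightweight-encoder U-Net with the 8 OpenEarthMap classes can be instantiated as follows:

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b0",  # lightweight encoder; "mobilenet_v2" is another option
    encoder_weights="imagenet",      # ImageNet-pretrained backbone
    in_channels=3,
    classes=8,                       # the 8 OpenEarthMap land cover classes
)

x = torch.randn(1, 3, 512, 512)
print(model(x).shape)  # torch.Size([1, 8, 512, 512])
```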
**U-NetFormer**[133] series: The main novelty here is the structure of the decoder. The decoder is a transformer-based architecture that efficiently models global and local information. The decoder's global-local transformer block (GLTB) constructs two parallel branches to extract the global and local contexts (see Fig. 6). The semantic features generated by the encoder are fused with the features from the GLTB to produce generalized fusion features. In short, a transformer-based decoder captures global and local contexts at multiple scales while maintaining high efficiency. For the U-NetFormer, we also considered EfficientNet versions B0, B1, and B2 [148].
**FastSCNN**[171]: The FastSCNN is suitable for efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, the main idea of FastSCNN is the "learning to downsample" module, which computes low-level features for multiple resolution branches simultaneously (see Fig. 7). It combines high-resolution spatial details with deep features extracted at a lower resolution.
Fig. 5: The list of efficient deep neural networks we adopted for the comparative study on the OpenEarthMap benchmark. The \(*\) indicates automated architecture search networks and the boldface denotes the backbone networks of the handcrafted methods.
Fig. 6: The architecture overview the main idea of U-NetFormer. (Figure adapted from [133]).
Fig. 7: An illustration of the main idea of Fast-SCNN. It shares the computations between branches of encoders to build a segmentation network. (Figure adapted from [171]).
**SegFormer**[162]: The encoder comprises a family of mix transformer (MiT) encoders, MiT-B0 to MiT-B5, allowing the encoder to obtain multilevel features from which the segmentation mask is generated. Furthermore, this structure enables the MLP decoder to combine local and global attention to generate effective representations. The hierarchical transformer encoder has a larger effective receptive field than conventional CNN encoders, allowing a lightweight MLP to serve as the decoder (see Fig. 8). This design makes the network scalable, so the transformer layers can be changed according to available resources. In this work, we avoid using large networks such as MiT-B1 to MiT-B5; instead, we use MiT-B0 to improve efficiency in real-time applications.
**Segmenter**[174]: The Segmenter network relies on the output embeddings corresponding to image patches and obtains class labels from these embeddings with a point-wise linear decoder or a mask transformer decoder (see Fig. 9). This work considers the tiny Vision Transformer (ViT) [151].
**HRNet**[170]: The high-resolution net (HRNet) maintains high-resolution representations throughout the whole process. HRNet starts from a high-resolution convolution stream, gradually adds high-to-low-resolution convolution streams one by one, and connects the multi-resolution streams in parallel (see Fig. 10). The resulting network consists of several stages (4 in the original paper), and the \(n\)th stage contains streams corresponding to \(n\) resolutions. In this work, we use the HRNet-small version.
**STDC**[152]: The short-term dense concatenate network (STDC) is a novel and efficient structure for removing structure redundancy. Specifically, the network gradually reduces the dimension of feature maps and uses their aggregation for image representation, forming the STDC network's basic module. In the decoder, a detail aggregation module is proposed by integrating the learning of spatial information into low-level layers in a single-stream manner. Finally, the low-level features and deep features are fused to predict the final segmentation results (see Fig. 11).
**CGNet**[173]: It is a lightweight network for semantic segmentation. The CGNet uses a context-guided block to learn joint features of both local and surrounding contexts (see Fig. 12). Based on the context-guided block, CGNet can capture contextual information in all stages of the network, specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and save memory footprint.
We also present a brief introduction of the NAS methods that are employed for the study:
**SqueezeNAS**[169]: The SqueezeNAS is considered the first proxyless hardware-aware search which was targeted for dense semantic segmentation. Using a differentiable search strategy, it advances the state-of-the-art accuracy for latency
Fig. 11: The architecture overview of the STDC network. ARM is attention refine module and FFM is feature fusion module. The dashed blue box indicates the detail aggregation module. The dashed red box is the network proposed in STDC. (Figure adapted from [152]).
Fig. 8: The SegFormer framework consists of two modules: A hierarchical Transformer encoder which extracts coarse and fine features, and a lightweight MLP decoder which fuses multi-level features and predicts the semantic segmentation mask. The “FFN” indicates a feed-forward network (Figure adapted from [162]).
Fig. 10: Illustration of the main idea of the HRNet architecture. Consists of high-to-low resolution parallel sub-networks with multi-scale fusion. The horizontal direction indicates the depth of the network, and the vertical indicates the scale of the feature maps. (Figure adapted from [170]).
Fig. 9: An overview of the Segmenter framework. Encoder (left): image patches are projected to an embedding sequence, and then encoded with a transformer. Decoder (right): a mask transformer receives the output of the encoder and class embeddings as input to predict a segmentation mask. (Figure adapted from [174]).
optimized networks on the Cityscapes [179] semantic segmentation dataset via a search space similar to MobileNet. An overview of the SqueezeNAS architecture search path is shown in Fig. 13.
**BiX-NAS**[175]: The BiX-NAS is based on a multi-scale upgrade of a bi-directional skip-connected network (see Fig. 14). It uses a two-phase search algorithm: a differentiable search in Phase 1 and an evolutionary search in Phase 2. It reduces network computational costs by sifting out ineffective multi-scale features at different levels and iterations.
**MRF-UNets**[177]: It extends and improves the recent adaptive and optimal network width search (AOWS) [180] method with a more general Markov random field (MRF) framework, a diverse M-best loopy inference [181], and a differentiable parameter learning. This provides the necessary NAS framework to efficiently explore the architecture search space of a U-Net backbone (see Fig. 15).
**DNAS**[176]: The decoupling NAS (DNAS) employs a hierarchical search space with three levels: path-level, connection-level, and cell-level (see Fig. 16) to automatically design the network architecture via a differentiable search for HRSI semantic segmentation. In DNAS, the search optimization strategy consists of finding optimal path connections of a super-net for developing a lightweight network.
**SparseMask**[168]: The SparseMask creates a densely connected network with learnable connections which contains a large set of possible final connectivity structures as the search space (see Fig. 17). Then, a gradient-based search strategy is employed to search for optimal connectivity from the dense connections by making the connections to be sparse.
**FasterSeg**[172]: The FasterSeg is developed from a broader search space integrating multi-resolution branches as shown in Fig. 18, which is commonly used in manually designed segmentation models. To calibrate the balance between the goals of high accuracy and low latency, FasterSeg introduced a
Fig. 16: The decoupling neural architecture search framework. It consists of micro search space, macro search space (path-level, connection-level, and cell-level), and decoded architecture. (Figure adapted from [176]).
Fig. 12: The architecture overview of the context-guided block in CGNet. The structure consists of a local feature extractor \(f_{loc}(*)\), a surrounding context extractor \(f_{sur}(*)\), a joint feature extractor \(f_{joi}(*)\), and a global context extractor \(f_{glo}(*)\). (Figure adapted from [173]).
Fig. 13: An overview of an architecture path of a sampled architecture from a supernetwork. In the 1st superblock, candidate block 1 is selected, then the 2nd superblock selects candidate block 3, and the _N_th superblock selects candidate block 2. (Figure adapted from [169]).
Fig. 14: The BiX-NAS progressive evolutionary search overview. (a) Phase 1: searched supernet \(\mathcal{N}\) divided into a head and a tail networks. (b) Proposed forward and backward schemes. (c) Phase 2: searched skips at the Pareto front of \(\mathbf{P}\) are only retained. (Figure adapted from [175]).
Fig. 15: The architecture structure of MRF-UNets. The width ratios assigned to the original U-Net are the values inside rectangles. Rectangles with coloured dashed lines use 5\(\times\)5 kernels and the others use 3\(\times\)3 kernels. On the left, the grey dashed line represents an example of a bottleneck block in the encoder. On the right, the grey dashed line shows an example of an inverted bottleneck block in the decoder. (Figure adapted from [177]).
decoupled and fine-grained latency regularization to effectively overcome the phenomenon where searched networks are prone to "collapsing" to low-latency yet poor-accuracy models.
### _Experimental Settings_
Both the handcrafted and NAS methods we use for the experiments are PyTorch-based. For the handcrafted methods, the U-Net-based architectures are adopted from Yakubovskiy [182] and Wang _et al._[133], and the other architectures are from MMSegmentation [183]. The networks are trained on a single NVIDIA GPU DGX-1/DGX-2 with 16/32GB of RAM. The number of epochs is set to 200, and a batch size of 8 with randomly cropped input images of size 512\(\times\)512 is employed. The cross-entropy (CE) loss is used in training all the networks. For the U-Net-based architectures, we use the AdamW optimizer [184] with a learning rate of \(1\times 10^{-4}\) and weight decay of \(1\times 10^{-6}\). For the MMSegmentation-based architectures, we use the default settings of each method. We adopt the stochastic gradient descent (SGD) optimizer with a learning rate of \(1\times 10^{-3}\), weight decay of \(5\times 10^{-4}\), and momentum of 0.9 for the HRNet networks. The rest of the networks use the AdamW optimizer with the learning rate set to \(6\times 10^{-5}\), weight decay to 0.01, and betas parameters to 0.9 and 0.999. A polynomial learning rate decay with a factor of 1.0 and an initial linear warm-up of 1500 iterations is used. The backbones in all the handcrafted networks are pre-trained on the ImageNet dataset. No data augmentation is applied during training and testing for any network. Following previous works [185, 186, 154], we use mIoU to assess the segmentation quality of all the networks. To assess the networks' suitability for real-time applications of remote sensing image semantic segmentation, we also evaluate their inference speed (FPS), i.e., _frames per second_, in addition to computational complexity (FLOPs) and the number of parameters. For the NAS methods, we adopt the code from the GitHub page of each method. With the exception of the evaluation metrics mentioned above, we follow the default experimental settings stated in the papers of each method to search and train the networks. All the NAS experiments are run on the same machines used for the handcrafted ones.
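A minimal sketch of the training configuration described above for the U-Net-based architectures is given below (PyTorch assumed; the one-layer model and single-batch loader are placeholders for the actual networks and the OpenEarthMap training split with random 512\(\times\)512 crops):

```python
import torch
import torch.nn as nn

# Placeholders: in the experiments these are the U-Net baselines and the
# OpenEarthMap loader yielding randomly cropped 512x512 images and masks.
model = nn.Conv2d(3, 8, kernel_size=1)
loader = [(torch.randn(8, 3, 512, 512), torch.randint(0, 8, (8, 512, 512)))]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-6)
criterion = nn.CrossEntropyLoss()  # CE loss used for all networks

model.train()
for epoch in range(2):  # 200 epochs in the actual experiments
    for images, masks in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
```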
### _Results and Discussion_
The performance evaluation of the efficient deep neural networks on the OpenEarthMap dataset is presented in Table IV. It provides the class IoU for all eight classes of the OpenEarthMap and the overall segmentation quality in mIoU for each model. It also presents the number of trainable parameters (Params), the computational complexity (FLOPs), and the inference speed (FPS) to measure the efficiency of the models. The FLOPs and FPS were computed on 1024\(\times\)1024 RGB input data. The inference speed was computed on a single NVIDIA Tesla P100 (DGX-1) with 16 GB memory based on an average time of 300 iterations with a 10-iteration warm-up. In Fig. 19 and Fig. 20, the segmentation quality (mIoU) of the models in relation to their computational complexity (FLOPs) and inference speed (FPS) is shown, respectively; and Fig. 21 presents a visual comparison of their segmentation maps. The correlation between the efficiency indicators (i.e., FPS, FLOPs, and Params) of the models is presented in Fig. 22. Finally, Fig. 23 and Fig. 24 present the continent-wise domain generalisation results of the models.
These results suggest that combining a U-Net-like Transformer decoder with an EfficientNet backbone is effective in generating local attention and increasing the effective receptive field size, ultimately leading to improved segmentation quality.
The U-Net-based efficient networks, U-Net-EfficientNet-B0 and U-Net-EfficientNet-B1, which are commonly used for real-time remote sensing applications, achieved accuracy rates of 63.63% mIoU and 63.81% mIoU with 6.3M and 8.8M parameters, respectively. When the EfficientNet-B1 backbone is replaced with the lightweight MobileNetV2 architecture, as demonstrated in U-Net-MobileNet, the number of parameters is reduced to 6.9M but with a slight sacrifice, i.e., a decrease of approx. 1.1%, in segmentation performance. This makes it a suitable option for applications in a low-power environment. To improve segmentation performance, we can slightly increase the number of trainable parameters of the backbone architecture. For example, increasing the number of parameters of the backbone to 9.6M by replacing MobileNetV2 with an enhanced version, MobileOne_S1, improved the segmentation performance by about 1%, i.e., from 62.50% mIoU to 63.41% mIoU. However, using 29% fewer parameters, the U-NetFormer-EfficientNet-B0 and U-NetFormer-EfficientNet-B1 achieved better accuracy (approx. 1% higher) than their U-Net-EfficientNet counterparts. This suggests that U-NetFormer with an EfficientNet lightweight backbone is more suitable for real-time semantic segmentation in remote sensing
\begin{table}
\begin{tabular}{l l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{8}{c}{IoU (\%)} & mIoU & Params & FLOPs & Speed \\ \cline{3-10} & & Bareland & Rangeland & Developed & Road & Tree & Water & Agriculture & Building & (\%) \(\uparrow\) & (M) \(\downarrow\) & (G) \(\downarrow\) & (FPS) \(\uparrow\) \\ \hline \multicolumn{14}{c}{_Handcrafted models_} \\ \hline U-Net & EfficientNet-B0\({}^{\star}\) & 41.89 & 55.65 & 52.61 & 60.31 & 70.11 & 79.45 & 72.07 & 76.99 & 63.63 & 6.30 & 41.55 & 33.9 \\ U-Net & EfficientNet-B1\({}^{\star}\) & 37.61 & 54.77 & 53.19 & 61.89 & 71.70 & 80.63 & 72.60 & **78.11** & 63.81 & 8.80 & 41.66 & 28.3 \\ U-Net & MobileNet & 39.45 & 54.09 & 52.34 & 59.85 & 70.05 & 77.08 & 70.85 & 76.30 & 62.50 & 6.90 & 55.49 & 44.3 \\ U-Net & MobileOne\_S1 & 40.82 & 55.81 & 51.56 & 61.97 & 71.93 & 76.46 & 72.42 & 77.20 & 63.41 & 9.60 & 80.20 & 32.0 \\ SegFormer & MiT-B0 & 28.54 & 52.31 & 46.24 & 51.34 & 68.10 & 73.35 & 67.06 & 69.69 & 57.08 & 3.72 & 25.56 & 42.4 \\ Segmenter & Tiny & 25.39 & 42.20 & 35.32 & 28.26 & 58.91 & 61.65 & 55.20 & 53.63 & 45.07 & 6.71 & 18.24 & 22.4 \\ HRNet & W18 & 32.98 & 53.55 & 48.17 & 57.45 & 69.41 & 75.48 & 70.01 & 73.28 & 60.04 & 9.68 & 76.32 & 14.8 \\ STDC & - & 34.79 & 53.28 & 48.21 & 57.91 & 68.99 & 79.38 & 70.09 & 71.99 & 60.58 & 8.53 & 35.12 & 78.3 \\ CGNet & - & 35.06 & 52.41 & 46.41 & 55.43 & 68.16 & 78.08 & 69.67 & 71.63 & 59.61 & 0.50 & 13.72 & 56.7 \\ FastSCNN & - & 34.75 & 53.27 & 48.31 & 55.62 & 68.83 & 78.94 & 70.43 & 71.90 & 60.26 & 1.46 & **3.68** & 110.1 \\ U-NetFormer & EfficientNet-B0\({}^{\star}\) & 41.88 & 55.17 & 52.69 & 60.89 & 70.92 & 82.03 & 73.78 & 76.72 & 64.26 & 4.11 & 15.78 & 13.1 \\ U-NetFormer & EfficientNet-B1\({}^{\star}\) & 40.39 & 55.61 & 52.29 & 61.61 & 70.84 & **82.85** & 73.15 & 77.22 & 64.24 & 6.62 & 19.61 & 35.3 \\ U-NetFormer & EfficientNet-B2\({}^{\star}\) & 39.17 & 56.07 & **53.35** & **62.10** & 71.25 & 82.43 & **74.28** & 77.83 & **64.56** & 8.91 & 31.21 & 33.2 \\ \hline \multicolumn{14}{c}{_NAS models_} \\ \hline MRF-UNets & - & 40.14 & **56.45** & 52.95 & 61.27 & **71.79** & 81.25 & 72.42 & 77.77 & 64.26 & 1.62 & 38.71 & 21.4 \\ BiX-NAS & - & 28.26 & 51.98 & 45.71 & 52.91 & 67.46 & 72.90 & 68.68 & 70.83 & 57.34 & **0.38** & 112.29 & 29.3 \\ SqueezeNAS & - & 40.90 & 55.35 & 50.27 & 58.70 & 69.88 & 82.08 & 73.75 & 74.49 & 63.18 & 1.82 & 11.36 & 81.6 \\ DNAS & - & 43.19 & 54.15 & 49.56 & 57.94 & 68.50 & 61.69 & 73.19 & 73.00 & 62.65 & 5.01 & 62.14 & 18.1 \\ FasterSeg & - & 34.50 & 51.27 & 45.27 & 55.94 & 66.62 & 74.71 & 70.05 & 69.73 & 58.51 & 3.47 & 15.37 & **171.3** \\ SparseMask & - & **46.15** & 51.88 & 44.01 & 43.64 & 65.20 & 77.41 & 71.48 & 66.11 & 58.23 & 2.96 & 10.28 & 51.2 \\ \hline \hline \end{tabular}
* The base architectures of these backbones were originally designed via a neural architecture search approach [148].
\end{table} TABLE IV: Semantic segmentation results of the baseline U-Net series models and the representative efficient DNNs handcrafted and automated neural architecture search (NAS) models on the test set of the OpenEarthMap benchmark. The best score for each metric is in **bold** and the second-best is underlined.
Fig. 19: Segmentation quality (i.e., the mIoU on the OpenEarthMap dataset) versus computational complexity (FLOPs) of each model. The bubble size denotes the number of learnable parameters of a model, which indicates the size of the model.
Fig. 20: Segmentation quality (i.e., the mIoU on the OpenEarthMap dataset) versus inference speed (FPS) of each model. The bubble size denotes the number of learnable parameters of a model, which indicates the size of the model.
Fig. 21: Visual comparison of land cover mapping results among the models on (a) Kyoto_47 and (b) Tyrolw_9 image patches of the OpenEarthMap benchmark test set. The highlighted regions (yellow rectangles) indicate the regions with significant differences in the maps produced by the models with respect to the reference map.
compared to the conventional U-Net with an EfficientNet lightweight backbone. Moreover, as clearly shown in Fig. 19 and Fig. 20, all the U-Net-based compact methods we studied outperformed the other handcrafted compact methods in terms of segmentation quality. The Segmenter-Tiny, the SegFormer-MiT-B0, and the CGNet yield accuracy rates below 60% mIoU, with results of 45.07%, 57.08%, and 59.61% mIoU, respectively. Other handcrafted compact methods, such as HRNet-W18, STDC, and FastSCNN, achieved similar accuracy rates. FastSCNN maintains efficiency in terms of FLOPs and FPS and achieved an acceptable mIoU of 60.26%.
Most of the automated architecture search methods reduce the parameters of the network by compromising accuracy. For instance, BiX-NAS, FasterSeg, and SparseMask, with mIoUs of 57.34%, 58.51%, and 58.23%, respectively, are among the least accurate methods. However, with approximately 82% fewer parameters, the MRF-UNets achieved 64.26% mIoU, competing with the best handcrafted compact methods, the U-NetFormer-EfficientNet series (see Table IV). This can clearly be seen in Fig. 19 and Fig. 20 as well. Furthermore, as shown in Table IV, the SqueezeNAS and the DNAS also use 81% and 27% fewer parameters, respectively, to achieve segmentation accuracy rates of 63.18% and 62.65% mIoU, which compete with the U-Net-EfficientNet and U-Net-MobileNet series of the handcrafted compact methods. Although the NAS methods trailed behind (slightly in some cases) the handcrafted ones in terms of segmentation quality, they offer the benefit of reducing the storage size of the network. To summarize, the results of the semantic segmentation experiments on the OpenEarthMap remote sensing benchmark indicate that the U-NetFormer handcrafted architectures with EfficientNet lightweight backbones may serve as the best baselines for the OpenEarthMap, but with a comparatively large number of parameters and FLOPs (see Fig. 19) as well as low inference speed (see Fig. 20). In contrast, the architectures based on the NAS approach offer reduced-scale models with fewer FLOPs; however, they exhibit some sacrifice in segmentation quality.
In addition to the quantitative (mIoU) evaluation of the quality of the models' segmentation results (see Table IV), Fig. 21 provides a visual comparison of the segmentation maps of the models. The representative areas are the Kyoto and Tyrolw image patches of the OpenEarthMap benchmark test set. The U-NetFormer-EfficientNet-B2 produces the best-detailed visualization results. As shown in Fig. 21(a), the _water_ area of the dam was misclassified as _rangeland_ and _agricultural land_ by U-Net-EfficientNet-B1, U-Net-MobileOne_S1, SegFormer-MiT-B0, U-NetFormer-EfficientNet-B1, MRF-UNets, BiX-NAS, FasterSeg, and SparseMask, while the other methods correctly identified it. Only the U-Net-MobileNet and the U-NetFormer-EfficientNet-B1 are able to identify the _bareland_ near the river, while the other methods wrongly classified it as _developed space_. With the exception of Segmenter-Tiny, the methods can produce well-maintained boundaries of the roads and buildings.
In Fig. 21(b), the U-Net series and MRF-UNets can identify the tiny roads in the top-right part of the image. The U-Net-EfficientNet-B1 and the U-NetFormer series recognize the _bareland_ along the rivers, while the other methods classified the area as _water_ and _developed space_. Visually, the accurate maps produced by the models capture highly structured characteristics such as rivers (water), buildings, and agriculture. That the _water_ and _bareland_ classes score the highest and lowest accuracy rates, respectively, across all the models (see Table IV) is clearly reflected in the segmentation maps of the models. Owing to fragmented layouts and varying sizes, the models find it difficult to adequately delineate the boundaries of the roadways and the buildings. Because of the striking similarities in their spectra, _rangeland_, _agricultural land_, and _trees_ sometimes confuse the models. And since parking lots and cover materials in certain rural regions are comparable, _roads_ are frequently misclassified as _developed space_.
#### IV-D2 Efficiency Comparison
In Table IV, we also presented three efficiency indicators, that is, the number of parameters (Params), computational complexity (FLOPs), and inference speed (FPS), to measure the models' efficiency with respect to storage (memory), computation, and latency, respectively. Fig. 19 and Fig. 20 also, respectively, provide FLOPs versus mIoU (segmentation quality) and FPS versus mIoU with respect to the Params of the models. For the U-Net-EfficientNet and U-NetFormer-EfficientNet series, we used the U-Net-EfficientNet-B1 and the U-NetFormer-EfficientNet-B2, respectively, for the efficiency comparison.
As shown in Fig. 19 and Fig. 20, most of the NAS compact models have fewer parameters than their handcrafted counterparts. This makes the NAS methods more efficient than the handcrafted compact methods in terms of storage (fewer parameters). For example, with \(0.38M\) parameters stored in 32-bit width (4 bytes), the BiX-NAS needs only 1.5MB of memory. However, some handcrafted compact networks are also efficient in terms of storage. The CGNet handcrafted model needs only 2.0MB of memory to store its \(0.5M\) parameters in 32-bit width. Based on 32-bit width, the storage range for the NAS models is 1.5MB-20.1MB and for the handcrafted ones is 2.0MB-32.7MB. It must be noted that some of the models sacrificed segmentation quality for efficient storage. For example, CGNet, BiX-NAS, FasterSeg, SparseMask, and SegFormer achieved \(<60\%\) mIoUs with storage ranging from 1.5MB to 14.8MB. However, the MRF-UNets NAS model did well in the quality-storage trade-off, achieving 64.26% mIoU with only 6.5MB of storage. In contrast, the U-NetFormer-EfficientNet-B2 handcrafted model performed poorly in the quality-storage trade-off, using a large storage of 35.6MB to achieve an accuracy rate of 64.56% mIoU, comparable to that of MRF-UNets (64.26%), which needs only 6.5MB of memory. Hence, for a quality-storage trade-off, the MRF-UNets model is more desirable for real-time semantic segmentation in remote sensing applications.
On computation, it can be seen in Fig. 19 that the U-Net-based and the U-NetFormer-based handcrafted models have comparatively high computational complexity (FLOPs). These models performed well in segmentation quality (\(>62\%\) mIoUs) at the cost of high computation. The SegFormer-MiT-B0, CGNet, and Segmenter-Tiny handcrafted models and NAS models like SparseMask and FasterSeg sacrificed segmentation quality (\(<60\%\) mIoUs) for low-cost computation (\(<25G\) FLOPs). The SqueezeNAS did best in the quality-
computation trade-off, using only \(11.36G\) FLOPs to achieve 63.18% mIoU. The FastSCNN handcrafted model has a fair quality-computation trade-off balance (achieving 60.26% mIoU with \(3.68G\) FLOPs). With \(112.29G\) FLOPs, BiX-NAS achieved 57.34% mIoU, which makes it the worst model in terms of the quality-computation trade-off. The MRF-UNets (achieving 64.26% mIoU with \(38.71G\) FLOPs) trailed behind the U-NetFormer-EfficientNet-B2 (achieving 64.56% mIoU with \(31.21G\) FLOPs) in the quality-computation trade-off. This makes the U-NetFormer-EfficientNet-B2 more attractive for real-time semantic segmentation in remote sensing with respect to a quality-computation trade-off.
In Fig. 20, we can see that most of the models we evaluated have low inference speed (FPS), hence, high latency. The FasterSeg has the highest inference speed (171.3 FPS), which is desirable for real-time semantic segmentation in remote sensing, but sacrifices segmentation quality (58.51% mIoU). FastSCNN ranks second in speed but has a better quality-speed trade-off (achieving 60.26% mIoU with 110.1 FPS) compared to FasterSeg (achieving 58.51% mIoU with 171.3 FPS). The U-Net-based and the U-NetFormer-based models achieved better segmentation accuracy rates (\(>62\%\) mIoUs) by sacrificing inference speed (\(<44.3\) FPS). However, here too, the MRF-UNets (achieving 64.26% mIoU with 21.4 FPS) trailed behind the U-NetFormer-EfficientNet-B2 (achieving 64.56% mIoU with 33.2 FPS) in the quality-speed trade-off. The SqueezeNAS has a fair quality-speed trade-off balance (achieving 63.18% mIoU with 81.6 FPS), which may make it a good choice for real-time semantic segmentation in remote sensing applications. The worst model in the quality-speed trade-off is the Segmenter-Tiny (achieving the lowest accuracy rate of 45.07% mIoU with 22.4 FPS).
Considering the overall quality-efficiency trade-off among the U-NetFormer-EfficientNet-B2 (64.56% mIoU with \(8.91M\) Params, \(31.21G\) FLOPs, and 33.2 FPS), the MRF-UNets (64.26% mIoU with \(1.62M\) Params, \(38.71G\) FLOPs, and 21.4 FPS), and the SqueezeNAS (63.18% mIoU with \(1.82M\) Params, \(11.36G\) FLOPs, and 81.6 FPS), the SqueezeNAS would be recommended for real-time semantic segmentation applications in remote sensing, with a slight sacrifice of quality. These findings demonstrate the trade-off between quality and efficiency in selecting real-time semantic segmentation methods for remote sensing applications.
#### IV-D3 Correlation Between Efficiency Metrics
It is commonly assumed that the efficiency indicators of the models are correlated (e.g., fewer parameters translate to lower computational complexity or higher inference speed). As shown in Fig. 22, this assumption does not necessarily hold, which is also observed by Cai _et al._[122]. For example, among all the models we evaluated, the BiX-NAS is the smallest with only \(0.38M\) parameters; however, it has the largest FLOPs (\(112.29G\)) and a low inference speed (29.3 FPS). Although STDC has more parameters (\(8.53M\)) than CGNet (\(0.5M\)), SparseMask (\(2.96M\)), and MRF-UNets (\(1.62M\)), the inference speed of STDC (78.3 FPS) is higher than that of CGNet (56.7 FPS), SparseMask (51.2 FPS), and MRF-UNets (21.4 FPS). Also, the FasterSeg, which has the highest inference speed (171.3 FPS), has \(3.09M\) and \(2.97M\) more parameters than BiX-NAS (\(0.38M\)) and CGNet (\(0.5M\)), respectively. Furthermore, as shown in Fig. 22, most of the models we evaluated have a comparatively small number of FLOPs but low inference speeds. This also demonstrates that fewer FLOPs do not necessarily translate to higher inference speed. We found that models whose architectures have more multi-branch connections tend to have a lower inference speed (e.g., SparseMask, MRF-UNets, HRNet, and BiX-NAS). These findings suggest that in selecting real-time semantic segmentation methods for remote sensing applications, it is not sufficient to rely on a single efficiency indicator, as this can be misleading. Moreover, network architectures with minimal multi-branching should be adopted for high inference speed (i.e., low latency) in real-time applications of remote sensing semantic segmentation.
#### V-B4 Continent-Wise Generalisation
Here, we investigated the continent-wise domain gap on the OpenEarthMap dataset. Fig. 23 presents the continent-wise generalisation semantic segmentation results of the models we evaluated. The results are mIoUs of the models based on the continent-wise domain adaptation settings in [41]. For the U-Net-EfficientNet and the U-NetFormer-EfficientNet series, we used U-Net-EfficientNet-B1 and U-NetFormer-EfficientNet-B2, respectively, for the evaluation. In general, the U-Net-based handcrafted models obtain better results than the other methods in most cases, especially the U-NetFormer-EfficientNet-B2, which is consistent with the traditional semantic segmentation results shown in Table IV. The NAS models performed significantly worse (\(<40\%\) mIoU) in most cases of the continent-wise domain generalisation settings. Even the MRF-UNets and the SqueezeNAS obtained comparatively worse results than the U-Net series, which differs from the traditional semantic segmentation settings. The limited data of the Oceania (OC) continent led to the lowest transfer generalisation results
Fig. 22: Correlation between the efficiency indicators of the models: inference speed (FPS) versus computational complexity (FLOPs) versus the number of learnable parameters (Params). The bubble size denotes the number of learnable parameters of a model, which directly reflects its size.
Fig. 23: Continent-wise domain generalisation results of the models. The results are mIoUs of the models based on the continent-wise domain adaptation settings in [41]. Asia: AS, Europe: EU, Africa: AF, North America: NA, South America: SA, and Oceania: OC. Note, for the U-Net-EfficientNet and the U-NetFormer-EfficientNet series, we only used U-Net-EfficientNet-B1 and U-Net-EfficientNet-B2, respectively, for this evaluation.
Fig. 24: Visual comparisons of continent-wise generalisation results of U-NetFormer-EfficientNet-B2 model. Asia: AS, Europe: EU, Africa: AF, North America: NA, South America: SA. The Mahe is from Africa (AF2AF performs best) and the Palu is from Asia (AS2AS performs best). AS2AF denotes AS as the source domain and AF as the target domain. The highlighted regions (yellow rectangles) indicate the regions with significant differences in the segmentation maps produced by the models with respect to the reference map.
when OC is treated as the source domain. In contrast, the performance with OC as the target domain is better than in the settings of the other continents. With the exception of the OC continent, most of the methods indicated two minor domain gaps: Europe-to-North America (EU2NA) and Asia-to-North America (AS2NA). The most prominent domain gaps revealed by the real-time semantic segmentation models are Africa-to-Europe (AF2EU) and North America-to-Africa (NA2AF).
In Fig. 24, we present two examples of the continent-wise domain gap based on the continent-wise domain generalisation segmentation maps of U-NetFormer-EfficientNet-B2. When the Africa (AF) continent is used as the target domain (e.g., Mahe) and the other continents are used as the source domains, the segmentation quality decreases significantly compared to when the African continent is used as both the source and the target domain. In most cases, the _rangeland_ is misclassified as _agricultural land_ and _water_, and part of the ocean is wrongly classified as _rangeland_ and _agricultural land_. Also, as shown in Fig. 24, when the Asia (AS) continent is used as the target domain (e.g., Palu) and the other continents as the source domains, Africa-to-Asia (AF2AS), Europe-to-Asia (EU2AS), and North America-to-Asia (NA2AS) produced poor segmentation of _trees_ and _bareland_, wrongly classifying them as _rangeland_, _developed space_ and _agricultural land_. The _bareland_ and _water_ in SA2AS are wrongly identified as _developed space_ and _agricultural land_. We suggest combining state-of-the-art unsupervised domain adaptation (UDA) techniques with efficient semantic segmentation methods to develop advanced compact UDA models suitable for continent-wise domain generalisation and continent-wise UDA tasks.
## VI Conclusion and Open Challenges
In this study, we discussed current developments in the literature regarding real-time semantic segmentation methods in remote sensing image analysis. We discussed network compression techniques and efficiency indicators of efficient deep neural networks for real-time semantic segmentation, and summarized the notable works proposed to address the problem of semantic segmentation in real-time applications of remote sensing on resource-constrained platforms. Furthermore, we performed extensive experiments with some existing real-time semantic segmentation methods, comprising 13 handcrafted and 6 automated architecture-searched methods, on the OpenEarthMap remote sensing semantic segmentation benchmark to measure their quality-efficiency trade-off in real-world applications of remote sensing semantic segmentation. This appears to be the first time these models have been benchmarked on this particular dataset. We found that, while there is generally a trade-off between segmentation quality and efficiency indicators (such as inference speed, computational complexity, and the number of parameters), most of the efficient deep neural networks we evaluated achieved near state-of-the-art quality, but having relatively few parameters did not guarantee high inference speed. We also found that a model with very few parameters or low computational complexity is not necessarily fast at inference time. Overall, this study provides comprehensive insights into the developments in real-time semantic segmentation methods for remote sensing applications and highlights their strengths and weaknesses. The findings can inform future research in this field and help practitioners and researchers develop more efficient and accurate models for remote sensing applications. Below, we discuss some open challenges that can be adopted in future research efforts towards improving the quality-efficiency performance of real-time semantic segmentation deep learning methods in remote sensing image analysis.
* How to design a specialized efficient neural network backbone to improve the balance between accuracy and speed for accurate and fast segmentation of remotely-sensed images in real time? Exploring and adapting existing lightweight networks like MobileNet or EfficientNet via an automated architecture search approach for more suitable real-time segmentation methods in remote sensing may be an interesting challenge.
* How to improve the domain generalisation performance as well as unsupervised domain adaptation of real-time semantic segmentation in remote sensing? The segmentation quality of the continent-wise domain generalisation is far behind that of supervised semantic segmentation. Future work may adopt strategies that have proven successful in related tasks, such as self-training and curriculum learning, to improve the accuracy of real-time continent-wise domain generalisation and continent-wise unsupervised domain adaptation.
* How to solve the variations in resolution and different sensor types problem of remote sensing imagery? Remote sensing imagery often comes from different sensors (optical or SAR) at different resolutions (meter or sub-meter), which can make it difficult for the algorithm to differentiate between different land cover classes. Exploring novel approaches to process images from different sensors at various resolutions in real-time segmentation represents an intriguing research direction in remote sensing image understanding.
* How to improve the robustness of real-time semantic segmentation algorithms to different imaging conditions such as weather conditions, seasonality, and temporal changes? Remote sensing images can be affected by various environmental factors, and developing efficient semantic segmentation networks that can adapt to these factors with a better balance between accuracy and speed in real time is an important challenge.
* How to improve the scalability of real-time semantic segmentation algorithms for large-scale remote sensing applications? Remote sensing images can cover vast areas, and processing such large datasets (e.g., county-wide or province-wide) in real time could be a significant challenge. Developing efficient, scalable algorithms that can segment large-scale remote sensing data in real time is essential for province-wide or county-wide remote sensing applications such as forest fire monitoring.
|
2309.09374 | Fully Convolutional Generative Machine Learning Method for Accelerating
Non-Equilibrium Greens Function Simulations | This work describes a novel simulation approach that combines machine
learning and device modelling simulations. The device simulations are based on
the quantum mechanical non-equilibrium Greens function (NEGF) approach and the
machine learning method is an extension to a convolutional generative network.
We have named our new simulation approach ML-NEGF and we have implemented it in
our in-house simulator called NESS (nano-electronics simulations software). The
reported results demonstrate the improved convergence speed of the ML-NEGF
method in comparison to the standard NEGF approach. The trained ML model
effectively learns the underlying physics of nano-sheet transistor behaviour,
resulting in faster convergence of the coupled Poisson-NEGF simulations.
Quantitatively, our ML- NEGF approach achieves an average convergence
acceleration of 60%, substantially reducing the computational time while
maintaining the same accuracy. | Preslav Aleksandrov, Ali Rezaei, Nikolas Xeni, Tapas Dutta, Asen Asenov, Vihar Georgiev | 2023-09-17T20:43:54Z | http://arxiv.org/abs/2309.09374v1 | Fully Convolutional Generative Machine Learning Method for Accelerating Non-Equilibrium Green's Function Simulations
###### Abstract
This work describes a novel simulation approach that combines machine learning and device modelling simulations. The device simulations are based on the quantum mechanical non-equilibrium Green's function (NEGF) approach and the machine learning method is an extension of a convolutional generative network. We have named our new simulation approach ML-NEGF and we have implemented it in our in-house simulator called NESS (nano-electronics simulations software). The reported results demonstrate the improved convergence speed of the ML-NEGF method in comparison to the 'standard' NEGF approach. The trained ML model effectively learns the underlying physics of nano-sheet transistor behaviour, resulting in faster convergence of the coupled Poisson-NEGF simulations. Quantitatively, our ML-NEGF approach achieves an average convergence acceleration of 60%, substantially reducing the computational time while maintaining the same accuracy.
machine learning, neural network, autoencoder, device simulations and modeling, non-equilibrium Green's function (NEGF), TCAD device modeling, nanowires.
## I Introduction
The silicon nanowire and nanosheet transistors have a wide spectrum of promising applications [1], such as current field-effect transistors [2] and photovoltaics [3]. Moreover, the state-of-the-art CMOS technologies are based on single or stacked configurations of nanosheet or nanowire architectures [4]. However, despite the recent advances in technology, there is still room to improve the fabrication process and to optimise device performance, for example, by reducing power consumption and reducing device-to-device variability during the fabrication process.
From a practical point of view, the simulation and modelling of transistors is the most time-efficient and cost-effective approach to evaluate the performance and the output characteristics of transistors. The aim is to have a simulation platform that is fast, accurate, and reliable, in order to aid the improvement of device design, predict device performance (current-voltage characteristics) and extract important Figures of Merit (FoM).
The main aim of this work is to investigate the possibility of significantly improving or even replacing numerical Technology Computer-Aided Design (TCAD) device simulations with a convolutional autoencoder (CAE) [5][6][7]. To test our idea, we have developed a new simulation approach based on the combination of TCAD and machine learning methods. The current state of the art of TCAD simulations is based on the Non-equilibrium Green's Function (NEGF) formalism, which can capture quantum mechanical physical effects, such as confinement and carrier tunnelling, in ultra-scaled transistors (with channel lengths shorter than 10 nm). To enhance the capabilities of the NEGF method and decrease the computational time of our in-house Nano Electronics Simulation Software (NESS) [8], we combine machine learning with our existing NEGF simulator implemented in NESS. We have called the new simulation approach ML-NEGF.
Our results show the potential of using the ML-NEGF methodology to significantly reduce the device simulation computational cost without compromising the accuracy of
Fig. 1: Diagram of the n-type silicon (Si) nanosheet transistor with a channel cross-section of 3 nm x 12 nm (a) and a full device length of 22 nm (b). The gate length is 16 nm and the source/drain regions have a length of 3 nm each. The channel doping is 1e16 cm\({}^{-3}\) and the contacts (source and drain) are doped at 1e20 cm\({}^{-3}\). The oxide is SiO\({}_{2}\) with a thickness of 1 nm everywhere around the device.
physical results deriving from the 'standard TCAD' simulations.
## II Device Structure
To test our new ML-NEGF simulation methodology, we have designed a transistor structure that corresponds to the most advanced technologies of the 3 nm node and beyond. Fig. 1 shows the nanosheet transistor geometry created using the NESS structure generator. The gate is all around the channel and the channel length is \(\text{L}_{\text{Ch}}=16\) nm, with source and drain lengths of 3 nm each; hence the total length of the device is 22 nm. The channel cross-section is rectangular with dimensions of 3 nm x 12 nm, the oxide material is SiO\({}_{2}\) and its thickness is 1 nm. The channel doping is 1e15 cm\({}^{-3}\) and the source and drain regions have a doping of 1e20 cm\({}^{-3}\).
## III Simulations Methodology
All numerical simulations in this work are performed by utilising the NEGF solver implemented in our in-house code NESS [8]. The NEGF implementation is based on the effective mass approximation. Our NEGF solver can compute ballistic and scattering transport in various devices and materials. In this paper, we have used the ballistic version of the NEGF solver to test our idea. However, we would like to emphasise that our methodology remains valid when the simulations include the electron-phonon and surface-roughness scattering mechanisms in the active region of the device.
The NEGF solver is linked to a 3D Poisson solver and both solvers are connected in a self-consistent loop. The effective-mass Hamiltonian, and correspondingly the NEGF solver, requires a potential as an input, which is provided by the Poisson solver. Correspondingly, the Poisson solver requires a charge, which is provided by the NEGF solver. The NEGF formalism also allows us to compute device characteristics, such as current-voltage curves (\(\text{I}_{\text{D}}\)-\(\text{V}_{\text{G}}\) and \(\text{I}_{\text{D}}\)-\(\text{V}_{\text{D}}\)). From the \(\text{I}_{\text{D}}\)-\(\text{V}_{\text{G}}\) curve, we can extract important FoM, such as the OFF-current (\(\text{I}_{\text{OFF}}\)) and ON-current (\(\text{I}_{\text{ON}}\)), the subthreshold slope (SS) and the threshold voltage (\(\text{V}_{\text{TH}}\)). In previous papers, we have shown that it is possible to train a neural network (NN) using key figures of merit such as the subthreshold slope, drive current and leakage current as input data to predict another key parameter such as the threshold voltage [9].
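To make the coupling explicit, the following schematic sketch (ours, not NESS source code) shows the self-consistent Poisson-NEGF loop just described; `poisson_solve` and `negf_solve` are placeholders for the actual solvers, and the optional `ml_model` stands in for the warm start used by the ML-NEGF approach introduced below.

```python
# Schematic sketch (ours, not NESS code) of the coupled Poisson-NEGF
# self-consistency loop, with an optional ML warm start.
import numpy as np

def self_consistent_loop(poisson_solve, negf_solve, phi0,
                         ml_model=None, tol=1e-6, max_iter=100):
    phi = phi0
    charge = negf_solve(phi)                 # charge from an initial NEGF iteration
    if ml_model is not None:                 # ML warm start: jump closer to the
        phi, charge = ml_model(phi, charge)  # converged potential/charge fields
    for iteration in range(1, max_iter + 1):
        phi_new = poisson_solve(charge)      # potential from charge
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, charge, iteration  # converged: return iteration count
        phi = phi_new
        charge = negf_solve(phi)             # charge from potential
    raise RuntimeError("Poisson-NEGF loop failed to converge")
```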
In this paper, we have utilised a machine-learning model inspired by denoising autoencoders. The ML model is shown in Fig. 2. The model's architecture is based on a convolutional denoising autoencoder network augmented by methods stemming from transformer networks. The basic structure was chosen to be fully convolutional as this guarantees model generality and improves robustness to different device geometries. The augmentations, borrowed from transformer networks, are the inclusion of location encodings in the initial input. A residual connection between the input and output of the model was also introduced to reduce the solution domain to the change between initial and final NEGF-Poisson iterations. The output of the models consists of a single channel matrix, which represents a normalised field of potential or charge. The output is assumed to be normalised with respect to the mean and deviation of the input.
Fig. 4: Validation loss of the model as a function of the epochs (steps) of the convolutional autoencoder.
Fig. 3: Location maps in the \(X\) and \(Y\) directions. \(X\) is the transport direction with a length of 22 nm and \(Y\) is the longer cross-section direction, which is 14 nm long. The colour map shows the location of the kernel in relation to the model. Numbers and lines show the rough splitting of the data.
Fig. 2: A diagram of the convolutional autoencoder structure. It consists of \(N\) encoder blocks, \(N-1\) decoder blocks and a final convolutional layer.
The model can be constructed using \(N\) encoder and \(N-1\) decoder blocks, to extract a latent-space representation of the input and apply the relevant transformations in the decoder section. Each block consists of a convolution, batch normalisation, dropout and an activation function. We chose the LeakyReLU function as the activation function in the encoder section as it has a high gradient. One could also introduce residual connections within the network to counteract the vanishing gradient problem. However, due to the small depth of the network, \(N=3\), this was not implemented.
A set of location matrices, presented in Fig. 3, was added to indicate the kernel's location to the model. The location encodings were artificially generated by making a gradient map between 0 and 1 in each of the basis directions (X, Y, Z). The range between 0 and 1 was chosen to maintain generality.
The main aim is to train an ML model to predict the difference between the 3D spatial charge and potential distributions of the first and final Poisson-NEGF self-consistency iterations inside the whole device. As an input to the autoencoder-accelerated ML-NEGF method, we provided the charge and potential obtained from the initial ballistic Poisson-NEGF iteration of the self-consistent loop. The input of the model is a 7-channel image generated from information produced by an initial NEGF iteration. The first two channels are normalised 2D image slices of the potential and of the charge on a logarithmic scale. The next two channels are the drain and gate voltages. The final 3 channels, two of which are shown in Fig. 3, are the location maps used by the kernel. Each square represents the chosen value for a specific device location and the colour bar gives the heat map of those values.
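To make the architecture just described concrete, here is a compact PyTorch sketch of ours; the channel widths, dropout rate, strides and the decoder activation are illustrative guesses rather than the NESS implementation.

```python
# PyTorch sketch (ours) of the described network: N = 3 encoder blocks and
# N - 1 = 2 decoder blocks of (convolution, batch norm, dropout, activation),
# a final convolution, a residual connection from input to output, and a
# 7-channel input stacking normalised potential, log-charge, the two bias
# values, and three 0-to-1 location maps.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.Dropout2d(p=0.1),
        nn.LeakyReLU(),        # high-gradient activation, as in the text
    )

class MLNEGF(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.encoder = nn.Sequential(block(7, width),
                                     block(width, 2 * width),
                                     block(2 * width, 2 * width))
        self.decoder = nn.Sequential(block(2 * width, width),
                                     block(width, width))
        self.head = nn.Conv2d(width, 1, kernel_size=1)  # one normalised field

    def forward(self, x):
        # Residual connection: the network predicts the *change* between the
        # first and final Poisson-NEGF iterations, added back to channel 0
        # (which channel carries the residual is our assumption).
        return x[:, :1] + self.head(self.decoder(self.encoder(x)))

def location_maps(h, w, z=0.5):
    """Artificial 0-to-1 gradient maps hinting the kernel's location (cf. Fig. 3);
    for a 2D slice the Z map is taken constant."""
    ys = torch.linspace(0, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(0, 1, w).view(1, w).expand(h, w)
    return torch.stack([xs, ys, torch.full((h, w), z)])
```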
The model output can then be used as an input to the 'standard' NEGF simulations to reduce the number of self-consistent Poisson-NEGF iterations, which leads to a significant reduction of the simulation time. The fully trained model can be regarded as a kernel-based analytical representation of the NEGF solver, whose solution is the forward pass of the ML model. Computationally, the cost of this is negligible compared to the cost of utilising the NEGF solver. Therefore, this method, once trained, is computationally efficient and can be used to accelerate NEGF simulations.
## IV Simulation Results
In order to validate our ML-NEGF model, input and target data were generated by our 'standard' NEGF simulations. The obtained data were divided into two sets: training and testing. The training set is 70%, and the testing set is 30%, of the full data. The training set is used to train the ML model. The loss characteristic obtained from the training process is shown in Fig. 4.
Fig. 4 shows the evolution of the mean square error (MSE) as a function of the epochs (training steps). The model was trained for 500 epochs, where it reached a point of
Fig. 5: Comparison of the charge distribution (top row) and potential distribution (bottom row) in an XY plane along the transport direction, between the ML-NEGF and 'standard' NEGF simulations. The charge has the highest value in the source and drain regions and the potential is highest in the middle of the channel.
Fig. 6: Comparison of the current-voltage characteristics (\(I_{D}\)-\(V_{G}\) curves) for both the ML-NEGF and NEGF methods, as a function of the gate bias (\(V_{\text{GS}}\)) at fixed drain bias (low \(-\) 0.05 V and high \(-\) 0.7 V).
Fig. 7: Comparison of the number of self-consistent iterations between the ML-NEGF and NEGF (ballistic) simulations, as a function of the gate bias (\(V_{\text{GS}}\)).
saturation. The choice of 500 epochs was determined empirically. The training loss follows a typical trend: it shows significant oscillation and an exponential decrease over roughly the first 100 epochs, followed by a reduction in training speed between 100 and 400 epochs. After 400 epochs, the MSE value has saturated (at 0.02 MSE).
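A minimal training-loop sketch (ours; the optimiser, learning rate, and batch size are guesses, as the text only specifies the split, the loss, and the epoch count) matching this setup would look as follows.

```python
# Training-loop sketch (ours): 70/30 train/test split, MSE loss, 500 epochs.
import torch
from torch.utils.data import DataLoader, random_split

def train(model, dataset, epochs=500, lr=1e-3, batch_size=16):
    n_train = int(0.7 * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)  # optimiser is a guess
    loss_fn = torch.nn.MSELoss()
    history = []
    for _ in range(epochs):
        for x, target in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(x), target)
            loss.backward()
            optimiser.step()
            history.append(loss.item())  # MSE curve per step, cf. Fig. 4
    return model, test_set, history
```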
Fig. 5 shows the comparison between the 2D charge and potential distributions in the middle of the channel, along the transport direction, for the ML-NEGF simulations and the target scalar fields produced by the 'standard' NEGF method. Consistent with the device structure and the doping profile along the device, the charge value is the highest in the source and drain regions and the smallest in the channel section of the device. Also consistent with the device physics, at low gate voltages, below 0.4 V, the potential has the highest value in the middle of the channel. From the results in Fig. 5, it can be concluded that the charge and potential distributions obtained from the ML-NEGF method are identical to those extracted from NEGF. Hence, it can be concluded that our convolutional NN is indeed well trained.
In Fig. 6, we have plotted and compared the I\({}_{\text{D}}\)-V\({}_{\text{G}}\) curves for both the ML-NEGF and 'standard' NEGF approaches, at low (V\({}_{\text{D}}\)=0.05V) and high (V\({}_{\text{D}}\)=0.7V) drain bias. From the results in Fig. 6, it is evident that both methods produce identical I\({}_{\text{D}}\)-V\({}_{\text{G}}\) curves and, hence, the FoMs that can be extracted will also be identical. The results in Fig. 5 and Fig. 6 show that the NN used in our ML-NEGF method can reproduce not only physical properties but also key device characteristics.
Once the ML model is trained, we wanted to evaluate and compare the convergence behaviour for both cases. Fig. 7 shows the number of self-consistent iterations as a function of gate voltage (V\({}_{\text{GS}}\)), at low (V\({}_{\text{D}}\)=0.05V) and high (V\({}_{\text{D}}\)=0.7V) drain bias. From the data in Fig. 7, it can be concluded that, overall, the ML-NEGF method requires a smaller number of iterations than the NEGF method. Specifically, up to a V\({}_{\text{GS}}\) of 0.2V both methods require almost identical numbers of iterations. However, when V\({}_{\text{GS}}\) is above 0.2V, the ML-NEGF simulations (see the red and orange curves in Fig. 7) consistently require fewer iterations than the 'standard' NEGF method. For example, the difference between both methods is well pronounced at V\({}_{\text{GS}}\)=0.8V. At low drain bias (V\({}_{\text{D}}\)=0.05V), ML-NEGF (orange curve) converges in 10 iterations, while the conventional NEGF method (blue curve) needs 18 iterations. At high drain bias (V\({}_{\text{D}}\)=0.7V), ML-NEGF (red curve) requires 7 steps and the NEGF method (green curve) converges after 15 steps. Hence, in both cases the ML-NEGF approach substantially reduces the iteration count, achieving an average convergence acceleration of 60% while maintaining the same accuracy.
## V Conclusions
In this work, we have reported a combined machine learning and device simulation computational approach that allows us to simulate the device characteristics (current-voltage) of Si nanosheet transistors. Our machine learning method is based on a convolutional neural network and autoencoder architecture. Results obtained from the ML-NEGF approach led to the following conclusions.
Firstly, using the autoencoder-accelerated ML-NEGF method instead of standard TCAD (NEGF) simulations could in principle significantly decrease the computational time and shorten the research and development process. For example, the ML-NEGF approach achieves an average convergence acceleration of 60%, while maintaining the same accuracy. Secondly, our autoencoder-accelerated ML-NEGF method can reproduce not only the device characteristics but also the 3D charge density and potential distribution in the whole device. Lastly, a similar ML-based approach can be used to describe material properties, such as the resistance of metal nanowires, that cannot be described by non-parametric methods such as a general linear model. However, it should be noted that the predictivity of the ML-NEGF method can be improved even further by providing more data, using different pre-processing schemes and attempting alternative network architectures. Indeed, all these options are currently under investigation.
## Acknowledgment
This research was funded by the Engineering and Physical Sciences Research Council (EPSRC), through Grant No. EP/S001131/1 and EP/P009972/1. This project has also received funding from the EPSRC Impact Acceleration Account scheme under Grant Agreement No. EP/R511705/1 (Nano-Electronic Simulation Software (NESS)--creating the first open source TCAD platform in the world and Fast Track - Development boost for the Device Modelling group opensource NESS computational framework).
|
2309.03117 | Quantum Character Theory | We develop a $\mathtt{q}$-analogue of the theory of conjugation equivariant
$\mathcal D$-modules on a complex reductive group $G$. In particular, we define
quantum Hotta-Kashiwara modules and compute their endomorphism algebras. We use
the Schur-Weyl functor of the second author, and develop tools from the
corresponding double affine Hecke algebra to study this category in the cases
$G=GL_N$ and $SL_N$. Our results also have an interpretation in skein theory
(explored further in a sequel paper), namely a computation of the $GL_N$ and
$SL_N$-skein algebra of the 2-torus. | Sam Gunningham, David Jordan, Monica Vazirani | 2023-09-06T15:54:34Z | http://arxiv.org/abs/2309.03117v1 | # Quantum Character Theory
###### Abstract.
We develop a \(\mathfrak{q}\)-analogue of the theory of conjugation equivariant \(\mathcal{D}\)-modules on a complex reductive group \(G\). In particular, we define quantum Hotta-Kashiwara modules and compute their endomorphism algebras. We use the Schur-Weyl functor of the second author, and develop tools from the corresponding double affine Hecke algebra to study this category in the cases \(G=\mathrm{GL}_{N}\) and \(\mathrm{SL}_{N}\). Our results also have an interpretation in skein theory (explored further in a sequel paper), namely a computation of the \(\mathrm{GL}_{N}\) and \(\mathrm{SL}_{N}\)-skein algebra of the \(2\)-torus.
###### Contents
* 1 Introduction
* 1.1 Classical character theory
* 1.2 Quantum character theory
* 1.3 Results of [GJVY] pertaining to \(\mathfrak{q}\)-character theory
* 1.4 Characters of classical and quantum Harish-Chandra bimodules
* 1.5 Quantum Springer theory
* 1.6 Skeins dictionary
* 1.7 Structure of the paper
* 1.8 Acknowledgments and dedication
* 2 Quantum groups, their differential operators and Hotta-Kashiwara modules
* 2.1 The quantum groups \(U_{\mathfrak{q}}(\mathfrak{gl}_{N}),U_{\mathfrak{q}}(\mathfrak{sl}_{N})\)
* 2.2 The vector and fundamental representations
* 2.3 The quantum coordinate algebra
* 2.4 The quantum Harish-Chandra category \(\mathrm{HC}_{\mathfrak{q}}(G)\)
* 2.5 Quantum differential operators
* 2.6 Quantum Hotta-Kashiwara modules
* 3 Double affine Hecke algebras
* 3.1 \(GL\) DAHA
* 3.2 \(SL\) DAHA
* 3.3 Intertwiners
* 3.4 Induced modules
* 4 Endomorphisms of HK via elliptic Schur-Weyl duality
* 4.2 Compatibility of \(\operatorname{SW}\) with bimodule actions
* 5.1 The shift isomorphism
* 5.2 Proof of Theorem 1.2 via the shift isomorphism
* 5.3 Proof of Theorem 1.3 via the shift isomorphism.
* 6.1 Poles and zeros of intertwiners
* 6.2 Normal orderings
* 6.3 Proof of Theorem 1.2 via intertwiners
* 6.4 Proof of Theorem 1.4 via intertwiners
## 1. Introduction
### Classical character theory.
Conjugation equivariant \(\mathcal{D}\)-modules on a reductive group and on its Lie algebra have a rich history in representation theory. Harish-Chandra observed that the distributional characters of irreducible unitary representations of semisimple groups satisfy a holonomic system of differential equations, which he used to initiate the character theory of unitary representations [10]. Hotta and Kashiwara defined and studied the corresponding \(\mathcal{D}\)-module, revealing a striking connection with Springer theory - the study of representations of the Weyl group on certain cohomology groups of Springer fibers [12]. These ideas have since come to be understood in a wider context including Lusztig's generalized Springer correspondence and his theory of character sheaves [13, 14], as interpreted in the \(\mathcal{D}\)-module setting by Ginzburg [15], and further developed in a categorical context by Ben-Zvi and Nadler [1] and the first author [16].
Let us recall some of the basic objects from this theory. Given a connected complex reductive group \(G\) with Lie algebra \(\mathfrak{g}\), maximal torus \(H\) and corresponding Cartan subalgebra \(\mathfrak{h}\), we have the ring of algebraic differential operators \(\mathcal{D}(\mathfrak{g})\) and the category of strongly equivariant modules (for the adjoint action) \(\mathcal{D}(\mathfrak{g})\)-\(\operatorname{mod}^{G}\). The prototypical example of a strongly equivariant \(\mathcal{D}(\mathfrak{g})\)-module is the _universal Hotta-Kashiwara module_,
\[\operatorname{HK}^{\mathrm{cl}}=\mathcal{D}(\mathfrak{g})/\mathcal{D}(\mathfrak{g})\operatorname{ad}(\mathfrak{g}).\]
The endomorphism algebra of \(\operatorname{HK}^{\mathrm{cl}}\) is naturally identified with its \(G\)-invariants; this is known as the quantum Hamiltonian reduction:
\[\mathcal{D}(\mathfrak{g})/\!/G=\operatorname{End}(\operatorname{HK}^{\mathrm{cl}})\cong\left(\operatorname{HK}^{\mathrm{cl}}\right)^{G}.\]
A well-known result of Levasseur and Stafford [14, 15] provides an isomorphism \(\mathcal{D}(\mathfrak{g})/\!\!/G\cong\mathcal{D}(\mathfrak{h})^{W}\).
Note that we have an embedding \(\mathbb{C}[\mathfrak{h}^{*}]^{W}\cong\operatorname{Sym}(\mathfrak{g})^{G}\hookrightarrow\mathcal{D}(\mathfrak{g})\). A strongly equivariant \(\mathcal{D}(\mathfrak{g})\)-module is called _admissible_ if this subalgebra acts locally finitely (see [11]). The primary examples of admissible modules are the Hotta-Kashiwara modules associated to a fixed character \(\zeta\in\mathfrak{h}^{*}/W\):
\[\operatorname{HK}^{\mathrm{cl}}(\zeta)=\mathcal{D}(\mathfrak{g})\Big{/}\Big{(}\mathcal{D}(\mathfrak{g})\operatorname{ad}(\mathfrak{g})+\sum_{y\in\operatorname{Sym}(\mathfrak{g})^{G}}\mathcal{D}(\mathfrak{g})(y-\zeta(y))\Big{)}.\]
One of the main results of [13] is to identify the endomorphism algebra of these modules with the group algebra of the subgroup \(W_{\zeta}\) of the Weyl group \(W\) corresponding to the stabilizer of a lift of \(\zeta\) to \(\mathfrak{h}^{*}\), giving a \(\mathcal{D}\)-module interpretation of the Springer representations.
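For orientation, here is the smallest nontrivial instance (a standard special case, ours rather than quoted from [13]): for \(G=\mathrm{SL}_{2}\) one has \(\mathfrak{h}^{*}\cong\mathbb{C}\) with \(W=\mathfrak{S}_{2}\) acting by \(\zeta\mapsto-\zeta\). For \(\zeta\neq 0\) the stabilizer \(W_{\zeta}\) is trivial, so \(\operatorname{End}(\operatorname{HK}^{\mathrm{cl}}(\zeta))\cong\mathbb{C}\), while for \(\zeta=0\) one has \(W_{0}=\mathfrak{S}_{2}\) and \(\operatorname{End}(\operatorname{HK}^{\mathrm{cl}}(0))\cong\mathbb{C}[\mathfrak{S}_{2}]\), whose two central idempotents split \(\operatorname{HK}^{\mathrm{cl}}(0)\) into trivial and sign Springer summands.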
### Quantum character theory
The goal of this paper is to \(\mathfrak{q}\)-deform these notions using the representation theory of quantum groups. Let \(\mathcal{K}\) denote a ground field containing the quantum parameter \(\mathfrak{q}\), which we assume is not a root of unity (see Notation 2.1).
Given a reductive group \(G\) with Lie algebra \(\mathfrak{g}\), maximal torus \(H\) and corresponding Cartan subalgebra \(\mathfrak{h}\), we have the ring of quantum differential operators \(\mathcal{D}_{\mathfrak{q}}(G)\) and the category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules (for the quantum adjoint action) \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\), determined by a quantum moment map \(\operatorname{ad}:\mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{D}_{\mathfrak{q}}(G)\) constructed in [16], and further studied in [10], [1], [12]. The prototypical example of a strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module is the _universal Hotta-Kashiwara module_,
\[\operatorname{HK}=\mathcal{D}_{\mathfrak{q}}(G)/\mathcal{D}_{\mathfrak{q}}(G )C(\operatorname{ad}),\]
where \(C(\operatorname{ad})=\{\operatorname{ad}(\ell)-\epsilon(\ell)\mid\ell\in \mathcal{O}_{\mathfrak{q}}(G)\}\) and \(\epsilon:\mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{K}\) is the counit. As in the classical case, the endomorphism algebra of \(\operatorname{HK}\) is naturally identified with its \(G\)-invariants; this is known as the quantum multiplicative Hamiltonian reduction:
\[\mathcal{D}_{\mathfrak{q}}(G)/\!\!/G=\operatorname{End}(\operatorname{HK}) \cong\left(\operatorname{HK}\right)^{G}.\]
We have an embedding \(\mathcal{O}(H)^{W}\cong\mathcal{O}_{\mathfrak{q}}(G)^{G}\hookrightarrow \mathcal{D}_{\mathfrak{q}}(G)\). A strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module is called _admissible_ (or a _\(\mathfrak{q}\)-character sheaf_) if this subalgebra acts locally finitely. The primary examples of admissible modules are the Hotta-Kashiwara modules associated to a fixed character \(\chi\in H/W\):
\[\operatorname{HK}(\chi)=\mathcal{D}_{\mathfrak{q}}(G)\Bigg{/}\Big{(} \mathcal{D}_{\mathfrak{q}}(G)C(\operatorname{ad})+\sum_{y\in\mathcal{O}_{ \mathfrak{q}}(G)^{G}}\mathcal{D}_{\mathfrak{q}}(G)(y-\chi(y))\Big{)} \tag{1.1}\]
We will typically focus our attention on reductive groups of type \(A\), specifically \(G=\operatorname{SL}_{N}\) and \(G=\operatorname{GL}_{N}\). This is because our main technical tool is the Schur-Weyl functor \(F_{N}\) defined by the second author [10], which relates strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules with modules for a certain specialization of the double affine Hecke algebra (DAHA)
(see Definition 3.8). Our first main result computes the endomorphism algebra of HK in these cases.
**Theorem 1.1**.: _Let \(G=\operatorname{GL}_{N}\) or \(\operatorname{SL}_{N}\). We have a canonical isomorphism,_
\[\operatorname{End}(\operatorname{HK})\cong\mathrm{e}\cdot\mathbb{H}_{N}^{\mathrm{op}}\cdot\mathrm{e}\]
Here, \(\mathrm{e}\in\mathbb{H}_{N}\) is the sign idempotent. We refer to \(\mathrm{e}\cdot\mathbb{H}_{N}^{\mathrm{op}}\cdot\mathrm{e}\) as the antispherical DAHA. As a special case of the **shift isomorphism** (see Section 5.1), we can identify the right-hand side of Theorem 1.1 with the algebra \(\mathcal{D}_{\mathfrak{q}}(H)^{\mathfrak{S}_{N}}\) of \(\mathfrak{S}_{N}\)-invariant differential operators, and hence we interpret Theorem 1.1 as giving a quantum Levasseur-Stafford isomorphism:
\[\mathcal{D}_{\mathfrak{q}}(G)//G\cong\mathcal{D}_{\mathfrak{q}}(H)^{\mathfrak{ S}_{N}}. \tag{1.2}\]
Recall that a key result of Hotta and Kashiwara was an identification of \(\operatorname{HK}^{\mathrm{cl}}(0)\) with the pushforward of the constant sheaf along the Springer resolution, implying in particular an isomorphism between its endomorphism algebra and the group algebra of the Weyl group. The quantum analogue of \(\operatorname{HK}^{\mathrm{cl}}(0)\) is \(\operatorname{HK}(\epsilon)\), where \(\epsilon:\mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{K}\) denotes the counit character, corresponding classically to the identity element in \(G\). Our second main result may be regarded as a \(\mathfrak{q}\)-deformed Springer correspondence a la Hotta-Kashiwara.
**Theorem 1.2**.: _Let \(G=\operatorname{GL}_{N}\) or \(\operatorname{SL}_{N}\). We have a canonical isomorphism_
\[\operatorname{End}(\operatorname{HK}(\epsilon))\cong\mathcal{K}[\mathfrak{S}_ {N}]^{\operatorname{op}}.\]
**Theorem 1.3**.: _Consider the resulting decomposition_
\[\operatorname{HK}(\epsilon)=\bigoplus_{\lambda}M_{\lambda}\boxtimes S^{ \lambda},\]
_as \(\mathcal{D}_{\mathfrak{q}}(G)\)-\(\mathcal{K}[\mathfrak{S}_{N}]\) bimodules, where the direct sum is indexed by partitions \(\lambda\) of \(N\) and \(S^{\lambda}\) is the corresponding irreducible for \(\mathfrak{S}_{N}\). Then the factors \(M_{\lambda}\) appearing are distinct indecomposable unipotent \(\mathfrak{q}\)-character sheaves._
More generally, we compute the endomorphism algebra of the modules \(\operatorname{HK}(\chi)\), under a mild combinatorial assumption (see Section 5 for details) on the character \(\chi\). It similarly leads to a decomposition as bimodules.
**Theorem 1.4**.: _Let \(\chi\in H/W\) and pick a lift \(\bar{\chi}\in H\). Then \(\bar{\chi}\) has stabilizer \(\sigma W_{J}\sigma^{-1}\) for some \(\sigma\in\widehat{\mathfrak{S}}_{N}\), where \(W_{J}\subset W=\mathfrak{S}_{N}\) is a standard parabolic subgroup. Suppose that \(\chi\) is in transverse position, i.e., that \(W\cap\sigma W_{J}\sigma^{-1}=\{\operatorname{Id}\}\). Then we have an isomorphism,_
\[\operatorname{End}(\operatorname{HK}(\chi))\cong\mathcal{K}[W_{J}]^{op}.\]
_Consider the resulting decomposition_
\[\operatorname{HK}(\chi)=\bigoplus_{\underline{\lambda}}M_{\underline{\lambda}}\boxtimes S^{\underline{\lambda}},\]
as \(\mathcal{D}_{\mathfrak{q}}(G)\)-\(W_{J}\) bimodules, where the direct sum is indexed by multi-partitions \(\underline{\lambda}\) refining the composition corresponding to \(J\) and of total size \(N\), and \(S^{\underline{\lambda}}\) is the corresponding irreducible for \(W_{J}\). Then the factors \(M_{\underline{\lambda}}\) appearing are distinct indecomposable \(\mathfrak{q}\)-character sheaves._
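To illustrate the combinatorics (our example, not taken from the statement above): for \(N=3\) and \(W_{J}\cong\mathfrak{S}_{2}\times\mathfrak{S}_{1}\), corresponding to the composition \((2,1)\), the multi-partitions \(\underline{\lambda}\) of total size \(3\) refining \((2,1)\) are \(((2),(1))\) and \(((1,1),(1))\). Accordingly, for such a transverse \(\chi\), the module \(\operatorname{HK}(\chi)\) has exactly two summands \(M_{\underline{\lambda}}\), matching the two irreducible representations of \(W_{J}\).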
**Remark 1.5**.: Let us emphasize that the category \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\) is not semi-simple, and so the semi-simplicity of endomorphism algebras as in Theorems 1.2 and 1.4 is not automatic. In Example 6.19 we illustrate a \(\chi\) not in transverse position such that \(\operatorname{End}(\operatorname{HK}(\chi))\) is not semisimple.
### Results of [GJVY] pertaining to \(\mathfrak{q}\)-character theory
In our forthcoming paper [1], with Haiping Yang, we prove the following results, which we state here for context.1
Footnote 1: We have chosen to emphasize the applications to skein theory in [1] and the applications to \(\mathfrak{q}\)-character theory in the present paper.
**Theorem 1.6**.: _Let \(G=\mathrm{GL}_{N}\). The object HK is a (compact, projective) generator of \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\). In particular, there is an equivalence of categories_
\[\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}\simeq\mathrm{e}\cdot\mathbb{H}_ {N}\cdot\mathrm{e}\text{-mod}\simeq\mathcal{D}_{q}(H)^{W}\text{-mod}\]
In fact, we also prove that \(\mathrm{e}\cdot\mathbb{H}_{N}\) defines a Morita equivalence between \(\mathbb{H}_{N}\) and \(\mathrm{e}\cdot\mathbb{H}_{N}\cdot\mathrm{e}\), so one may replace \(\mathrm{e}\cdot\mathbb{H}_{N}\cdot\mathrm{e}\) by \(\mathbb{H}_{N}\) in Theorem 1.6. Using a similar, well-known Morita equivalence for the symmetric idempotent \(\mathrm{e}_{+}\), one may replace \(\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}\) by \(\mathcal{D}_{q}(H)\#\mathfrak{S}_{N}\). It is somewhat more involved to state the \(\mathrm{SL}_{N}\) analogue of these results. We will give a full statement in [1], and state here the case of \(\mathrm{SL}_{2}\).
**Theorem 1.7**.: _Let \(G=\mathrm{SL}_{2}\). There is an equivalence of categories_
\[\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}=\mathcal{D}_{q}(H)^{\mathfrak{S}_ {2}}\text{-mod}\oplus\mathrm{Vect}_{\mathcal{K}}^{\oplus 4}\]
**Remark 1.8**.: The four simple objects appearing above may be understood as cuspidal \(\mathfrak{q}\)-character sheaves (see the discussion in Section 1.5).
As a consequence of Theorem 1.6, we are able to show that the modules \(M_{\lambda}\) and \(M_{\underline{\lambda}}\) are irreducible, not only indecomposable, and that in the \(\mathrm{GL}_{N}\) case they exhaust all irreducible unipotent \(\mathfrak{q}\)-character sheaves.
### Characters of classical and quantum Harish-Chandra bimodules
Replacing \(\mathfrak{g}\) with \(G\), one may also consider the category \(\mathcal{D}(G)\)-mod\({}^{G}\) of strongly equivariant \(\mathcal{D}(G)\)-modules. This category may be identified with the categorical trace of the monoidal category of Harish-Chandra bimodules \(\mathrm{HC}(G)\), that is, \(U(\mathfrak{g})\)-bimodules with an action of \(G\) integrating the diagonal \(\mathfrak{g}\)-action (see [1] for a proof in the derived setting). Modules for \(\mathrm{HC}(G)\) are sometimes known as categorical \(G\)-representations (more formally, these are the weak invariants of a categorical \(G\)-representation, namely a module category for the categorical convolution algebra
\((\mathcal{D}(G),\ast)\); see [1] for further details). This leads to an interpretation of strongly equivariant \(\mathcal{D}(G)\)-modules as characters of categorical representations.
The formalism of factorization homology gives us a satisfying parallel to this classical story in the quantum setting. Whereas the category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules ("quantum character sheaves") is attached by factorization homology to the two-torus, the same theory attaches to the circle a monoidal category \(\operatorname{HC}_{\mathfrak{q}}(G)\) of _quantum Harish-Chandra_ bimodules (see Section 2.4 for discussion).
The fact that characters of quantum Harish-Chandra bimodules are indeed strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules is therefore simply a formal consequence of the fact that the two-torus is obtained from the circle by taking the Cartesian product with another circle, and that in topological field theory crossing with a circle produces the appropriate form of Hochschild homology, i.e. the universal receptacle for characters. For example, one may interpret the module \(\operatorname{HK}(\chi)\) as coming from the solid torus \(S^{1}\times D^{2}\), with a line defect along \(S^{1}\times\{0\}\) labeling the conjugacy class. This is precisely the character \(\mathcal{D}_{\mathfrak{q}}(G)\)-module of the object of \(\operatorname{HC}_{\mathfrak{q}}\) which is \(\mathcal{O}_{\mathfrak{q}}(G)\otimes_{\mathcal{O}_{\mathfrak{q}}(G)^{G}}\chi\), the quantization of the conjugacy class determined by \(\chi\).
This approach to quantum character theory strongly echoes the ideas of the classical character field theory papers [1] by Ben-Zvi and Nadler, and subsequently in their work [1] with the first author.
### Quantum Springer theory
Recall that the main objects of study in this paper, namely the category \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\) and \(\operatorname{HK}\), may be defined for any connected reductive group \(G\) (together with the additional data required to define the corresponding ribbon category of representations of the quantum group). Although our techniques in this paper are rooted in the type \(A\)-specific philosophy of Schur-Weyl duality, combining our results with the analogous results in the \(\mathcal{D}\)-module setting leads us to the following natural conjecture.
**Conjecture 1.9**.: _The isomorphism (1.2) and the results of Theorems 1.2 and 1.4 all hold for an arbitrary connected reductive group \(G\) (replacing the symmetric group \(\mathfrak{S}_{N}\) with the Weyl group \(W\) of \(G\))._
Proving these conjectures would require a deeper understanding of \(\mathfrak{q}\)-deformed Springer theory, as outside of type \(A\) we lack elliptic Schur-Weyl duality, hence the reformulation via intertwiners for the double affine Hecke algebra, which are our main tools in proving Theorem 1.2.
We believe that the correct approach to these conjectures is to develop a "quantum Springer theory", which is to say a theory of parabolic induction and restriction functors for categories of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules. The required functors should arise in the factorization homology framework by considering the domain wall given by \(\operatorname{Rep}_{\mathfrak{q}}B\), placed at the surface \(T^{2}\times\{\frac{1}{2}\}\) inside the \(3\)-manifold \(T^{2}\times I\). We plan to return to this construction in future work.
Given such a formalism, one can then define the notion of a _cuspidal_ \(\mathcal{D}_{\mathfrak{q}}(G)\)-module: one killed by parabolic restriction for every proper parabolic subgroup. The generalization of Theorems 1.6 and 1.7 for other groups \(G\) requires the following more general notion: a _\(\mathfrak{q}\)-cuspidal datum_ is defined to be a \(G\)-conjugacy class of pairs \((L,C)\), where \(L\) is an elliptic pseudo-Levi subgroup of \(G\) (in the sense of [12]) and \(C\) is a simple unipotent cuspidal \(\mathcal{D}_{\mathfrak{q}}(L)\)-module. For example, if \(L=G\), a \(\mathfrak{q}\)-cuspidal datum is the same thing as a unipotent simple cuspidal \(\mathcal{D}_{\mathfrak{q}}(G)\)-module. At the other extreme, for any \(G\), there is a canonical cuspidal datum \(Spr=(H,\mathcal{O}_{\mathfrak{q}}(H))\), where \(H\) is the maximal torus of \(G\), and \(\mathcal{O}_{\mathfrak{q}}(H)\) is the canonical module. The following statement is the \(\mathfrak{q}\)-analogue of Theorem A of [13].
**Conjecture 1.10**.: _For each connected reductive group \(G\), there is a finite block decomposition of the category \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\) indexed by the set of \(\mathfrak{q}\)-cuspidal data. Moreover, the block corresponding to a cuspidal datum \((L,C)\) is equivalent to the category of modules for the smash product of \(\mathcal{D}_{\mathfrak{q}}(Z(L)^{\circ})\) by a twisted group algebra of a certain finite group._
In particular, there is always at least one block in the above decomposition, namely the _Springer block_, corresponding to the Springer cuspidal datum. We additionally conjecture that this block is generated by the universal Hotta-Kashiwara module \(\mathrm{HK}\). According to Theorem 1.6, this is the only block in the case \(G=GL_{N}\). On the other hand, by Theorem 1.7, in the case \(G=SL_{2}\) there are five blocks: the Springer block, and four blocks corresponding to the four distinct simple cuspidal modules. More generally, we expect the cuspidal \(\mathcal{D}_{\mathfrak{q}}(SL_{N})\)-modules to be those whose central character \(\overline{n}\in\mathbb{Z}/N\mathbb{Z}=Z(SL_{N})^{\vee}\) satisfies \(\gcd(n,N)=1\). Applying the Schur-Weyl functor \(F_{N}\) to such modules produces finite-dimensional modules for the corresponding DAHA, in parallel with the results of [1] in the rational case.
Finally, we note that the category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules is the category appearing on the \(B\)-side in the quantum Betti geometric Langlands conjectures for a \(2\)-torus [15]. One may also wonder how this category is related to the corresponding category in the more traditional de Rham quantum geometric Langlands program. In forthcoming work of the first author, an analogous generalized Springer decomposition is proved for generically twisted \(\mathcal{D}\)-modules on the moduli stack \(\mathrm{Bun}_{G}(E)\) of \(G\)-bundles on an elliptic curve \(E\) (see [15] and [12] for some context). In particular, there is an analogous set \(\mathrm{Cusp}_{\mathrm{ell}}(G)\) of _elliptic cuspidal data_ indexing the blocks. Although the categorical structure of the individual blocks is quite different in the elliptic and the \(\mathfrak{q}\)-cases, one still expects the following comparison.
**Conjecture 1.11**.: _There is a natural bijection between the set of simple cuspidal \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules and simple cuspidal generically twisted \(\mathcal{D}\)-modules on \(\mathrm{Bun}_{G}^{ss,0}(E)\)._
Note that the cuspidal \(\mathcal{D}(\mathfrak{g})\)-modules have been explicitly determined for each simply connected quasi-simple group \(G\) (see the table on the second page of [16]). As part of ongoing work of the first author with Penghui Li and Dragos Fratila, we will describe how the set of \(\mathfrak{q}\)-cuspidal data (or equivalently, elliptic cuspidal data) for any given group
\(G\) can be explicitly determined from Lusztig's table, giving a complete description of the category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules.
### Skeins dictionary
Theorem 1.1 and its corollaries are of independent interest in the study of skein algebras. The works [1, 1, 10, 11] establish, for any group \(G\), a canonical equivalence of categories,
\[\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}\simeq\text{SkCat}_{G}(T^{2})\text{-mod},\]
between the category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules and the free co-completion \(\text{SkCat}_{G}(T^{2})\)-mod of the \(G\)-skein category of the two-torus.
Under this equivalence, HK maps to the empty skein object: indeed, the presentation we gave for HK is precisely the presentation of the distinguished object in factorization homology via quantum Hamiltonian reduction given in [1], which is shown in [1] to coincide with the empty skein object. We immediately obtain an isomorphism between the _skein algebra_\(\text{SkAlg}(T^{2})\) of the torus \(T^{2}\) - i.e. the endomorphism algebra in the skein category of the empty object - and the endomorphism algebra \(\text{End}(\text{HK})\) within strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules. To summarize:
**Corollary 1.12**.: _We have isomorphisms,_
\[\text{SkAlg}_{G}(T^{2})\cong\text{End}(\text{HK})\cong\mathrm{e}\cdot\mathbb{H}_{N}^{\mathrm{op}}\cdot\mathrm{e}\cong(\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}})^{\mathrm{op}}.\]
Recall that the case \(G=\text{SL}_{2}\) of this isomorphism is a fundamental computation in skein theory due to Frohman and Gelca [12]. Hence, we may regard Corollary 1.12 as a generalization of Frohman and Gelca's result from \(G=\text{SL}_{2}\) to the groups \(G=\text{SL}_{N},\text{GL}_{N}\). As justification for the categorical approach taken in this paper, we note that a completely elementary/computational proof of this corollary appears intractable due to the well-known combinatorial complexity of a generators-and-relations presentation of \(\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}\). In the sequel paper [1], we extend the isomorphism to all of \(\mathbb{H}_{N}\), and apply this isomorphism to compute dimensions of \(\text{GL}_{N}\)- and \(\text{SL}_{N}\)-skein modules of the three-torus \(T^{3}\).
In this formulation, Corollary 1.12 may also be compared to a theorem of Morton and Samuelson [13], where analogous descriptions are obtained relating the Homflypt skein algebra to the \(q=t\) specialization of the elliptic Hall algebra.
It should be possible to recover Morton and Samuelson's result from ours, by simultaneously regarding the Homflypt skein as a "limit" (polynomial interpolation) of the categories \(\text{Rep}_{\mathfrak{q}}(\text{GL}_{N})\), and likewise the elliptic Hall algebra as an inverse limit of spherical double affine Hecke algebras. It does not seem possible to reverse this process to obtain our result as a consequence of theirs, and indeed understanding the relation at finite rank rather than in the limit was a primary motivation for the present paper.
In the forthcoming paper [1], we prove a stronger result (using the Morita equivalence discussed in Theorem 1.6), which identifies certain relative skein algebras with (full rather than anti-spherical) double affine Hecke algebras. That result bears an analogous relation to recent work [1], where the double affine Hecke algebra was realised as a relative skein algebra for the Homflypt skein relations.
### Structure of the paper
The paper is laid out as follows. In Section 2 we give the basic definitions relating to quantum groups and \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules. Many of these are recollections, but some are new definitions. In Section 3 we recall the basic definitions for double affine Hecke algebras attached to GL and SL. In Section 4 we prove Theorem 1.1, by showing that a quantum spherical Schur-Weyl duality homomorphism constructed there is an isomorphism. In Section 5, we apply the well-known shift isomorphism to compute endomorphism algebras of quantum Hotta-Kashiwara modules \(\operatorname{HK}(\epsilon)\) and \(\operatorname{HK}(\chi)\), proving Theorems 1.2, 1.3, and 1.4. The paper ends with Section 6, which can be read independently of the other results in the paper. It proves, via the method of intertwiners, a statement purely about certain representations of double affine Hecke algebras which, taken together with Theorem 1.6, provides an alternative proof of Theorems 1.2, 1.3, and 1.4. We include it in the present paper rather than the forthcoming paper as it is closer in spirit to the results obtained here, despite the fact that its application depends on the results from [10].
**Remark 1.13**.: On the eve of posting this paper, we received a beautiful pre-print [23] from Joshua Wen, which establishes a quantum Harish-Chandra isomorphism similar to our Theorem 1.1 for quantum multiplicative quiver varieties using the radial parts construction.
### Acknowledgments and dedication
We have benefited during this work from insightful conversations with David Ben-Zvi, Pavel Safronov, Peter Samuelson, and Jose Simental. The main idea of this paper was suggested to us by Tom Nevins, who outlined to the second author in 2012 the idea for a quantum Springer theory approached via quantum Hotta-Kashiwara modules. Tom was a generous and inspiring mathematician, and a kind soul; we dedicate this paper in honour of his memory.
The first author was partially supported by NSF grant DMS-2202363. The second author was partially supported by ERC Starting Grant no. 637618, and by the Simons Foundation award 888988 as part of the Simons Collaboration on Global Categorical Symmetry. The third author was partially supported by Simons Foundation Collaboration Grants 319233 and 707426. The authors are grateful to the International Centre for Mathematical Sciences Research in Groups Programme, and to the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, which hosted research visits during which parts of this work were undertaken.
## 2. Quantum groups, their differential operators and Hotta-Kashiwara modules
In this section we recall basic definitions about quantum groups, braided quantum coordinate algebras, and quantum differential operators. Finally, we recall the notion of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules, and we give the construction of the Hotta-Kashiwara modules \(\operatorname{HK}(\chi)\).
**Notation 2.1**.: Fix a positive integer \(N\). In this section, we let \(\mathcal{R}\) denote a \(\mathbb{Q}\)-algebra that is an integral domain together with an element \(\mathfrak{q}^{\frac{1}{N}}\in\mathcal{R}^{\times}\) such that \(\frac{\mathfrak{q}^{m}-1}{\mathfrak{q}-1}\) is invertible for all \(m\geq 1\), and let \(\mathcal{K}\) denote its field of fractions. Our main example for \(\mathcal{R}\) is the local ring \(\mathcal{R}=\mathbb{Q}[\mathfrak{q}^{\frac{1}{N}}]_{(\mathfrak{q}-1)}\) at \(\mathfrak{q}=1\), in which case \(\mathcal{K}=\mathbb{Q}(\mathfrak{q}^{\frac{1}{N}})\). Unless otherwise specified, everything in this section will be defined over the base ring \(\mathcal{R}\).
**Remark 2.2**.: To straddle the various conventions in the literature, we have had to introduce different fonts for our parameters: (teletype) \(\mathfrak{q}\) for quantum groups of Section 2, (Roman) \(q\) (respectively bold Roman \(\boldsymbol{q}\)) for the loop parameter of the DAHA of type GL in Section 3.1 (resp. type SL in Section 3.2). As we shall see below, the quadratic parameter in the Hecke algebras is usually denoted \(t\), and for Schur-Weyl considerations we later specialize \(t=\mathfrak{q}\). All of the parameters come together in Section 4.
**Notation 2.3**.: We will have occasion to use the quantum integers denoted \([k]_{r}\) for \(k\in\mathbb{Z}\) and for various elements \(r\in\mathcal{R}\). We take the (unbalanced) convention \([k]_{r}=\frac{r^{k}-1}{r-1}\) and \([n]_{r}!=[n]_{r}\cdots[2]_{r}[1]_{r}\). As usual \(\genfrac{[}{]}{0.0pt}{}{n}{k}_{r}=\frac{[n]_{r}[n-1]_{r}\cdots[n-k+1]_{r}}{[k ]_{r}\ [k-1]_{r}\ \cdots\ [1]_{r}}\).
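For instance, with this unbalanced convention one computes directly from the definitions that
\[[2]_{r}=r+1,\qquad[3]_{r}=r^{2}+r+1,\qquad\genfrac{[}{]}{0.0pt}{}{3}{2}_{r}=\frac{[3]_{r}[2]_{r}}{[2]_{r}[1]_{r}}=[3]_{r},\]
so each quantum binomial is a polynomial in \(r\) specializing to the ordinary binomial coefficient at \(r=1\).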
### The quantum groups \(U_{\mathfrak{q}}(\mathfrak{gl}_{N}),U_{\mathfrak{q}}(\mathfrak{sl}_{N})\)
We may work over Lusztig's integral form of the quantum group over \(\mathcal{R}\), but as \([n]_{\mathfrak{q}}\) is a unit in \(\mathcal{R}\), there is no appreciable difference working over this integral form as compared to other conventions. We refer to [10] for detailed definitions, in particular the Serre presentation of the quantum groups \(U_{\mathfrak{q}}(\mathfrak{gl}_{N})\) and \(U_{\mathfrak{q}}(\mathfrak{sl}_{N})\), the formulas for \(R\)-matrices, and the Peter-Weyl theorem.
We will consider the \(\mathcal{R}\)-linear braided tensor categories \(\operatorname{Rep}_{\mathfrak{q}}(\operatorname{GL}_{N})\) and \(\operatorname{Rep}_{\mathfrak{q}}(\operatorname{SL}_{N})\) of integrable \(U_{\mathfrak{q}}(\mathfrak{gl}_{N})\)-modules (resp. \(U_{\mathfrak{q}}(\mathfrak{sl}_{N})\)-modules). Note that our hypotheses on the ring \(\mathcal{R}\) mean that the Weyl modules \(V(\lambda)\) form a collection of compact projective generators as \(\lambda\) ranges over the set of integral dominant weights. There are no nontrivial homomorphisms or extensions between different Weyl modules. Thus every object of \(\operatorname{Rep}_{\mathfrak{q}}(G)\) may be written as a direct sum
\[V\cong\bigoplus_{\lambda}V(\lambda)\otimes_{\mathcal{R}}M_{\lambda} \tag{2.1}\]
where \(M_{\lambda}\) is an \(\mathcal{R}\)-module. The braiding is specified by an \(R\)-matrix over the ring \(\mathcal{R}\).
### The vector and fundamental representations
The vector representation \(\mathcal{R}^{N}\) for either \(U_{\mathfrak{q}}(\mathfrak{gl}_{N})\) and \(U_{\mathfrak{q}}(\mathfrak{sl}_{N})\) will be simply denoted \(V\). We fix \(v_{1},\ldots,v_{N}\) to be the standard basis for \(V\).
Let \(\tau:V\otimes V\to V\otimes V\) denote the tensor flip, \(\tau(v\otimes w)=w\otimes v\). The braiding, \(\sigma_{V,V}=\tau\circ R\), where \(R=R_{V,V}\) denotes the \(R\)-matrix for \(U_{\mathfrak{q}}(\mathfrak{gl}_{N})\), satisfies a Hecke relation
\[(\sigma_{V,V}-\mathfrak{q})(\sigma_{V,V}+\mathfrak{q}^{-1})=0.\]
The \(\operatorname{SL}_{N}\) braiding on the vector representation is equal to the \(\operatorname{GL}_{N}\) braiding, multiplied by a factor of \(\mathfrak{q}^{-\frac{1}{N}}\). To avoid confusion, we will reserve the notation \(R\) and \(\sigma_{V,V}\) for the
\(\mathrm{GL}_{N}\)\(R\)-matrix and braiding, and write \(\mathsf{q}^{-\frac{1}{N}}R\) and \(\mathsf{q}^{-\frac{1}{N}}\sigma_{V,V}\) to reference the \(\mathrm{SL}_{N}\)\(R\)-matrix and braiding.
The operators \(T_{i}=\mathrm{Id}_{V^{\otimes(i-1)}}\otimes\sigma_{V,V}\otimes\mathrm{Id}_{V^{\otimes(n-i-1)}}\) (in both the \(\mathrm{GL}_{N}\) and \(\mathrm{SL}_{N}\) cases) define an action of the finite Hecke algebra \(\mathrm{H}_{n}^{\mathrm{fin}}\) on \(V^{\otimes n}\) with quadratic parameter \(\mathsf{q}\). We denote by \(\Lambda^{k}V:=\bigwedge_{\mathsf{q}}^{k}(V)\) the \(k\)th fundamental representation of \(U_{\mathsf{q}}(\mathfrak{gl}_{N})\). In particular \(\Lambda^{N}V\) is the one-dimensional determinant representation. We can embed \(\Lambda^{N}V\) into \(V^{\otimes N}\) as the span of
\[z=\mathrm{e}_{N}\cdot v_{1}\otimes v_{2}\otimes\cdots\otimes v_{N}=\frac{1}{[ N]_{\mathsf{q}^{-2}}!}\sum_{\sigma\in\mathfrak{S}_{N}}(-\mathsf{q}^{-1})^{ \ell en(\sigma)}v_{\sigma(1)}\otimes v_{\sigma(2)}\otimes\cdots\otimes v_{ \sigma(N)}. \tag{2.2}\]
In the case of \(\mathfrak{sl}_{N}\) the vector \(z\) defined by (2.2) is now an invariant vector. Note, as in (3.1) below with \(t=\mathsf{q}\), that \(\mathrm{e}_{N}\) is the sign idempotent in \(\mathrm{H}_{N}^{\mathrm{fin}}\).
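For example, when \(N=2\), formula (2.2) reads
\[z=\frac{1}{1+\mathsf{q}^{-2}}\left(v_{1}\otimes v_{2}-\mathsf{q}^{-1}\,v_{2}\otimes v_{1}\right),\]
since \([2]_{\mathsf{q}^{-2}}!=1+\mathsf{q}^{-2}\); at \(\mathsf{q}=1\) this is the classical antisymmetrization of \(v_{1}\otimes v_{2}\).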
### The quantum coordinate algebra
**Definition 2.4**.: The reflection equation algebra of type \(\mathrm{GL}_{N}\), denoted \(\mathcal{O}_{\mathsf{q}}(\mathrm{Mat}_{\mathrm{N}})\), is the \(\mathcal{R}\)-algebra generated by symbols \(\ell_{j}^{i}\), for \(i,j=1,\ldots N\), subject to the relations,
\[R_{21}L_{1}R_{12}L_{2}=L_{2}R_{21}L_{1}R_{12},\]
where \(L:=\sum_{i,j}\ell_{j}^{i}E_{i}^{j}\) is a matrix with entries the generators \(\ell_{j}^{i}\), and for a matrix \(X\), we write \(X_{1}=X\otimes\mathrm{Id}_{V}\), \(X_{2}=\mathrm{Id}_{V}\otimes X\), so that the matrix equation above is equivalent to the list of relations, for \(i,j,n,r\in\{1,\ldots,N\}\):
\[\sum_{k,l,m,p}R_{kl}^{ij}\ell_{m}^{l}R_{np}^{mk}\ell_{r}^{p}=\sum_{s,t,u,v}\ell _{s}^{i}R_{tu}^{sj}\ell_{v}^{u}R_{nr}^{vs}. \tag{2.3}\]
**Remark 2.5**.: We note that, since \(R\) appears quadratically on both sides of the defining relation, the relations are unchanged by replacing \(R\leadsto\mathsf{q}^{-\frac{1}{N}}R\).
**Proposition 2.6** ([15]).: _The element,_
\[\det_{\mathsf{q}}(L):=\sum_{\sigma\in\mathfrak{S}_{N}}(-\mathsf{q})^{\ell en( \sigma)}\cdot\mathsf{q}^{e(\sigma)}\ell_{\sigma(1)}^{1}\cdots\ell_{\sigma(N)} ^{N},\]
_is central in \(\mathcal{O}_{\mathsf{q}}(\mathrm{Mat}_{\mathrm{N}})\)._
Here \(\ell en(\sigma)\) denotes the length, i.e. the number of pairs \(i<j\) such that \(\sigma(i)>\sigma(j)\), and \(e(\sigma)\) denotes the excedence, i.e. the number of elements \(i\) such that \(\sigma(i)>i\).
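For example, for \(N=2\) the identity permutation has \((\ell en,e)=(0,0)\) and the transposition has \((\ell en,e)=(1,1)\), so the definition gives
\[\det\nolimits_{\mathsf{q}}(L)=\ell_{1}^{1}\ell_{2}^{2}-\mathsf{q}^{2}\,\ell_{2}^{1}\ell_{1}^{2};\]
at \(\mathsf{q}=1\) the weight \((-\mathsf{q})^{\ell en(\sigma)}\mathsf{q}^{e(\sigma)}\) becomes \(\operatorname{sgn}(\sigma)\) and we recover the classical determinant.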
**Definition 2.7**.: The quantum coordinate algebras \(\mathcal{O}_{\mathsf{q}}(\mathrm{GL}_{N})\) and \(\mathcal{O}_{\mathsf{q}}(\mathrm{SL}_{N})\) are the algebras obtained from \(\mathcal{O}_{\mathsf{q}}(\mathrm{Mat}_{\mathrm{N}})\), by inverting, respectively specializing to one, the central element \(\det_{\mathsf{q}}(L)\). That is,
\[\mathcal{O}_{\mathsf{q}}(\mathrm{GL}_{N})=\mathcal{O}_{\mathsf{q}}(\mathrm{ Mat}_{\mathrm{N}})[\det_{\mathsf{q}}(L)^{-1}],\qquad\mathcal{O}_{\mathsf{q}}( \mathrm{SL}_{N})=\mathcal{O}_{\mathsf{q}}(\mathrm{Mat}_{\mathrm{N}})/\langle \det_{\mathsf{q}}(L)-1\rangle.\]
The \(\mathcal{R}\)-algebra \(\mathcal{O}_{\mathfrak{q}}(G)\) is naturally an algebra object in the tensor category \(\operatorname{Rep}_{\mathfrak{q}}(G)\), in other words it is a locally finite \(U_{\mathfrak{q}}(\mathfrak{g})\)-module algebra. We recall the quantum Harish-Chandra isomorphism,
\[\mathcal{O}_{\mathfrak{q}}(G)^{G}\cong\mathcal{R}[H]^{W},\]
which holds for generic \(\mathfrak{q}\), and identifies the algebra \(\mathcal{O}_{\mathfrak{q}}(G)^{G}\) of \(U_{\mathfrak{q}}(\mathfrak{g})\)-invariant elements of \(\mathcal{O}_{\mathfrak{q}}(G)\) with the \(W\)-invariant functions on the maximal torus \(H\).
**Notation 2.8**.: We denote by \(c_{1},\dots,c_{N}\) the canonical generators - "quantum Casimirs" - which are obtained as the quantum trace of the \(k\)th fundamental representation \(\Lambda^{k}V\) (see (4.7) below for a depiction). In particular, we have \(c_{N}=\det_{\mathfrak{q}}(L)\). Working over \(\mathcal{K}\) the elements \(c_{k}\) generate the center of \(\mathcal{O}_{\mathfrak{q}}(G)\).
### The quantum Harish-Chandra category \(\operatorname{HC}_{\mathfrak{q}}(G)\)
The discussion in this subsection is not strictly necessary to establish the results in this paper, but is meant to aid the reader in interpreting those results.
Let \(A\) be some algebra object in the category \(\operatorname{Rep}_{\mathfrak{q}}(G)\). Recall that this means \(A\) is an object of \(\operatorname{Rep}_{\mathfrak{q}}(G)\), with an associative product \(m:A\otimes A\to A\) internal to \(\operatorname{Rep}_{\mathfrak{q}}(G)\), i.e. \(m\) is a homomorphism of \(U_{\mathfrak{q}}(\mathfrak{g})\)-modules.
**Definition 2.9**.: The category of **weakly equivariant** (left) \(A\)-modules consists of objects \(M\in\operatorname{Rep}_{\mathfrak{q}}(G)\), equipped with an associative action map \(A\otimes M\to M\) internal to \(\operatorname{Rep}_{\mathfrak{q}}(G)\).
The quantum Harish-Chandra category \(\operatorname{HC}_{\mathfrak{q}}(G)\) is the category of \(\mathcal{O}_{\mathfrak{q}}(G)\)-modules internal to the category \(\operatorname{Rep}_{\mathfrak{q}}G\). It carries the cp-rigid monoidal structure of relative tensor product. See [1, 10] for further details.
**Remark 2.10**.: Specializing the category \(\operatorname{HC}_{\mathfrak{q}}(G)\) at \(\mathfrak{q}=1\) (that is, working over the ring \(\kappa=\mathcal{R}/(\mathfrak{q}-1)\)), we obtain the category \(\operatorname{QCoh}([G/G])\) of quasi-coherent sheaves on the quotient stack \([G/G]\). On the other hand, as explained in [10], the quasi-classical degeneration \(\mathfrak{q}\to 1\) of \(\operatorname{HC}_{\mathfrak{q}}(G)\) becomes the classical category of Harish-Chandra \(U(\mathfrak{g})\)-bimodules.
### Quantum differential operators
The algebra of quantum differential operators on \(G\), which we denote by \(\mathcal{D}_{\mathfrak{q}}(G)\), was studied in many different settings. The presentation below as a twisted tensor product is adapted from the paper [13] (see also [1]), and hence matches the conventions of [11] (see footnote 3 there, however) and [12].
**Definition 2.11**.: For \(G=\operatorname{GL}_{N}\), or \(\operatorname{SL}_{N}\), the algebra \(\mathcal{D}_{\mathfrak{q}}(G)\) is the twisted tensor product,
\[\mathcal{D}_{\mathfrak{q}}(G)=\mathcal{O}_{\mathfrak{q}}(G)\stackrel{{ \sim}}{{\otimes}}\mathcal{O}_{\mathfrak{q}}(G). \tag{2.4}\]
Denoting \(a_{j}^{i}\) and \(b_{j}^{i}\) to be the generators of the first and second factors (henceforth denoted \(\mathcal{A}\) and \(\mathcal{B}\)), the cross relations are given in matrix form by:
\[B_{2}R_{21}A_{1}=R_{21}A_{1}R_{12}B_{2}R_{21},\qquad\text{if }G=\operatorname{GL}_{N}, \tag{2.5}\]
\[B_{2}R_{21}A_{1}=R_{21}A_{1}R_{12}B_{2}R_{21}\mathfrak{q}^{-2/N},\qquad\text{if }G=\operatorname{SL}_{N}, \tag{2.6}\]
where \(A=\sum_{i,j}a_{j}^{i}E_{i}^{j}\) and \(B=\sum_{i,j}b_{j}^{i}E_{i}^{j}\).
**Remark 2.12**.: The matrices \(A\) and \(B\) here are defined using the defining representation \(V\) of \(U_{\mathfrak{q}}(\mathfrak{g})\), and its dual \(V^{*}\). We note for later reference that in fact for any representation \(X\) we can define a matrix \(A_{X}\in\mathcal{O}_{\mathfrak{q}}(G)\otimes\operatorname{End}_{\mathcal{R} }(X)\), which will satisfy the relations of (2.3), with \(R\) replaced by \(R_{X,X}\). Similarly, in the algebra \(\mathcal{D}_{\mathfrak{q}}(G)\) we can define matrices \(A_{X}\) and \(B_{Y}\) for any representations \(X\) and \(Y\), and their cross relations will be as in (2.5), with \(R\) replaced by \(R_{X,Y}\). We refer to [15] for details about these matrices and the resulting diagrammatic calculus for computing with \(\mathcal{D}_{\mathfrak{q}}(G)\).
**Notation 2.13**.: We denote by \(i_{\mathcal{A}}\) and \(i_{\mathcal{B}}\) the inclusions into the first and second tensor factor of (2.4). We denote by \(i_{\widetilde{\mathcal{B}}}\) the inclusion of \(\mathcal{O}_{\mathfrak{q}}(G)^{G}\) into \(\mathcal{O}_{\mathfrak{q}}(G)\) followed by \(i_{\mathcal{B}}\), whose image is denoted \(\mathcal{B}^{G}\).
Observe that \(i_{\mathcal{A}}\otimes i_{\mathcal{B}}:\mathcal{O}_{\mathfrak{q}}(G)\otimes \mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{D}_{\mathfrak{q}}(G)\) is the tautological isomorphism of \(U_{\mathfrak{q}}(\mathfrak{g})\)-modules (however it is not an algebra homomorphism).
**Definition 2.14**.: The quantum moment map,
\[\operatorname{ad}:\mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{D}_{\mathfrak{q}} (G)\]
is defined in matrix form by \(\operatorname{ad}(L)=B^{-1}ABA^{-1}\).
We have the Rosso homomorphism \(\mathfrak{R}:\mathcal{O}_{\mathfrak{q}}(G)\to U_{\mathfrak{q}}(\mathfrak{g})\). Let \(\rhd\) denote the quantum adjoint action of \(U_{\mathfrak{q}}(\mathfrak{g})\) on \(\mathcal{D}_{\mathfrak{q}}(G)\). Then the homomorphism \(\operatorname{ad}\) satisfies the quantum moment map equation,
\[\operatorname{ad}(\ell)y=(\mathfrak{R}(\ell_{(1)})\rhd y)\operatorname{ad}(\ell_{(2)}),\qquad\text{for }\ell\in\mathcal{O}_{\mathfrak{q}}(G),\,y\in \mathcal{D}_{\mathfrak{q}}(G). \tag{2.7}\]
We refer to [16] for a more in-depth discussion of quantum moment maps.
**Remark 2.15**.: If we work over \(\mathcal{K}\), the Rosso homomorphism \(\mathfrak{R}\) is an algebra embedding, and pulling back along \(\mathfrak{R}\) defines a fully faithful functor \(\operatorname{Rep}_{\mathfrak{q}}(G)\to\mathcal{O}_{\mathfrak{q}}(G)\text{-mod}\).
**Notation 2.16**.: We require notation for the following \(\mathcal{R}\)-submodules of \(\mathcal{D}_{\mathfrak{q}}(G)\):
\[C(\operatorname{ad}) :=\{\operatorname{ad}(\ell)-\epsilon(\ell)\ |\ \ell\in\mathcal{O}_{ \mathfrak{q}}(G)\},\] \[C(i_{\mathcal{A}}) :=\{i_{\mathcal{A}}(\ell)-\widetilde{\epsilon}(\ell)\ |\ \ell\in \mathcal{O}_{\mathfrak{q}}(G)\},\] \[C(\chi) :=\{i_{\mathcal{B}}(y)-\chi(y)\ |\ y\in\mathcal{O}_{\mathfrak{q}}(G)^{G}\}\]
where \(\chi\) is an algebra homomorphism from \(\mathcal{O}_{\mathfrak{q}}(G)^{G}\) to \(\mathcal{R}\), i.e., a character.
Here, we denote by \(\widetilde{\epsilon}\) the linear morphism \(\mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{R}\) determined on matrix coefficients \(W^{*}\otimes W\) for a Weyl module \(W\) by the formula,
\[W^{*}\otimes W\xrightarrow{D\otimes 1}W^{*}\otimes W\xrightarrow{\epsilon} \mathcal{R},\]
where \(D\) denotes the natural transformation of \(\operatorname{Id}_{\operatorname{Rep}_{\mathfrak{q}}(G)}\) depicted on the left below, so that \(\tilde{\epsilon}\) is depicted on the right: [string diagrams for \(D\) and \(\widetilde{\epsilon}\) omitted]
**Proposition 2.17**.: _We have a containment of right ideals,_
\[C(\operatorname{ad})\cdot\mathcal{D}_{\mathfrak{q}}(G)\subset C(i_{\mathcal{A} })\cdot\mathcal{D}_{\mathfrak{q}}(G).\]
_Equivalently the surjective homomorphism of right \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules,_
\[\mathcal{D}_{\mathfrak{q}}(G)\to(C(i_{\mathcal{A}})\cdot\mathcal{D}_{ \mathfrak{q}}(G))\backslash\mathcal{D}_{\mathfrak{q}}(G),\]
_factors canonically to a surjective homomorphism,_
\[(C(\operatorname{ad})\cdot\mathcal{D}_{\mathfrak{q}}(G))\backslash\mathcal{D }_{\mathfrak{q}}(G)\to(C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G)) \backslash\mathcal{D}_{\mathfrak{q}}(G),\]
_making the following diagram commute:_
\[\text{[commutative diagram omitted]}\tag{2.8}\]
Proof.: It will be convenient to use the matrix notation from Definition 2.14. Since the matrices \(A\) and \(B\) are invertible (with inverse given by the antipode), we may rewrite \(C(\operatorname{ad})\cdot\mathcal{D}_{\mathfrak{q}}(G)=C^{\prime}( \operatorname{ad})\cdot\mathcal{D}_{\mathfrak{q}}(G)\), where
\[C^{\prime}(\operatorname{ad})=\{\text{matrix coefficients of the matrix }B^{-1}A-AB^{-1}\}.\]
It will suffice to show that \(C^{\prime}(\operatorname{ad})\) is already zero in \((C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G))\backslash\mathcal{D}_{ \mathfrak{q}}(G)\). Clearly the matrix \(AB^{-1}\) reduces to \(B^{-1}\) modulo \(C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G)\). In order to evaluate \(B^{-1}A\), we must first PBW order it so that the \(A\)'s are to the left, and then apply the coinvariant relation to the \(\mathcal{A}\) factor. We compute this in Figure 1, showing that \(B^{-1}A\) also reduces modulo \(C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G)\) to \(B^{-1}\), hence finally that \(C^{\prime}(\operatorname{ad})\) is zero in \((C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G))\backslash\mathcal{D}_{ \mathfrak{q}}(G)\), as required.
**Notation 2.18**.: We let \(\mathcal{D}_{\mathfrak{q}}(G)^{G}\) (resp. \(\mathcal{A}^{G}\), \(\mathcal{B}^{G}\)) denote the invariant subalgebra of \(\mathcal{D}_{\mathfrak{q}}(G)\) (resp. \(\mathcal{A}\), \(\mathcal{B}\)) with respect to the quantum adjoint action \(\rhd\).
**Definition 2.19**.: A weakly equivariant left (respectively, right) \(\mathcal{D}_{\mathfrak{q}}(G)\)-module \(M\) is **strongly equivariant** if the equation
\[\operatorname{ad}(\ell)\cdot m=\mathfrak{R}(\ell)\cdot m,\qquad\text{respectively}\qquad m\cdot\operatorname{ad}(\ell)=S(\mathfrak{R}(\ell))\cdot m\]
holds for all \(\ell\in\mathcal{O}_{\mathfrak{q}}(G)\) and \(m\in M\). The category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules is denoted \(\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}\).
If we work over a base field \(\mathcal{K}\) (so that we treat the quantum parameter \(\mathfrak{q}\) as generic), then we have the following more elementary formulation of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules.
**Definition 2.20**.: Let \(A\) be a \(\mathcal{K}\)-algebra and \(M\) an \(A\)-module. We say \(M\) is **locally finite** if, for all \(m\in M\), we have \(\dim_{\mathcal{K}}(A\cdot m)<\infty\).
**Proposition 2.21**.: _Suppose that the base ring is the field \(\mathcal{K}\). Then the category of strongly equivariant left (respectively, right) \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules is equivalent to the full subcategory of left (respectively, right) \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules whose restriction to \(\mathcal{O}_{\mathfrak{q}}(G)\) via \(\operatorname{ad}\) is locally finite._
**Proposition 2.22**.: _Suppose that the base ring is \(\mathcal{K}\), that \(M\) (respectively \(N\)) is a strongly equivariant left (resp. right) \(\mathcal{D}_{\mathfrak{q}}(G)\)-module. Then we have isomorphisms of vector spaces,_
\[C(\operatorname{ad})\cdot M\Big{\backslash}M\,\cong M_{G}\cong M^{G}\quad\text{ and}\quad N\Big{/}N\cdot\,C(\operatorname{ad})\,\cong N_{G}\cong N^{G},\]
_where \(M_{G}\) and \(M^{G}\) (resp. \(N_{G}\) and \(N^{G}\)) denote the coinvariants and invariants._
### Quantum Hotta-Kashiwara modules
**Notation 2.23**.: Given a character \(\beta\) of an algebra, we write \(\mathfrak{r}(\beta)\) for the corresponding \(1\)-dimensional module.
Given algebras \(R\) and \(S\), an algebra homomorphism \(\phi:R\to S\), a right \(S\)-module \(N\), and a left \(R\)-module \(M\), we will write \(N\underset{\phi}{\otimes}M\) as a shorthand for the relative tensor product \(\phi^{*}(N)\underset{R}{\otimes}M\). In the case that \(N\) is an \(S\)-\(S\)-bimodule, \(N\underset{\phi}{\otimes}M\) inherits the structure of a left \(S\)-module.
**Definition 2.24**.: Let HK denote the **universal quantum Hotta-Kashiwara module**,
\[\operatorname{HK}=\mathcal{D}_{\mathfrak{q}}(G)\underset{\operatorname{ad}}{ \otimes}\mathfrak{r}(\epsilon),\]
where \(\epsilon\) denotes the counit on \(\mathcal{O}_{\mathfrak{q}}(G)\), i.e. \(\epsilon(\ell^{i}_{j})=\delta_{i,j}\).
**Remark 2.25**.: Alternatively, HK is the quotient of \(\mathcal{D}_{\mathfrak{q}}(G)\) by the left ideal \(J\) generated by elements of \(C(\operatorname{ad})\). Since \(\mathcal{D}_{\mathfrak{q}}(G)^{G}\) commutes with \(\operatorname{ad}(\ell)-\epsilon(\ell)\) for any \(\ell\in\mathcal{O}_{\mathfrak{q}}(G)\), it follows that the regular \(\mathcal{D}_{\mathfrak{q}}(G)\)-\(\mathcal{D}_{\mathfrak{q}}(G)^{G}\)-bimodule action on \(\mathcal{D}_{\mathfrak{q}}(G)\) descends through the quotient, making HK a \(\mathcal{D}_{\mathfrak{q}}(G)\)-\(\mathcal{D}_{\mathfrak{q}}(G)^{G}\)-bimodule.
**Proposition 2.26**.: _Let \(M\) be a strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module. Then_
\[\operatorname{Hom}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}}( \operatorname{HK},M)\cong\operatorname{Hom}_{\operatorname{Rep}_{\mathfrak{q}} (G)}(\mathbf{1},M)=:M^{G},\]
_where \(\mathbf{1}\) denotes the trivial representation. In particular, \(\operatorname{HK}\) is a projective object in \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\)._
Proof.: We have
\[\operatorname{Hom}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}}( \operatorname{HK},M)\cong\operatorname{Hom}_{\mathcal{O}_{\mathfrak{q}}(G) \text{-mod}_{\operatorname{Rep}_{\mathfrak{q}}(G)}}(\mathbf{1},M),\]
where \(\mathcal{O}_{\mathfrak{q}}(G)\)-mod\({}_{\operatorname{Rep}_{\mathfrak{q}}(G)}\) denotes the category of \(\mathcal{O}_{\mathfrak{q}}(G)\)-modules internal to \(\operatorname{Rep}_{\mathfrak{q}}(G)\). As \(M\) is assumed to be strongly equivariant, this is just the same as \(\operatorname{Hom}_{\operatorname{Rep}_{\mathfrak{q}}(G)}(\mathbf{1},M)=M^{G}\), as required.
**Remark 2.27**.: More generally, we have
\[\operatorname{Hom}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}}( \operatorname{Dist}(W),M)=\operatorname{Hom}_{\operatorname{Rep}_{\mathfrak{q} }(G)}(W,M)\]
where for an object \(W\in\operatorname{Rep}_{\mathfrak{q}}(G)\), we denote the strongly equivariant module
\[\operatorname{Dist}(W)=\mathcal{D}_{\mathfrak{q}}(G)\otimes_{ \operatorname{ad}}\mathfrak{R}^{*}(W).\]
**Definition 2.28**.: Fix a character \(\chi\) of \(\mathcal{O}_{\mathfrak{q}}(G)^{G}\). We define the \(\mathcal{D}_{\mathfrak{q}}(G)\)-module
\[\operatorname{HK}(\chi)=\operatorname{HK}\underset{\mathcal{B}^{G}}{\otimes} \mathfrak{r}(\chi),\]
which we call a **quantum Hotta-Kashiwara module**.
**Remark 2.29**.: Unwinding the definitions one sees that the quantum Hotta-Kashiwara module may be presented as
\[\operatorname{HK}(\chi)=\mathcal{D}_{\mathfrak{q}}(G)\Big{/}(\mathcal{D}_{ \mathfrak{q}}(G)\cdot C(\operatorname{ad})+\mathcal{D}_{\mathfrak{q}}(G) \cdot C(\chi)),\]
as in equation (1.1) from the introduction.
**Remark 2.30**.: We note that \(\operatorname{HK}\) and each \(\operatorname{HK}(\chi)\) are strongly equivariant as \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules.
**Proposition 2.31**.: _Suppose that the base ring is \(\mathcal{K}\). Then the natural inclusion \(i_{\mathcal{B}^{G}}:\mathcal{O}_{\mathfrak{q}}(G)^{G}\to\mathcal{D}_{\mathfrak{q}}(G)\) descends to an isomorphism of vector spaces,_

\[\mathcal{O}_{\mathfrak{q}}(G)^{G}\cong C(i_{\mathcal{A}})\cdot\operatorname{HK}\Big{\backslash}\operatorname{HK}.\]
Proof.: We compute,
\[C(i_{\mathcal{A}})\cdot\operatorname{HK}\Big{\backslash} \operatorname{HK} =C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G)\Big{\backslash} \Big{(}\mathcal{D}_{\mathfrak{q}}(G)\Big{/}\mathcal{D}_{\mathfrak{q}}(G)\cdot C (\operatorname{ad})\Big{)}\] \[=\Big{(}C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G) \Big{\backslash}\mathcal{D}_{\mathfrak{q}}(G)\Big{)}\Big{/}\mathcal{D}_{ \mathfrak{q}}(G)\cdot C(\operatorname{ad})\] \[\cong\mathcal{O}_{\mathfrak{q}}(G)^{G}\]
where the penultimate isomorphism follows from Proposition 2.22, and the last isomorphism follows from the PBW property for \(\mathcal{D}_{\mathfrak{q}}(G)\). The isomorphism is compatible with the inclusion \(i_{\mathcal{B}}:\mathcal{O}_{\mathfrak{q}}(G)\to\mathcal{D}_{\mathfrak{q}}(G)\), and hence the claim follows.
**Corollary 2.32**.: _Suppose that the base ring is \(\mathcal{K}\). For any character \(\chi\) of \(\mathcal{O}_{\mathfrak{q}}(G)^{G}\), the image of \(1\in\mathcal{D}_{\mathfrak{q}}(G)\) in the quantum Hotta-Kashiwara module \(\operatorname{HK}(\chi)\) is nonzero._
Proof.: We compute,
\[\operatorname{HK}(\chi)^{G}\cong C(\operatorname{ad})\cdot\mathcal{D}_{\mathfrak{q}}(G)\Big{\backslash}\mathcal{D}_{\mathfrak{q}}(G)\Big{/}(\mathcal{D}_{\mathfrak{q}}(G)\cdot C(\operatorname{ad})+\mathcal{D}_{\mathfrak{q}}(G)\cdot C(\chi)).\]
By Proposition 2.17, this surjects onto
\[C(i_{\mathcal{A}})\cdot\mathcal{D}_{\mathfrak{q}}(G)\Big{\backslash}\mathcal{ D}_{\mathfrak{q}}(G)\Big{/}(\mathcal{D}_{\mathfrak{q}}(G)\cdot C(\operatorname{ ad})+\mathcal{D}_{\mathfrak{q}}(G)\cdot C(\chi));\]
which is one-dimensional by Proposition 2.31, and spanned by the image of \(1\in\mathcal{D}_{\mathfrak{q}}(G)\).
The following definition is a \(\mathfrak{q}\)-analogue of the corresponding notion for \(\mathcal{D}(G)\)-modules defined by Ginzburg [10].
**Definition 2.33**.: A strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module \(M\) is called _admissible_ if it is \(\mathcal{B}^{G}\)-locally finite. We write \(\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{\,\text{adm}}\) for the full subcategory of \(\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{\,G}\) consisting of admissible modules.
In analogy with the \(\mathcal{D}(G)\) and \(\mathcal{D}(\mathfrak{g})\) setting, we will use the term \(\mathfrak{q}\)-character sheaf for an admissible strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module.
**Definition 2.34**.: Let \(C\) denote an orbit for the action of \(W^{\text{\it aff}}\) on the torus \(H\). We say that an admissible \(\mathcal{D}_{\mathfrak{q}}(G)\)-module \(M\) has _infinitesimal character_\(C\) if, for any simultaneous eigenvalue \(\chi:\mathcal{B}^{G}\to\mathcal{K}\) for the action of \(\mathcal{B}^{G}\) on \(M\), the character \(\chi\) maps to \(C\) under the projection \(\text{Spec}(\mathcal{B}^{G})\cong H/W\to H/W^{\text{\it aff}}\). We write \(\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}_{C}^{\,\text{adm}}\) for the subcategory of admissible modules with infinitesimal character \(C\). We call an admissible module _unipotent_ if it has infinitesimal character in the \(W^{\text{\it aff}}\)-orbit of \(1\in H\).
**Remark 2.35**.: Just as in the classical case [10, Theorem 1.3.2], one can show that there is an orthogonal decomposition
\[\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{\,\text{adm}}=\bigoplus_{C\in H/W^{ \text{\it aff}}}\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}_{C}^{\,\text{adm}}.\]
We will not need the result in this paper, so we omit the proof. In the case \(G=\text{GL}_{N}\), this fact follows readily by considering the functor \(\Upsilon\) from Section 5.1, which is known to be an equivalence after the results of [11].
## 3. Double affine Hecke algebras
**Notation 3.1**.: When discussing the finite and affine Hecke algebras we will use the following notation. Fix a \(\mathbb{Q}\)-algebra \(\mathcal{R}\) which is an integral domain and contains a fixed element \(t\in\mathcal{R}^{\times}\), such that \(\frac{t^{m}-1}{t-1}\) is invertible for all \(m\neq 0\). Let \(\mathcal{K}\) denote the field of fractions of \(\mathcal{R}\). Unless stated otherwise everything will be defined with base ring \(\mathcal{R}\).
**Definition 3.2**.: The finite Hecke algebra \(\text{H}_{n}^{\text{fin}}(t)\) is the \(\mathcal{R}\)-algebra with generators \(T_{1},\dots T_{n-1}\), and relations:
\[T_{i}T_{i+1}T_{i}=T_{i+1}T_{i}T_{i+1},\text{ for }i=1,\dots,n-2,\]
\[T_{i}T_{j}=T_{j}T_{i},\text{ if }|i-j|\geq 2\,\qquad(T_{i}-t)(T_{i}+t^{-1})=0, \text{ for }i=1,\dots n-1.\]
Given a reduced word \(w=s_{i_{1}}\cdots s_{i_{m}}\in\mathfrak{S}_{n}\) we have a corresponding well-defined element \(T_{w}=T_{i_{1}}\cdots T_{i_{m}}\in\mathrm{H}_{n}^{\mathrm{fin}}(t)\). We will write \(\ell en(w)=m\) for the length of \(w\). By convention \(T_{\mathrm{Id}}=1\). Note that the set \(\{T_{w}\mid w\in\mathfrak{S}_{n}\}\) forms a basis of \(\mathrm{H}_{n}^{\mathrm{fin}}(t)\).
The finite Hecke algebra has two one-dimensional representations. Corresponding to the trivial and the sign representation, respectively, we have the idempotents
\[\mathrm{e}_{n}^{+}=\frac{1}{[n]_{t^{2}}!}\sum_{w\in\mathfrak{S}_{n}}t^{\ell en(w )}T_{w},\qquad\mathrm{e}_{n}=\frac{1}{[n]_{t^{-2}}!}\sum_{w\in\mathfrak{S}_{n}} (-t^{-1})^{\ell en(w)}T_{w}. \tag{3.1}\]
These satisfy \((T_{i}-t)\mathrm{e}_{n}^{+}=0\) and \((T_{i}+t^{-1})\mathrm{e}_{n}=0\), for \(1\leq i<n\).
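For \(n=2\), for example, these read \(\mathrm{e}_{2}^{+}=\frac{1+t\,T_{1}}{1+t^{2}}\) and \(\mathrm{e}_{2}=\frac{1-t^{-1}T_{1}}{1+t^{-2}}\), and the stated property can be checked directly from the quadratic relation \(T_{1}^{2}=(t-t^{-1})T_{1}+1\):
\[(T_{1}+t^{-1})(1-t^{-1}T_{1})=T_{1}-t^{-1}T_{1}^{2}+t^{-1}-t^{-2}T_{1}=0.\]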
**Definition 3.3**.: The extended affine Hecke algebra \(\mathrm{H}_{n}(t)\) is generated by the algebras \(\mathcal{R}[Y_{1}^{\pm 1},\dots Y_{n}^{\pm 1}]\) and \(\mathrm{H}_{n}^{\mathrm{fin}}(t)\), with relations:
\[T_{i}Y_{i}T_{i}=Y_{i+1},\ \text{for }i=1,\dots,n-1,\qquad T_{i}Y_{j}=Y_{j}T_{i},\ \text{for }j\neq i,i+1.\]
**Remark 3.4**.: A common alternative presentation imposes instead the relations \(T_{i}Y_{i}^{-1}T_{i}=Y_{i+1}^{-1}\). An isomorphism between the presentations may be given by inverting \(Y_{i}\).
We denote by \(\mathcal{R}[\mathcal{Y}_{n}]\) the \(\mathcal{R}\)-subalgebra generated by \(Y_{1}^{\pm 1},\dots Y_{n}^{\pm 1}\).
**Notation 3.5**.: We denote by \(S(\mathcal{Y}_{n})=\mathcal{R}[\mathcal{Y}_{n}]^{\mathfrak{S}_{n}}\) the space of symmetric Laurent polynomials, which is also the center of \(\mathrm{H}_{n}(t)\). To simplify notation, we will write the parameters \(t\) and \(n\) only when required, and otherwise abbreviate \(\mathrm{H}_{n}(t)\) by either \(\mathrm{H}_{n}\) or simply \(\mathrm{H}\). Similarly for \(\mathrm{H}^{\mathrm{fin}}\), \(\mathcal{R}[\mathcal{Y}]\), and \(S(\mathcal{Y})\).
### GL DAHA
**Remark 3.6**.: Throughout most of the paper, statements and proofs will be made using \(G=\mathrm{GL}_{N}\) notation. We will warn the reader in cases where statements or proofs need modification to be correct for \(\mathrm{SL}_{N}\), with the symbol \((\diamond)\).
**Notation 3.7**.: When discussing the \(GL\)-DAHA we will use the following notation. Fix a \(\mathbb{Q}\)-algebra \(\mathcal{R}\) which is an integral domain and contains fixed elements \(q,t\in\mathcal{R}^{\times}\), such that \(\frac{t^{m}-1}{t-1}\) and \(\frac{q^{m}-1}{q-1}\) are invertible for all \(m\neq 0\). Let \(\mathcal{K}\) denote the field of fractions of \(\mathcal{R}\). Unless stated otherwise everything will be defined with base ring \(\mathcal{R}\).
**Definition 3.8**.: The \(\mathrm{GL}_{n}\) double affine Hecke algebra \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\) is the \(\mathcal{R}\)-algebra presented by generators:
\[T_{0},T_{1},\dots T_{n-1},\pi^{\pm 1},Y_{1}^{\pm 1},\dots,Y_{n}^{\pm 1},\]
subject to relations2:
Footnote 2: As with \(\widehat{\mathfrak{S}}_{n}\), we drop the relations on the second line when \(n=2\).
\[(T_{i}-t)(T_{i}+t^{-1})=0\quad(i=0,\ldots,n-1), \tag{3.2}\]
\[T_{i}T_{j}T_{i}=T_{j}T_{i}T_{j}\quad(j\equiv i\pm 1\bmod n),\qquad T_{i}T_{j}=T_{j}T_{i}\quad(\text{otherwise}),\]
\[\pi T_{i}\pi^{-1}=T_{i+1}\quad(i=0,\ldots,n-2),\qquad\pi T_{n-1}\pi^{-1}=T_{0},\]
\[T_{i}Y_{i}T_{i}=Y_{i+1}\quad(i=1,\ldots,n-1),\qquad T_{0}Y_{n}T_{0}=q^{-1}Y_{1},\]
\[T_{i}Y_{j}=Y_{j}T_{i}\quad(j\not\equiv i,i+1\bmod n),\]
\[\pi Y_{i}\pi^{-1}=Y_{i+1}\quad(i=1,\ldots,n-1),\qquad\pi Y_{n}\pi^{-1}=q^{-1}Y_{1}. \tag{3.3}\]
We recall that \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\) has basis \(\{T_{w}Y^{\beta}\mid w\in\widehat{\mathfrak{S}}_{n},\beta\in\mathbb{Z}^{n}\}\) where we identify \(\pi\) with \(T_{\pi}\).
An alternate presentation of \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\) includes generators \(X_{i}^{\pm 1},1\leq i\leq n\), which are related to the generators above via \(X_{1}=\pi T_{n-1}^{-1}\cdots T_{1}^{-1}\) and \(X_{i+1}=T_{i}X_{i}T_{i}\). Similar to above, we may write \(\mathcal{R}[\mathcal{X}]=\mathcal{R}[X_{1}^{\pm 1},\cdots,X_{n}^{\pm 1}]\).
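For instance, when \(n=2\) we have \(X_{1}=\pi T_{1}^{-1}\) and \(X_{2}=T_{1}X_{1}T_{1}=T_{1}\pi\), so that
\[X_{1}X_{2}=\pi T_{1}^{-1}\,T_{1}\pi=\pi^{2}.\]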
**Notation 3.9**.: We denote by \(\mathrm{H}(\mathcal{Y})\) and \(\mathrm{H}(\mathcal{X})\), respectively, the subalgebras of \(\mathbb{H}\) generated by \(\mathrm{H}^{\mathrm{fin}}\) and \(Y_{i}^{\pm}\)'s (resp, \(X_{i}^{\pm}\)'s). Note \(\mathrm{H}(\mathcal{X})\) is also generated by \(\mathrm{H}^{\mathrm{fin}}\) and \(\pi^{\pm 1}\). Each subalgebra identifies with \(\mathrm{H}\) as an abstract algebra.
### SL DAHA
**Notation 3.10**.: When discussing the \(SL\) DAHA we will use the following notation. Fix a \(\mathbb{Q}\)-algebra \(\mathcal{R}\) which is an integral domain containing fixed elements \(\mathbf{Z}\), \(\mathbf{q},t\in\mathcal{R}^{\times}\), such that \(\frac{\mathbf{Z}^{m}-1}{\mathbf{Z}-1}\), \(\frac{\mathbf{q}^{m}-1}{\mathbf{q}-1}\) and \(\frac{t^{m}-1}{t-1}\) are invertible. Let \(\mathcal{K}\) denote the field of fractions of \(\mathcal{R}\). Unless otherwise specified, all definitions will have base ring \(\mathcal{R}\).
**Definition 3.11**.: The \(\mathrm{SL}_{n}\) double affine Hecke algebra \(\mathbb{H}_{n}^{\mathrm{SL}}(\mathbf{q},t)\) is presented by generators:
\[T_{0},T_{1},\ldots T_{n-1},\overline{\pi},Z_{1},\ldots,Z_{n},\]
subject to relations:
\[(T_{i}-t)(T_{i}+t^{-1})=0\quad(i=0,\ldots,n-1),\] \[T_{i}T_{j}T_{i}=T_{j}T_{i}T_{j}\quad(j\equiv i\pm 1\bmod n), T_{i}T_{j}=T_{j}T_{i}\quad(\text{otherwise}),\] \[\overline{\pi}T_{i}=T_{i+1}\overline{\pi}\quad(i=0,\ldots,n-2), \overline{\pi}T_{n-1}=T_{0}\overline{\pi},\] \[T_{i}Z_{i}T_{i}=Z_{i+1}\quad(i=1,\ldots,n-1), T_{i}Z_{j}=Z_{j}T_{i}\quad(j\not\equiv i,i+1\bmod n),\;\;T_{0}Z_{n}T_{0}= \mathbf{q}^{2n}Z_{1},\] \[\overline{\pi}Z_{i}=\mathbf{q}^{-2}Z_{i+1}\overline{\pi}\quad(i=1, \ldots,n-1), \overline{\pi}Z_{n}=\mathbf{q}^{2n-2}Z_{1}\overline{\pi},\] \[Z_{1}Z_{2}\cdots Z_{n}=\mathbf{Z}, \overline{\pi}^{n}=1.\]
We define the generators \(X_{i}\), and the subalgebras \(\mathcal{R}[\mathcal{X}]\), \(\mathcal{R}[\mathcal{Y}]\), \(\mathrm{H}(\mathcal{X})\), \(\mathrm{H}(\mathcal{Y})\) of \(\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)\) in the same way as for \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\), noting however that the further relations \(Z_{1}\cdots Z_{n}=\boldsymbol{Z}\) and \(X_{1}\cdots X_{n}=1\) now apply.
**Notation 3.12**.: \((\diamond)\) By abuse of notation \(\mathcal{R}[\mathcal{Y}]\) refers to the \(\mathcal{R}\)-algebra generated by the \(Z_{i}\), likewise for the other symbols introduced in Notations 3.5 and 3.9; this will be an added convenience for making uniform statements for \(G=\mathrm{GL}\) or \(\mathrm{SL}\) below.
**Remark 3.13**.: \((\diamond)\) Note that any representation of \(\mathrm{H}(\mathcal{Y})\subseteq\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)\) lifts to a representation of \(\mathrm{H}(\mathcal{Y})\subseteq\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\). Likewise any representation of the latter on which \(Y_{1}Y_{2}\cdots Y_{n}-\boldsymbol{Z}\) vanishes can be seen as a representation of the former.
### Intertwiners
Cherednik's theory of intertwiners plays an important role in Section 6. Readers interested only in Section 5 may safely skip the current section.
Recall the standard left action of the extended affine symmetric group \(\widehat{\mathfrak{S}}_{n}\) on \(\mathbb{Z}^{n}\), which depends on an integer \(p\):
\[w\cdot(b_{1},\ldots,b_{n})=(b_{w^{-1}(1)},\ldots,b_{w^{-1}(n)}),\quad\text{for }w\in\mathfrak{S}_{n}, \tag{3.4}\]
\[\pi\cdot(b_{1},b_{2},\ldots,b_{n})=(b_{n}+p,b_{1},b_{2},\ldots,b_{n-1}), \tag{3.5}\]
\[s_{0}\cdot(b_{1},\ldots,b_{n})=(b_{n}+p,b_{2},\ldots,b_{n-1},b_{1}-p). \tag{3.6}\]
We take \(p=1\) here, but the action is valid for any dilation. Although we use the symbol \(\cdot\) to denote the action, this is not to be confused with the _dot_ action on weights, which will not (explicitly) appear in this paper.
It is notationally convenient to identify \(\mathbb{Z}^{n}\) with the set of quasi-periodic sequences in \(\mathbb{Z}^{\mathbb{Z}}\), as those sequences satisfying \(b_{i+mn}=b_{i}-mp\). Under this identification, we have:
\[\pi\cdot(b_{1},\ldots,b_{n}) =(b_{0},b_{1},\ldots,b_{n-1})\] \[s_{0}\cdot(b_{1},\ldots,b_{n}) =(b_{0},b_{2},\ldots,b_{n-1},b_{n+1}),\]
and so (3.4) now holds for all \(w\in\widehat{\mathfrak{S}}_{n}\), not just for \(w\in\mathfrak{S}_{n}\). We will call elements of \(\widehat{\mathfrak{S}}_{n}\) affine permutations.
Given an element \(\beta\in\mathbb{Z}^{n}\), **translation** by \(\beta\) is the affine permutation \(\mathrm{tr}(\beta)\) sending \(i\mapsto i+n\beta_{i\bmod n}\). The translation then acts on \(\mathbb{Z}^{n}\) via \(\mathrm{tr}(\beta)\cdot(b_{1},\ldots,b_{n})=(b_{1}+p\beta_{1},\ldots,b_{n}+p \beta_{n})=(b_{1}+\beta_{1},\ldots,b_{n}+\beta_{n})\), given we have taken \(p=1\) above. The translations form an abelian normal subgroup of \(\widehat{\mathfrak{S}}_{n}\).
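For example, take \(n=2\), \(p=1\), and \(\beta=(1,0)\). Then \(\operatorname{tr}(\beta)\) sends \(1\mapsto 3\) and \(2\mapsto 2\), and acts on \(\mathbb{Z}^{2}\) by \((b_{1},b_{2})\mapsto(b_{1}+1,b_{2})\). One can check directly on quasi-periodic sequences that \(\operatorname{tr}((1,0))=\pi s_{1}\): as a map of integers, \(\pi\) sends \(i\mapsto i+1\), so \(\pi s_{1}\) sends \(1\mapsto 2\mapsto 3\) and \(2\mapsto 1\mapsto 2\).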
Similarly we may define an action of \(\widehat{\mathfrak{S}}_{n}\) on \(\mathcal{R}^{n}\). The subgroup \(\mathfrak{S}_{n}\) acts by permuting coordinates as in (3.4), but now:
\[\pi\cdot(b_{1},b_{2},\cdots b_{n}) =(qb_{n},b_{1},b_{2},\ldots,b_{n-1}),\] \[s_{0}\cdot(b_{1},\ldots,b_{n}) =(qb_{n},b_{2},\ldots,b_{n-1},q^{-1}b_{1}),\]
and so we identify \(b_{i+mn}=q^{-m}b_{i}\).
When quotienting \(\widehat{\mathfrak{S}}_{n}\) by the subgroup generated by \(\pi^{n}\), we have to modify the action \((\diamond)\), replacing \(\pi\) by \(\overline{\pi}\) as follows:
\[\overline{\pi}\cdot(b_{1},b_{2},\cdots b_{n})=(\boldsymbol{q}^{-2n}\boldsymbol{ q}^{2}b_{n},\boldsymbol{q}^{2}b_{1},\boldsymbol{q}^{2}b_{2},\ldots,\boldsymbol{q}^{2 }b_{n-1}). \tag{3.7}\]
and can then compute \(s_{0}=\overline{\pi}^{-1}s_{1}\overline{\pi}\).
Let us extend the subscripts of \(T\), \(Y\), and \(Z\) to take arbitrary integer values, by letting
\[T_{j+mn}:=T_{j},\ \text{for }j\in\{0,\ldots,n-1\}\text{ and }m\in\mathbb{Z}, \tag{3.8}\]
\[Y_{i+mn}:=q^{-m}Y_{i},\ \text{for }i\in\{1,\ldots,n\}\text{ and }m\in\mathbb{Z}, \tag{3.9}\]
\[Z_{i+mn}:=\boldsymbol{q}^{2nm}Z_{i},\ \text{for }i\in\{1,\ldots,n\}\text{ and }m\in\mathbb{Z}. \tag{3.10}\]
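These conventions are arranged so that the defining relations extend uniformly in the index. For instance, taking \(i=n\) in \(T_{i}Y_{i}T_{i}=Y_{i+1}\) gives
\[T_{0}Y_{n}T_{0}=Y_{n+1}=q^{-1}Y_{1},\]
recovering one of the defining relations of \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\).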
\((\diamond)\) The following operators and relations are written in \(G=\operatorname{GL}\) notation, but obvious modifications can adapt them to \(G=\operatorname{SL}\).
**Definition 3.14**.: We let \(\mathcal{R}[\widetilde{\mathcal{Y}}]\) and \(\widetilde{\mathbb{H}}\), respectively, denote the Ore localization of \(\mathcal{R}[\mathcal{Y}]\) (respectively, \(\mathbb{H}\)) at the set \(\{f_{i,j}\}\) for integers \(i,j\in\mathbb{Z}\) with \(i\not\equiv j\bmod n\), where
\[f_{i,j}:=tY_{i}-t^{-1}Y_{j}.\]
**Definition 3.15**.: For each integer \(i\), we recall **Cherednik's intertwiners**:
\[\varphi_{i}:=T_{i}Y_{i}-Y_{i}T_{i}=T_{i}(Y_{i}-Y_{i+1})+(t-t^{-1})Y_{i+1}.\]
The **renormalized intertwiners** are
\[\nu_{i}:=\varphi_{i}(f_{i,i+1})^{-1}\in\widetilde{\mathbb{H}}.\]
We recall the following well-known intertwining relations for Cherednik's and the renormalized intertwiners:
\[Y_{j}\varphi_{i}=\varphi_{i}Y_{s_{i}(j)},\qquad\pi\varphi_{i}=\varphi_{i+1}\pi, \tag{3.11}\]
\[Y_{j}\nu_{i}=\nu_{i}Y_{s_{i}(j)},\qquad\pi\nu_{i}=\nu_{i+1}\pi. \tag{3.12}\]
We additionally recall the following braid and almost-quadratic relations for Cherednik's intertwiners:
\[\varphi_{i}\varphi_{j}\varphi_{i}=\varphi_{j}\varphi_{i}\varphi_{j},\quad\text{if }i\equiv j\pm 1\bmod n, \tag{3.13}\]
\[\varphi_{i}\varphi_{j}=\varphi_{j}\varphi_{i},\quad\text{if }i\not\equiv j\pm 1\bmod n, \tag{3.14}\]
\[\varphi_{i}^{2}=f_{i,i+1}f_{i+1,i}. \tag{3.15}\]
These easily imply braid and quadratic relations for renormalized intertwiners:
\[\nu_{i}\nu_{j}\nu_{i}=\nu_{j}\nu_{i}\nu_{j},\quad\text{if }i\equiv j\pm 1\bmod n, \tag{3.16}\]
\[\nu_{i}\nu_{j}=\nu_{j}\nu_{i},\quad\text{if }i\not\equiv j\pm 1\bmod n, \tag{3.17}\]
\[\nu_{i}^{2}=1. \tag{3.18}\]
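To illustrate, (3.18) follows from (3.15): the intertwining relations in (3.11) give \(f_{i,i+1}^{-1}\varphi_{i}=\varphi_{i}f_{i+1,i}^{-1}\) in \(\widetilde{\mathbb{H}}\), whence
\[\nu_{i}^{2}=\varphi_{i}f_{i,i+1}^{-1}\,\varphi_{i}f_{i,i+1}^{-1}=\varphi_{i}^{2}\,f_{i+1,i}^{-1}f_{i,i+1}^{-1}=f_{i,i+1}f_{i+1,i}\,f_{i+1,i}^{-1}f_{i,i+1}^{-1}=1.\]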
Finally, although we will not make use of them, we record the following mixed braid relations:
\[\nu_{i}T_{j}\nu_{i} =\nu_{j}T_{i}\nu_{j}, \text{if }i\equiv j\pm 1\bmod n\] \[\nu_{i}\nu_{j}T_{i} =T_{j}\nu_{i}\nu_{j}, \text{if }i\equiv j\pm 1\bmod n\] \[T_{i}\nu_{j} =\nu_{j}T_{i}, \text{if }i\not\equiv j-1,j,j+1\bmod n.\]
(\(\diamond\)) For the SL version of the above definitions and relations, using \(Z_{i}\) in place of \(Y_{i}\), the only relation we need modify is equation (3.11), to \(\overline{\pi}\varphi_{i}=\boldsymbol{q}^{-2}\varphi_{i+1}\overline{\pi}\).
Let us also adopt the convention that \(\varphi_{\pi}=\pi\) and \(\nu_{\pi}=\pi\). Given an affine permutation \(w\) with reduced word decomposition \(s_{i_{1}}\dots s_{i_{m}}\), we define \(\varphi_{w}=\varphi_{i_{1}}\cdots\varphi_{i_{m}}\), and \(\nu_{w}=\nu_{i_{1}}\cdots\nu_{i_{m}}\). The braid relations (3.13),(3.14),(3.16), as well as (3.11) and (3.12), ensure that \(\varphi_{w}\) and \(\nu_{w}\) are well-defined independent of the choice of reduced word.
Given an affine permutation \(w\), we let \(\operatorname{Inv}(w)\) denote the set of its **inversions**
\[\operatorname{Inv}(w):=\{(i,j)\in\{1,\dots n\}\times\mathbb{Z},\ |\ i<j\text{ and }w(i)>w(j)\}.\]
Then we have the following formula relating \(\varphi_{w}\) and \(\nu_{w}\):
\[\varphi_{w}=\nu_{s_{i_{1}}}f_{i_{1},i_{1}+1}^{-1}\cdots\nu_{s_{i_{m}}}f_{i_{m },i_{m}+1}^{-1}=\nu_{w}\cdot q^{\alpha}\prod_{(i,j)\in\operatorname{Inv}(w)}f_ {i,j} \tag{3.19}\]
which follows immediately from the intertwining relations. The exponent \(\alpha\) on \(q^{\alpha}\) arises from applying the relation \(f_{i,j}=qf_{i+n,j+n}\) in order to enforce our convention that \(1\leq i\leq n\) for \((i,j)\in\operatorname{Inv}(w)\).
**Example 3.16**.: For example, let \(w=s_{1}s_{2}s_{0}s_{1}s_{2}s_{0}s_{1}s_{2}\in\widehat{\mathfrak{S}}_{3}\). While the natural subscripts \((i,j)\) on the product \(\prod f_{i,j}\) from (3.19) are
\[\{(-2,9),(-1,9),(-2,6),(-1,6),(1,6),(-1,3),(1,3),(2,3)\}\]
we define
\[\operatorname{Inv}(w)=\{(1,12),(2,12),(1,9),(2,9),(1,6),(2,6),(1,3),(2,3)\}.\]
Thus \(\alpha=5\) in (3.19) for this \(w\).
### Induced modules
For the remainder of this section, we work over \(\mathcal{K}\). In other words, we take \(\mathcal{R}\) to already be a field and so \(\mathcal{R}=\mathcal{K}\).
**Definition 3.17**.: Let \(\mathcal{S}H_{n}\) denote the subalgebra of \(\operatorname{H}_{n}(\mathcal{Y})\) generated by \(\operatorname{H}_{n}^{\operatorname{fin}}\) and \(S(\mathcal{Y}_{n})\).
We note the decomposition \(\mathcal{S}H_{n}=S(\mathcal{Y}_{n})\otimes_{\mathcal{K}}\operatorname{H}_{n}^ {\operatorname{fin}}\) as algebras. Just as \(\mathcal{K}[\mathcal{Y}_{n}]\) is a free \(S(\mathcal{Y}_{n})\)-module of rank \(n!\), \(\operatorname{H}_{n}(\mathcal{Y})\) is a free \(\mathcal{S}H_{n}\)-module of rank \(n!\). Recall \(\operatorname{H}_{n}(\mathcal{Y})\) is also a free \(\mathcal{K}[\mathcal{Y}_{n}]\)-module of rank \(n!\). When \(n\) is understood, we sometimes merely write \(\mathcal{S}H\), as we may write \(\mathcal{Y}\) for \(\mathcal{Y}_{n}\).
**Notation 3.18**.: Write \(\underline{\mathbf{a}}=(a_{1},\cdots,a_{n})\in(\mathcal{K}^{\times})^{n}\) for the one-dimensional \(\mathcal{K}[\mathcal{Y}]\)-module on which all \(Y_{i}-a_{i}\) vanish, and let \(\underline{v}\) be a basis of this one-dimensional vector space, so that \(\underline{\mathbf{a}}=\mathcal{K}\underline{v}\). Write \(\{\underline{\mathbf{a}}\}\) for the one-dimensional \(S(\mathcal{Y})\)-module obtained as the restriction of \(\underline{\mathbf{a}}\); in other words, the module on which \(\sum_{i=1}^{n}(Y_{i}^{m}-a_{i}^{m})\) vanishes for all \(m\). Alternatively, one may parameterize this module by the \(\mathfrak{S}_{n}\)-orbit of \(\underline{\mathbf{a}}\), that is, by the unordered collection or multiset of the \(a_{i}\); hence our notation. Write \(\{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn}\) for the \(\mathcal{S}H\)-module, \(1\)-dimensional over \(\mathcal{K}\), on which \(S(\mathcal{Y})\) acts as \(\{\underline{\mathbf{a}}\}\) and \(T_{i}+t^{-1}\) vanishes for all \(1\leq i<n\); we write \(\underline{u}\) for a basis of this one-dimensional vector space, so that \(\{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn}=\mathcal{K}\underline{u}\).
**Remark 3.19**.: The above construction makes sense for any character of \(S(\mathcal{Y})\), and it is just for convenience that we have chosen one which is the restriction of a character for \(\mathcal{Y}\).
(\(\diamond\)) In the case of \(G=\operatorname{SL}_{n}\), we will also require \(\prod_{i}a_{i}=\boldsymbol{Z}\), and replace \(t\) with \(t^{1/N}\) in defining descending, but no other modifications are required. See also Remark 6.16 below.
**Definition 3.20**.: We call an \(n\)-tuple \(\underline{\mathbf{a}}=(a_{1},\cdots,a_{n})\in(\mathcal{K}^{\times})^{n}\)**descending** if \(a_{i}/a_{j}=t^{2z}\) with \(z\in\mathbb{Z}\) and \(i<j\) implies \(z\geq 0\).
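For example, \((t^{2},t^{2},1)\) is descending, while \((1,t^{2})\) is not, since \(a_{1}/a_{2}=t^{-2}\) forces \(z=-1<0\). Any tuple whose coordinates lie in pairwise distinct cosets of \(t^{2\mathbb{Z}}\) is descending vacuously.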
**Remark 3.21**.: If \(\underline{\mathbf{a}}\) is descending then \(f_{i,j}\otimes\underline{v}\neq 0\) for all \(1\leq i<j\leq n\), i.e., \(ta_{i}-t^{-1}a_{j}\neq 0\) when \(i<j\).
Then we have the following isomorphism of \(n!\)-dimensional \(\operatorname{H}_{n}(\mathcal{Y})\)-modules. To lighten notation below we merely write \(\mathcal{Y}\) for \(\mathcal{K}[\mathcal{Y}]\).
**Theorem 3.22**.: _Suppose that \(\underline{\mathbf{a}}\) is descending. Then the map_
\[\operatorname{Ind}_{\mathcal{S}H}^{\operatorname{H}(\mathcal{Y})} \{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn} \to \operatorname{Ind}_{\mathcal{Y}}^{\operatorname{H}(\mathcal{Y})} \underline{\mathbf{a}}\] \[1\otimes\underline{u} \mapsto \operatorname{e}_{n}\otimes\underline{v}\]
_is an isomorphism._
Proof.: Observe that \(\sum_{i=1}^{n}(Y_{i}^{m}-a_{i}^{m})T_{j}\otimes\underline{v}=T_{j}\sum_{i=1}^{ n}(Y_{i}^{m}-a_{i}^{m})\otimes\underline{v}=0\) for \(1\leq j<n\) so the map
\[A:\operatorname{Ind}_{\mathcal{S}H}^{\operatorname{H}(\mathcal{Y})}\{ \underline{\mathbf{a}}\}\boxtimes\operatorname{sgn}\to\operatorname{Ind}_{ \mathcal{Y}}^{\operatorname{H}(\mathcal{Y})}\underline{\mathbf{a}}\]
determined by \(A(1\otimes\underline{u})=\mathrm{e}_{n}\otimes\underline{v}\) is a well-defined \(\mathrm{H}(\mathcal{Y})\)-module map. Due to the well-known equality
\[\dim_{\mathcal{K}}\operatorname{Ind}_{\mathcal{S}H}^{\operatorname{H}( \mathcal{Y})}\{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn}=\dim_{ \mathcal{K}}\operatorname{Ind}_{\mathcal{Y}}^{\operatorname{H}(\mathcal{Y})} \underline{\mathbf{a}}=n!,\]
it suffices to show \(A\) is surjective.
An easy calculation yields
\[(Y_{i}-a_{i+1})\frac{1}{t+t^{-1}}(t-T_{i})\otimes\underline{v}=1\otimes\frac{ta _{i}-t^{-1}a_{i+1}}{t+t^{-1}}\underline{v}\]
which agrees with \(\frac{1}{t+t^{-1}}f_{i,i+1}\otimes\underline{v}=\frac{t}{t^{2}+1}f_{i,i+1} \otimes\underline{v}\). In particular, by Remark 3.21 and our assumption that \(\mathbf{a}\) is descending, this is nonzero. This is the heart of the \(n=2\) case. More
generally, let \(g(Y)=\prod_{1\leq i<j\leq n}(Y_{i}-a_{j}).\) One can show
\[g(Y)\mathrm{e}_{n}\otimes\underline{v}=\frac{t^{\binom{n}{2}}}{[n]_{t^{2}}!}\prod _{1\leq i<j\leq n}(ta_{i}-t^{-1}a_{j})\otimes\underline{v}=\frac{t^{\binom{n}{2 }}}{[n]_{t^{2}}!}\prod_{1\leq i<j\leq n}f_{i,j}\otimes\underline{v}.\]
As \(\underline{\mathbf{a}}\) is descending, the above expression evaluates at \(\underline{\mathbf{a}}\) to a nonzero scalar \(\alpha\) times \(1\otimes\underline{v}\), and so \(\mathrm{e}_{n}\otimes\underline{v}\) generates \(\mathrm{Ind}_{\mathcal{Y}}^{\mathrm{H}(\mathcal{Y})}\,\underline{\mathbf{a}}\) as an \(\mathrm{H}(\mathcal{Y})\)-module. Thus \(A\) is a surjection. This completes the proof.
Observe that, since \(g(Y)\otimes\underline{u}\) is a \(\mathcal{Y}\)-weight vector of weight \(\underline{\mathbf{a}}\), the map
\[B:\mathrm{Ind}_{\mathcal{Y}}^{\mathrm{H}(\mathcal{Y})}\,\underline{\mathbf{a }} \to\mathrm{Ind}_{\mathcal{SH}}^{\mathrm{H}(\mathcal{Y})}\{\underline{ \mathbf{a}}\}\boxtimes\mathrm{sgn}\] \[1\otimes\underline{v} \mapsto\frac{1}{\alpha}g(Y)\otimes\underline{u}\]
is inverse to the map \(A\).
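For \(n=2\) and \(\underline{\mathbf{a}}=(a_{1},a_{2})\) descending, all of this can be made explicit: \(g(Y)=Y_{1}-a_{2}\), and the computation above gives
\[g(Y)\mathrm{e}_{2}\otimes\underline{v}=\frac{ta_{1}-t^{-1}a_{2}}{t+t^{-1}}\otimes\underline{v},\]
so \(\alpha=(ta_{1}-t^{-1}a_{2})/(t+t^{-1})\), which is nonzero precisely because \(\underline{\mathbf{a}}\) is descending.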
## 4. Endomorphisms of HK via elliptic Schur-Weyl duality
In this section, we recall the elliptic Schur-Weyl duality functor from [10], and observe that it intertwines the action of the algebra \(\mathcal{B}^{G}\cong\mathcal{O}_{\mathfrak{q}}(G)^{G}\) of Casimirs, as appears in the definition of \(\mathrm{HK}(\chi)\), with the natural action by the distinguished subalgebra \(S(\mathcal{Y})\subset\mathbb{H}_{\mathrm{N}}\). Exploiting this compatibility we prove several fundamental properties of Hotta-Kashiwara modules.
**Notation 4.1**.: In this section, we need to unite our many parameters \(t,\mathfrak{q},q,\boldsymbol{q},\boldsymbol{Z}\) as mentioned in Remark 2.2 into a single ground ring. We fix a natural number \(N\) and we let \(\mathcal{R}\) be the local ring \(\mathbb{Q}[\mathfrak{q}^{\frac{1}{N}}]_{(\mathfrak{q}-1)}\), so that \(\mathcal{K}=\mathbb{Q}(\mathfrak{q}^{\frac{1}{N}})\) is its field of fractions, and we let \(\kappa=\mathcal{R}/(\mathfrak{q}-1)\) denote the residue field at \(\mathfrak{q}=1\). The other parameters appear as follows:
* The quantum group parameter is \(\mathfrak{q}=(\mathfrak{q}^{\frac{1}{N}})^{N}\).
* The quadratic parameter in Hecke algebras is \(t=\mathfrak{q}=(\mathfrak{q}^{\frac{1}{N}})^{N}\). Taking \(t=\mathfrak{q}\) ensures compatibility with Schur-Weyl duality.
* The loop parameter for the rank \(n\) GL DAHA \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\) is specialized \(q=t^{-2n/N}=\mathfrak{q}^{-2n/N}\). Henceforward, we lighten notation and write \(\mathbb{H}_{\mathrm{n}}\) for this specialization.
* The loop parameter for the rank \(n\) SL DAHA \(\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)\) is specialized \(\boldsymbol{q}=t^{1/N}=\mathfrak{q}^{1/N}\), and further we take \(\boldsymbol{Z}=\boldsymbol{q}^{n(n-N^{2})}=t^{n(n/N-N)}\). Henceforward, we lighten notation and also write \(\mathbb{H}_{\mathrm{n}}\) for this specialization.
We will need to discuss various \(\mathcal{R}\)-modules, denoted \(M_{\mathcal{R}}\), their localizations to \(\mathcal{K}\), denoted \(M_{\mathcal{K}}\), and their specializations at \(\mathfrak{q}=1\), denoted \(M_{\kappa}\).
### The elliptic Schur-Weyl duality functor
Let us fix \(G=\mathrm{GL}_{N}\) for some integer \(N\). We will discuss the modification of statements and their proofs for \(\mathrm{SL}_{N}\) at the end of this section, indicated by \((\diamond)\). Let \(M\) be a strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module, and recall
\(V\) denotes the defining representation of \(U_{\mathfrak{q}}(\mathfrak{g})\). For each non-negative integer \(n\) we have a functor [10],
\[F_{n}:\mathcal{D}_{\mathfrak{q}}(G)\text{-}\mathrm{mod}^{G} \longrightarrow\mathbb{H}_{\text{n}}\text{-}\mathrm{mod},\] \[M \mapsto \quad\mathrm{Hom}_{U_{\mathfrak{q}}(\mathfrak{g})}((\Lambda^{N}V )^{\otimes n/N},V^{\otimes n}\otimes M).\]
**Remark 4.2**.: Unwinding the definitions, we see that
\[F_{n}(M)\cong\mathrm{Hom}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-}\mathrm{mod}^ {G}}(\mathrm{Dist}((V^{*})^{\otimes n}\otimes(\Lambda^{N}V)^{\otimes n/N}),M)\]
(see Remark 2.27).
In the case \(G=\mathrm{GL}_{N}\), strong equivariance implies \(F_{n}(M)=0\) unless \(n/N\in\mathbb{Z}_{>0}\). For \(G=\mathrm{SL}_{N}\) we have no such restriction as \(\Lambda^{N}V\) is trivial.
**Notation 4.3**.: Recall from (2.2) that \(z\in V^{\otimes N}\) denotes the sign-like element. Denoting by \(1\) the distinguished cyclic generator of HK, we will regard \(z\otimes 1\) as an element of \(F_{N}(\mathrm{HK})\) which by Proposition 2.31 is nonzero.
**Notation 4.4**.: We will re-use the notation \(c_{k}\) from Notation 2.8 now to refer to the image of the quantum Casimir elements in \(\mathcal{B}^{G}\) under the embedding \(i_{\mathcal{B}}\) or \(i_{\tilde{\mathcal{B}}}\).
**Remark 4.5**.: We note that the elements \(c_{k}\in\mathcal{D}_{\mathfrak{q}}(G)^{G}\) are not central in \(\mathcal{D}_{\mathfrak{q}}(G)\). However, they commute with the image of the moment map: indeed, for any \(\ell\in\mathcal{O}_{\mathfrak{q}}(G)\) and any \(y\in\mathcal{D}_{\mathfrak{q}}(G)^{G}\), we have, in Sweedler notation,
\[\operatorname{ad}(\ell)\,y=(\mathfrak{R}(\ell_{(1)})\rhd y)\operatorname{ad}(\ell_{(2)})=y\,\operatorname{ad}(\ell).\]
It will be convenient to re-express the functor \(F_{N}\) in terms of a Schur-Weyl duality homomorphism from the double affine Hecke algebra to a certain Hom space in the category of strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules. We introduce three homomorphisms, \(\widetilde{\mathrm{SW}}\), \(\widetilde{\mathrm{SW}}\mathrm{e}_{N}\), and \(\mathrm{e}_{N}\widetilde{\mathrm{SW}}\mathrm{e}_{N}\), of which only the latter two play a role in this paper.
We have the following three \(\mathcal{R}\)-module homomorphisms, of which the first and third, (4.1) and (4.3), are algebra homomorphisms, while the second, (4.2), is a homomorphism of \(\mathbb{H}_{\mathrm{N}}\)-modules.
\[\widetilde{\mathrm{SW}}:\mathbb{H}_{\mathrm{N}}\longrightarrow\mathrm{Hom}(\mathrm{Dist}(V^{\otimes N}),\mathrm{Dist}(V^{\otimes N}))=F_{N}(\mathrm{Dist}(V^{\otimes N})), \tag{4.1}\]
\[\widetilde{\mathrm{SW}}\mathrm{e}_{N}:\mathbb{H}_{\mathrm{N}}\cdot\mathrm{e}_{N}\longrightarrow\mathrm{Hom}(\mathrm{Dist}(V^{\otimes N}),\mathrm{HK})=F_{N}(\mathrm{HK}), \tag{4.2}\]
\[\mathrm{e}_{N}\widetilde{\mathrm{SW}}\mathrm{e}_{N}:\mathrm{e}_{N}\cdot\mathbb{H}_{\mathrm{N}}\cdot\mathrm{e}_{N}\longrightarrow\mathrm{Hom}(\mathrm{HK},\mathrm{HK})=\mathrm{HK}^{G}. \tag{4.3}\]
In this section, we prove that \(\mathrm{e}_{N}\widetilde{\mathrm{SW}}\mathrm{e}_{N}\) is an isomorphism over \(\mathcal{K}\). In the forthcoming paper [1] we prove moreover that \(\widetilde{\mathrm{SW}}\) and \(\widetilde{\mathrm{SW}}\mathrm{e}_{N}\) are also isomorphisms over \(\mathcal{K}\).
As with other notation, we may lighten these to \(\widetilde{\mathrm{SW}}\mathrm{e}\) and \(\mathrm{e}\widetilde{\mathrm{SW}}\mathrm{e}\) when \(N\) is understood.
### Compatibility of \(\widetilde{\mathrm{SW}}\) with bimodule actions
We equip \(\mathbb{H}_{\operatorname{N}}\cdot\mathrm{e}_{N}\) with the structure of an \(\mathbb{H}_{\operatorname{N}}\)-\(S(\mathcal{Y})\)-bimodule, with \(\mathbb{H}_{\operatorname{N}}\) acting naturally on the left, and \(S(\mathcal{Y})\) multiplying on the right, noting that \(\mathrm{e}_{N}\) commutes with all elements of \(S(\mathcal{Y})\). Let us fix an identification
\[S(\mathcal{Y}) \cong\mathcal{B}^{G}\] \[s_{(1^{k})} \mapsto c_{k} \tag{4.4}\]
sending the \(k\)th elementary symmetric polynomial \(s_{(1^{k})}\) to the invariant \(c_{k}\). This equips HK with the structure of a \(\mathcal{D}_{\mathfrak{q}}(G)\)-\(S(\mathcal{Y})\)-bimodule, with \(S(\mathcal{Y})\) acting by right multiplication through the inclusion \(i_{\widetilde{\mathcal{B}}}:\mathcal{B}^{G}\subseteq\mathcal{D}_{\mathfrak{q}}(G)^{G}\).
In the same way we equip \(F_{N}(\operatorname{HK})\) with the structure of an \(\mathbb{H}_{\operatorname{N}}\)-\(S(\mathcal{Y})\)-bimodule, with \(\mathbb{H}_{\operatorname{N}}\) acting as constructed in [10] and \(S(\mathcal{Y})\) acting by right multiplication as in (4.4).
**Proposition 4.6**.: _The map \(\widetilde{\mathrm{SW}}\mathrm{e}_{N}\) is a homomorphism of \(\mathbb{H}_{\mathrm{N}}\)-\(S(\mathcal{Y})\)-bimodules._
Proof.: By construction, \(\widetilde{\mathrm{SW}}\mathrm{e}_{N}\) is a homomorphism of left \(\mathbb{H}_{\mathrm{N}}\)-modules, so it suffices to show that the two right \(S(\mathcal{Y})\)-actions coincide, i.e. that for each \(k\) right multiplication by \(s_{(1^{k})}\) on \(\mathbb{H}_{\mathrm{N}}\cdot\mathrm{e}_{N}\) corresponds under \(\widetilde{\mathrm{SW}}\mathrm{e}_{N}\) to right multiplication by \(c_{k}\) on \(F_{N}(\mathrm{HK})\).
Because \(\mathrm{e}_{N}\mathrm{e}_{k}=\mathrm{e}_{N}=\mathrm{e}_{k}\mathrm{e}_{N}\) for \(k\leq N\) we have
\[\mathrm{e}_{N}(Y_{1}\cdots Y_{k})\mathrm{e}_{N}\cdot(z\otimes 1)=\big[\text{string diagram omitted: the strands pass through }\Lambda^{N-k}V\otimes\Lambda^{k}V\otimes(\Lambda^{k}V)^{*}\otimes\Lambda^{k}V,\text{ then }\Lambda^{N-k}V\otimes\Lambda^{k}V,\text{ ending at }\Lambda^{N}V\big],\]
which is an element of the one-dimensional vector space \(\mathrm{Hom}_{U_{\mathfrak{q}}(\mathfrak{g})}(\Lambda^{N}V,\Lambda^{N}V\otimes C(\Lambda^{k}V))\), where \(C(\Lambda^{k}V)\cong(\Lambda^{k}V)^{*}\otimes\Lambda^{k}V\subseteq\mathcal{B}\cong\mathcal{O}_{\mathfrak{q}}(G)\) denotes the \(\mathcal{R}\)-submodule of matrix coefficients of \(\Lambda^{k}V\) under the Peter-Weyl decomposition. Another such homomorphism is given by multiplication by \(c_{k}\) and is depicted in graphical calculus by
\[c_{k}(z\otimes 1)=\big[\text{string diagram omitted: the strands pass through }\Lambda^{N}V\otimes(\Lambda^{k}V)^{*}\otimes\Lambda^{k}V,\text{ ending at }\Lambda^{N}V\big].\]
It follows that \(c_{k}(z\otimes 1)=\lambda\mathrm{e}_{N}Y_{1}Y_{2}\cdots Y_{k}\mathrm{e}_{N}(z\otimes 1)\) for some \(\lambda\in\mathcal{R}\). In order to determine the scalar \(\lambda\), we apply \(\mathrm{Id}_{\Lambda^{N}V}\otimes\epsilon_{\mathcal{B}}\) to both sides.
Applying \(\mathrm{Id}_{\Lambda^{N}V}\otimes\epsilon_{\mathcal{B}}\) to the first diagram gives simply \(z\otimes 1\). On the other hand, applying \(\mathrm{Id}_{\Lambda^{N}V}\otimes\epsilon_{\mathcal{B}}\) to the second gives scalar multiplication by \(\genfrac{[}{]}{0.0pt}{}{N}{k}_{\mathfrak{q}^{-2}}\). Hence \(\lambda=\genfrac{[}{]}{0.0pt}{}{N}{k}_{\mathfrak{q}^{-2}}\), and so \(\mathrm{e}_{N}c_{k}\mathrm{e}_{N}=\genfrac{[}{]}{0.0pt}{}{N}{k}_{\mathfrak{q}^{-2}}\mathrm{e}_{N}Y_{1}\cdots Y_{k}\mathrm{e}_{N}\). We have the same proportion in \(\mathbb{H}_{\mathrm{N}}\), i.e., \(\mathrm{e}_{N}s_{(1^{k})}\mathrm{e}_{N}=\genfrac{[}{]}{0.0pt}{}{N}{k}_{\mathfrak{q}^{-2}}\mathrm{e}_{N}Y_{1}\cdots Y_{k}\mathrm{e}_{N}\), and so the two \(S(\mathcal{Y})\)-actions precisely coincide, as claimed.
As an aside, observe the scalars computed above satisfy \(\genfrac{[}{]}{0.0pt}{}{N}{k}_{\mathfrak{q}^{-2}}=\mathfrak{q}^{k(k-N)}\dim_{ \mathfrak{q}}(\Lambda^{k}V)\) in the conventions of this paper.
We record the following corollary, which will appear later in the proof of Corollary 6.5.
**Corollary 4.7**.: _Under the identification \(\mathcal{B}^{G}\cong S(\mathcal{Y})\), the restriction to \(\mathcal{B}^{G}\) of the trivial character \(\epsilon:\mathcal{B}\to\mathcal{K}\) coincides with the restriction to \(S(\mathcal{Y})\) of the character \(\mathfrak{q}^{-2\rho}\)._
Proof.: In Proposition 4.6 we have shown \(c_{k}\) acts on \(z\otimes 1\in\Lambda^{N}V\otimes\mathrm{HK}(\epsilon)^{G}\) as
\[\begin{bmatrix}N\\ k\end{bmatrix}_{\mathfrak{q}^{-2}}=s_{(1^{k})}(1,\mathfrak{q}^{-2}, \mathfrak{q}^{-4},\ldots,\mathfrak{q}^{2-2N}).\]
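For instance, for \(k=1\) this says that \(c_{1}\), the quantum trace of \(V\), acts on \(z\otimes 1\) by the scalar \([N]_{\mathfrak{q}^{-2}}=1+\mathfrak{q}^{-2}+\cdots+\mathfrak{q}^{2-2N}\), which is indeed \(s_{(1)}\) evaluated at the point \((1,\mathfrak{q}^{-2},\ldots,\mathfrak{q}^{2-2N})\) representing \(\mathfrak{q}^{-2\rho}\).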
### Polynomial forms
In the proof of Theorem 4.14, we will apply Nakayama's lemma arguments. These arguments are applicable to finitely generated \(\mathcal{R}\)-modules; however, the modules to which we apply them are not finitely generated. In the GL setting we have a natural grading, and in the SL setting a natural filtration, but even the graded (resp. filtered) parts are not finitely generated.
We will define "polynomial" forms of the \(\mathcal{R}\)-modules \(\mathcal{D}_{\mathfrak{q}}(G)\), HK, and \(\mathbb{H}\) with the property that the graded parts are finitely generated, and the entire module is obtained by localizing (in the GL case) or specialising (in the SL case) certain explicit quantum determinants. This manoeuvre will allow us to apply Nakayama's lemma.
**Definition 4.8**.: The twisted reflection equation algebra of type \(\mathrm{GL}_{N}\), denoted \(\overline{\mathcal{O}_{\mathfrak{q}}(\mathrm{Mat}_{\mathrm{N}})}\), is the algebra generated by symbols \(\bar{\ell}_{j}^{i}\), for \(i,j=1,\ldots N\), subject to the relations,
\[R_{21}^{-1}\bar{L}_{1}R_{12}^{-1}\bar{L}_{2}=\bar{L}_{2}R_{21}^{-1}\bar{L}_{1}R _{12}^{-1},\]
where \(\bar{L}:=\sum_{i,j}\bar{\ell}_{j}^{i}E_{i}^{j}\) is a matrix with entries the generators \(\bar{\ell}_{j}^{i}\).
Note that \(\overline{\mathcal{O}_{\mathfrak{q}}(\mathrm{Mat}_{\mathrm{N}})}\simeq \mathcal{O}_{\mathfrak{q}^{-1}}(\mathrm{Mat}_{\mathrm{N}})\), as one obtains the inverse R-matrix by inverting \(\mathfrak{q}\).
We note that by construction \(\mathcal{O}_{\mathfrak{q}}(\mathrm{Mat}_{\mathrm{N}})\) and \(\overline{\mathcal{O}_{\mathfrak{q}}(\mathrm{Mat}_{\mathrm{N}})}\) are positively graded with finite-dimensional graded pieces, a property which does not survive to their common localization to \(\mathcal{O}_{\mathfrak{q}}(\mathrm{GL}_{N})\) nor their common quotient to \(\mathcal{O}_{\mathfrak{q}}(\mathrm{SL}_{N})\).
In order to give a polynomial version of \(\mathcal{D}_{\mathfrak{q}}(G)\), we need to introduce the following modifications.
**Definition 4.9**.: For \(G=\mathrm{GL}_{N}\), or \(\mathrm{SL}_{N}\), the algebra \(\mathcal{D}_{\mathfrak{q}}(G)^{+}\) is the twisted tensor product,
\[\mathcal{D}_{\mathfrak{q}}(G)^{+}=\overline{\mathcal{O}_{\mathfrak{q}}(G)} \operatorname{\widetilde{\otimes}}\mathcal{O}_{\mathfrak{q}}(G). \tag{4.8}\]
Denoting \(\bar{a}_{j}^{i}\) and \(b_{j}^{i}\) to be the generators of the first and second factors, the cross relations are given in matrix form by:
\[\bar{A}_{1}R_{21}^{-1}B_{2}R_{21} =R_{12}B_{2}R_{21}\bar{A}_{1}, \text{if }G=\mathrm{GL}_{N} \tag{4.9}\] \[\bar{A}_{1}R_{21}^{-1}B_{2}R_{21} =R_{12}B_{2}R_{21}\bar{A}_{1}\mathfrak{q}^{-2/N}, \text{if }G=\mathrm{SL}_{N} \tag{4.10}\]
where \(\bar{A}=\sum_{i,j}\bar{a}_{j}^{i}E_{i}^{j}\) and \(B=\sum_{i,j}b_{j}^{i}E_{i}^{j}\).
Recall that HK is naturally a quotient of \(\mathcal{D}_{\mathfrak{q}}(G)\) by a certain left ideal \(J\). It will be convenient to use the matrix notation from Proposition 2.17 to describe that ideal and motivate the definition of its polynomial version. Since the matrices \(A\) and \(B\) are invertible, we may rewrite \(J=\mathcal{D}_{\mathfrak{q}}(G)\cdot C(\mathrm{ad})=\mathcal{D}_{\mathfrak{q}} (G)\cdot C^{\prime\prime}(\mathrm{ad})\), where
\[C^{\prime\prime}(\mathrm{ad})=\{\text{matrix coefficients of the matrix $A^{-1}B-BA^{-1}$}\}.\]
**Definition 4.10**.: For \(G=\mathrm{GL}_{N}\) or \(\mathrm{SL}_{N}\), the \(\mathcal{D}_{\mathfrak{q}}(G)^{+}\)-module \(\mathrm{HK}^{+}\) is the quotient,
\[\mathrm{HK}^{+}=\mathcal{D}_{\mathfrak{q}}(G)^{+}/\mathcal{D}_{\mathfrak{q}}(G )^{+}\cdot\bar{C}(\mathrm{ad}),\]
where
\[\bar{C}(\mathrm{ad})=\{\text{matrix coefficients of the matrix $\bar{A}B-B\bar{A}$}\}.\]
By construction we have homomorphisms \(\mathcal{D}_{\mathfrak{q}}(G)^{+}\to\mathcal{D}_{\mathfrak{q}}(G)\) given by \(\bar{A},B\mapsto A^{-1},B\). Note that for \(G=\mathrm{GL}_{N}\) this map is a localization whereas for \(G=\mathrm{SL}_{N}\) it is a quotient. In either case we have an isomorphism,
\[\mathrm{HK}=\mathcal{D}_{\mathfrak{q}}(G)\otimes_{\mathcal{D}_{\mathfrak{q}}( G)^{+}}\mathrm{HK}^{+}\,.\]
We note for use in Theorem 4.14 that \(\mathrm{HK}^{+}\) is a graded \(\mathcal{R}\)-module all of whose graded components are finitely generated over \(\mathcal{R}\). We now turn to the polynomial submodules in the DAHA setting, where definitions are more straightforward.
Let \(\mathcal{R}[\mathcal{Y}]^{+}=\mathcal{R}[Y_{1},\cdots,Y_{n}]\) and \(\mathcal{R}[\mathcal{X}]^{+}=\mathcal{R}[X_{1}^{-1},\cdots,X_{n}^{-1}]\). It is convenient to renotate \(\bar{X}_{i}:=X_{i}^{-1}\). We denote by \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)^{+}\) the \(\mathcal{R}\)-subalgebra of \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\) generated by \(\mathrm{H}^{\mathrm{fin}}\), \(\mathcal{R}[\mathcal{Y}]^{+}\) and \(\mathcal{R}[\mathcal{X}]^{+}\). By construction, we have that \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)\) is the localization,
\[\mathbb{H}_{n}^{\mathrm{GL}}(q,t)=\mathbb{H}_{n}^{\mathrm{GL}}(q,t)^{+}[(Y_{1} \ldots Y_{n})^{-1},(\bar{X}_{1}\ldots\bar{X}_{n})^{-1}]\]
at the elements \((Y_{1}\ldots Y_{n})\), \((\bar{X}_{1}\ldots\bar{X}_{n})\).
We endow \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)^{+}\) with a grading of \(\mathcal{R}\)-modules by setting \(\deg(\bar{X}_{i})=\deg(Y_{i})=1\) for all \(i=1,\ldots n\), and \(\deg(T_{j})=0\) for \(j=1\ldots n-1\), and note that the resulting homogeneous components are finite rank and free over \(\mathcal{R}\).
**Remark 4.11**.: Alternatively \(\mathbb{H}_{n}^{\mathrm{GL}}(q,t)^{+}\) is the subalgebra generated by negative powers of \(\pi\), \(\mathcal{R}[\mathcal{Y}^{+}]\), and \(\mathrm{H}^{\mathrm{fin}}\), noting \((\bar{X}_{1}\cdots\bar{X}_{n})=\pi^{-n}\). With respect to the grading above, we have \(\deg(\pi^{-1})=1\).
We define \(\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)^{+}\) to be the algebra generated by \(\mathrm{H}_{n}^{\mathrm{fin}}\), \(Z_{i}\), \(\overline{\pi}^{-1}\), with relations obtained by modifying each relation in Definition 3.11 (by multiplying on left and right by \(\overline{\pi}^{-1}\) as needed) to involve only \(\overline{\pi}^{-1}\). However, we do not include the final two relations \(Z_{1}\cdots Z_{n}=\boldsymbol{Z}\) and \(\overline{\pi}^{-n}=1\) (modified from the original relation \(\overline{\pi}^{n}=1\)). We denote the corresponding subalgebras of \(\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)^{+}\) by \(\mathcal{R}[\mathcal{Y}]^{+}:=\mathcal{R}[Z_{1},\cdots,Z_{n}]\) and \(\mathcal{R}[\mathcal{X}]^{+}:=\mathcal{R}[\bar{X}_{1},\cdots,\bar{X}_{n}]\), where \(\bar{X}_{1}=T_{1}\cdots T_{n-1}\overline{\pi}^{-1}\) and \(\bar{X}_{i+1}=T_{i}^{-1}\bar{X}_{i}T_{i}^{-1}\). We note that \(\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)^{+}\) surjects to \(\mathbb{H}_{n}^{\mathrm{SL}}(\boldsymbol{q},t)\) with kernel (generated by the omitted relations), rather than embedding as a subalgebra.
Recall that we have the Schur-Weyl duality map (4.3)
\[\mathrm{e}_{N}\mathrm{\check{SW}e}_{N}\colon\mathrm{e}_{N}\mathbb{H}\mathrm{e}_{N}\to\mathrm{HK}^{G}\,.\]
In the case \(G=\mathrm{GL}_{N}\) we note that this map takes the positive subspace \(\mathrm{e}_{N}\mathbb{H}^{+}\mathrm{e}_{N}\) to \((\mathrm{HK}^{+})^{G}\). In the \(\mathrm{SL}_{N}\) case, we note instead that the generators-and-relations construction of \(\mathrm{\check{SW}e}_{N}\) in [10] immediately lifts to a map from \(\mathrm{e}_{N}\mathbb{H}^{+}\mathrm{e}_{N}\) to \((\mathrm{HK}^{+})^{G}\). We formulate these observations more precisely in the following proposition.
**Proposition 4.12**.: _Each of \(\mathrm{e}_{N}\mathbb{H}^{+}\mathrm{e}_{N}\) and \((\mathrm{HK}^{+})^{G}\) are positively graded \(\mathcal{R}\)-modules with finitely generated graded components, and we have a map of graded \(\mathcal{R}\)-modules_
\[(\mathrm{e}_{N}\mathrm{\check{SW}e}_{N})^{+}\colon\mathrm{e}_{N}\mathbb{H}^{+ }\mathrm{e}_{N}\to(\mathrm{HK}^{+})^{G}\]
_such that we recover the map_
\[\mathrm{e}_{N}\mathrm{\check{SW}e}_{N}\colon\mathrm{e}_{N}\mathbb{H}\mathrm{e }_{N}\to(\mathrm{HK})^{G}\]
_by localizing \(\bar{X}_{1}\dots\bar{X}_{N}Y_{1}\dots Y_{N}\) on the source and \(\det_{\mathfrak{q}^{-1}}(\bar{A})\det_{\mathfrak{q}}(B)\) on the target in the \(\mathrm{GL}_{N}\) case, and by setting \(\bar{X}_{1}\dots\bar{X}_{N}Y_{1}\dots Y_{N}=1\) and \(\det_{\mathfrak{q}^{-1}}(\bar{A})\det_{\mathfrak{q}}(B)=1\) in the \(\mathrm{SL}_{N}\) case._
### Proof of Theorem 1.1
We are now ready to prove the main result of this section, which is Theorem 1.1 from the introduction. We now fix \(n=N\) throughout, and suppress these subscripts from the notation \(\mathbb{H}_{N}\), \(\mathcal{S}H_{N}\), \(\mathrm{e}_{N}\), etc. Recall as in Notation 4.1 we have specialized \(q=t^{-2}=\mathfrak{q}^{-2}\) and \(\boldsymbol{q}=t^{1/N}=\mathfrak{q}^{1/N}\).
**Theorem 4.13**.: _We have the following:_
1. _The map_ \((\mathrm{\check{SW}e})_{\mathcal{K}}\) _is injective._
2. _The map_ \((\mathrm{e}\mathrm{\check{SW}e})_{\mathcal{K}}\) _is injective._
3. _The maps_ \((\mathrm{\check{SW}e})_{\mathcal{R}}\)_,_ \((\mathrm{e}\mathrm{\check{SW}e})_{\mathcal{R}}\) _are injective._
Proof.: Claim (2) follows from claim (1) by applying \(\mathrm{e}\) to both sides. Claim (3) then follows because each of \(\mathbb{H}\mathrm{e}\), \(\mathrm{e}\mathbb{H}\mathrm{e}\) are free over \(\mathcal{R}\), hence the kernel of each map must be both torsion and torsion free, hence zero.
It remains to prove claim (1). For the remainder of the proof we work over base ring \(\mathcal{K}\). By their constructions, for each \(\chi\in\mathrm{Spec}(S(\mathcal{Y}))\) we have natural isomorphisms,
\[\mathbb{H}\cdot\mathrm{e}\underset{S(\mathcal{Y})}{\otimes}\chi\cong \mathrm{Ind}_{\mathcal{S}H}^{\mathbb{H}}(\chi\boxtimes\mathrm{sgn}),\quad \text{ and }\quad\mathrm{HK}\underset{S(\mathcal{Y})}{\otimes}\chi\cong \mathrm{HK}(\chi).\]
By Proposition 4.6, \(\mathrm{\check{SW}e}\) descends to a \(\mathbb{H}\)-module homomorphism,
\[\mathrm{\check{SW}e}\otimes_{i_{\bar{\mathcal{B}}}}\chi:\mathrm{Ind}_{ \mathcal{S}H}^{\mathbb{H}}(\chi\boxtimes\mathrm{sgn})\to F_{N}(\mathrm{HK}( \chi)),\]
such that \((\mathrm{\check{SW}e}\otimes_{i_{\bar{\mathcal{B}}}}\chi)(1\otimes\underline{u })=z\otimes 1\). This expression is nonzero by Corollary 2.32. Recall that \(\mathbb{H}\cdot\mathrm{e}\) is a free right \(S(\mathcal{Y})\)-module. Hence \(\mathrm{\check{SW}e}\) is injective if, and only if, its specialisation \(\mathrm{\check{SW}e}\otimes_{i_{\bar{\mathcal{B}}}}\chi\) is injective for generic \(\chi\). For generic \(\chi\), \(\mathrm{Ind}_{\mathcal{S}H}^{\mathbb{H}}(\chi\boxtimes\mathrm{sgn})\) is irreducible and hence \(\mathrm{\check{SW}e}\otimes_{i_{\bar{\mathcal{B}}}}\chi\), being nonzero, must be injective. Thus \(\mathrm{\check{SW}e}\) is injective.
**Theorem 4.14**.: _We have the following:_
1. _The map_ \((\mathrm{e}\mathrm{\check{SW}e})_{\kappa}\) _is an isomorphism._
2. _The map_ \((\mathrm{e}\tilde{\mathrm{S}}\mathrm{We})_{\mathcal{R}}\) _is surjective._
3. _The map_ \((\mathrm{e}\tilde{\mathrm{S}}\mathrm{We})_{\mathcal{K}}\) _is surjective._
Proof.: By Proposition 4.12 and the right exactness of localization and quotients it suffices to prove these statements at the level of the polynomial forms introduced in Section 4.3. Claim (2) then follows from Claim (1) by an application of Nakayama's lemma, noting that the graded components of the modules are finitely generated \(\mathcal{R}\)-modules. Claim (3) then follows from Claim (2) by the exactness of localization.
Thus it remains to prove Claim (1). First note that the source \((\mathrm{e}\mathbb{H}^{+}\mathrm{e})_{\kappa}\) is commutative, and identifies with the space of diagonally invariant polynomials \(\kappa[\mathcal{X}^{+},\mathcal{Y}^{+}]^{\mathfrak{S}_{N}}:=\kappa[\bar{X}_{1},\dots,\bar{X}_{N},Y_{1},\dots,Y_{N}]^{\mathfrak{S}_{N}}\), via the map which takes an invariant polynomial \(p\) to \(\mathrm{e}\cdot p\cdot\mathrm{e}\). On the other hand, the target \((F_{N}(\mathrm{HK}^{+}))_{\kappa}\) is also commutative, and identifies with \((\mathcal{O}(Mat_{N})/I)^{G}=\mathcal{O}(Mat_{N})^{G}/I^{G}\), where \(I\) is the ideal generated by \(C(\mathrm{ad})_{\kappa}\). According to [10, Theorem 1.2.1], the ideal \(I^{G}\) is reduced, and this algebra identifies with the coordinate ring \(\mathcal{O}(\mathrm{Comm}_{N})^{G}\) of the variety of pairs of commuting matrices modulo simultaneous conjugation.
Putting these observations together, we may identify \(\mathrm{e}\tilde{\mathrm{S}}\mathrm{We}_{\kappa}^{+}\) with a certain algebra map
\[s:\kappa[\mathcal{X}^{+},\mathcal{Y}^{+}]^{\mathfrak{S}_{N}}\to\mathcal{O}( \mathrm{Comm}_{N})^{G}.\]
Let us unpack the definition of the map in this setting. First, recall that given a pair of commuting matrices \(\bar{A},B\in\mathrm{End}(V)\), there is a natural algebra map
\[\kappa[\mathcal{X}^{+},\mathcal{Y}^{+}] \to\mathrm{End}(V^{\otimes N})\] \[p \mapsto p_{\bar{A},B}\]
defined by the property that \((\bar{X}_{i})_{\bar{A},B}\) (respectively \((Y_{i})_{\bar{A},B}\)) acts by \(\bar{A}\) (respectively \(B\)) in the \(i\)th tensor factor. When \(p\in\kappa[\mathcal{X}^{+},\mathcal{Y}^{+}]^{\mathfrak{S}_{N}}\) is an invariant polynomial, \(p_{\bar{A},B}\) acts by a scalar on the antisymmetric tensors \(\Lambda^{N}V\hookrightarrow V^{\otimes N}\). This scalar is the value of \(s(p)\) on the pair of matrices \((\bar{A},B)\in\mathrm{Comm}_{N}\).
On the other hand, by [14, Proposition 6.2.1], restriction to diagonal matrices induces an isomorphism
\[r:\mathcal{O}(\mathrm{Comm}_{N})^{G}\to\mathcal{O}(\mathfrak{t}\times \mathfrak{t})^{\mathfrak{S}_{N}}\cong\kappa[\mathcal{X}^{+},\mathcal{Y}^{+}]^ {\mathfrak{S}_{N}}.\]
We claim that the maps \(r\) and \(s\) are inverse to one another, and thus \(s\) is an isomorphism as desired. Indeed, unwinding the definitions, the claim boils down to the fact that if \(\bar{A}=diag(a_{1},\dots,a_{N})\) and \(B=diag(b_{1},\dots,b_{N})\) are diagonal matrices, then
\[s(p)(\bar{A},B)=p(a_{1},\dots a_{N},b_{1},\dots,b_{N}),\]
which can be readily checked using the above description of the map \(s\).
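To make the description of \(s\) concrete, here is a small numerical sketch; it is ours, not from the paper, and the matrices and the invariant polynomial \(p=\bar{X}_{1}\bar{X}_{2}+Y_{1}+Y_{2}\) are ad hoc choices for \(N=2\).

```python
# A small numerical illustration (ours) of the map s for N = 2: for commuting
# diagonal matrices Abar, B, the invariant polynomial p = Xbar_1*Xbar_2 + Y_1 + Y_2
# acts on the antisymmetric line Lambda^2 V inside V tensor V by the scalar
# p(a_1, a_2, b_1, b_2).
import numpy as np

a1, a2, b1, b2 = 2.0, 3.0, 5.0, 7.0
Abar = np.diag([a1, a2])
B = np.diag([b1, b2])                 # diagonal, so it commutes with Abar
I = np.eye(2)

# p_{Abar,B}: each Xbar_i acts by Abar in the i-th tensor factor, each Y_i by B
P = np.kron(Abar, Abar) + np.kron(B, I) + np.kron(I, B)

e1, e2 = np.eye(2)                    # standard basis of V
w = np.kron(e1, e2) - np.kron(e2, e1) # spans Lambda^2 V inside V tensor V

scalar = (P @ w) @ w / (w @ w)        # eigenvalue of P on the line spanned by w
assert np.isclose(scalar, a1 * a2 + b1 + b2)
print("s(p)(Abar, B) =", scalar)
```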
In particular, combining Theorem 4.14 with Theorem 4.13, we obtain that the maps \((\mathrm{e}\tilde{\mathrm{S}}\mathrm{We})_{\mathcal{R}}\) and \((\mathrm{e}\tilde{\mathrm{S}}\mathrm{We})_{\mathcal{K}}\) are isomorphisms, thus proving Theorem 1.1.
## 5. Endomorphisms of \(\mathrm{HK}(\epsilon)\) via shift isomorphism
Throughout this section we fix \(G=\mathrm{GL}_{N}\), and set \(n=N\), and we will assume \(q=t^{-2}\). We therefore adopt the shorthand from Notation 4.1. We also work over a base field \(\mathcal{K}\). We sometimes abbreviate \(\mathcal{Y}\) for \(\mathcal{K}[\mathcal{Y}]\) and refer to \(\mathcal{Y}\)-weight spaces. We discuss the modification of statements and their proofs for \(\mathrm{SL}_{N}\) at the end of this section.
Sections 5 and 6 contain proofs and computations in the DAHA that will be used to prove the main theorems of this paper, via the functors \(\mathrm{e}F_{N}\) and \(F^{\prime}_{N}\).
Let us first recall the following basic categorical fact (see e.g. [1, Appendix A]).
**Proposition 5.1**.: _Let \(\mathcal{C}\) be a presentable abelian category and \(P\) a compact projective object of \(\mathcal{C}\). Let \(R=\mathrm{End}_{\mathcal{C}}(P)^{\mathrm{op}}\). Then the natural functor_
\[F=\mathrm{Hom}_{\mathcal{C}}(P,-):\mathcal{C}\to R\text{-mod}\]
_has a fully faithful left adjoint \(F^{L}\), whose essential image is the full subcategory of \(\mathcal{C}\) generated by \(P\). In particular, given any quotient object \(P\to Q\) we have an isomorphism_
\[\mathrm{End}_{\mathcal{C}}(Q)\cong\mathrm{End}_{R}(F(Q)).\]
**Remark 5.2**.: The left adjoint \(F^{L}\) is very explicit. Given an \(R\)-module \(L\), choose a presentation \(L=\mathrm{coker}(R^{\oplus I}\to R^{\oplus J})\). Then \(F^{L}(L)\) is given by \(\mathrm{coker}(P^{\oplus I}\to P^{\oplus J})\) (recall that \(R=\mathrm{End}(P)^{\mathrm{op}}\), so the morphisms in the two cases are given by the same data). It follows that if \(M=\mathrm{coker}(P^{\oplus I}\to P^{\oplus J})\) is in the subcategory generated by \(P\), then the counit map is an isomorphism:
\[F^{L}F(M)\cong M.\]
Applying Proposition 5.1 to the category \(\mathcal{D}_{\mathfrak{q}}(G)\)-mod\({}^{G}\) with the compact projective object \(P=\mathrm{HK}\), and using Theorem 1.1 to identify \(\mathrm{End}(\mathrm{HK})^{\mathrm{op}}\) with \(\mathrm{e}\mathbb{H}_{N}\mathrm{e}\), we obtain the following corollary.
**Corollary 5.3**.: _The functor_
\[\mathrm{e}F_{N}:\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}\to\mathrm{e} \mathbb{H}_{N}\mathrm{e}\text{-mod}\]
_has a fully faithful left adjoint \((eF_{N})^{L}\) whose essential image is the subcategory generated by \(\mathrm{HK}\). In particular, given any quotient \(M\) of \(\mathrm{HK}\), we have an isomorphism of algebras_
\[\mathrm{End}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}}(M)\cong\mathrm{ End}_{\mathrm{e}\mathbb{H}_{N}\mathrm{e}}(\mathrm{e}F_{N}(M)).\]
**Remark 5.4**.: It is proved in [10] that when \(G=\mathrm{GL}_{N}\), the functor \(\mathrm{e}F_{N}\) is conservative (as is the functor \(F_{N}\) itself), and thus an equivalence. This is not true in the \(\mathrm{SL}_{N}\) case.
### The shift isomorphism
The shift isomorphism [10, 11] is an isomorphism between the antispherical DAHA and the spherical DAHA with "shifted" parameters:3
Footnote 3: More precisely, following [11] and [10], the authors of [1] prove a shift isomorphism relating the anti-spherical double affine Hecke algebra to the spherical double affine Hecke algebra, upon trigonometric degeneration. However the proofs apply verbatim to the non-degenerate setting.
\[\mathrm{e}\mathbb{H}_{N}^{\mathrm{GL}}(q,t)\mathrm{e}\cong\mathrm{e}^{+} \mathbb{H}_{N}^{\mathrm{GL}}(q,tq^{1/2})\mathrm{e}^{+}\]
in our conventions. We will need this isomorphism at our specialization \(q=t^{-2}\). Namely, taking the field \(\mathcal{K}\) and \(\mathbb{H}_{\mathrm{N}}\) as in Section 4, we obtain an isomorphism of \(\mathcal{K}\)-algebras
\[\mathrm{e}\mathbb{H}\mathrm{e}\cong\mathrm{e}^{+}\mathcal{D}_{q}(H)\#\mathfrak{ S}_{N}\mathrm{e}^{+}=\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}.\]
The algebra \(\mathcal{D}_{q}(H)\) of \(q\)-difference operators on the maximal torus \(H\) of \(G\) is presented here as the group algebra of the doubled weight lattice \(\Lambda\oplus\Lambda\) of \(G\), with multiplication twisted by the symplectic pairing \(\omega\) canonically attached to the symmetric Cartan pairing on \(\Lambda\). Further, the shift isomorphism sends \(S(\mathcal{Y})\) to \(S(\mathcal{Y})\), where we have identified \(i_{\mathcal{B}}(\mathcal{O}_{q}(H))\) or \(\mathcal{K}[0\oplus\Lambda]\) with \(\mathcal{K}[\mathcal{Y}]\). Below we will identify \(i_{\mathcal{A}}(\mathcal{O}_{q}(H))\) or \(\mathcal{K}[\Lambda\oplus 0]\) with \(\mathcal{K}[\mathcal{X}]\). (It is important that we express characters of \(S(\mathcal{Y})\) in terms of the loop parameter \(q\) and not the quadratic parameter, since the latter changes under the shift.)
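As a toy illustration of \(\mathcal{D}_{q}(H)\) in rank one (our sketch, not the paper's presentation): on functions on \(H=\mathcal{K}^{\times}\), a lattice generator \(X\) acts by multiplication and the dual generator \(Y\) by a \(q\)-shift, and the twist by \(\omega\) is visible in the relation \(YX=qXY\).

```python
# A rank-one toy model (ours, illustrative only) of the algebra D_q(H) of q-difference
# operators: X acts by multiplication by x and Y by the q-shift f(x) -> f(qx);
# the defining relation Y X = q X Y then holds on any polynomial.
import sympy as sp

x, q = sp.symbols('x q')

def X(f):
    return x * f

def Y(f):
    return f.subs(x, q * x)

f = 1 + x + 3 * x**2                  # an arbitrary test polynomial
assert sp.expand(Y(X(f)) - q * X(Y(f))) == 0
print("relation YX = qXY verified on a sample polynomial")
```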
Let us also recall that there is a Morita equivalence between \(\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}\) and \(\mathcal{D}_{q}(H)\#\mathfrak{S}_{N}\). Indeed, this follows from Theorem 2.4 of [10], noting that \(\mathcal{D}_{q}(H)\) is simple (when \(q\) is not a root of unity as in our case). Consider the composite functor
\[\Upsilon:\mathcal{D}_{\mathfrak{q}}(G)\text{-}\mathrm{mod}^{G}\xrightarrow[ \mathrm{e}F_{N}]{}\mathrm{e}\mathbb{H}\text{e-}\mathrm{mod}\xrightarrow[\text{ shift}]{}\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}\text{-}\mathrm{mod}\xrightarrow[ \text{Morita}]{}\mathcal{D}_{q}(H)\#\mathfrak{S}_{N}\text{-}\mathrm{mod}\]
Let us record the consequence of all of these observations, which will be used to compute the endomorphism algebras of Hotta-Kashiwara modules.
**Corollary 5.5**.: _The functor_
\[\Upsilon:\mathcal{D}_{\mathfrak{q}}(G)\text{-}\mathrm{mod}^{G}\to\mathcal{D}_{ q}(H)\#\mathfrak{S}_{N}\text{-}\mathrm{mod}\]
_has a fully faithful left adjoint \(\Upsilon^{L}\) whose essential image is the subcategory generated by \(\mathrm{HK}\). In particular, given any quotient \(M\) of \(\mathrm{HK}\), we have an isomorphism of algebras_
\[\mathrm{End}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-}\mathrm{mod}^{G}}(M)\cong \mathrm{End}_{\mathcal{D}_{q}(H)\#\mathfrak{S}_{N}}(\Upsilon(M)).\]
In particular, applying Corollary 5.5 to \(\mathrm{HK}\) itself, we have an isomorphism of algebras (thus giving a \(\mathfrak{q}\)-deformed Levasseur-Stafford isomorphism as in (5.1)):
\[\mathrm{End}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-}\mathrm{mod}^{G}}(\mathrm{ HK})\cong\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}. \tag{5.1}\]
**Remark 5.6**.: We expect that there exists an analogous functor \(\Upsilon\) and an isomorphism (5.1) for an arbitrary connected reductive group \(G\), even though there is no analogue of the functor \(F_{N}\).
In the remainder of this paper we will present two different proofs of each of Theorem 1.2 and Theorem 1.4 using Corollary 5.3 and Corollary 5.5 respectively.
### Proof of Theorem 1.2 via the shift isomorphism
To lighten notation in this section, write
\[\mathtt{D} =\mathtt{D}_{N}=\mathcal{D}_{q}(H)\#\mathfrak{S}_{N}\quad\text{as in Corollary 5.5},\] \[\mathtt{S} =\mathtt{S}_{N}=\mathcal{D}_{q}(H)^{\mathfrak{S}_{N}}\quad\text{as in equation (5.1)},\] \[\Gamma =\Gamma_{N}=\mathcal{K}[\mathcal{Y}]\#\mathfrak{S}_{N}\subseteq\mathtt{D},\] \[\mathfrak{S}\mathfrak{S}_{N} =\mathfrak{S}\mathfrak{S}=S(\mathcal{Y})\otimes\mathcal{K}[\mathfrak{S}_{N}]\subseteq\Gamma,\]
where we have identified \(i_{\mathcal{B}}(\mathcal{O}_{\mathfrak{q}}(H))\) with \(\mathcal{K}[\mathcal{Y}]\).
Proposition 4.6 and Corollary 4.7 give the following corollary.
**Corollary 5.7**.:
1. _We have isomorphisms,_ \[\mathrm{e}F_{N}(\mathrm{HK}(\epsilon))\cong\mathrm{e}\operatorname{Ind}_{\mathcal{S}H}^{\mathbb{H}_{N}}(\epsilon\boxtimes\operatorname{sgn})\cong\mathrm{e}\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}_{N}}(q^{\rho})\cong\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{q^{\rho}\},\] _where_ \(q^{\rho}(Y_{i}):=q^{i-1}=t^{2-2i}\)_, or_ \(q^{\rho}=t^{-2\rho}\)_, corresponds to_ \[\underline{\mathbf{a}}=(1,\cdots,q^{N-1})=(t^{0},t^{-2},\ldots,t^{2-2N}).\]
2. _More generally, if \(\chi\) corresponds to \(\underline{\mathbf{a}}\) via (4.4) and is transverse, i.e., \(i\neq j\implies a_{i}\neq a_{j}\), then \(\mathrm{e}F_{N}(\mathrm{HK}(\chi))\cong\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{\underline{\mathbf{a}}\}\)._
We are now ready to state the main result of this section. The results from Section 4 and Corollary 5.7 imply that
\[\operatorname{End}(\mathrm{HK}(\epsilon))\cong\operatorname{End}_{\mathrm{e}\mathbb{H}\mathrm{e}}(\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{q^{\rho}\}), \tag{5.2}\]
which we show in the proposition below is isomorphic to the group algebra of the symmetric group.
**Proposition 5.8**.: \(\operatorname{End}_{\mathrm{e}\mathbb{H}\mathrm{e}}(\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{q^{\rho}\})\cong\mathcal{K}[\mathfrak{S}_{N}]^{\mathrm{op}}\)
Proof.: By Theorem 3.22, for descending \(\underline{\mathbf{a}}\) we have:
\[\mathrm{e}\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{a}}\cong\mathrm{e}\operatorname{Ind}_{\mathcal{S}H}^{\mathbb{H}}(\{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn})=\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{\underline{\mathbf{a}}\},\]
and likewise a similar statement holds replacing \(\mathrm{e}\) with \(\mathrm{e}^{+}\) if the reverse \(w_{0}(\underline{\mathbf{a}})\) is descending. Note that for any \(\sigma\in\mathfrak{S}_{N}\), and in particular \(\sigma=w_{0}\), we have that \(\operatorname{Ind}_{\mathcal{Y}}^{\Gamma}\underline{\mathbf{a}}\cong \operatorname{Ind}_{\mathcal{Y}}^{\Gamma}\sigma(\underline{\mathbf{a}})\). Further for any affine permutation \(w\in\widehat{\mathfrak{S}}_{N}\) we have \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{a}}\cong \operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}w(\underline{\mathbf{a}})\) (recall the analogous statements for \(\mathrm{H}(\mathcal{Y})\) and \(\mathbb{H}\) are false, as seen in Example 6.18). In particular we have an isomorphism,
\[\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}w_{0}(q^{\rho})\cong\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N} \tag{5.3}\]
where we have written \(1^{N}\) for the \(\mathcal{K}[\mathcal{Y}]\)-module corresponding to \(\underline{\mathbf{a}}=(1,1,\cdots,1)\) and will write \(\{1^{N}\}\) for \(\{\underline{\mathbf{a}}\}\). It is easy to check the module in (5.3) is \(\mathcal{Y}\)-semisimple, with each weight space of dimension \(N!\).
Using the shift isomorphism, Morita equivalence, and (5.3) we compute
\[\operatorname{End}_{\mathrm{e}\mathbb{H}\mathrm{e}}(\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{q^{\rho}\})\cong\operatorname{End}_{\mathtt{S}}(\operatorname{Ind}_{S(\mathcal{Y})}^{\mathtt{S}}\{q^{\rho}\})\cong\operatorname{End}_{\mathtt{D}}(\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}w_{0}(q^{\rho}))\cong\operatorname{End}_{\mathtt{D}}(\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}).\]
Note \(\operatorname{Ind}_{\mathcal{Y}}^{\Gamma}1^{N}\cong 1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}]\) is the module on which \(\mathfrak{S}_{N}\) acts via the regular representation and all operators \((Y_{i}-1)\) vanish, for \(1\leq i\leq N\). We now observe that the \(1^{N}\) weight space of \(\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}=\operatorname{Ind}_{\Gamma}^{\mathtt{D}}\operatorname{Ind}_{\mathcal{Y}}^{\Gamma}1^{N}\) is exactly \(1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}]\). To continue, the above endomorphism algebra is:
\[\operatorname{End}_{\mathtt{D}}(\operatorname{Ind}_{\Gamma}^{\mathtt{D}}1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}])\cong\operatorname{Hom}_{\Gamma}(1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}],\operatorname{Res}\operatorname{Ind}_{\Gamma}^{\mathtt{D}}1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}])\cong\operatorname{Hom}_{\Gamma}(1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}],1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}])\cong\operatorname{Hom}_{\mathcal{K}[\mathfrak{S}_{N}]}(\mathcal{K}[\mathfrak{S}_{N}],\mathcal{K}[\mathfrak{S}_{N}])\cong\mathcal{K}[\mathfrak{S}_{N}]^{\mathrm{op}}.\]
Proposition 5.8 together with (5.2) completes the proof of Theorem 1.2.
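The final step of the proof is the standard fact that the commutant of the left regular representation of a finite group is the group algebra acting on the right. As a quick illustration (ours, with ad hoc names), one can confirm numerically that the commutant of the regular representation of \(\mathfrak{S}_{3}\) has dimension \(3!=6\).

```python
# A quick numerical check (ours) that the commutant of the left regular
# representation of S_3 is 6-dimensional, matching K[S_3]^op acting on the right.
import itertools
import numpy as np

elems = list(itertools.permutations(range(3)))       # the 6 elements of S_3
idx = {g: k for k, g in enumerate(elems)}

def compose(g, h):                                   # (g h)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(3))

def left_mult(g):                                    # matrix of L_g on K[S_3]
    M = np.zeros((6, 6))
    for h in elems:
        M[idx[compose(g, h)], idx[h]] = 1.0
    return M

gens = [left_mult((1, 0, 2)), left_mult((0, 2, 1))]  # L_{s_1}, L_{s_2} generate
# Row-major vectorization: vec(A E B) = (A kron B^T) vec(E), so the condition
# L E - E L = 0 becomes (L kron I - I kron L^T) vec(E) = 0.
A = np.vstack([np.kron(L, np.eye(6)) - np.kron(np.eye(6), L.T) for L in gens])
commutant_dim = 36 - np.linalg.matrix_rank(A)
assert commutant_dim == 6
print("dim of commutant of the regular representation of S_3:", commutant_dim)
```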
**Remark 5.9**.: It is important to observe in the proof above that we cannot apply Theorem 3.22 to \(\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}\), as \(f_{i,j}\mid_{1^{N}}=0\) at quadratic parameter \(1\). Indeed the module in (5.3) is not isomorphic to \(\operatorname{Ind}_{S(\mathcal{Y})\otimes\mathfrak{S}_{N}}^{\Gamma}\{1^{N}\}\boxtimes\operatorname{triv}\). It is easy to check the latter module is not \(\mathcal{Y}\)-semisimple; and neither is \(\operatorname{Ind}_{S(\mathcal{Y})}^{\mathcal{K}[\mathcal{Y}]}\{1^{N}\}\).
### Proof of Theorem 1.3 via the shift isomorphism
In order to prove that the \(M_{\lambda}\) appearing in the statement of Theorem 1.3 are distinct indecomposable modules, we first prove the irreducibility of the indecomposable summands of \(\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}\) given by the minimal idempotents of \(\mathcal{K}[\mathfrak{S}_{N}]^{\mathrm{op}}\).
**Proposition 5.10**.: _As a \(\mathtt{D}\)-\(\mathcal{K}[\mathfrak{S}_{N}]\) bimodule, we have a decomposition_

\[\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}\cong\bigoplus_{\lambda}L_{\lambda}\otimes S^{\lambda}\]
_where the direct sum is indexed by partitions \(\lambda\) of \(N\) and \(S^{\lambda}\) is the corresponding irreducible for \(\mathfrak{S}_{N}\). Furthermore \(L_{\lambda}\cong\operatorname{Ind}_{\Gamma}^{\mathtt{D}}(1^{N}\boxtimes S^{\lambda})\) is a simple \(\mathtt{D}\)-module._
_Consequently, as a \(\operatorname{e}\mathbb{H}e\)-\(\mathcal{K}[\mathfrak{S}_{N}]\) bimodule, we have a decomposition_
\[\operatorname{Ind}_{S(\mathcal{Y})}^{\operatorname{e}\mathbb{H}e}\{q^{\rho}\} \cong\bigoplus_{\lambda}\bar{L}_{\lambda}\otimes S^{\lambda}\]
_where \(\bar{L}_{\lambda}\) is a simple \(\operatorname{e}\mathbb{H}e\)-module._
Proof.: As noted in the proof of Proposition 5.8, \(\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}=\operatorname{Ind}_{\Gamma}^{\mathtt{D}}(1^{N}\boxtimes\mathcal{K}[\mathfrak{S}_{N}])\), and so by decomposing the regular representation, we see it has summands of the form
\[L_{\lambda}:=\operatorname{Ind}_{\Gamma}^{\mathtt{D}}(1^{N}\boxtimes S^{\lambda}),\]
which occur with multiplicity \(\dim S^{\lambda}=|\mathrm{SYT}(\lambda)|.\) Indeed, if \(\{v_{T}\mid T\in\mathrm{SYT}(\lambda)\}\) is a basis of \(S^{\lambda}\) indexed by standard Young tableaux, then \(L_{\lambda}\) has \(\mathcal{Y}\)-weight basis given by \(\{X^{\beta}\otimes v_{T}\mid\beta\in\mathbb{Z}^{N},T\in\mathrm{SYT}(\lambda)\}\). In particular, the \(\mathcal{Y}\)-weight of \(X^{\beta}\otimes v_{T}\) is \(q^{\beta}\). (Recall the relation \(Y_{i}X_{j}=q^{\delta_{ij}}X_{j}Y_{i}\) for \(G=\mathrm{GL}_{N}\).) Any simple submodule of \(L_{\lambda}\) must contain some nonzero weight vector. If it is of weight \(q^{\beta}\) then it must have the form \(X^{\beta}\otimes v\) for some nonzero \(v\in S^{\lambda}\). Then the submodule also contains \(1\otimes v=X^{-\beta}X^{\beta}\otimes v\) and hence is all of \(L_{\lambda}\).
Next, via the Morita equivalence between \(\mathtt{D}\) and \(\mathtt{S}\), we have a similar decomposition of \(\mathrm{e}^{+}\operatorname{Ind}_{\mathcal{Y}}^{\mathtt{D}}1^{N}\cong\operatorname{Ind}_{S(\mathcal{Y})}^{\mathtt{S}}\{q^{\rho}\}\) as an \(\mathtt{S}\)-module. Then via the shift isomorphism, an analogous decomposition holds for the anti-spherical DAHA module \(\operatorname{Ind}_{S(\mathcal{Y})}^{\mathrm{e}\mathbb{H}\mathrm{e}}\{q^{\rho}\}=\mathrm{e}\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}q^{\rho}\).
**Remark 5.11**.: \((\diamond)\) For \(G=\mathrm{SL}_{N}\), one modifies the relations on the quantum torus \(\mathcal{D}_{q}(H)\#\mathfrak{S}_{N}\) appropriately and replaces \(1^{N}\) with \((Z^{1/N})^{N}\); the proof of Theorem 1.3 below then goes through verbatim.
Applying the left adjoint \(\Upsilon^{L}\) (respectively, \((\mathrm{e}F_{N})^{L}\)) to the first (respectively, second) isomorphism in Proposition 5.10, we obtain a corresponding decomposition (see Remark 5.2):
\[\Upsilon^{L}(\Upsilon\operatorname{HK}(\epsilon))\cong\operatorname{HK}(\epsilon)\cong\bigoplus_{\lambda}\Upsilon^{L}(L_{\lambda})\otimes S^{\lambda}.\]
As \(\Upsilon^{L}\) is fully faithful and additive, the \(\mathcal{D}_{\mathfrak{q}}(G)\)-modules \(M_{\lambda}:=\Upsilon^{L}(L_{\lambda})\) are indecomposable4 and pairwise non-isomorphic for distinct \(\lambda\). This proves Theorem 1.3.
Footnote 4: As mentioned in the introduction, the results of [GJYY] allow us to upgrade “indecomposable” to “simple”. Indeed, we will show that the functor \(\Upsilon\) is an equivalence for \(G=GL_{N}\) and a projection onto a direct summand for \(G=SL_{N}\).
### Proof of Theorem 1.4 via the shift isomorphism
As in the case of \(\epsilon\), we get a generalization of Theorem 1.3 for transverse \(\chi\), using notation from Theorem 1.4. We found it more intuitive to first give the proof of the special case \(\chi=\epsilon\) and then introduce the appropriate modifications for transverse \(\chi\). The necessity of transversality is explained in Remark 5.9. We give an example of non-transverse \(\chi\) in Example 6.19.
**Theorem 5.12**.: _Suppose that \(\chi\) is in transverse position, i.e., that \(\operatorname{Stab}(\chi)\cap\mathfrak{S}_{N}=\{\mathrm{Id}\}\). Then_
\[\operatorname{End}_{\mathrm{e}\mathbb{H}\mathrm{e}}(\mathrm{e}F_{N}(\operatorname{HK}(\chi)))\cong\mathcal{K}[W_{J}]^{\mathrm{op}}\ \text{ and}\] \[\mathrm{e}F_{N}(\operatorname{HK}(\chi))\cong\bigoplus_{\underline{\lambda}}L_{\underline{\lambda}}\boxtimes S^{\underline{\lambda}},\]
_where \(W_{J}=\sigma^{-1}\operatorname{Stab}(\chi)\sigma\subseteq\mathfrak{S}_{N}\) is a standard parabolic subgroup for some \(\sigma\in\widehat{\mathfrak{S}}_{n}\). The decomposition is as \(\operatorname{e}\operatorname{\mathbb{H}}\operatorname{e}-W_{J}\) bimodules, and the direct sum is indexed by multi-partitions \(\underline{\lambda}\) of total size \(N\) and "shape" \(J\), and \(S^{\underline{\lambda}}\) is the corresponding irreducible for \(W_{J}\). \(L_{\underline{\lambda}}\) is an irreducible \(\operatorname{e}\operatorname{\mathbb{H}}\operatorname{e}\)-module._
Proof.: Using (4.4), we identify \(\chi\) with \(\{\underline{\mathbf{a}}\}\), where we choose \(\underline{\mathbf{a}}\in\mathcal{K}^{N}\) to be descending and have the further hypothesis that \(\underline{\mathbf{a}}\) is transverse, i.e., \(a_{i}\neq a_{j}\) for \(i\neq j\). Let \(\underline{\mathbf{b}}=\sigma^{-1}\underline{\mathbf{a}}\) so that \(\mathrm{Stab}(\underline{\mathbf{b}})=W_{J}\subseteq\mathfrak{S}_{N}\). We have \(\mathrm{Ind}_{\mathcal{Y}}^{\mathtt{D}}\,\underline{\mathbf{a}}\cong\mathrm{Ind}_{\mathcal{Y}}^{\mathtt{D}}\,\underline{\mathbf{b}}\). The transversality of \(\underline{\mathbf{a}}\) ensures \(\mathrm{Ind}_{\mathfrak{S}\mathfrak{S}}^{\Gamma}\{\underline{\mathbf{a}}\}\boxtimes\mathrm{triv}\cong\mathrm{Ind}_{\mathcal{Y}}^{\Gamma}\,\underline{\mathbf{a}}\cong\mathrm{Ind}_{\mathcal{Y}}^{\Gamma}\,\underline{\mathbf{b}}\). (As in Remark 5.9, we warn the reader that \(\mathrm{Ind}_{\mathfrak{S}\mathfrak{S}}^{\Gamma}\{\underline{\mathbf{b}}\}\boxtimes\mathrm{triv}\not\cong\mathrm{Ind}_{\mathcal{Y}}^{\Gamma}\,\underline{\mathbf{b}}\), unless \(W_{J}\) is trivial.) Similar to the proof of Proposition 5.8 above, which was the special case \(\underline{\mathbf{b}}=1^{N}\), the \(\underline{\mathbf{b}}\) weight space of \(\mathrm{Ind}_{\mathcal{Y}}^{\mathtt{D}}\,\underline{\mathbf{b}}\) is exactly \(\underline{\mathbf{b}}\boxtimes\mathcal{K}[W_{J}]\), where \(\mathcal{K}[W_{J}]\) denotes the regular representation of the parabolic subgroup \(W_{J}\). Then the analogous computation of the Hom space yields that the endomorphism algebra in question is \(\mathcal{K}[W_{J}]^{\mathrm{op}}\).
Note \(W_{J}=\mathfrak{S}_{\eta_{1}}\times\mathfrak{S}_{\eta_{2}}\times\cdots\times\mathfrak{S}_{\eta_{\ell}}\subseteq\mathfrak{S}_{N}\) for a corresponding composition \((\eta_{1},\ldots,\eta_{\ell})\) of \(N\), which we say has shape \(J\). Thus the irreducible representations of \(W_{J}\) are indexed by multipartitions of shape \(J\), i.e. \(\underline{\lambda}=(\lambda^{(1)},\ldots,\lambda^{(\ell)})\) where \(\lambda^{(i)}\) is a partition of size \(\eta_{i}\).
\(W_{J}\) has corresponding parabolic subalgebra \(\Gamma_{J}\subseteq\Gamma\), generated by \(W_{J}\) and \(\mathcal{Y}\). Then \(\mathrm{Ind}_{\Gamma_{J}}^{\Gamma}\,\underline{\mathbf{b}}\boxtimes S^{\underline{\lambda}}\) is an irreducible representation of \(\Gamma\). The indecomposable summands of \(\mathrm{Ind}_{\mathcal{Y}}^{\mathtt{D}}\,\underline{\mathbf{b}}\) are \(\mathrm{Ind}_{\Gamma}^{\mathtt{D}}(\mathrm{Ind}_{\Gamma_{J}}^{\Gamma}\,\underline{\mathbf{b}}\boxtimes S^{\underline{\lambda}})=\mathrm{Ind}_{\Gamma_{J}}^{\mathtt{D}}\,\underline{\mathbf{b}}\boxtimes S^{\underline{\lambda}}\). A nonzero simple submodule would have to contain a \(\mathcal{Y}\)-weight vector, say of weight \((q^{\beta_{1}}b_{u(1)},\cdots,q^{\beta_{N}}b_{u(N)})\) for some \(\beta\in\mathbb{Z}^{N}\) and \(u\in\mathfrak{S}_{N}\). (\(\circ\) Again we write this for \(G=\mathrm{GL}_{N}\); one modifies appropriately for \(G=\mathrm{SL}_{N}\).) Recall \(b_{i}\neq b_{j}\implies b_{i}/b_{j}\neq q^{z}\) for any \(z\in\mathbb{Z}\). This weight vector must then have the form \(X^{\beta}\otimes v\) for some \(v\in S^{\underline{\lambda}}\). As before, the submodule then contains \(X^{-\beta}X^{\beta}\otimes v=1\otimes v\) and hence is the whole module. Thus the corresponding decomposition into indecomposables given by the first part of Theorem 5.12 is actually a decomposition into simples.
As in Section 5.3, this proves Theorem 1.4 by applying the left adjoint \(\Upsilon^{L}\).
## 6. Endomorphisms of \(\mathrm{HK}(\epsilon)\) via intertwiners
The goal of this section is to give an alternate proof of Proposition 5.8, and hence of Theorem 1.2, avoiding use of the shift isomorphism and instead relying on the Morita equivalence of Proposition 6.1 below. Throughout this section we work over base ring \(\mathcal{K}\). We work directly in the DAHA instead of the anti-spherical DAHA, which allows us to make more explicit calculations and to use intertwiners. Using intertwiners we construct explicit endomorphisms \(\Phi_{w}\) of \(F_{N}(\mathrm{HK}(\epsilon))\), defining a ring homomorphism,
\[\Phi:\mathcal{K}[\mathfrak{S}_{N}]^{\mathrm{op}} \to\mathrm{End}_{\mathbb{H}}(F_{N}(\mathrm{HK}(\epsilon))), \tag{6.1}\] \[w \mapsto\Phi_{w}, \tag{6.2}\]
and we proceed to show it is an isomorphism. To connect the results in this section to Theorem 1.2, we will require the following algebraic input:
**Proposition 6.1** ([GJYY]).: _The sign idempotent \(\mathrm{e}\) defines a Morita equivalence between \(\mathbb{H}_{\mathrm{N}}\) and \(\mathrm{e}\mathbb{H}_{\mathrm{N}}\mathrm{e}\)._
Analogous results to Proposition 6.1 are known for the rational degeneration of \(\mathbb{H}_{\mathrm{N}}\) (see e.g. [1]), however we could not find a reference in the literature for this statement for \(\mathbb{H}_{\mathrm{N}}\). In our forthcoming paper [10], we give a complete proof of a more general version of this statement.
**Remark 6.2**.: The reader will note that Proposition 6.1 could be applied to many of the results of Section 4 to replace \(\mathrm{e}\mathbb{H}_{\mathrm{N}}\mathrm{e}\) with \(\mathbb{H}_{\mathrm{N}}\), as we have done in Proposition 6.3. We have avoided doing so to ensure that the present paper (with the exception of the present section, which merely gives an alternative and more constructive proof of Theorem 1.2) is independent of [10].
**Proposition 6.3**.: _For any strongly equivariant \(\mathcal{D}_{\mathfrak{q}}(G)\)-module \(M\) in the subcategory generated by \(\mathrm{HK}\) we have isomorphisms,_
\[\mathrm{End}_{\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}}(M)\cong\mathrm{ End}_{\mathcal{D}_{\mathfrak{q}}(G)}(M)\cong\mathrm{End}_{\mathrm{e}\mathbb{H}_{ \mathrm{N}}\mathrm{e}}(\mathrm{e}F_{N}(M))\cong\mathrm{End}_{\mathbb{H}_{ \mathrm{N}}}(F_{N}(M)).\]
Proof.: The first isomorphism is due to Proposition 2.21.
For the second isomorphism, recall that the universal Hotta-Kashiwara module \(\mathrm{HK}\) is a projective object in \(\mathcal{D}_{\mathfrak{q}}(G)\text{-mod}^{G}\) by Proposition 2.26, hence we have an equivalence of categories between the subcategory generated by \(\mathrm{HK}\) and \(\mathrm{End}(\mathrm{HK})^{op}\text{-mod}\), given by applying the functor \(\mathrm{e}\cdot F_{N}\cong\mathrm{Hom}(\mathrm{HK},-)\). The last isomorphism follows from the Morita equivalence asserted in Proposition 6.1.
In the case \(M=\mathrm{HK}(\chi)\), we can describe the rightmost endomorphism algebra of Proposition 6.3, by first realising \(F_{N}(\mathrm{HK}(\chi))\) as an induced module, using the identification (4.4).
We can strengthen Corollary 5.7 using Proposition 6.1.
**Proposition 6.4**.: _There is an isomorphism of \(\mathbb{H}\)-modules_
\[F_{N}(\mathrm{HK}(\chi))\cong\mathrm{Ind}_{\mathcal{SH}}^{\mathbb{H}}(\chi \boxtimes\mathrm{sgn})\]
Proof.: Recall from the proof of Theorem 4.13 that there is a natural map
\[\mathrm{\check{SW}e}\otimes_{i_{\bar{\mathcal{B}}}}\chi:\mathrm{Ind}_{\mathcal{SH}}^{\mathbb{H}}(\chi\boxtimes\mathrm{sgn})\to F_{N}(\mathrm{HK}(\chi)).\]
By Proposition 6.1, to check that \(\mathrm{\check{SW}e}\otimes_{i_{\bar{\mathcal{B}}}}\chi\) is an isomorphism, it suffices to check that it is an isomorphism after multiplying on the left by \(\mathrm{e}\). But this follows from Theorem 4.14, by tensoring the isomorphism
\[\mathrm{e}\mathrm{\breve{Sw}e}:\mathrm{e}\mathbb{H}\mathrm{le}\to\mathrm{HK} ^{G}\]
with the character \(\chi\).
Proposition 4.6, together with Proposition 6.4, imply the following corollary, which is a strengthening of the first part of Corollary 5.7.
**Corollary 6.5**.: _We have isomorphisms,_
\[F_{N}(\mathrm{HK}(\epsilon))\cong\mathrm{Ind}_{\mathcal{SH}}^{\mathbb{H}_{ \mathrm{N}}}(\epsilon\boxtimes\mathrm{sgn})\cong\mathrm{Ind}_{\mathcal{Y}}^{ \mathbb{H}_{\mathrm{N}}}(q^{\rho}).\]
Proof.: The first isomorphism is the case \(\chi=\epsilon\) of Proposition 6.4. Recall from Notation 4.1 we have \(t=\mathfrak{q}\) and \(q=t^{-2}\). The latter isomorphism is Corollary 4.7 combined with Theorem 3.22, as \(t^{-2\rho}\) is descending.
Using Proposition 6.1, we can strengthen Proposition 5.10 to give a decomposition of \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}q^{\rho}\) into irreducibles.
**Corollary 6.6**.: _As a \(\mathbb{H}_{N}\)-\(\mathcal{K}[\mathfrak{S}_{N}]\) bimodule, we have a decomposition_
\[F_{N}(\operatorname{HK}(\epsilon))\cong\operatorname{Ind}_{\mathcal{Y}}^{ \mathbb{H}}q^{\rho}\cong\bigoplus_{\lambda}L_{\lambda}^{q}\otimes S^{\lambda}\]
_where the direct sum is indexed by partitions \(\lambda\) of \(N\) and \(S^{\lambda}\) is the corresponding irreducible for \(\mathfrak{S}_{N}\). Furthermore \(L_{\lambda}^{q}\cong\operatorname{Ind}_{\operatorname{H}(\mathcal{Y})}^{\mathbb{H}}S_{t}^{\lambda}\) is a simple \(\mathbb{H}\)-module._
Proof.: The finite Hecke algebra \(\operatorname{H}^{\operatorname{fin}}\) has irreducible modules labeled by partitions, and we can inflate these to the affine Hecke algebra \(\operatorname{H}(\mathcal{Y})\) along the evaluation homomorphism that sends \(T_{i}\mapsto T_{i}\), \(Y_{1}\mapsto 1\). We call the resulting irreducible AHA-module \(S_{t}^{\lambda}\).
We observe \(\mathrm{e}L_{\lambda}^{q}=\bar{L}_{\lambda}\) from Proposition 5.10. Given that \(\mathrm{e}\) yields a Morita equivalence, the result follows.
As mentioned in Footnote 4, from this corollary, the functor \(\Upsilon\) and Proposition 6.1, we may conclude the \(M_{\lambda}\) of Theorem 1.3 are irreducible, not just indecomposable.
**Remark 6.7**.: Given a \(\mathcal{D}_{\mathtt{q}}(G)\)-module \(M\), recall that while the action of \(\mathbb{H}_{\operatorname{n}}\) is only well defined on \((V^{\otimes n}\otimes M)^{G}\), the action of \(\operatorname{H}(\mathcal{Y})\) is well defined on all of \(V^{\otimes n}\otimes M\). Let us record the following peculiar corollary of [13, Theorem 1.3] (which we will not use in the remainder of the paper): for any \(n\), the operator \(Y=Y_{1}\) acting on \(V^{\otimes n}\otimes 1\subset V^{\otimes n}\otimes\operatorname{HK}(\epsilon)\) satisfies the characteristic equation,
\[(Y-1)(Y-\mathfrak{q}^{-2})\cdots(Y-\mathfrak{q}^{2-2N})=0.\]
We note that when \(\mathfrak{q}=1\) this becomes the condition that \(Y\) lies in the unipotent cone; however, for generic quantum parameter \(\mathfrak{q}\) it implies that \(Y\) acts diagonalizably.
Now we proceed to build the DAHA machinery to show that \(\Phi\) of (6.1) is a well-defined homomorphism. Although it is redundant to give this second proof of Proposition 5.8, it is more direct than working with the anti-spherical and spherical DAHAs, and we build some worthwhile DAHA tools along the way.
### Poles and zeros of intertwiners
Our strategy is to define the required endomorphisms \(\Phi_{w}\) in terms of renormalized intertwiners \(\nu_{i}\in\tilde{\mathbb{H}}\) from Definition 3.15. The expressions \(f_{i,i+1}^{\pm 1}\) can introduce poles and zeroes into the definition, and hence much of the technical complication in making sense of the \(\Phi_{w}\) is to keep track of these.
To this end, let us begin by collecting some simple observations concerning zeros of the \(f_{i,j}\) when evaluated at the character \(q^{\rho}=t^{-2\rho}\) appearing in Corollary 6.5. We have:
\[(Y_{i}-Y_{j})|_{t^{-2\rho}}=0\quad\text{if, and only if, }j-i\equiv 0\mod(n+1).\]
With the \(q=t^{-2}\) specialization we observe:
\[f_{i,j}=tY_{i}-t^{-1}Y_{j}=t^{-1}(Y_{i+n}-Y_{j})=t(Y_{i}-Y_{j-n}),\]
hence we conclude:
\[f_{i,j}|_{q^{\rho}}=0\quad\text{ if, and only if }j-i\equiv n\mod(n+1).\]
Let us first rewrite
\[\nu_{i}=T_{i}\frac{tf_{i-n,i+1}}{f_{i,i+1}}+\frac{(t-t^{-1})Y_{i+1}}{f_{i,i+1}}.\]
As a consequence of this formula and Equation (3.19), if we consider the normal ordering \(\,:\!\nu_{w}\,\): of some operator \(\nu_{w}\) as described in Section 6.2 (see Definition 6.10) below, we see that the poles and zeros of \(\,:\!\nu_{w}\,\): are controlled by the set of inversions of \(w\). This is captured in the following Definition, as will become clearer in the proof of Theorem 6.11.
**Definition 6.8**.: Fix \(w\in\widehat{\mathfrak{S}}_{n}\). We say that an inversion \((i,j)\in\operatorname{Inv}(w)\) is **vanishing** if \((j-i)\equiv 0\mod(n+1)\), and **singular** if \((j-i)\equiv n\mod(n+1)\). We denote by \(\operatorname{Inv}_{0}(w)\) and \(\operatorname{Inv}_{\infty}(w)\), respectively, the sets of vanishing and singular inversions.
Recall that \(\mathfrak{S}_{n}\subset\widehat{\mathfrak{S}}_{n}\) is the stabilizer of \((0,\dots,0)\) with respect to the action on \(\mathbb{Z}^{n}\). Let us denote by \(\gamma\in\widehat{\mathfrak{S}}_{n}\) the translation by the element \(-\rho=(0,-1,\dots,1-n)\), so that \(\gamma\cdot\rho=(0,\dots,0)\). Recall the symbol \(\cdot\) denotes our usual (not dot) action from Section 3.3. Clearly, the stabilizer of \(\rho\) with respect to the above action of \(\widehat{\mathfrak{S}}_{n}\) on \(\mathbb{Z}^{n}\) is
\[\operatorname{Stab}_{\widehat{\mathfrak{S}}_{n}}(\rho)=\gamma^{-1}\mathfrak{ S}_{n}\gamma.\]
We work with \(-\rho\) in this section since \(q^{\rho}=t^{-2\rho}\) but conventionally \(\widehat{\mathfrak{S}}_{n}\) acts on \(\mathbb{Z}^{n}\), not \((2\mathbb{Z})^{n}\). The stabilizer is therefore generated by the elements
\[\begin{split}\gamma^{-1}s_{i}\gamma&=[1,\,2,\,\cdots,\,\,i+1+n,\,\,i-n,\,\,\cdots,\,n]\\ &=\underbrace{s_{i}s_{i+1}\cdots s_{n-1}s_{i-1}\cdots s_{2}s_{1} }_{\delta_{i}^{-1}}s_{0}\underbrace{s_{1}s_{2}\cdots s_{i-1}s_{n-1}\cdots s_{ i+1}s_{i}}_{\delta_{i}},\end{split} \tag{6.3}\]
expressed first in "window notation" (see [1]), and then as a reduced expression. In particular we record the equation \(\ell en(\gamma^{-1}s_{i}\gamma)=2n-1\). Note that \(\delta_{i}=[2,\,3,\,\cdots,\,i,\,n,\,1,\,\,i+1,\,\cdots,\,n-1]\) is an \(n\)-cycle.
The following lemma is straightforward; we omit the proof and instead illustrate the statement in Example 6.12.
**Lemma 6.9**.: _Let \(w\in\mathfrak{S}_{n}\subset\widehat{\mathfrak{S}}_{n}\). We have bijections,_
\[\begin{array}{ll}\alpha:&\operatorname{Inv}(w)\to\operatorname{Inv}_{0}( \gamma^{-1}w\gamma)\\ &(i,j)\mapsto(i,j+(j-i)n)\end{array},\qquad\begin{array}{ll}\beta:& \operatorname{Inv}(w)\to\operatorname{Inv}_{\infty}(\gamma^{-1}w\gamma),\\ &(i,j)\mapsto(i,j+(j-i+1)n)\end{array}.\]
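In small cases Lemma 6.9 can also be verified by machine. The following sketch is ours (not from the paper), using the window-notation conventions of Example 6.12 below, where an affine permutation is stored by its window \([u(1),\ldots,u(n)]\) and extended by \(u(i+n)=u(i)+n\).

```python
# A machine check (ours) of Lemma 6.9 for n = 3 and w = s_1 s_2, as in Example 6.12.
n = 3

def val(window, i):
    """Evaluate the affine permutation with the given window at any integer i."""
    m, r = divmod(i - 1, n)
    return window[r] + m * n

def inversions(window, width=6 * n):
    """Inversions (i, j) with 1 <= i <= n and i < j, up to a safe cutoff."""
    return {(i, j) for i in range(1, n + 1)
                   for j in range(i + 1, i + width + 1)
                   if val(window, i) > val(window, j)}

gamma = [1, -1, -3]                    # translation by -rho, as in Example 6.12
gamma_inv = [1, 5, 9]                  # its inverse, in window notation
w = [2, 3, 1]                          # w = s_1 s_2
u = [val(gamma_inv, val(w, val(gamma, i))) for i in range(1, n + 1)]
assert u == [5, 6, -5]                 # u = gamma^{-1} w gamma, as in Example 6.12

inv_u = inversions(u)
inv_w = inversions(w)                  # = {(1, 3), (2, 3)}
alpha = {(i, j + (j - i) * n) for (i, j) in inv_w}
beta = {(i, j + (j - i + 1) * n) for (i, j) in inv_w}

assert alpha == {(i, j) for (i, j) in inv_u if (j - i) % (n + 1) == 0}  # Inv_0(u)
assert beta == {(i, j) for (i, j) in inv_u if (j - i) % (n + 1) == n}   # Inv_oo(u)
assert len(inv_u) == 8                 # matches len(u) = 8 in Example 6.12
print("Lemma 6.9 verified for w = s_1 s_2, n = 3")
```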
### Normal orderings
Recall that \(\mathbb{H}_{n}(N)\) has a vector space decomposition \(\mathbb{H}_{n}(N)=\operatorname{H}(\mathcal{X})\otimes\mathcal{K}[\mathcal{Y}]\), which extends to a decomposition \(\widetilde{\mathbb{H}}=\operatorname{H}(\mathcal{X})\otimes\mathcal{K}[ \widetilde{\mathcal{Y}}]\). Let us fix the standard basis of \(\operatorname{H}(\mathcal{X})\) consisting of elements \(T_{w}\), for \(w\in\widehat{\mathfrak{S}}_{n}\).
**Definition 6.10**.: Given \(h\in\widetilde{\mathbb{H}}\), we will write \(:h\colon\in\operatorname{H}(\mathcal{X})\otimes\mathcal{K}[\widetilde{ \mathcal{Y}}]\) for its **normal ordering**,
\[:h\colon=\sum_{w\in\widehat{\mathfrak{S}}_{n}}T_{w}g_{w},\]
where \(g_{w}\in\mathcal{K}[\widetilde{\mathcal{Y}}]\) and all but finitely many \(g_{w}\) are zero. In particular, we have \(h\in\mathbb{H}\) if, and only if, \(g_{w}\in\mathcal{K}[\mathcal{Y}]\subset\mathcal{K}[\widetilde{\mathcal{Y}}]\) for all \(w\in\widehat{\mathfrak{S}}_{n}\).
Given a normally ordered element \(:h\colon=\sum_{x}T_{x}g_{x}\) of \(\widetilde{\mathbb{H}}\), we will say it has a leading term if there exists \(w\in\widehat{\mathfrak{S}}_{n}\) such that \(g_{w}\neq 0\), and for \(x\neq w\), \(g_{x}\neq 0\) implies \(\ell en(x)<\ell en(w)\). In this case we call \(T_{w}\) the **leading term** and \(g_{w}\in\mathcal{K}[\widetilde{\mathcal{Y}}]\) the **leading coefficient**. We stress that, because length does not define a total ordering, not every normally ordered expression has a leading term.
We now turn to the key technical result of this section.
**Theorem 6.11**.: _Let \(w\in\mathfrak{S}_{n}\), and consider the normal ordering of \(\nu_{\gamma^{-1}w\gamma}\),_
\[:\nu_{\gamma^{-1}w\gamma}\colon=\sum_{x\in\widehat{\mathfrak{S}}_{n}}T_{x}g_ {x}.\]
_Then \(:\nu_{\gamma^{-1}w\gamma}\colon\) has leading term \(T_{\gamma^{-1}w\gamma}\), and the leading coefficient \(g_{\gamma^{-1}w\gamma}\) has neither a zero nor a pole at \(t^{-2\rho}=q^{\rho}\). Moreover, no lower order coefficient \(g_{x}\) has a pole at \(t^{-2\rho}\)._
Proof.: To simplify notation, we will write
\[a_{i,j}=\frac{tf_{i-n,j}}{f_{i,j}},\qquad b_{i,j}=\frac{(t-t^{-1})Y_{j}}{f_{i,j}},\quad\text{ hence }\nu_{i}=T_{i}a_{i,i+1}+b_{i,i+1}. \tag{6.4}\]
Note that \(a_{i,j}=a_{i+n,j+n}\) and \(b_{i,j}=b_{i+n,j+n}\).
Pick a reduced expression \(\gamma^{-1}w\gamma=s_{i_{1}}\cdots s_{i_{p}}\), thereby inducing an ordering
\[\operatorname{Inv}(\gamma^{-1}w\gamma)=\{(i_{1},j_{1})\ldots(i_{p},j_{p})\}\]
on the set of inversions of \(\gamma^{-1}w\gamma\). We have:
\[\nu_{\gamma^{-1}w\gamma}=\sum_{\varepsilon\in\{0,1\}^{p}}T_{i_{1}}^{ \epsilon_{1}}\ldots T_{i_{p}}^{\epsilon_{p}}a_{i_{1},j_{1}}^{\epsilon_{1}}b_{i _{1},j_{1}}^{(1-\epsilon_{1})}\ldots a_{i_{p},j_{p}}^{\epsilon_{p}}b_{i_{p},j_ {p}}^{(1-\epsilon_{p})}.\]
It is clear from the above sum that \(:\nu_{\gamma^{-1}w\gamma}\colon\) has leading term \(T_{\gamma^{-1}w\gamma}\).
By inspection, we note that at \(t^{-2\rho}\), each \(a_{i,j}\) contributes a zero precisely when \((i,j)\) is a vanishing inversion of \(\gamma^{-1}w\gamma\), while each \(a_{i,j}\) and each \(b_{i,j}\) contributes a pole precisely when \((i,j)\) is a singular inversion of \(\gamma^{-1}w\gamma\). Below we analyze the cancellation of such poles once one simplifies the products above.
It follows from the bijection between singular and vanishing inversions established in Lemma 6.9 that the leading coefficient \(g_{\gamma^{-1}w\gamma}\) has neither a zero nor a pole at \(t^{-2\rho}\). More precisely, we note that the term \(T_{\gamma^{-1}w\gamma}\) has \(\epsilon_{r}=1\) for all vanishing \((i_{r},j_{r})\in\operatorname{Inv}_{0}(\gamma^{-1}w\gamma)\). By Lemma 6.9 the potential zero at \(a_{i_{r},j_{r}}\) will cancel with the potential pole of \(a_{i_{s},j_{s}}\) for \((i_{s},j_{s})=(i_{r},j_{r}+n)\in\operatorname{Inv}_{\infty}(\gamma^{-1}w\gamma)\). Thus the leading coefficient \(g_{\gamma^{-1}w\gamma}\), once simplified, has neither zero nor pole at \(t^{-2\rho}\).
It remains to show that no pole arises at \(t^{-2\rho}\) in any coefficient \(g_{x}\), with \(\ell en(x)<\ell en(w)\). For this, we require a more elaborate inductive argument to ensure pole cancellation for the sum of the \(2^{p-1}\) terms with \(\epsilon_{r}=0\). We will induct on \(\ell en(w)\); if \(\ell en(w)=0\) there is nothing to prove.
Next consider \(\ell en(w)=1\). We give a more careful analysis of the terms for which \(\epsilon_{r}=0\) when \(w=s_{i}\). Using the reduced expression from (6.3), note \(\gamma^{-1}s_{i}\gamma\) has one vanishing inversion \((i_{r},j_{r})=(i,i+1+n)\in\operatorname{Inv}_{0}(\gamma^{-1}s_{i}\gamma)\) for \(r=n\) and one singular inversion \((i_{s},j_{s})=(i,i+1+2n)\in\operatorname{Inv}_{\infty}(\gamma^{-1}s_{i}\gamma)\) for \(s=1\) corresponding to \(\operatorname{Inv}(s_{i})=\{(i,i+1)\}\). Thus the only potential poles arise from \(b_{i_{s},j_{s}}=b_{i-n,i+1+n}\) or \(a_{i_{s},j_{s}}=a_{i-n,i+1+n}\). We first rewrite
\[\nu_{\gamma^{-1}s_{i}\gamma} =T_{i}\nu_{s_{i}\delta_{i}^{-1}}T_{0}\nu_{\delta_{i}}a_{i-n,i+1}a_{i-n,i+1+n}+\nu_{s_{i}\delta_{i}^{-1}}T_{0}\nu_{\delta_{i}}a_{i-n,i+1}b_{i-n,i+1+n}\] \[\quad+T_{i}\nu_{s_{i}\delta_{i}^{-1}}\nu_{\delta_{i}}b_{i-n,i+1}a_{i-n,i+1+n}+\nu_{s_{i}\delta_{i}^{-1}}\nu_{\delta_{i}}b_{i-n,i+1}b_{i-n,i+1+n}\] \[=T_{i}\nu_{s_{i}\delta_{i}^{-1}}T_{0}\nu_{\delta_{i}}a_{i-n,i+1}a_{i-n,i+1+n}+\nu_{s_{i}\delta_{i}^{-1}}T_{0}\nu_{\delta_{i}}a_{i-n,i+1}b_{i-n,i+1+n}\] \[\quad+\big((t^{2}-1)T_{i}(Y_{i}+t^{2}Y_{i+1})+(t^{-1}Y_{i}+(t-t^{-1}-t^{3})Y_{i+1})\big)f_{i,i+1}^{-1}b_{i-n,i+1}.\]
Above we simplified the last two terms, which we see have no pole at \(t^{-2\rho}\). In the following computations, let \(\star\) denote either of the symbols \(a\) or \(b\). For the first two terms, recall that \(a_{i_{r},j_{r}}\star_{i_{s},j_{s}}=a_{i-n,i+1}\star_{i-n,i+1+n}\) has no pole at \(t^{-2\rho}\). However, we need to check the terms that contribute to \(\epsilon_{n}=0\) arising from moving \(\star_{k,n}\) or \(\star_{1,k}\) past \(T_{0}\). The following identities
\[\nu_{s_{i}\delta_{i}^{-1}} =\sum T_{i+1}^{\epsilon_{2}}\cdots T_{1}^{\epsilon_{n-1}}a_{i+1,n}^{\epsilon_{2}}b_{i+1,n}^{1-\epsilon_{2}}\cdots a_{1,2}^{\epsilon_{n-1}}b_{1,2}^{1-\epsilon_{n-1}}\] \[a_{1,k}T_{0} =T_{0}a_{0,k}+b_{0,k}b_{1,k}Y_{1}/Y_{k} \text{for }1<k\leq i\] \[b_{1,k}T_{0} =T_{0}b_{0,k}-tb_{0,k}b_{1,k}Y_{1}/Y_{k} \text{for }1<k\leq i\] \[a_{k,n}T_{0} =T_{0}a_{k,n+1}+b_{k,n+1}b_{k,n}Y_{k}/Y_{n} \text{for }i+1\leq k<n\] \[b_{k,n}T_{0} =T_{0}b_{k,n+1}-tb_{k,n+1}b_{k,n}Y_{k}/Y_{n} \text{for }i+1\leq k<n\] \[\star_{0,k}\,\star_{1,k}\,\nu_{\delta_{i}} =\nu_{\delta_{i}}\,\star_{i-n,k-1}\,\star_{i+1,k-1} \text{for }1<k\leq i,\]
together with the analogous identity for \(i+1\leq k<n\), show that these terms likewise acquire no pole at \(t^{-2\rho}\).
We have shown \(\,:\!\nu_{\gamma^{-1}s_{i}\gamma}\!:\,\) has no poles, i.e., is well-defined when evaluated at \(t^{-2\rho}\). This is equivalent to the following: if \(\underline{v}\) is a weight vector of weight \(t^{-2\rho}\), then so is \(\underline{v}^{\prime}=\,:\!\nu_{\gamma^{-1}s_{i}\gamma}\!:\underline{v}\). (The first part of the proof shows \(\underline{v}^{\prime}\) is nonzero, although that fact is not necessary here.) Now we can proceed with the induction. Given \(w\) with \(\ell en(w)>1\), choose \(i\) so that \(\ell en(ws_{i})<\ell en(w)\) and write \(u=ws_{i}\). Next \(\,:\!\nu_{\gamma^{-1}w\gamma}\!:\underline{v}\,=\,:\!\nu_{\gamma^{-1}u\gamma}\!:\,:\!\nu_{\gamma^{-1}s_{i}\gamma}\!:\underline{v}\,=\,:\!\nu_{\gamma^{-1}u\gamma}\!:\underline{v}^{\prime}\), which is a well-defined weight vector of weight \(t^{-2\rho}\) by induction. Thus the coefficients of \(\,:\!\nu_{\gamma^{-1}w\gamma}\!:\,\) also have no pole at \(t^{-2\rho}\).
**Example 6.12**.: Let \(n=N=3\). Then \(\gamma=[1,-1,-3]\) in window notation. Consider \(w=s_{1}s_{2}=[2,3,1]\in\mathfrak{S}_{3}\) (wiring diagram omitted), which has inversions \(\{(1,3),(2,3)\}\). Then \(u:=\gamma^{-1}w\gamma=[1,5,9]\,[2,3,1]\,[1,-1,-3]=[5,6,-5]\) is determined by \(u(1)\), \(u(2+3)\), and \(u(3+6)\), which must be permuted among themselves to stabilize \(q^{\rho}\). For instance \(Y_{1}\nu_{u}=\nu_{u}Y_{9}=\nu_{u}q^{-2}Y_{3}=\nu_{u}t^{4}Y_{3}\), and we note \(Y_{1}\mid_{t^{-2\rho}}=t^{4}Y_{3}\mid_{t^{-2\rho}}=t^{0}\).
(In the omitted figure, below each index \(i\) we mark how \(Y_{i}\) acts on a weight vector of weight \(q^{\rho}=t^{-2\rho}\).)
In the omitted figure we depict \(u\). The inversions labeled \(0\) (boxed in blue there) are \(\operatorname{Inv}_{0}(u)=\{(1,9),\,(2,6)\equiv(5,9)\}=\{\alpha(1,3),\,\alpha(2,3)\}\) and correspond to where blue strands cross. These are in bijection with \(\operatorname{Inv}_{\infty}(u)=\{(1,12),\,(2,9)\equiv(5,12)\}=\{\beta(1,3),\,\beta(2,3)\}\), corresponding to the forced inversions from the \(N\)-translated green strand; these are labeled \(\infty\) and boxed in green. The figure draws more of \(u\in\widehat{\mathfrak{S}}_{3}\), taking note that contributions to \(\operatorname{Inv}_{\infty}(u)\) only occur for \(i<j\). Observe \(\ell en(u)=8\) and \(\operatorname{Inv}(u)=\{(1,12),(2,12),(1,9),(2,9),(1,6),(2,6),(1,3),(2,3)\}\), which is ordered corresponding to the reduced expression \(u=s_{1}s_{2}s_{0}s_{1}s_{2}s_{0}s_{1}s_{2}\).
We may expand \(\nu_{u}=\sum_{\varepsilon\in\{0,1\}^{8}}T_{1}^{\varepsilon_{1}}T_{2}^{\varepsilon_{2}}T_{0}^{\varepsilon_{3}}T_{1}^{\varepsilon_{4}}T_{2}^{\varepsilon_{5}}T_{0}^{\varepsilon_{6}}T_{1}^{\varepsilon_{7}}T_{2}^{\varepsilon_{8}}\,\star_{i_{1},j_{1}}\star_{i_{2},j_{2}}\cdots\star_{i_{8},j_{8}}\), where \((i_{1},j_{1}),\ldots,(i_{8},j_{8})\) is the ordered list of inversions of \(u\) above, and \(\star_{i_{r},j_{r}}\) stands for \(a_{i_{r},j_{r}}\) if \(\varepsilon_{r}=1\) and for \(b_{i_{r},j_{r}}\) if \(\varepsilon_{r}=0\).
In the next two theorems, we use \(\underline{v}\) to denote a generator of a one-dimensional \(\mathcal{K}[\mathcal{Y}]\)-module. Observe that if \(\underline{v}\) has \(\mathcal{Y}\)-weight \(t^{-2\rho}=q^{\rho}\) then for a \(g\in\mathcal{K}[\widetilde{\mathcal{Y}}]\) without zeros or poles at \(t^{-2\rho}\) we have \(\,:\!g\!:\big|_{t^{-2\rho}}\otimes\underline{v}=\,:\!g\!:\otimes\underline{v}\).
**Theorem 6.13**.: _The map_
\[\Phi:\mathcal{K}[\mathfrak{S}_{n}]^{op} \to\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{Y }}^{\mathbb{H}}(t^{-2\rho})),\] \[w \mapsto\Phi_{w},\]
_where \(\Phi_{w}(h\otimes\underline{v})=(h\,:\!\nu_{\gamma^{-1}w\gamma}\!:\,)\otimes\underline{v}\) for \(h\in\mathbb{H}\), defines a \(\mathcal{K}\)-algebra isomorphism._
Proof.: First, let us confirm that each \(\Phi_{w}\) is indeed an \(\mathbb{H}\)-linear endomorphism of \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}(t^{-2\rho})\). Theorem 6.11 ensures the well-definedness of each expression \((\,:\!\nu_{\gamma^{-1}w\gamma}\!:\,)\otimes\underline{v}\), while the intertwiner property implies that \((\,:\!\nu_{\gamma^{-1}w\gamma}\!:\,)\otimes\underline{v}\) lies in the \(t^{-2\rho}\)\(\mathcal{Y}\)-weight space. Hence Frobenius reciprocity for the functor \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\) produces the asserted module homomorphism.
The homomorphism property for \(\Phi\) follows from the following identity in \(\widetilde{\mathbb{H}}\), valid for \(w,w^{\prime}\in\mathfrak{S}_{n}\):
\[\nu_{\gamma^{-1}w\gamma}\,\nu_{\gamma^{-1}w^{\prime}\gamma}=\nu_{\gamma^{-1}ww^{\prime}\gamma}.\]
Finally, we observe that, as a consequence of Theorem 6.11, the set \(\{(\,:\!\nu_{\gamma^{-1}w\gamma}\!:\,)\otimes\underline{v}\mid w\in\mathfrak{S }_{n}\}\) forms a basis of the \(t^{-2\rho}\)\(\mathcal{Y}\)-weight space, hence \(\Phi\) is an isomorphism.
Theorem 1.2 now follows, using Proposition 6.3 and Corollary 6.5.
### Proof of Theorem 1.4 via intertwiners
In this section, we consider the endomorphism algebra \(\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{a}})\) for more general descending \(\underline{\mathbf{a}}\) than \(t^{-2\rho}=q^{\rho}\) (note that \(q^{\rho}=t^{-2\rho}\) is itself descending). In other words, by Theorem 3.22, we consider \(\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{S}H}^{\mathbb{H}}\{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn})\).
It is convenient to introduce the following terminology:
**Definition 6.14**.: Let \(a,b,r\in\mathcal{K}^{\times}\). We say that \(a\) and \(b\) are **in the same \(r\)-line** if \(a/b=r^{z}\) for some \(z\in\mathbb{Z}\), and otherwise that they are **in distinct \(r\)-lines**.
**Theorem 6.15**.: _Let \(\underline{\mathbf{a}}\in(\mathcal{K}^{\times})^{n}\) be descending such that entries in the same \(t^{2}\)-line are in consecutive positions. Further assume that \(\underline{\mathbf{a}}\) is **transverse**, meaning that \(\operatorname{Stab}_{\widehat{\mathfrak{S}}_{n}}(\underline{\mathbf{a}})\cap\mathfrak{S}_{n}=\operatorname{Stab}_{\mathfrak{S}_{n}}(\underline{\mathbf{a}})=\{\operatorname{Id}\}\), i.e., that \(a_{i}\neq a_{j}\) if \(i\neq j\). Let \(\gamma_{\underline{\mathbf{a}}}\in\widehat{\mathfrak{S}}_{n}\) be of minimal length such that \(\gamma_{\underline{\mathbf{a}}}\operatorname{Stab}_{\widehat{\mathfrak{S}}_{n}}(\underline{\mathbf{a}})\gamma_{\underline{\mathbf{a}}}^{-1}=W_{J}\subseteq\mathfrak{S}_{n}\) for the appropriate standard parabolic subgroup. Then the map_
\[\Phi:\mathcal{K}[W_{J}]^{op} \to\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{Y }}^{\mathbb{H}}\underline{\mathbf{a}}),\] \[w \mapsto\Phi_{w},\]
_where \(\Phi_{w}(h\otimes\underline{v})=(h\,:\!\nu_{\gamma_{\underline{\mathbf{a}}}^{-1}w\gamma_{\underline{\mathbf{a}}}}\!:\big|_{\underline{\mathbf{a}}})\otimes\underline{v}\), defines a \(\mathcal{K}\)-algebra isomorphism._
Proof.: Let us outline how to modify the proof of Theorem 6.13 to apply here. Note that \(J\) is determined by the different \(t^{2}\)-lines that the \(a_{i}\) lie on. In particular, the generalized \(\underline{\mathbf{a}}\)\(\mathcal{Y}\)-weight space of \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{a}}\) has dimension \(|W_{J}|\). Similar to Lemma 6.9, if \(w\in W_{J}\), we still
have bijections between \(\operatorname{Inv}(w)\), the inversions of \(\gamma_{\underline{\mathbf{a}}}^{-1}w\gamma_{\underline{\mathbf{a}}}\) that potentially introduce poles to the coefficients of \(\,:\,\nu_{\gamma_{\underline{\mathbf{a}}}^{-1}w\gamma_{\underline{\mathbf{a}}}}\,:\,\), and those inversions that potentially introduce zeros. (The criterion for an inversion being vanishing or singular is more complicated in this case and not just determined by height.) As before, we are able to construct \(|W_{J}|\) linearly independent \(\mathcal{Y}\)-weight vectors of weight \(\underline{\mathbf{a}}\) and corresponding endomorphisms to yield the isomorphism \(\Phi\).
Theorem 1.4 now follows, using Proposition 6.3.
**Remark 6.16**.: \((\diamond)\) Very few modifications need be made for the \(G=\operatorname{SL}_{N}\) case of the above proof. Wherever \(\widehat{\mathfrak{S}}_{n}\) appears, we instead take its quotient by the subgroup generated by \(\pi^{n}\). In computing the stabilizer of a weight we use the modified action of \(\overline{\pi}\) as in (3.7). We also compare entries in the same \(t^{2/N}\)-lines, and similarly we declare these to be descending if whenever \(a_{i}/a_{j}=t^{2z/N}\) with \(z\in\mathbb{Z}\) and \(i<j\) then \(z\geq 0\). Furthermore, the \(\underline{\mathbf{a}}\) we consider are required to satisfy \(\prod_{i=1}^{N}a_{i}=\boldsymbol{Z}\).
**Remark 6.17**.: The non-transverse case, where \(\operatorname{Stab}_{\widehat{\mathfrak{S}}_{n}}(\underline{\mathbf{a}})\cap \mathfrak{S}_{n}=W_{K}\neq\{\operatorname{Id}\}\), is more complicated. We may still write \(\gamma_{\underline{\mathbf{a}}}\operatorname{Stab}_{\widehat{\mathfrak{S}}_{n }}(\underline{\mathbf{a}})\gamma_{\underline{\mathbf{a}}}^{-1}=W_{J}\subseteq \mathfrak{S}_{n}\), but we have \(K\subseteq J\). While the _generalized_\(\underline{\mathbf{a}}\)\(\mathcal{Y}\)-weight space of \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{a}}\) still has dimension \(|W_{J}|\), the _ordinary_\(\underline{\mathbf{a}}\)\(\mathcal{Y}\)-weight space only has dimension \(|W_{J\setminus K}|\). Hence \(\dim\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{Y}}^{ \mathbb{H}}\underline{\mathbf{a}})=|W_{J\setminus K}|\) in this case. However the ring structure of \(\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}} \underline{\mathbf{a}})\) is more complicated than that of the semisimple algebra \(\mathcal{K}[W_{J\setminus K}]\), and can contain nilpotent elements. This is related to the fact that the induced module is not semisimple. See Example 6.19 below.
**Example 6.18**.: This example shows the necessity of \(\underline{\mathbf{a}}\) being descending for various results in this paper. Let \(n=N=2\). Let \(\underline{\mathbf{a}}=t^{-2\rho}=(t^{0},t^{-2})\) and \(\underline{\mathbf{b}}=(t^{-2},t^{0})\). Then \(\operatorname{Ind}_{\mathcal{S}H}^{\mathbb{H}}\{\underline{\mathbf{a}}\}\boxtimes\operatorname{sgn}\simeq\operatorname{Ind}_{\mathcal{S}H}^{\mathbb{H}}\{\underline{\mathbf{b}}\}\boxtimes\operatorname{sgn}\simeq\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{a}}\simeq\mathbf{T}\oplus\mathbf{S}\). Here we let \(\mathbf{T}=\operatorname{Ind}_{\operatorname{H}(\mathcal{Y})}^{\mathbb{H}}\operatorname{triv}\) for \(\operatorname{triv}\) the one-dimensional \(\operatorname{H}(\mathcal{Y})\)-module on which \((T_{1}-t)\), \((Y_{1}-t^{-2}),(Y_{2}-1)\) all vanish. We let \(\mathbf{S}=\operatorname{Ind}_{\operatorname{H}(\mathcal{Y})}^{\mathbb{H}}\operatorname{sgn}\) for \(\operatorname{sgn}\) the one-dimensional \(\operatorname{H}(\mathcal{Y})\)-module on which \((T_{1}+t^{-1})\), \((Y_{1}-1),(Y_{2}-t^{-2})\) all vanish. As in Theorem 6.13, \(\operatorname{End}_{\mathbb{H}}(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}(t^{-2\rho}))\simeq\operatorname{End}_{\mathbb{H}}(\mathbf{T}\oplus\mathbf{S})\simeq\mathcal{K}[\mathfrak{S}_{2}]^{\operatorname{op}}\).
Next consider \(\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{b}}\). Observe \(\underline{\mathbf{b}}\) is not descending, but since it is in the same \(\widehat{\mathfrak{S}}_{2}\)-orbit as \(\underline{\mathbf{a}}\), it will have the same composition factors as above. In fact \(0\to\mathbf{S}\to\operatorname{Ind}_{\mathcal{Y}}^{\mathbb{H}}\underline{\mathbf{b}}\to\mathbf{T}\to 0\) is non-split, and so the induced module has trivial endomorphism algebra. To see that the above exact sequence does not split, one can compute that the \(\underline{\mathbf{b}}\)\(\mathcal{Y}\)-weight space of \(\mathbf{S}\) is zero. (This is a "rectangular" representation from [11].) On the other hand, the generalized \(\underline{\mathbf{b}}\) weight space of \(\mathbf{T}\) is two-dimensional, but the ordinary weight space is just one-dimensional. In particular \(\mathbf{T}\) is not \(\mathcal{Y}\)-semisimple. (Note, we could have also taken \(\underline{\mathbf{b}}=(t^{0},t^{0})\) as in Section 5.2, keeping Remark 5.9 in mind.)
**Example 6.19**.: This example shows the necessity of \(\underline{\mathbf{a}}\) being transverse for Theorem 6.15 to hold. We give an example of an induced module that has a nonzero nilpotent endomorphism, hence neither it nor its endomorphism algebra is semisimple. Let \(n=N=3\) and \(\underline{\mathbf{a}}=(t^{0},t^{0},t^{-2})\), which is descending but clearly not transverse. We take \(\gamma\) to be translation by \((0,0,-1)\), so the stabilizer of \(\underline{\mathbf{a}}\) is
\[\gamma^{-1}\mathfrak{S}_{3}\gamma=\{\text{Id},\,s_{1},\,s_{2}s_{1}s_{0}s_{1}s_{ 2},\,s_{1}s_{2}s_{1}s_{0}s_{1}s_{2},\,s_{2}s_{1}s_{0}s_{1}s_{2}s_{1},\,s_{1}s_{ 2}s_{1}s_{0}s_{1}s_{2}s_{1}\}.\]
The first difference is that \(\nu_{s_{1}}\underline{v}=\underline{v}\) and \(T_{1}\underline{v}\) is a generalized \(\underline{\mathbf{a}}\)\(\mathcal{Y}\)-weight vector. It is easy to see the generalized \(\underline{\mathbf{a}}\) weight space has dimension 6 and the ordinary \(\underline{\mathbf{a}}\) weight space has dimension \(\leq 3\). But in fact, as asserted above, the weight space only has dimension 2, as indicated by \(|J\setminus K|=1\). Next, consider \(\gamma^{-1}s_{2}\gamma=s_{2}s_{1}s_{0}s_{1}s_{2}\). One can expand
\[\nu_{\gamma^{-1}s_{2}\gamma}=\sum_{\underline{\boldsymbol{\varepsilon}}\in\{0,1\}^{5}}T_{2}^{\epsilon_{1}}T_{1}^{\epsilon_{2}}T_{0}^{\epsilon_{3}}T_{1}^{\epsilon_{4}}T_{2}^{\epsilon_{5}}\,f_{-1,6}\,f_{-1,1}\,f_{-1,3}\,f_{2,3}\cdots\,,\]
with one intertwiner coefficient factor for each inversion of \(\gamma^{-1}s_{2}\gamma=s_{2}s_{1}s_{0}s_{1}s_{2}\).
As in the proof of Theorem 6.11, for each term with \(\epsilon_{3}=1\), the potential zero in \(a_{-1,3}\) cancels the potential pole of \(f_{-1,6}\). The collection of terms with \(\epsilon_{3}=0\) all combine and cancel the potential pole of \(f_{-1,6}\). However the fact that \(s_{1}\) stabilizes \(\underline{\mathbf{a}}\) gives an unexpected pole at \(f_{-1,1}\) that cannot be cancelled. We rectify this by rescaling by \(f_{-1,1}\), that is, we take \(\nu_{s_{2}s_{1}s_{0}s_{1}s_{2}}f_{-1,1}=\nu_{s_{2}}\varphi_{s_{1}}\nu_{s_{0}s_{1}s_{2}}\), whose normal ordering has no poles at \(\underline{\mathbf{a}}\) and has leading term corresponding to \(\gamma^{-1}s_{2}\gamma\). In other words, \((\,:\!\nu_{\gamma^{-1}s_{2}\gamma}f_{-1,1}\!:\,)\otimes\underline{v}\) is a nonzero \(\mathcal{Y}\)-weight vector of weight \(\underline{\mathbf{a}}\). However one may compute that the corresponding nonzero endomorphism determined by \(\underline{v}\mapsto(\,:\!\nu_{\gamma^{-1}s_{2}\gamma}f_{-1,1}\!:\,)\otimes\underline{v}\) is nilpotent: it squares to 0.
2309.13304 | A new analytical model of magnetofluids surrounding rotating black holes | In this study, we develop a simplified magnetofluid model in the framework of
GRMHD. We consider an ideal, adiabatic fluid composed of two components, ions
and electrons, having a constant ratio between their temperatures. The flows
are assumed to be governed by gravity, enabling us to employ the ballistic
approximation, treating the streamlines as timelike geodesics. We show that the
model is analytically solvble around a rotating black hole if the angular
velocity of the geodesic $u^\theta$ is vanishing. In the corresponding
solution, which is named the conical solution, we derive a comprehensive set of
explicit expressions for the thermodynamics and the associated magnetic field.
Furthermore, we explore the potential applications of our model to describe the
thick disks and the jets at the horizon scale. Our model provides a direct
pathway for the study of black hole imaging. | Yehui Hou, Zhenyu Zhang, Minyong Guo, Bin Chen | 2023-09-23T08:39:11Z | http://arxiv.org/abs/2309.13304v3 | # A new analytical model of magnetofluids surrounding rotating black holes
###### Abstract
In this study, we develop a simplified magnetofluid model in the framework of GRMHD. We consider an ideal, adiabatic fluid composed of two components, ions and electrons, having a constant ratio between their temperatures. The flows are assumed to be governed by gravity, enabling us to employ the ballistic approximation, treating the streamlines as timelike geodesics. We show that the model is analytically solvable around a rotating black hole if the angular velocity of the geodesic \(u^{\theta}\) is vanishing. In the corresponding solution, which is named the conical solution, we derive a comprehensive set of explicit expressions for the thermodynamics and the associated magnetic field. Furthermore, we explore the potential applications of our model to describe the thick disks and the jets at the horizon scale. Our model provides a direct pathway for the study of black hole imaging.
\(\ast\) Corresponding author: [email protected]
Introduction
The unveiling of horizon-scale images of black holes captured by the Event Horizon Telescope (EHT) has sparked widespread interest and fascination [1, 2, 3, 4]. The integration of theoretical and observational findings enables us to attain a clearer comprehension of the intricate physics surrounding black holes [2, 5, 6, 7, 8]. Numerous studies, including the efforts of the EHT collaboration, suggest that the millimeter-wave emissions observed originate from the accretion disk or the surface of the jet, namely the funnel wall jet (FWJ) [9, 10, 11, 12, 13, 14, 15]. The observational signatures of the disks and FWJs depend not only on gravitational effects but also significantly on the intrinsic physical properties of the magnetofluid, including electron distribution, temperature, and the associated magnetic field [14]. Theoretically, there is a strong demand for an analytical model in the framework of general relativistic magnetohydrodynamics (GRMHD) that can effectively capture the essential characteristics of the magnetofluid. Such a model would greatly facilitate the study of the magnetofluid surrounding black holes and enhance the research on black hole imaging.
There have been many models of accretion disks. The Novikov-Thorne disk, an extension of the standard disk model [16], was established as the typical geometrically thin, equatorial accretion disk model in a relativistic background [17, 18]. Other studies have also explored different thin disk models [19, 20, 21, 22]. Generally, geometrically thin disks are optically thick and emit black body radiation. However, recent observations by the EHT and other studies [14, 23] suggest that accretion flows close to supermassive black holes may exhibit geometrically thick, optically thin structures due to the dominant influence of gravity, which hinders rapid cooling and compression of matter in the vertical direction [20]. Theoretical investigations related to geometrically thick disks are still ongoing [24]. Abramowicz et al. proposed a torus model consisting of pure toroidal flows [25], which was subsequently extended to include magnetized tori [26]. The torus model has proven to be a suitable initial condition for GRMHD simulations. Additionally, by employing the ballistic approximation that considers only the geodesic motions of streamlines, off-equatorial streamlines can be analytically solved [27, 28]. However, these studies primarily focus on the contribution of ballistic flows to the formation of thin disks [29, 30], without considering horizon-scale magnetofluids.
Most of the existing disk models have predominantly focused on circular orbits and aimed to capture the dynamics on large scales. However, the potential to obtain highly resolved images of black holes presents an opportunity to reveal the behaviors of fluid at the horizon scale. Consequently, investigating the fluid dynamics near black holes has become increasingly important. In [21], the authors examined the Novikov-Thorne disk around a high-spin black hole and discovered a self-similar solution in the near-horizon region. In a recent study [31], the authors analyzed the equatorial accretion flow inspiraling from the innermost stable circular orbit (ISCO) of a Kerr black hole and derived
analytical thermodynamic solutions. However, it should be noted that the accretion flow close to the black hole is more likely to be geometrically thick. Furthermore, the polarized images of M87\({}^{*}\) suggest a preference for radial alignment in the magnetic field [2], indicating a significant radial velocity within the magnetofluid. Hence, there is a need to investigate the dynamics and morphology of a geometrically thick disk with nonvanishing radial inflows.
For the study of relativistic jets, despite various theoretical models [32, 33, 34], the real launching mechanism remains unknown. The outer boundary of the jet (FWJ) in the launching region is composed of hot plasma, thus contributing to the millimeter-wave black hole images through synchrotron radiation. However, the dynamics of the FWJ are still uncertain, despite some theoretical proposals [35, 36, 37, 38, 33].
With these questions in mind, we propose a self-consistent, analytical model for horizon-scale magnetofluids in the framework of GRMHD, trying to capture key features of thick disks and FWJs. Specifically, we focus on high-temperature systems with ultra-relativistic electrons in the adiabatic limit. To simplify the analysis, we introduce a linear relationship between the temperatures of electrons and ions, represented by a constant factor, \(z=T_{\rm ion}/T_{\rm e}\), and consider both the non-relativistic and ultra-relativistic limits of ions. With the relativistic Euler equations for ideal, adiabatic magnetofluids, we express the temperature and pressure as power functions of the particle number density, with the exponents determined by \(z\). Then, we demonstrate that in the scenarios where both the sound speed and Alfven velocity are sub-relativistic, the streamlines follow geodesics in the leading-order approximation, namely, the ballistic approximation. Furthermore, for a stationary, axisymmetric, ballistic fluid configuration in Kerr spacetime, we derive explicit expressions for the fluid thermodynamics under specific conditions regarding the streamlines. Especially, one of the conditions leads to the conical solution, which is of particular importance in the study of thick disks and FWJs. We also determine the accompanying magnetic field structure in the conical solution under the ideal MHD condition. These self-consistent results enable the construction of thick disk and FWJ models that capture essential characteristics of the emission profiles of astrophysical black holes. It is worth mentioning that we also employ the astrophysical parameters of M87\({}^{*}\) to estimate the feasibility of the approximations used in our model.
The structure of this manuscript is outlined as follows: Sec. 2 presents the framework and introduces the fundamental equations of the ideal and adiabatic fluid. In Sec. 3.1, we introduce the ballistic approximation and examine its applicability to the case of M87\({}^{*}\). In Sec. 3.2, we derive the explicit expressions for the thermodynamics in Kerr spacetimes. We investigate the conical solution of the streamlines in detail in Sec. 3.3. The associated magnetic field is discussed in Sec. 3.4. Moving on to Sec. 4, we apply the conical solution to the study of thick disks and FWJs, and use graphical illustrations to help understanding. We conclude in Sec. 5 with summary and discussions. We include
some technical details in a few appendices. Throughout this study, we adopt units with \(G=c=1\).
## 2 Basic equations for ideal and adiabatic fluid
In magnetohydrodynamics (MHD), the stress-energy tensor comprises several components: the fluid part, the viscosity part, the Maxwell part, and the radiation part. Different magnetofluid models adopt different forms of the stress-energy tensor. For instance, thick disk models often focus solely on the fluid part, disregarding other components, while thin disk models assume a negligible Maxwell part. In this study, we assume that the viscosity and radiation parts of the stress-energy tensor are negligible, and we only consider the fluid and Maxwell parts. In other words, we neglect the effects of radiative losses and diffusion during the evolution of fluid dynamics.
In fact, we will study the MHD of an ideal fluid in the presence of a magnetic field. For an ideal fluid, its stress-energy tensor can be written as
\[T_{\rm IF}^{\mu\nu}=u^{\mu}u^{\nu}(\Xi+p)+g^{\mu\nu}p\,, \tag{2.1}\]
where \(u^{\mu}\) is the 4-velocity, \(p\) is the isotropic pressure, and \(\Xi\) is the internal energy density of the fluid. For the electromagnetic field \(F_{\mu\nu}\), we impose \(F_{\mu\nu}u^{\nu}=0\), which expresses the absence of an electric field as measured by the fluid. The Maxwell stress-energy tensor can then be written as [10]:
\[T_{\rm EM}^{\mu\nu}=-\frac{1}{4}g^{\mu\nu}F^{2}+F^{\mu\alpha}F_{ \alpha}^{\nu}=\bigg{(}u^{\mu}u^{\nu}+\frac{1}{2}g^{\mu\nu}\bigg{)}B^{2}-B^{ \mu}B^{\nu}\,, \tag{2.2}\]
where \(B_{\mu}\) is the magnetic field measured by the fluid. Thus, the total stress-energy tensor can be written as
\[T^{\mu\nu}=T_{\rm IF}^{\mu\nu}+T_{\rm EM}^{\mu\nu}\,. \tag{2.3}\]
The fluid dynamics are governed by the conservation law, known as the relativistic Euler equations,
\[\nabla_{\mu}T^{\mu\nu} = 0\,. \tag{2.4}\]
Integrating Eqs. (2.1), (2.3) and (2.4), we can find
\[0=\nabla_{\mu}T_{EM}^{\mu\nu}+(g^{\mu\nu}+u^{\nu}u^{\mu})\nabla _{\mu}p+(\Xi+p)u^{\mu}\nabla_{\mu}u^{\nu}+Pu^{\nu}\,, \tag{2.5}\]
where we have introduced a scalar function \(P=u^{\mu}\nabla_{\mu}\Xi+(\Xi+p)\nabla_{\mu}u^{\mu}\). Upon contracting Eq. (2.5) with \(u_{\nu}\), we read
\[P=u^{\mu}\nabla_{\mu}\Xi+(\Xi+p)\nabla_{\mu}u^{\mu}=0\,, \tag{2.6}\]
where we have used the relation \(u_{\mu}\nabla_{\nu}T_{\rm EM}^{\mu\nu}=0\)1. Then, Eq. (2.5) can be simplified and reduced to
Footnote 1: For a detailed exposition of the proof for this formula, one might refer to Appendix C in [31].
\[u^{\mu}\nabla_{\mu}u^{\nu}=-\frac{1}{\Xi+p}\big{[}(u^{\mu}u^{\nu}+g^{\mu\nu}) \nabla_{\mu}p+\nabla_{\mu}T_{\rm EM}^{\mu\nu}\big{]}\,, \tag{2.7}\]
which governs the dynamics of \(u^{\mu}\) along the streamlines.
Furthermore, we consider the conservation law for the particle number
\[\nabla_{\mu}(nu^{\mu})=0\,, \tag{2.8}\]
where \(n\) is the number density of the plasma. As the most abundant gases in the universe are hydrogen and helium, in a fully ionized plasma, it is typically composed of negatively charged electrons and positively charged hydrogen or helium ions. For the sake of simplicity, let us assume that all the positive ions are hydrogen ions, thus the number densities of electrons and ions are equal, \(n_{\rm e}=n_{\rm ion}=n\). From Eq. (2.8), we have
\[\nabla_{\mu}u^{\mu}=-\frac{u^{\mu}\nabla_{\mu}n}{n}=-\frac{d}{d \tau}\log n\,, \tag{2.9}\]
where we have defined the proper time \(\tau\) along the streamline such that \(\frac{d}{d\tau}=u^{\mu}\nabla_{\mu}\). And Eq. (2.6) can be rewritten as
\[\frac{d\Xi}{d\tau}=-(\Xi+p)\nabla_{\mu}u^{\mu}=\frac{\Xi+p}{n} \frac{dn}{d\tau}\,, \tag{2.10}\]
which is a consequence of energy conservation in the fluid. Indeed, this equation can be derived through the thermodynamic laws of an adiabatically closed system. Similarly, the equation \(P=0\) can also be obtained via thermodynamic relationships.
We further make the assumption that the thermal distributions of electrons and ions in the comoving frame of the fluid are approximated by the isotropic Maxwell-Juttner distribution[39], which is characterized by a dimensionless temperature, \(\Theta_{j}\equiv k_{B}T_{j}/m_{j}\), where \(j\in\{{\rm e},{\rm ion}\}\) indicates electron or ion, \(k_{B}\) is the Boltzmann constant, and \(m_{\rm e}\) and \(m_{\rm ion}\) are the masses of the electron and ion, respectively. For ultra-relativistic particles with \(\Theta_{j}\gg 1\), the internal energy density becomes \(\Xi_{j}\approx 3nm_{j}\Theta_{j}\). For non-relativistic particles with \(\Theta_{j}\ll 1\), we obtain \(\Xi_{j}\approx nm_{j}+3nm_{j}\Theta_{j}/2\).
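Both limits can be checked against the exact Maxwell-Juttner mean energy per particle, \(\Xi_{j}/(nm_{j})=\langle\gamma\rangle=K_{3}(1/\Theta_{j})/K_{2}(1/\Theta_{j})-\Theta_{j}\), with \(K_{\nu}\) the modified Bessel function of the second kind. The sketch below (a numerical aside using SciPy, not part of the derivation) confirms the two limiting forms quoted above.

```python
from scipy.special import kve  # exponentially scaled K_nu, avoids underflow

def mean_gamma(theta):
    # <gamma> = K3(1/Theta)/K2(1/Theta) - Theta; the exp scale factors of
    # kve cancel in the ratio, so this is safe even for Theta << 1.
    beta = 1.0 / theta
    return kve(3, beta) / kve(2, beta) - theta

for theta in [1e-3, 1e-2, 10.0, 100.0]:
    print(f"Theta={theta:g}: exact={mean_gamma(theta):.6g}, "
          f"1+3*Theta/2={1 + 1.5*theta:.6g}, 3*Theta={3*theta:.6g}")
```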
In this work, we would like to treat the electrons as ultra-relativistic particles at the horizon scale of the black hole, \(\Xi_{\rm e}=3nm_{\rm e}\Theta_{\rm e}\), corresponding to \(T_{\rm e}\gg 10^{9}\) K. The electrons far away from the black hole become mildly relativistic, which is not of interest to us 2. The hot plasma is most likely collisionless, resulting in a lack of thermal equilibrium between electrons and ions [2]. For simplicity,
we assume that the temperature ratio between ions and electrons at the horizon scale is characterized by a constant, given by \(T_{\rm ion}=zT_{\rm e}\). Then, we have
\[\Theta_{\rm ion}=z\Theta_{\rm e}\frac{m_{\rm e}}{m_{\rm ion}}=\frac{z}{Z}\Theta_ {\rm e}\,, \tag{2.11}\]
where \(Z=\frac{m_{\rm ion}}{m_{\rm e}}\simeq 1836\). The value of \(z\) is uncertain and depends on the complex physical properties of the magnetofluid. To conduct a detailed analysis of the ion's temperature, we define the characteristic quantities to represent the orders of temperature at the horizon scale, denoted as \(\Theta_{\rm e}^{\rm c}=\Theta_{\rm e}(r_{h})\gg 1\,,\Theta_{\rm ion}^{\rm c}= \Theta_{\rm ion}(r_{h})\), and \(z_{\rm c}=Z/\Theta_{\rm e}^{\rm c}\). For example, in the case of M87\({}^{*}\), observations suggest that the electron temperature in the emitting region can reach \(10^{11}\)K [14, 40]. Therefore, we can estimate the characteristic quantities for M87\({}^{*}\) to be \(\Theta_{\rm e}^{\rm c}\simeq 15\), \(z_{\rm c}\simeq 122\).
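These characteristic values follow from a one-line estimate; the sketch below (with \(T_{\rm e}=10^{11}\) K, the scale quoted above) reproduces the same orders of magnitude.

```python
k_B = 1.380649e-23    # J/K
m_e = 9.1093837e-31   # kg
c = 2.99792458e8      # m/s
Z = 1836.0            # m_ion / m_e for hydrogen

T_e = 1e11  # K, the electron temperature scale suggested for M87*
Theta_e = k_B * T_e / (m_e * c**2)
print(Theta_e, Z / Theta_e)  # ~16.9 and ~109, the same orders as 15 and 122
```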
The properties of the two-component fluid studied in this work depend heavily on the nature of ions. For the ions, we will consider two limiting cases in order to obtain analytic results: either they are non-relativistic with \(\Theta_{\rm ion}^{\rm c}\ll 1\), or they are ultra-relativistic with \(\Theta_{\rm ion}^{\rm c}\gg 1\). In the first scenario, we have \(z\ll z_{\rm c}\) and \(\Xi_{\rm ion}\approx nm_{\rm ion}+3nm_{\rm ion}\Theta_{\rm ion}/2\), so that
\[\Xi=\Xi_{\rm e}+\Xi_{\rm ion}=nm_{\rm ion}+3\bigg{(}\frac{1}{2}z+1\bigg{)}nk_ {B}T_{e}\,. \tag{2.12}\]
For the non-relativistic case, the speed of sound is defined by
\[c_{s}=\sqrt{\frac{dp}{d\rho}}\,, \tag{2.13}\]
where \(p\) and \(\rho\) are the total pressure and the rest mass density of the fluid, respectively,
\[p=nk_{B}(T_{\rm e}+T_{\rm ion})=nk_{B}T_{\rm e}(1+z),\ \ \ \ \rho=n(m_{\rm e}+m_{\rm ion})=nm_{\rm e}(1+Z)\approx Znm_{\rm e}. \tag{2.14}\]
Thus, we can rewrite \(\Xi\) in terms of \(\rho\) and \(p\),
\[\Xi=\rho+\frac{3(z+2)}{2(z+1)}p\,. \tag{2.15}\]
Substituting Eq. (2.15) into Eq. (2.10) and using \(\rho\propto n\), so that \(d\rho/\rho=dn/n\), we can read
\[\frac{dp}{d\rho}=\frac{5z+8}{3(z+2)}\frac{p}{\rho}=\frac{(5z+8)(1+z)}{3(z+2)( 1+Z)}\Theta_{\rm e}\,, \tag{2.16}\]
and find
\[c_{s}=\sqrt{\frac{(5z+8)(1+z)}{3(z+2)(1+Z)}}\Theta_{\rm e}\sim\sqrt{\frac{(5z +8)(1+z)}{3(z+2)z_{c}}}\,, \tag{2.17}\]
which is sub-relativistic as \(z\ll z_{\rm c}\). Moreover, Eq. (2.16) can be equivalently rewritten as
\[\frac{dT_{\rm e}}{T_{\rm e}}=\frac{2(1+z)}{3(2+z)}\frac{dn}{n}\,, \tag{2.18}\]
which leads to
\[T_{\rm e}(x^{\mu})={\cal T}_{0}[n(x^{\mu})]^{\frac{2(1+z)}{3(2+z)}}\,, \tag{2.19}\]
with \({\cal T}_{0}\) being the integration constant.
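To illustrate how mildly these expressions depend on \(z\), the following sketch (using the characteristic M87\({}^{*}\) values \(\Theta_{\rm e}\simeq 15\) and \(Z=1836\) quoted above; the numbers are order-of-magnitude only) evaluates the sound speed of Eq. (2.17) and the polytropic exponent of Eq. (2.19):

```python
import numpy as np

Theta_e, Z = 15.0, 1836.0  # characteristic horizon-scale values for M87*

def sound_speed(z):
    # Eq. (2.17), in units of c
    return np.sqrt((5*z + 8) * (1 + z) / (3 * (z + 2) * (1 + Z)) * Theta_e)

def exponent(z):
    # Eq. (2.19): T_e proportional to n^(2(1+z)/(3(2+z)))
    return 2 * (1 + z) / (3 * (2 + z))

for z in [0.1, 1.0, 5.0, 10.0]:
    print(f"z={z:>4}: c_s={sound_speed(z):.3f} c, exponent={exponent(z):.3f}")
```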
Next, we consider the second scenario in which the ions are ultra-relativistic with \(\Theta_{\rm ion}\gg 1\). In this case, both the electrons and the ions are ultra-relativistic particles, so that the rest mass density \(\rho=nm_{\rm e}(1+Z)\) can be ignored compared to the pressure \(p=nm_{\rm e}\Theta_{\rm e}(1+z)\). As a result, we have
\[\Xi=3n(m_{\rm e}\Theta_{\rm e}+m_{\rm ion}\Theta_{\rm ion})=3nm_{ \rm e}\Theta_{\rm e}(1+z)=3p\,. \tag{2.20}\]
Note that in the relativistic case, the speed of sound is modified to
\[c_{s}=\sqrt{\frac{dp}{d\Xi}}\,, \tag{2.21}\]
so that we have \(c_{s}=\frac{\sqrt{3}}{3}\) in the case that \(\Theta_{\rm ion}\gg 1\). In addition, similar to Eq. (2.18), we obtain
\[\frac{dT_{\rm e}}{T_{\rm e}}=\frac{1}{3}\frac{dn}{n}\,, \tag{2.22}\]
which leads to
\[T_{\rm e}(x^{\mu})={\cal T}_{0}[n(x^{\mu})]^{\frac{1}{3}}\,, \tag{2.23}\]
with \({\cal T}_{0}\) being the integration constant.
## 3 Analytical solutions
In this section, we present an analytical method to determine the properties of the magnetofluid. We have obtained the relation between the temperature \(T_{\rm e}\) and number density \(n\) in both the non-relativistic and ultra-relativistic limits,
\[T_{\rm e}(x^{\mu})=\begin{cases}{\cal T}_{0}[n(x^{\mu})]^{\frac{ 2(1+z)}{3(2+z)}}\,,&\quad z\ll z_{\rm c}\\ {\cal T}_{0}[n(x^{\mu})]^{\frac{1}{3}}\,.&\quad z\gg z_{\rm c}\end{cases} \tag{3.1}\]
The pressure is then obtained by \(p=(1+z)nk_{B}T_{\rm e}\). Hence, the remaining physical quantities to be determined in the fluid are the number density \(n\), the four-velocity \(u^{\mu}\) and the magnetic field \(B^{\mu}\) which are jointly governed by Eqs. (2.7), (2.9) and the ideal MHD condition, \(F_{\mu\nu}u^{\mu}=0\).
As we will show shortly, the motion of particles in the above two limits can be treated approximately as geodesic motion. Then, the number density can be obtained from Eq. (2.9) by integrating along the geodesics. In general, it is not easy to derive the number density analytically from Eq. (2.9).
However, if the left-hand side of Eq. (2.9) can be rearranged as a total derivative with respect to \(\tau\), the solution for \(n\) can be obtained directly. Although we do not know the general condition under which this case holds, we will provide a nontrivial example in Kerr spacetime in the following section. This example will be the main focus of this paper.
### The ballistic approximation
In this subsection, we are going to deal with the relativistic Euler equations, Eq. (2.7). Let us first estimate the magnitudes of the terms on the right-hand side (RHS) of Eq. (2.7). We find that the coefficient of the first term behaves as
\[\frac{p}{\Xi+p}\simeq\begin{cases}\frac{2(1+z)}{2z_{\rm c}+5z+8}\,,&z\ll z_{ \rm c}\\ \frac{1}{4}\,.&z\gg z_{\rm c}\end{cases} \tag{3.2}\]
Hence, \(\frac{p}{\Xi+p}\) is small in both limits, especially for \(z\ll z_{\rm c}\), so that the first term on the RHS of Eq. (2.7) can be neglected in the leading-order approximation. This approximation is consistent with the fact that the speed of sound discussed in Sec. 2 is sub-relativistic, so that the gas pressure is not significant [41]. For M87\({}^{*}\), \(z_{\rm c}\simeq 122\), and as long as \(z\ll 122\), the contribution of the gas pressure can be neglected. This is consistent with the parameter range typically considered in numerical simulations of accretion onto M87\({}^{*}\)[14].
The second term on the RHS of Eq. (2.7) can be expanded as
\[-\frac{\nabla_{\mu}T_{\rm EM}^{\mu\nu}}{\Xi+p}=-\frac{\big{[}B^{2}u^{\mu} \nabla_{\mu}u^{\nu}+B^{2}u^{\nu}\nabla_{\mu}u^{\mu}+\big{(}u^{\mu}u^{\nu}+ \frac{1}{2}g^{\mu\nu}\big{)}\nabla_{\mu}B^{2}-B^{\mu}\nabla_{\mu}B^{\nu}-B^{ \nu}\nabla_{\mu}B^{\mu}\big{]}}{\Xi+p}\,. \tag{3.3}\]
Consequently, the magnitudes of the coefficients in the second term can be approximated as
\[v_{A}^{2}=\frac{B^{2}}{\Xi+p}<\begin{cases}\frac{B^{2}}{\rho} \simeq\frac{B^{2}}{nm_{\rm ion}}\,,&z\ll z_{\rm c}\\ \frac{B^{2}}{4p}\simeq\frac{ZB^{2}}{4nz\Theta_{\rm e}m_{\rm ion}}\,,&z\gg z_{ \rm c}\end{cases} \tag{3.4}\]
where \(v_{A}\) is the Alfven velocity, which has distinct expressions in both non-relativistic and ultra-relativistic limits. In this work, we assume that the magnetic field strength is dynamically unimportant such that \(v_{A}\ll 1\), and the second term on RHS of Eq. (2.7) can be neglected as well. Actually this assumption holds practical significance. The EHT collaboration estimated the number density for M87\({}^{*}\) using a spherical, one-zone toy model, yielding a range of \(n=10^{10}\sim 10^{11}\,m^{-3}\)[14]. However, a more realistic thick disk model developed in [40] estimated a maximum number density of \(10^{12}\sim 10^{13}\,m^{-3}\) to ensure an observed flux of 0.5 Jy at 230 GHz. Regarding the associated magnetic field strength, the polarized images of M87\({}^{*}\) suggest parameter estimates ranging from 1 to 30 Gauss
[2]. Therefore, for our estimations, we take \(B=10\) Gauss and \(n=5\times 10^{12}\,m^{-3}\), yielding the following results
\[\begin{cases}v_{A}<0.1\ll 1\,,&z\ll z_{\rm c}\\ v_{A}\ll 0.05\ll 1\,,&z\gg z_{\rm c}\end{cases} \tag{3.5}\]
which indicates that our assumptions on the magnetic field and the Alfven velocity are reasonable for the case of M87\({}^{*}\). Therefore, under our assumed conditions, Eq. (2.7) can be simplified as follows:
\[u^{\mu}\nabla_{\mu}u^{\nu}\simeq 0\,, \tag{3.6}\]
which is the geodesic equation of the particle. The aforementioned approximation for the fluid is known as the ballistic approximation, which has been employed to investigate various types of accretion streamlines [27, 29, 30]. However, a study of the thermodynamics and magnetic field structure of the ballistic magnetofluid at horizon scales of astrophysical black holes has been lacking. It is also worth mentioning that the same treatment has been carried out in [31], where the authors conducted a thorough examination of analytical thermodynamic solutions for intra-ISCO accretion in the equatorial plane.
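The Alfven-velocity bound quoted above can be reproduced with the same fiducial numbers; the following rough SI estimate (ours; the relativistic enthalpy correction is neglected since the ion rest mass dominates for \(z\ll z_{\rm c}\)) gives \(v_{A}\approx 0.03\):

```python
import numpy as np

mu0 = 4e-7 * np.pi     # vacuum permeability
c = 2.99792458e8       # m/s
m_p = 1.67262192e-27   # kg, hydrogen ion (proton) mass

B = 10e-4              # 10 Gauss in Tesla
n = 5e12               # m^-3
rho = n * m_p          # rest-mass density, which dominates Xi + p here

v_A = B / np.sqrt(mu0 * rho * c**2)  # Alfven velocity in units of c
print(v_A)  # ~0.03, comfortably below the bound v_A < 0.1 quoted above
```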
Notably, the ballistic approximation is independent of the associated magnetic field, allowing us to solve for the four-velocity independently. Once we obtain the four-velocity, we can further determine the magnetic field through the ideal MHD condition \(F^{\mu\nu}u_{\mu}=0\). In this way, we can determine all the physical parameters of the fluid.
### Analytical solutions in Kerr spacetime
In this subsection, we will analytically solve the fluid dynamics, namely the four-velocity, the particle number density, and the temperature of the fluid in Kerr spacetime. The Kerr metric can be expressed in terms of the Boyer-Lindquist coordinates:
\[{\rm d}s^{2} = -\bigg{(}1-\frac{2Mr}{\Sigma}\bigg{)}{\rm d}t^{2}+\frac{\Sigma} {\Delta}{\rm d}r^{2}+\Sigma{\rm d}\theta^{2}+\bigg{(}r^{2}+a^{2}+\frac{2Mra^{ 2}}{\Sigma}{\rm sin}^{2}\theta\bigg{)}{\rm sin}^{2}\theta\,{\rm d}\phi^{2}- \frac{4Mra}{\Sigma}{\rm sin}^{2}\theta\,{\rm d}t{\rm d}\phi \tag{3.7}\] \[= g_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}.\]
The parameters \(\Delta\) and \(\Sigma\) are defined as
\[\Delta=r^{2}-2Mr+a^{2}\,,\quad\Sigma=r^{2}+a^{2}\cos^{2}\theta\,, \tag{3.8}\]
where \(M\) and \(a\) are the mass and the spin parameters of the Kerr black hole, respectively. For the sake of simplicity and without sacrificing generality, we shall henceforth adopt \(M=1\). Then, the event horizon of the Kerr black hole resides at \(r_{h}=1+\sqrt{1-a^{2}}\).
The timelike geodesic equations can be described in terms of three conserved quantities
\[\Sigma u^{t} =\Bigg{[}1+\frac{2r\big{(}r^{2}+a^{2}\big{)}}{\Delta}\Bigg{]}E-\frac {2ar}{\Delta}L\,, \tag{3.9}\] \[\Sigma u^{r} =\sigma_{r}\sqrt{R}=\sigma_{r}\sqrt{\big{[}E(r^{2}+a^{2})-aL \big{]}^{2}-\Delta\big{[}Q+(aE-L)^{2}+r^{2}\big{]}}\,,\] (3.10) \[\Sigma u^{\theta} =\sigma_{\theta}\sqrt{\Theta}=\sigma_{\theta}\sqrt{Q-\cos^{2} \theta\bigg{[}a^{2}(1-E^{2})+\frac{L^{2}}{\sin^{2}\theta}\bigg{]}}\,,\] (3.11) \[\Sigma u^{\phi} =\frac{\Delta-a^{2}\sin^{2}\theta}{\Delta\sin^{2}\theta}L+\frac {2ar}{\Delta}E\,, \tag{3.12}\]
where \(E\geq 1\) and \(L\) are the energy and angular momentum per unit mass stemming from the Killing vectors \(\partial_{t}\) and \(\partial_{\phi}\); \(Q\) is the Carter constant per unit square mass, originating from the Killing 2-form of Kerr spacetime [42]; \(\sigma_{r},\sigma_{\theta}\) denote the signs of \(u^{r}\) and \(u^{\theta}\), respectively.
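For concreteness, Eqs. (3.9)-(3.12) translate directly into code. The sketch below is a minimal Python helper of our own (with \(M=1\); the function name and the example call are ours, not from the text):

```python
import numpy as np

def kerr_four_velocity(r, th, a, E, L, Q, sr=-1, sth=0):
    """Timelike-geodesic 4-velocity (u^t, u^r, u^theta, u^phi) in
    Boyer-Lindquist coordinates with M = 1, from Eqs. (3.9)-(3.12)."""
    Delta = r**2 - 2*r + a**2
    Sigma = r**2 + (a*np.cos(th))**2
    R = (E*(r**2 + a**2) - a*L)**2 - Delta*(Q + (a*E - L)**2 + r**2)
    Th = Q - np.cos(th)**2 * (a**2*(1 - E**2) + L**2/np.sin(th)**2)
    ut = ((1 + 2*r*(r**2 + a**2)/Delta)*E - 2*a*r/Delta*L) / Sigma
    ur = sr * np.sqrt(max(R, 0.0)) / Sigma
    uth = sth * np.sqrt(max(Th, 0.0)) / Sigma
    uphi = ((Delta - a**2*np.sin(th)**2)/(Delta*np.sin(th)**2)*L
            + 2*a*r/Delta*E) / Sigma
    return ut, ur, uth, uphi

# Example: equatorial free fall (E = 1, L = Q = 0) outside an a = 0.94 hole
print(kerr_four_velocity(r=4.0, th=np.pi/2, a=0.94, E=1.0, L=0.0, Q=0.0))
```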
Next, we proceed to study the stationary, axisymmetric streamlines of fluid in Kerr spacetime, particularly by calculating the expansion \(\nabla_{\mu}u^{\mu}\) to solve the particle number conservation Eq. (2.9). Under the ballistic approximation, the streamlines become a bundle of geodesics described by Eqs. (3.9)-(3.12). We consider that the streamlines extend within the spacetime outside an inner boundary shell denoted by \(r_{i}\geq r_{h}\), with \(\theta_{i}\) representing the polar angle of the streamline on the shell. The streamlines with turning points in either the \(r\) or \(\theta\) directions will not be considered here, as they are unnatural in a black hole accretion system [29, 43].
To evolve the streamlines of the fluid, we impose the distribution of conserved quantities of geodesics as initial conditions on the shell, \(X=X(\theta_{i})\), where \(X\in\{E,L,Q\}\). Using Eqs. (3.10), (3.11), for fixed \(r_{i}\), the angle \(\theta_{i}\) can be written as a function of \(r,\theta\) and the conserved quantities along the geodesic, known as the inverse formula (Appendix A). By substituting \(X=X(\theta_{i})\) into the inverse formula, we can, in principle, solve for \(\theta_{i}=\theta_{i}(r,\theta)\), depending on the specific form of \(X(\theta_{i})\). It should be emphasized that \(\theta_{i}(r,\theta)\) must be single-valued, as two streamlines cannot intersect at a point. This condition is not always satisfied because we have not yet restricted the form of \(X(\theta_{i})\). However, the specific form of \(X(\theta_{i})\) that satisfies this condition is unknown, and we assume that \(\theta_{i}\) is single-valued in the following derivation. Actually, the fluid we subsequently obtain indeed meets this condition.
The potentials \(R,\Theta\) in Eqs. (3.10), (3.11) can be rewritten as \(R=R(r,X(\theta_{i})),\Theta=\Theta(\theta,X(\theta_{i}))\). Since \(\theta_{i}\) is only a boundary value, \(X(\theta_{i})\) is evidently invariant along the geodesic. This leads to
\[\frac{dR}{d\tau}=u^{r}\partial_{r}R\big{|}_{\theta_{i}}\,,\quad \frac{d\Theta}{d\tau}=u^{\theta}\partial_{\theta}\Theta\big{|}_{\theta_{i}}\,. \tag{3.13}\]
Then, we can calculate the expansion as
\[\frac{1}{\sqrt{|g|}}\partial_{\mu}(\sqrt{|g|}u^{\mu}) =\frac{\sigma_{r}}{\Sigma}\partial_{r}\sqrt{R}\big{|}_{\theta}+ \frac{\sigma_{\theta}}{\Sigma\sin\theta}\partial_{\theta}\big{(}\sin\theta \sqrt{\Theta}\,\big{)}\big{|}_{r}\] \[=u^{r}\partial_{r}\log\sqrt{R}\big{|}_{\theta}+u^{\theta} \partial_{\theta}\log\sqrt{\Theta}\big{|}_{r}+u^{\theta}\partial_{\theta}\log \sin\theta\,, \tag{3.14}\]
where we have used Eqs. (3.10), (3.11). Employing Eqs. (3.13), (3.14), the expansion can be further expressed as
\[u^{r}\partial_{r}\log\sqrt{R}\big{|}_{\theta_{i}}+u^{r}\partial_{\theta_{i}}\log\sqrt{R}\big{|}_{r}\partial_{r}\theta_{i}+u^{\theta}\partial_{\theta}\log\sqrt{\Theta}\big{|}_{\theta_{i}}+u^{\theta}\partial_{\theta_{i}}\log\sqrt{\Theta}\big{|}_{\theta}\partial_{\theta}\theta_{i}+u^{\theta}\partial_{\theta}\log\sin\theta\] \[=\frac{d}{d\tau}\big{[}\log\sqrt{R\Theta}\sin\theta\,\big{]}+u^{r}\partial_{\theta_{i}}\log\sqrt{R}\big{|}_{r}\partial_{r}\theta_{i}+u^{\theta}\partial_{\theta_{i}}\log\sqrt{\Theta}\big{|}_{\theta}\partial_{\theta}\theta_{i}\] \[=\frac{d}{d\tau}\big{[}\log\sqrt{R\Theta}\sin\theta\,\big{]}+\partial_{\theta_{i}}\big{[}\log\sqrt{R^{-1}\Theta}\,\big{]}\big{|}_{r,\theta}\,u^{\theta}\partial_{\theta}\theta_{i}\,, \tag{3.15}\]
where we have used \(d\theta_{i}/d\tau=(u^{r}\partial_{r}+u^{\theta}\partial_{\theta})\theta_{i}=0\) to get the last term in the last line. Here, we have taken into account the effect of the variation of the conserved quantities with respect to the boundary value \(\theta_{i}\). We notice that if the second term on the RHS of Eq. (3.15) can be neglected, the expansion becomes a total derivative term with respect to \(\tau\). This occurs if the streamlines satisfy one of the following conditions: (1) The variations of the conserved quantities with respect to \(\theta_{i}\) are tiny on the boundary shell, so that \(\partial_{\theta_{i}}R|_{r}=\frac{dX}{d\theta_{i}}\partial_{X}R|_{r}\approx 0,\partial_{\theta_{i}}\Theta|_{\theta}=\frac{dX}{d\theta_{i}}\partial_{X}\Theta|_{\theta}\approx 0\); (2) \(u^{\theta}\) is identically zero. We assume that the fluid model we are considering satisfies at least one of the above conditions. Thus, with Eqs. (2.9), (3.1), we obtain the explicit expressions for the number density and the temperature
\[n(r,\theta)=n(r_{i},\theta_{i})\sqrt{\frac{R(r_{i})\Theta(\theta_{i})}{R(r) \Theta(\theta)}}\frac{\sin\theta_{i}}{\sin\theta}\,, \tag{3.16}\]
and
\[T_{\rm e}(r,\theta)=\begin{cases}T(r_{i},\theta_{i})\bigg{[}\frac{R(r_{i})\Theta(\theta_{i})}{R(r)\Theta(\theta)}\bigg{]}^{\frac{(1+z)}{3(2+z)}}\bigg{[}\frac{\sin\theta_{i}}{\sin\theta}\bigg{]}^{\frac{2(1+z)}{3(2+z)}}\,,&z\ll z_{\rm c}\\ T(r_{i},\theta_{i})\bigg{[}\frac{R(r_{i})\Theta(\theta_{i})}{R(r)\Theta(\theta)}\bigg{]}^{\frac{1}{6}}\bigg{[}\frac{\sin\theta_{i}}{\sin\theta}\bigg{]}^{\frac{1}{3}}\,,&z\gg z_{\rm c}\end{cases} \tag{3.17}\]
where \(n(r_{i},\theta_{i}),T(r_{i},\theta_{i})\) are introduced as the boundary values, with \(\theta_{i}\) being determined by the inverse formula \(\theta_{i}(r,\theta)\). At this point, we have obtained the explicit solutions for the thermodynamics of the fluid, i.e., the particle number density \(n(r,\theta)\) (Eq. (3.16)), the temperature \(T_{\rm e}(r,\theta)\) (Eq. (3.17)). As mentioned before, the expressions are inapplicable at the turning points in radial or angular directions, where Eqs. (3.16), (3.17) have poles.
In contrast to the torus model [25, 26] with purely toroidal flow, the model we are studying describes a fluid configuration with nonzero radial velocity. In the near-horizon region of an astrophysical black hole, radial flow is a more realistic scenario, where gravity plays a dominant role and heat transfer primarily occurs through advection [44, 45]. The fluid configurations satisfying \(u^{\theta}=0\) are similar to simulated accretion flows and FWJs at horizon scales. Hence, in the following discussion, we will focus on this scenario, providing an explicit solution and conducting an analysis accordingly.
### The conical solution
In this subsection we delve into the fluid satisfying the condition \(u^{\theta}=0\). In this case, the streamline maintains a constant value of \(\theta\) in the polar direction. This type of motion can be achieved when \(\Theta=\partial_{\theta}\Theta=0\), resulting in the following expression:
\[L=\pm_{L}a\sqrt{E^{2}-1}\sin^{2}\theta\,,\quad Q=-a^{2}(E^{2}-1)\cos^{4}\theta\,, \tag{3.18}\]
with "\(\pm_{L}\)" denoting the sign of \(L\). Moreover, one requires \(\partial_{\theta}^{2}\Theta=-8a^{2}(E^{2}-1)\cos^{2}\theta\leq 0\) in order to have stable geodesics. As the fluid is foliated by the streamlines on conical surfaces, we call the solution satisfying Eq. (3.18) the conical solution. The radial potential can be expressed as
\[R_{\rm c}(r,\theta) = (E^{2}-1)r^{4}+2r^{3}+2a^{2}(E^{2}-1)\cos^{2}\theta r^{2} \tag{3.19}\] \[+ 2a^{2}\bigg{[}\big{(}E\mp_{L}\sqrt{E^{2}-1}\sin^{2}\theta\big{)} ^{2}-(E^{2}-1)\cos^{4}\theta\bigg{]}r+a^{4}(E^{2}-1)\cos^{4}\theta\,,\]
which is non-negative as long as \(E\geq 1\). The 4-velocity is now of the form
\[u^{t}=E\bigg{[}1+\frac{2r(r^{2}+a^{2})}{\Delta\Sigma}\bigg{]} \mp_{L}\sqrt{E^{2}-1}\frac{2a^{2}r\sin^{2}\theta}{\Delta\Sigma}\,,\] \[u^{r}=\sigma_{r}\frac{\sqrt{R_{\rm c}}}{\Sigma}\,,\quad u^{ \theta}=0\,,\] \[u^{\phi}=E\frac{2ar}{\Delta\Sigma}\pm_{L}\sqrt{E^{2}-1}\frac{a( \Delta-a^{2}\sin^{2}\theta)}{\Delta\Sigma}\,. \tag{3.20}\]
The number density and temperature get simplified to
\[n(r,\theta)=n(r_{i},\theta)\sqrt{\frac{R_{\rm c}(r_{i},\theta)}{R_{\rm c}(r, \theta)}}\,, \tag{3.21}\]
and
\[T_{\rm e}(r,\theta)=\begin{cases}T(r_{i},\theta)\bigg{[}\frac{R_{\rm c}(r_{i},\theta)}{R_{\rm c}(r,\theta)}\bigg{]}^{\frac{(1+z)}{3(2+z)}}\,,&\quad z\ll z_{\rm c}\\ T(r_{i},\theta)\bigg{[}\frac{R_{\rm c}(r_{i},\theta)}{R_{\rm c}(r,\theta)}\bigg{]}^{\frac{1}{6}}\,,&\quad z\gg z_{\rm c}\end{cases} \tag{3.22}\]
where \(n(r_{i},\theta)\) and \(T(r_{i},\theta)\) are the boundary values of \(n\) and \(T_{\rm e}\), respectively. For convenience in the following studies, we select a Gaussian distribution in the \(\theta\) direction for \(n(r_{i},\theta)\), and let \(T(r_{i},\theta)\) be a constant for the conical solution:
\[n(r_{i},\theta)=n_{i}\exp\bigg{[}-\bigg{(}\frac{\sin\theta-\sin \theta_{J}}{\sigma}\bigg{)}^{2}\bigg{]}\,,\quad T(r_{i},\theta)=T_{i}\,, \tag{3.23}\]
where \(\theta_{J}\) is the mean position in the \(\theta\) direction, and \(\sigma\) describes the standard deviation of the distribution. As mentioned before, the inner boundary \(r_{i}\) can be positioned at any location outside the horizon. For the sake of convenience, let us choose to place it at the horizon in subsequent studies, that is, \(r_{i}=r_{h}\).
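Putting Eqs. (3.19), (3.21)-(3.23) together, the conical solution is fully explicit. The following minimal sketch (our own; the parameter values are illustrative, with \(M=1\) and non-relativistic ions assumed) evaluates \(n\) and \(T_{\rm e}\) in units of their boundary values:

```python
import numpy as np

def R_c(r, th, a, E, sL=+1):
    # Radial potential of Eq. (3.19); sL is the sign of L in Eq. (3.18).
    e2 = E**2 - 1
    c2, s2 = np.cos(th)**2, np.sin(th)**2
    return (e2*r**4 + 2*r**3 + 2*a**2*e2*c2*r**2
            + 2*a**2*((E - sL*np.sqrt(e2)*s2)**2 - e2*c2**2)*r
            + a**4*e2*c2**2)

def conical_n_T(r, th, a=0.94, E=1.0, z=1.0, n_i=1.0, T_i=1.0,
                theta_J=np.pi/2, sigma=0.2):
    """Number density and electron temperature of Eqs. (3.21)-(3.23),
    in units of the boundary values n_i and T_i, with r_i = r_h."""
    r_h = 1 + np.sqrt(1 - a**2)
    ratio = R_c(r_h, th, a, E) / R_c(r, th, a, E)
    n = n_i * np.exp(-((np.sin(th) - np.sin(theta_J))/sigma)**2) * np.sqrt(ratio)
    T = T_i * ratio**((1 + z)/(3*(2 + z)))  # non-relativistic ions, z << z_c
    return n, T

print(conical_n_T(r=6.0, th=np.pi/2))  # disk midplane, free fall (E = 1)
```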
### The magnetic field
Next, let us proceed to address the magnetic field structure accompanying the axisymmetric and stationary magnetofluid. The non-vanishing components of \(F_{\mu\nu}\) are described by the gauge potential \(A_{\mu}(r,\theta)\)[46],
\[F_{r\phi}=\partial_{r}A_{\phi}\,,\quad F_{\theta\phi}=\partial_{ \theta}A_{\phi}\,,\quad F_{r\theta}=\partial_{r}A_{\theta}-\partial_{\theta}A _{r}\,. \tag{3.24}\]
By considering the Maxwell equation \(\nabla_{\mu}\,^{*}F^{\mu\phi}=0\), and the ideal MHD condition, \(F^{\phi\mu}u_{\mu}=0\), one finds
\[(u^{r}\partial_{r}+u^{\theta}\partial_{\theta})A_{\phi}=0\,, \tag{3.25}\]
which means that \(A_{\phi}\) is invariant along the streamline. This indicates that the general form of \(A_{\phi}\) is \(A_{\phi}=A_{\phi}(\theta_{i})\), where \(\theta_{i}=\theta_{i}(r,\theta)\) is the polar angle on the boundary shell. Thus, one can solve the field configuration if the streamlines are known. Moreover, combining Eq. (3.25) and the \(t\) component of the Maxwell equation \(\nabla_{\mu}\,^{*}F^{\mu t}=0\) gives
\[F_{r\theta}=\frac{u^{\phi}}{u^{r}}\partial_{\theta}A_{\phi}\,. \tag{3.26}\]
With the expressions of \(F_{\mu\nu}\), the magnetic field \(B^{\mu}=-\,^{*}F^{\mu\nu}u_{\nu}\) can be obtained accordingly, that is,
\[B^{t} =\frac{1}{\sqrt{|g|}}\frac{\partial_{\theta}A_{\phi}}{u^{r}} \big{(}u^{t}u_{t}+1\big{)}\,,\] \[B^{i} =\frac{1}{\sqrt{|g|}}\frac{\partial_{\theta}A_{\phi}}{u^{r}}u_{t }u^{i}\,,\quad i=r,\theta,\phi\,. \tag{3.27}\]
The spatial component \(B^{i}\) is parallel to \(u^{i}\), indicating that the magnetic field is frozen into the streamlines. It is important to emphasize that the aforementioned derivation (Eq.(3.24)-Eq.(3.27)) solely relies on the assumption of stationarity, axisymmetry, and the ideal MHD condition [46], without utilizing explicit metric and the ballistic approximation.
For the sake of the further discussion of the magnetic field in the conical solution of Sec. 3.3, we derive the magnetic field measured in the comoving frame of the fluid in the case of \(u^{\theta}=0\). The setup of the tetrads can be found in Appendix B. In the comoving frame, the magnetic field components read
\[B^{(0)}=B^{(2)}=0\,,\quad B^{(1)}=\frac{\partial_{\theta}A_{ \phi}}{\hat{u}\sqrt{|g|}}\sqrt{g_{rr}g_{\phi\phi}}\,\omega^{\phi}\,,\quad B^{ (3)}=-\alpha\frac{\partial_{\theta}A_{\phi}}{u^{r}\hat{u}\sqrt{|g|}}(u^{t}u_{t }+1)\,, \tag{3.28}\]
where \(\alpha\) is the lapse function, \(\omega^{\phi}\) is the angular velocity of frame dragging, and \(\hat{u}=\sqrt{(u^{(r)})^{2}+(u^{(\phi)})^{2}}=\sqrt{-1+(\alpha u^{t})^{2}}\) is the magnitude of the fluid velocity in zero-angular momentum observers (ZAMOs).
It can be seen that \(B^{(1)}\) is purely induced by the frame dragging. Besides, Eq. (3.25) shows that \(\Psi\equiv\partial_{\theta}A_{\phi}\) is a function of \(\theta\) for conical solutions. For simplicity, we employ the split monopole configuration, which is the simplest model for the global magnetic field configuration [32],
\[\Psi=\Psi_{0}\,\text{sign}(\cos\theta)\sin\theta\,, \tag{3.29}\]
where \(\Psi_{0}\) is a constant. The introduction of \(\sin\theta\) is to ensure regularity at the poles \(\sin\theta=0\). The sign function ensures that the black hole has zero magnetic charge. Indeed, numerical studies indicate that a split monopole configuration naturally emerges in the near-horizon region from an initially uniform magnetic field [47, 48].
We would like to emphasize that although the magnetic field is dynamically unimportant under the ballistic approximation, it can still be constrained by the ideal MHD conditions to a global factor along the streamlines. For the magnetized torus model [26], the magnetic pressure affects the torus structure, but additional simplifying assumptions are also required to obtain explicit expressions.
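To illustrate Eq. (3.27) with the split-monopole \(\Psi\) of Eq. (3.29), the following sketch (our own, reusing the `kerr_four_velocity` helper sketched in Sec. 3.2; \(\Psi_{0}\) and the sample point are illustrative) evaluates \(B^{\mu}\) along a conical streamline:

```python
import numpy as np

def magnetic_field(r, th, a, E, Psi0=-1.0, sL=+1, sr=-1):
    """B^mu of Eq. (3.27) along a conical streamline, with the
    split-monopole Psi of Eq. (3.29); Psi0 is a free normalization."""
    e2 = E**2 - 1
    L = sL * a * np.sqrt(e2) * np.sin(th)**2   # Eq. (3.18)
    Q = -a**2 * e2 * np.cos(th)**4
    ut, ur, uth, uphi = kerr_four_velocity(r, th, a, E, L, Q, sr=sr, sth=0)
    Sigma = r**2 + (a*np.cos(th))**2
    sqrtg = Sigma * np.sin(th)                 # sqrt(-g) for Kerr, M = 1
    # Lower the index: u_t = g_tt u^t + g_tphi u^phi
    u_t = -(1 - 2*r/Sigma)*ut - (2*a*r*np.sin(th)**2/Sigma)*uphi
    Psi = Psi0 * np.sign(np.cos(th)) * np.sin(th)
    pref = Psi / (sqrtg * ur)
    return pref*(ut*u_t + 1), pref*u_t*ur, pref*u_t*uth, pref*u_t*uphi

print(magnetic_field(r=4.0, th=np.pi/4, a=0.94, E=1.0))  # FWJ-like cone
```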
## 4 Applications to accretion and jet flow
In this section, we apply the conical solution discussed in the previous section to the study of accretion flow and jet. In the case of accretion flow, we manage to establish a thick accretion flow model at the horizon scale. As for the jet flow, we present a FWJ model. These two models have the advantage of being analytically solvable.
### Accretion flow
It is well known that the standard accretion disk model describes a geometrically thin and cold accretion flow in the equatorial plane [17], where viscosity and radiation play significant roles. However, observations indicate the existence of diverse types of thick disk morphologies around active galactic nuclei [14, 23]. For instance, in the case of M87\({}^{*}\), studies suggest the presence of a Radiatively Inefficient Accretion Flow (RIAF) surrounding it [14]. Furthermore, when approaching the event horizon, the behaviors of the flows are primarily governed by gravity. With the development of the EHT imaging, the understanding of the plasma physics in this region becomes increasingly important. Motivated by this fact, we will utilize the analytical model developed in the last section to investigate the thick accretion flow close to the black hole. With the explicit expressions for \(n,\,T\), and \(B^{\mu}\), the model offers a direct avenue to study black hole imaging, which will be demonstrated in [49].
In this subsection, we discuss the thermodynamics and magnetic field structure of accretion flows exhibiting conical motions. We consider a thick accretion disk that exhibits equatorial symmetry,
described by \(\theta_{J}=\pi/2,\sigma=1/5\). However, the boundary conditions for such an accretion flow are generally difficult to determine. In the case of an idealized thin disk, the accretion flow near the horizon falls in from the ISCO, and its boundary values are taken at the ISCO; for a thick disk, such a boundary does not exist, and the large-scale dynamics are also unknown. Here we simply consider a freely falling flow that satisfies
\[\sigma_{r}=-1\,,\quad E=1\,. \tag{4.1}\]
In this case, the radial potential simplifies to \(R_{\rm c}(r)=2r(r^{2}+a^{2})\). From Eqs. (3.21) and (3.22), the thermodynamics of the free-falling accretion is characterized by
\[n(r,\theta)=n_{i}\exp\bigl{[}-25(\sin\theta-1)^{2}\bigr{]}\sqrt{\frac{2r_{h}^{ 2}}{r(r^{2}+a^{2})}}\,, \tag{4.2}\]
and
\[T_{\rm e}(r)=\begin{cases}T_{i}\Biggl{[}\frac{2r_{h}^{2}}{r(r^{2}+a^{2})}\Biggr{]}^{\frac{(1+z)}{3(2+z)}}\,,&z\ll z_{\rm c}\\ T_{i}\Biggl{[}\frac{2r_{h}^{2}}{r(r^{2}+a^{2})}\Biggr{]}^{\frac{1}{6}}\,,&z\gg z_{\rm c}\end{cases} \tag{4.3}\]
Note that we have set \(r_{i}=r_{h}\) when deriving the above equations. We have only presented the accretion flow characterized by conical motion here. It is worth considering more general streamline configurations in practice.
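As a quick consistency check on these profiles: for \(r\gg a\) one has \(R_{\rm c}\approx 2r^{3}\), so Eq. (4.2) gives \(n\propto r^{-3/2}\), the familiar density scaling of spherical free fall, while Eq. (4.3) gives \(T_{\rm e}\propto r^{-(1+z)/(2+z)}\) for \(z\ll z_{\rm c}\) and \(T_{\rm e}\propto r^{-1/2}\) for \(z\gg z_{\rm c}\).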
### Funnel wall jet
Astrophysical black holes with high spin often exhibit collimated jets about their rotational axis while accreting matter. Within our model, we designate the angle of the FWJ's center to be \(\theta_{J}=\pi/4\) and assume \(\sigma=1/10\). Furthermore, since in the case of a jet flow, the kinetic energy of particles at infinity can be non-zero, we can consider taking \(E\geq 1\) in this scenario. Note that while \(L\) and \(Q\) are determined by Eq. (3.18), the dependence of \(E\) on \(\theta\) remains arbitrary. For simplicity, we will treat it as a constant in this study. Moreover, numerical studies indicate the presence of inward FWJs at horizon scales [50, 51]. Thus, the radial velocity \(u^{r}\) can be either outward or inward, which means both positive and negative values of \(\sigma_{r}\) are permissible. However, from Eqs. (3.16), (3.17), and (3.27), it can be observed that the sign of \(\sigma_{r}\) does not affect the particle number density \(n\) and the electron temperature \(T_{\rm e}\), and it only alters the sign of certain components of the magnetic field. Hence, we may consider \(\sigma_{r}=+1\) for the jet flow.
We would like to emphasize that our model for the FWJ is not able to fully characterize the jet dynamics, but it can at least capture the morphology of the off-equatorial radiation source with a double-cone structure, and thus contributes to the study of black hole imaging. Specifically, the direct and lensed images of the double-cone structure might be of interest; the Doppler shift caused by the inward and outward flows significantly affects the imaging process as well.
### Figures
To gain a more intuitive understanding, we present figures illustrating the particle number density \(n\), electron temperature \(T_{\rm e}\), and magnetic field \(B^{\mu}\). For convenience, we introduce Cartesian coordinates as
\[\mathcal{X}=r\sin\theta\cos\phi\,,\quad\mathcal{Y}=r\sin\theta\sin\phi\,, \quad\mathcal{Z}=r\cos\theta\,. \tag{4.4}\]
In Fig. 1, we illustrate the two-dimensional density plot of the particle number density \(n/n_{i}\) in the \(\mathcal{X}\)-\(\mathcal{Z}\) plane. The left panel displays the results for the accretion disk, while the right panel shows the results for the FWJ. The white region at the center of each figure represents the black hole. Here and throughout, we assume a black hole spin parameter of \(a=0.94\).
Upon examining Eqs. (3.21), (3.22), it becomes apparent that the dependence of particle number density and electron temperature on \(r\) and \(\theta\) is not decoupled, as evidenced by the expression for \(R_{\rm c}\) (Eq. (3.19)), unless \(E=1\). However, since the conical motion occurs solely along the \(r\) direction, it is meaningful to study the variations of particle number density and temperature separately with respect to \(r\). In the cases of the accretion disk and FWJ, the differences lie in the polar positions and the assigned values of \(E\): for the accretion disk, \(E=1\), while for the FWJ, \(E\geq 1\) can take on various values. Additionally, as the ratio \(z\) between ion temperature and electron temperature changes, the behavior of the electron temperature will also undergo corresponding variations.
Figure 1: The particle number density in the \(\mathcal{X}\)-\(\mathcal{Z}\) plane. The left figure corresponds to the accretion disk, for which we select \(E=1,\sigma=1/5\) and \(\theta_{J}=\pi/2\), while the right one corresponds to the FWJ, for which we take \(E=1,\sigma=1/10\) and \(\theta_{J}=\pi/4\).
In Fig. 2, we present the analytical results depicting the variations of the particle number density and temperature as functions of \(r\). To focus solely on the effects of \(E\) and \(z\), we set \(\theta=\theta_{J}\) in Eq. (3.23) in all cases to eliminate the influence of the Gaussian distribution. Moreover, we have set \(z_{\rm c}=122\) to match the astronomical environment of M87\({}^{*}\). In the left and middle plots, we have selected \(E=1,1.1,1.5,10\), respectively. In the right plot, we have modified the value of \(z\) to be \(1,5,10\), and significantly greater than \(122\), while keeping \(E=1\). From these plots, we can observe that both \(n\) and \(T_{\rm e}\) are decreasing functions of \(r\). Furthermore, as \(E\) increases, the decreasing rate becomes larger. Additionally, concerning the electron temperature, when the ions are non-relativistic with \(z\ll 122\), an increasing \(z\) leads to a faster decrease in \(T_{\rm e}\). However, in the case of ultra-relativistic ions with \(z\gg 122\), the decreasing rate is actually the smallest. Comparing \(n\) and \(T_{\rm e}\) under the same conditions, it is evident that the decay rate of \(n\) is larger.
Figure 2: Particle number densities \(n\) and electron temperatures \(T_{\rm e}\) as functions of \(r\). In the first two plots, we have respectively chosen \(E\) values of \(1,1.1,1.5,\) and \(10\), and \(z\gg z_{\rm c}\) for the temperature. In the third plot, we have taken \(z=1,5,10\), \(z\gg z_{\rm c}\), and kept \(E=1\).

Now let us shift our attention to the magnetic field structures. For simplicity, we choose \(\Psi_{0}=-1\) in the subsequent discussion. Fig. 3 displays the magnetic fields for both the thick disk and FWJ. The first panel illustrates the magnetic field configuration in the equatorial plane for the disk model3. The middle and right panels depict the magnetic fields in the \(\mathcal{X}\)-\(\mathcal{Z}\) plane for the disk and FWJ models, respectively. The left plot clearly reveals that in the near-horizon region, the magnetic field takes on a spiral shape. As \(r\to r_{h}\), one finds \(B^{r}/B^{\phi}\to 0\) from Eq. (3.27) and Eq. (3.20), indicating that, due to frame dragging, the magnetic field exhibits a tightly wound spiral pattern as \(r\to r_{h}\). Therefore, the polarized images of the model are expected to carry information about the magnetic field structure, the fluid streamlines, and the black hole spin. Once moving away from the black hole, the magnetic field direction becomes predominantly radial. In the \(\mathcal{X}\)-\(\mathcal{Z}\) plane, the magnetic field structure is similar between the disk model and the FWJ model. It follows a primarily radial pattern, but with a distinction: in the northern hemisphere, the magnetic field direction diverges away from the black hole, while in the southern hemisphere, the magnetic field points towards the black hole.
In Fig. 4, we present the variation of the magnetic field with respect to \(r\) in the comoving frame of the fluid. By examining the left plot, which shows the changes of the magnetic field in the disk model, we observe that \(B^{(1)}<0\) and \(B^{(3)}>0\), but both magnitudes decrease as the radius increases.
Figure 3: Magnetic field configurations in the accretion disk and the FWJ models. The left plot shows the magnetic field in the equatorial plane in the accretion disk model, the middle and the right ones depict the poloidal field structures in the \(\mathcal{X}\)-\(\mathcal{Z}\) plane for the accretion disk and FWJ models, respectively. In all the plots, we set \(E=1\). In the case of the FWJ model, while the value of \(E\) can be adjusted, it only affects the magnitude of the magnetic field and not its orientation.

Figure 4: The variation of the magnetic field with respect to \(r\) in the comoving frame of the fluid. The left plot illustrates the variations in the accretion disk model, with \(E=1\). The subsequent two diagrams depict the outcomes of the FWJ model, with the choice \(E=1,1.1,1.5,10\) respectively.
The middle and right plots in Fig. 4 show the variations of magnetic fields in the FWJ model. We find that both \(B^{(1)}\) and \(B^{(3)}\) within the jet are negative, and their magnitudes exhibit a monotonically decreasing trend. Moreover, for all the parameters chosen, \(|B^{(3)}|\) is much larger than \(|B^{(1)}|\) throughout the streamlines, signifying that the magnetic field in the comoving frame of the fluid is primarily aligned along the direction of flow. It is evident that the spatial arrangement of the magnetic field within the FWJ exhibits a noticeable difference compared to the one within the thick disk, despite their analogous mathematical representations.
## 5 Summary and discussions
In this work, we have developed a simplified magnetofluid model that enables us to study analytically the kinematics and thermodynamics of the magnetofluid surrounding rotating black holes. For the fluid part, we took it to be an ideal, adiabatic plasma consisting of two components, electrons and ions, having different temperatures but being characterized by a constant temperature ratio \(z\). The horizon-scale electrons are treated as ultra-relativistic thermal particles with \(T_{\rm e}\gg 10^{9}\)K, corresponding to the hot plasma around a supermassive black hole. For ion temperature, there exists a characteristic quantity, denoted as \(z_{\rm c}\), such that the ions with \(z\ll z_{\rm c}\) can be considered non-relativistic, while the ions with \(z\gg z_{\rm c}\) are considered ultra-relativistic. In our analysis, we have focused solely on these two limiting cases, for the sake of simplicity. As for the magnetic field, we assume that the magnetic pressure is weak, resulting in the Alfven velocity being sub-relativistic in the plasma.
Based on the aforementioned treatments, the flows are primarily governed by gravity, allowing us to employ the ballistic approximation for the magnetofluid. We have analyzed the relevant parameter space and justified the applicability of the ballistic approximation to the magnetofluid surrounding M87\({}^{*}\). Subsequently, by examining the stationary, axisymmetric, and ballistic fluid configuration in Kerr spacetime, we discovered that the thermodynamics can be solved analytically (Eqs. (3.16), (3.17)) for the fluid characterized by a slow variation of conserved quantities between neighboring streamlines or the fluid satisfying \(u^{\theta}=0\), the latter being referred to as the conical solution. In particular, since the conical solution exhibits features similar to the simulated accretion flow and FWJ close to the black hole, we focused on this scenario and provided detailed discussions in Sec. 3.3, including the structure of the accompanying magnetic field in Sec. 3.4. Furthermore, we have explored the potential applications of our model to real astronomical environments and utilized the conical solution to describe thick accretion disks and FWJs at the horizon scale, as shown in Sec. 4. Additionally, we have presented graphs in Sec. 4.3 to facilitate a more intuitive understanding of the characteristics of the thick disk and FWJ described by the conical solution.
Compared to previous studies, our model presents a novel magnetofluid configuration with inward
or outward radial flows that extend beyond the equatorial plane, aiming to capture the fluid streamlines and thermodynamics at the horizon scale. The presence of a geometrically thick, optically thin magnetofluid with radial flows close to M87\({}^{*}\) [1, 2] provides strong motivation for the development of our model. Within the framework of the ideal magnetofluid, we have employed the ballistic approximation to facilitate analytical discussions. However, there are a few subtle issues that need to be addressed. Firstly, unlike the studies that used a similar approximation to study the formation of thin disks by directing the streamlines into the equatorial plane [28, 29], we treated the horizon-scale plasma as a ballistic fluid to solve its thermodynamics. As our focus was solely on the physics at the horizon scale, we chose the boundary values at the horizon. Therefore, we need further physical considerations or inputs from observations to determine the appropriate boundary conditions. For instance, if the EHT observations can successfully capture the inner shadow of the horizon, the intensity distribution of its edge contour undoubtedly has the potential to reflect specific boundary conditions. Secondly, in a more realistic scenario, the magnetic field is dynamically important and should be taken into account as a correction to the Euler equations, Eq. (2.7). Our approach may not be applicable to strongly magnetized flows, like the strong magnetically arrested disk (SMAD) or the funnel jet region, where the magnetic field almost dominates the fluid.
Nevertheless, further studies can be conducted with our simplified model. With the explicit expressions for \(n,T_{\rm e}\), and \(B^{\mu}\), our model provides a direct avenue for studying black hole imaging. Specifically, for the FWJ model, the gravitational lensing of the double-cone structure may be of interest, and the observed intensities will be affected by the Doppler shift caused by the inward and outward flows. In our upcoming work [49], we will demonstrate the imaging features of the thermal synchrotron radiation from thick disks and FWJs using the conical solution in this work. Besides, it is shown in Fig. 3 that the magnetic field in Eq. (3.27) exhibits a highly spiraling nature near the horizon, which influences both the anisotropic emissivity and plasma polarization. It may lead to the polarization patterns of specific features, which could be checked in the images of the black holes.
## Acknowledgments
We thank Zhong-Ying Fan and Ye Shen for helpful discussions. The work is partly supported by NSFC Grants No. 12275004, 12205013 and 11873044. MG is also supported by "the Fundamental Research Funds for the Central Universities" with Grant No. 2021NTST13.
## Appendix A Inverse formula for \(\theta_{i}\)
The expression for \(\theta_{i}(r,\theta)\) can be obtained by solving the timelike geodesic equations in Kerr spacetime, resulting in the so-called inverse formula expressed in terms of elliptic functions. For \(Q>0\), the expression takes the following form
\[\cos\theta_{i} = \sqrt{u_{+}}\,{\rm sn}\bigg{(}F_{\theta}+\sigma_{\theta}a\sqrt{-u_ {-}}\,I_{r}\,\bigg{|}\,\frac{u_{+}}{u_{-}}\bigg{)}\,,\] \[{\rm with}\ \ F_{\theta} = F\bigg{(}\arcsin\bigg{(}\frac{\cos\theta}{\sqrt{u_{+}}}\bigg{)} \,\bigg{|}\,\frac{u_{+}}{u_{-}}\bigg{)}\,.\] (A.1)
For \(Q<0\), the expression takes the form
\[\cos\theta_{i} = h_{\theta}\sqrt{u_{-}}\,{\rm dn}\bigg{(}F_{\theta}+h_{\theta} \sigma_{\theta}a\sqrt{u_{-}}\,I_{r}\,\bigg{|}\,1-\frac{u_{+}}{u_{-}}\bigg{)}\,,\] \[F_{\theta} = F\bigg{(}\arcsin\sqrt{\frac{\cos^{2}\theta-u_{-}}{u_{+}-u_{-}}} \,\bigg{|}\,1-\frac{u_{+}}{u_{-}}\bigg{)}\,,\] (A.2)
where \(h_{\theta}={\rm sign}(\cos\theta)\), \(u_{\pm}=\Delta_{\theta}\pm\sqrt{\Delta_{\theta}^{2}+Q(E^{2}-1)^{-1}a^{-2}}\) with \(\Delta_{\theta}=\frac{1}{2}(1-(Q+L^{2})(E^{2}-1)^{-1}a^{-2})\). The radial integral reads
\[I_{r}=\sqrt{E^{2}-1}\int_{r_{i}}^{r}\frac{dr}{\sqrt{R}}\,.\] (A.3)
The inverse formula for null geodesics is identical to that for timelike geodesics, with the conserved quantities replaced by the impact parameters \(L/\sqrt{E^{2}-1}\to\lambda\), \(Q/(E^{2}-1)\to\eta\). For the case of a freely falling flow with \(E=1,L=0,Q\geq 0\), we have \(R=2r(r^{2}+a^{2})-Q\Delta,\Theta=Q\). In this case, \(\theta_{i}\) is determined by
\[\theta_{i}=\theta-\sigma_{\theta}\sqrt{Q}\int_{r_{i}}^{r}\frac{dr}{\sqrt{R}}\,,\] (A.4)
where \(\sigma_{\theta}\) does not change its sign, as \(\Theta\) is a constant. The expression of \(\theta_{i}\) indicates that the Carter constant \(Q\) governs the bending of the geodesics and is expected to be small when describing the accretion flows around black holes. For small \(Q\), we have \(R\approx 2r(r^{2}+a^{2})\), and Eq. (A.4) can be expressed in terms of hypergeometric functions,
\[\theta_{i}=\theta-\sigma_{\theta}\sqrt{\frac{2Q}{a}}\bigg{[}\sqrt{\frac{r}{a}} \ _{2}F_{1}\bigg{(}\frac{1}{4},\frac{1}{2},\frac{5}{4},-\frac{r^{2}}{a^{2}} \bigg{)}-\sqrt{\frac{r_{i}}{a}}\ _{2}F_{1}\bigg{(}\frac{1}{4},\frac{1}{2},\frac{5}{4},- \frac{r_{i}^{2}}{a^{2}}\bigg{)}\bigg{]}\,.\] (A.5)
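Since Eq. (A.5) is expressed through the Gauss hypergeometric function, it is straightforward to evaluate numerically. The following sketch (function name and parameter values are our own illustration) uses `scipy.special.hyp2f1`:

```python
import numpy as np
from scipy.special import hyp2f1

def theta_i_small_Q(r, r_i, theta, Q, a, sigma_theta=+1):
    """Evaluate Eq. (A.5): the initial polar angle theta_i in the small-Q,
    E=1, L=0 limit, via the Gauss hypergeometric function 2F1."""
    def g(rr):
        return np.sqrt(rr / a) * hyp2f1(0.25, 0.5, 1.25, -(rr / a) ** 2)
    return theta - sigma_theta * np.sqrt(2.0 * Q / a) * (g(r) - g(r_i))

# Example: a fluid element observed at (r, theta), launched from r_i = 30
a = 0.94
print(theta_i_small_Q(r=3.0, r_i=30.0, theta=np.pi / 4, Q=0.05, a=a))
```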
## Appendix B Magnetic field components in different frames
In the cases that both the spacetime and electromagnetic field are stationary and axisymmetric, and the fluid fulfills the ideal MHD condition of \(F_{\mu\nu}u^{\nu}=0\), the magnetic field 4-vector takes
\[B^{\mu}=\frac{1}{\sqrt{-g}}\frac{\partial_{\theta}A_{\phi}}{u^{r}}\big{(}u^{ \mu}u_{t}+\delta_{t}^{\mu}\big{)}\,,\quad B^{2}=\frac{1}{-g}\bigg{(}\frac{ \partial_{\theta}A_{\phi}}{u^{r}}\bigg{)}^{2}\big{(}g_{tt}+u_{t}^{2}\big{)}\,.\] (B.1)
Then, it is convenient to introduce the frames of ZAMOs as
\[\hat{e}^{\,\mu}_{(t)}=\frac{1}{\alpha}(\partial_{t}^{\,\mu}+\omega^{\phi}\partial _{\phi}^{\,\mu})\,,\quad\hat{e}^{\,\mu}_{(i)}=\frac{1}{\sqrt{g_{ii}}}\partial_{i }^{\,\mu}\,,\quad i=r,\theta,\phi\] (B.2)
with \(\alpha\) being the lapse function, \(\alpha=\sqrt{-g_{tt}+\frac{g_{t\phi}^{2}}{g_{\phi\phi}}}\). The angular velocity of frame dragging is denoted as \(\omega^{\phi}=-g_{t\phi}/g_{\phi\phi}\). The magnetic field component expressed in the tetrad of ZAMOs is given by \(\hat{B}^{(a)}=\hat{e}^{(a)}_{\mu}B^{\mu}\), which leads to
\[\hat{B}^{(t)} =\frac{\partial_{\theta}A_{\phi}}{\sqrt{-gu^{r}}}\bigg{[}\alpha- \frac{u_{t}}{\alpha}(u_{t}+\omega^{\phi}u_{\phi})\bigg{]}\,,\] \[\hat{B}^{(i)} =\frac{\partial_{\theta}A_{\phi}}{\sqrt{-gu^{r}}}\sqrt{g_{ii}}\, u^{i}u_{t}\,,\quad i=r,\theta,\] \[\hat{B}^{(\phi)} =\frac{\partial_{\theta}A_{\phi}}{\sqrt{-gu^{r}}}\frac{u_{t}u_{ \phi}+g_{t\phi}}{\sqrt{g_{\phi\phi}}}\,.\] (B.3)
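A minimal numerical sketch of the ingredients of Eq. (B.2) may be useful. The snippet below evaluates the Boyer-Lindquist Kerr metric components (standard textbook expressions, stated here as an assumption since they are not written out in the text) together with the lapse \(\alpha\) and the frame-dragging angular velocity \(\omega^{\phi}\):

```python
import numpy as np

def kerr_metric_bl(r, theta, a):
    """Boyer-Lindquist Kerr metric components (G = M = c = 1)."""
    Sigma = r**2 + a**2 * np.cos(theta) ** 2
    Delta = r**2 - 2.0 * r + a**2
    g_tt = -(1.0 - 2.0 * r / Sigma)
    g_tphi = -2.0 * a * r * np.sin(theta) ** 2 / Sigma
    g_phiphi = (r**2 + a**2 + 2.0 * a**2 * r * np.sin(theta) ** 2 / Sigma) * np.sin(theta) ** 2
    g_rr = Sigma / Delta
    g_thth = Sigma
    return g_tt, g_tphi, g_phiphi, g_rr, g_thth

def zamo_lapse_and_dragging(r, theta, a):
    """Lapse alpha and frame-dragging angular velocity omega^phi of Eq. (B.2)."""
    g_tt, g_tphi, g_phiphi, _, _ = kerr_metric_bl(r, theta, a)
    alpha = np.sqrt(-g_tt + g_tphi**2 / g_phiphi)
    omega_phi = -g_tphi / g_phiphi
    return alpha, omega_phi

# Frame dragging grows rapidly as r -> r_h, driving the spiral field pattern
a = 0.94
for r in (10.0, 3.0, 1.5):
    print(r, zamo_lapse_and_dragging(r, np.pi / 2, a))
```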
Next, we explore the magnetic field components measured in the comoving frame of the fluid in the conical solution. Since \(u^{\theta}=0\), the tetrad of the rest frame of the fluid can be chosen as
\[s^{\mu}_{(0)} =u^{\mu}=\hat{u}^{(t)}\hat{e}^{\mu}_{(t)}+\hat{u}^{(r)}\hat{e}^{ \mu}_{(r)}+\hat{u}^{(\phi)}\hat{e}^{\mu}_{(\phi)}\,,\] \[s^{\mu}_{(1)} =\frac{\hat{u}^{(\phi)}}{\hat{u}}\hat{e}^{\mu}_{(r)}-\frac{\hat{u }^{(r)}}{\hat{u}}\hat{e}^{\mu}_{(\phi)}\,,\] \[s^{\mu}_{(2)} =\hat{e}^{\mu}_{(\theta)}\,,\] \[s^{\mu}_{(3)} =\hat{u}\,\hat{e}^{\mu}_{(t)}+\frac{\hat{u}^{(t)}\hat{u}^{(r)}}{ \hat{u}}\hat{e}^{\mu}_{(r)}+\frac{\hat{u}^{(t)}\hat{u}^{(\phi)}}{\hat{u}}\hat {e}^{\mu}_{(\phi)}\,,\] (B.4)
where \(\hat{u}^{(a)}=\hat{e}^{(a)}_{\mu}u^{\mu}\) represents the 4-velocity measured by ZAMOs, and \(\hat{u}=\sqrt{(u^{(r)})^{2}+(u^{(\phi)})^{2}}=\sqrt{-1+(\alpha u^{t})^{2}}\) is the magnitude of the 4-velocity. Clearly, in the perspective of ZAMOs, \(s^{\mu}_{(1)},s^{\mu}_{(3)}\) is orthogonal and parallel to the fluid velocity, respectively. Then, the magnetic field components in the comoving frame are obtained through \(B_{(a)}=s^{\mu}_{(a)}B_{\mu}\). Our calculations reveal that \(B_{(a)}\) has the following simple form
\[B^{(0)}=B^{(2)}=0\,,\quad B^{(1)}=\frac{\partial_{\theta}A_{\phi}}{\hat{u} \sqrt{-g}}\sqrt{g_{rr}g_{\phi\phi}}\,\omega^{\phi}\,,\quad B^{(3)}=-\alpha \frac{\partial_{\theta}A_{\phi}}{u^{r}\hat{u}\sqrt{-g}}(u^{t}u_{t}+1)\,.\] (B.5)
In a Kerr spacetime, the magnetic field component Eq. (B.1) in the Boyer-Lindquist coordinates diverges as \(r\to r_{h}\), while \(B^{2}\) remains finite. In the comoving frame, it can be checked from Eqs. (3.20), (B.5) that \(B^{(a)}\) is finite outside the horizon.
|
2309.17162 | APNet: Urban-level Scene Segmentation of Aerial Images and Point Clouds | In this paper, we focus on semantic segmentation method for point clouds of
urban scenes. Our fundamental concept revolves around the collaborative
utilization of diverse scene representations to benefit from different context
information and network architectures. To this end, the proposed network
architecture, called APNet, is split into two branches: a point cloud branch
and an aerial image branch which input is generated from a point cloud. To
leverage the different properties of each branch, we employ a geometry-aware
fusion module that is learned to combine the results of each branch. Additional
separate losses for each branch avoid that one branch dominates the results,
ensure the best performance for each branch individually and explicitly define
the input domain of the fusion network assuring it only performs data fusion.
Our experiments demonstrate that the fusion output consistently outperforms the
individual network branches and that APNet achieves state-of-the-art
performance of 65.2 mIoU on the SensatUrban dataset. Upon acceptance, the
source code will be made accessible. | Weijie Wei, Martin R. Oswald, Fatemeh Karimi Nejadasl, Theo Gevers | 2023-09-29T11:54:36Z | http://arxiv.org/abs/2309.17162v1 | # APNet: Urban-level Scene Segmentation of Aerial Images and Point Clouds
###### Abstract
In this paper, we focus on a semantic segmentation method for point clouds of urban scenes. Our fundamental concept revolves around the collaborative utilization of diverse scene representations to benefit from different context information and network architectures. To this end, the proposed network architecture, called APNet, is split into two branches: a point cloud branch and an aerial image branch, whose input is generated from the point cloud. To leverage the different properties of each branch, we employ a geometry-aware fusion module that is learned to combine the results of each branch. Additional separate losses for each branch prevent one branch from dominating the results, ensure the best performance for each branch individually, and explicitly define the input domain of the fusion network, assuring it only performs data fusion. Our experiments demonstrate that the fusion output consistently outperforms the individual network branches and that APNet achieves state-of-the-art performance of 65.2 mIoU on the SensatUrban dataset. Upon acceptance, the source code will be made accessible.
## 1 Introduction
Urban-level point cloud segmentation is an important stepping stone for semantic scene understanding in various applications like autonomous driving, robotics, large-scale map creation or mixed reality [15, 7]. The majority of urban semantic segmentation methods can be categorized as either using aerial / birds-eye-view image data [32, 38] or 3D point cloud data [19, 39, 10].
On the one hand, 2D/2.5D image-based approaches benefit from the simple data structure, which allows for highly effective aggregation of large spatial contexts, is useful for semantic inference, and for which a large pool of network architectures exists [13, 18, 32, 38]. However, these methods are limited in resolving full 3D shapes and spatial context along the gravity direction.
On the other hand, point cloud-based approaches can leverage full 3D spatial context, but context aggregation and high detail levels are generally much more expensive to process and are thus more limited in spatial resolution and context reasoning. Unlike images, which may suffer from large color variations due to changing weather conditions or day-to-night cycles, point clouds are more robust to these phenomena [15]. However, point clouds are more challenging to process due to their irregular and non-uniform structure. Similarly, many established network architectures exist for point cloud processing [3, 19, 26, 39, 10].
We argue that semantic reasoning in both domains has advantages and disadvantages, _e.g._ incorporating a larger context within a 2D domain enhances the recognition capabilities of flat and large objects, whereas small objects with a 3D spatial extension are more effectively detectable within the 3D domain. With this objective in mind, our primary aim is to leverage and combine the best properties of both domains to propose a unified semantic segmentation approach that synergistically learns from both.

Figure 1: **APNet Segmentation.** Starting from a colored input point cloud (a), the data is fed into two separate branches: a point cloud branch (b) and an aerial image branch (c). The key idea is to exploit the advantages of both branches regarding spatial context and network architectures. The results of both branches are then merged with a fusion network. APNet achieves a better result, which is much closer to the ground truth than the solutions of the individual branches.
Recent papers show impressive results on vehicle-based point cloud datasets by combining different representations [16, 23, 31]. However, their corresponding representations, _e.g_. range-view and voxelization, are less suited for UAV-based datasets. To address the aforementioned objectives, we propose APNet, which concurrently operates within the aerial image domain and the point cloud domain. Exemplary results of APNet are depicted in Fig. 1. Our **contributions** can be summarized as follows:
* We introduce APNet, an effective network architecture for urban-level point cloud segmentation that leverages differences in domain properties regarding network architectures and spatial context by following a multi-branch design where each branch is specialized for a particular domain.
* We propose a geometry-aware fusion module that introduces the geometric information of the original points into the feature fusion of the two branches and achieves better performance.
* Our experiments demonstrate the efficacy of APNet by attaining state-of-the-art performance on the SensatUrban dataset [8].
## 2 Related Work
### Single Representation for Point Cloud Segmentation
In recent years, various deep learning-based methods have been proposed for point cloud segmentation. These methods can be grouped into three categories based on their representation: projection-based, voxelization-based and point-based methods. The aim of both the projection-based and voxelization-based methods is to transform 3D point clouds to a regular representation and then use off-the-shelf networks to extract the features. In contrast, point-based methods directly process irregular point clouds.
**Projection-based representation.** Deep learning has made great strides in 2D computer vision tasks, leading researchers to apply well-established 2D networks to 3D tasks. Lawin et al. [14] propose a 3D-2D-3D pipeline to solve point cloud segmentation. They project a point cloud onto multi-view 2D planes and feed the resulting images to a 2D segmentation network. The final semantic per-point label is obtained by fusing the pixel-level predictions. Although the multi-view strategy can alleviate occlusion, the pre- and postprocessing are inefficient and the results are sensitive to viewpoint selection. Furthermore, multi-view projection is typically used for a single scene or object, whereas urban-scale point clouds usually result in more occlusion. Other approaches utilize range-view planes as an intermediate representation for point cloud datasets collected by rotating laser scanners [29, 30, 18], which are typical sensors for autonomous vehicles. In this scenario, the egocentric spherical representation can retain more information in contrast to a single-plane representation. However, this representation is not well-suited for UAV-based datasets as it results in severe occlusion due to the inconsistency between the laser direction and the projection direction. Inspired by these methods, we propose to project the point cloud onto an aerial-view plane that is perpendicular to the laser. The one-time aerial-view projection is efficient and avoids information loss caused by occlusion as much as possible.
**Voxelization-based representation.** These methods convert a point cloud into a discrete representation, such as cubic voxels, and then use a 3D convolutional neural network (CNN) to compute the features [40, 25]. This representation naturally preserves the neighborhood structure of 3D point clouds, but 3D CNNs are memory- and computation-intensive. These costs increase dramatically in outdoor scenarios due to the sparsity of points, leading to many empty voxels. Although some methods use sparse convolution to reduce these costs, the discretization unit is non-trivial to determine [23, 6]. Furthermore, urban-level datasets often contain heterogeneous objects, ranging from tiny bikes to huge buildings and streets, which makes them unsuitable for voxelization-based methods.
**Point-based representation.** Point-based methods directly process irregular point clouds by different means, _e.g_. multi-layer perceptrons, point convolutions or graph-based operations. MLP-based networks usually stack multiple MLPs with a feature aggregation module, analogous to the convolution layers with subsequent pooling layers in 2D neural networks [3, 19, 10]. Furthermore, point convolution simulates the powerful 2D convolution in 3D space by utilizing a parametric continuous convolution layer [28] or a group of kernel points as reference points [26]. Point-based methods are applicable to various datasets because they do not rely on transforming a point cloud to other intermediate representations. So far, only point-based methods have been proposed for urban-level point cloud segmentation. For instance, both EyeNet [34] and LGS-Net [21] utilize a point-based network, namely RandLA-Net [10], as their backbone. MRNet exploits multiple 3D receptive fields and LGS-Net emphasizes the utilization of geometric information. Du _et al_. [5], using KPConv [26] as the backbone, exploit a multi-task framework to achieve both boundary localization and semantic segmentation. Huang _et al_. [11] improve a transformer-based network by applying a local context propagation module to ensure message passing among neighboring local regions. Despite numerous efforts, point-based methods remain computationally intensive. Increasing the receptive field of point-based methods is challenging, whereas this can be easily accomplished in highly-optimized 2D networks.
In summary, numerous methods have been proposed for point cloud segmentation, but only a handful of them are suitable for urban-level point cloud segmentation. Additionally, single representations have their limitations. For urban-level point cloud segmentation, geometric information and large receptive fields are equally crucial. Therefore, we propose APNet to combine aerial-view and point-based representations. To the best of our knowledge, we are the first to propose a hybrid method to handle urban-level point cloud segmentation.
### Hybrid Representation for Point Cloud Segmentation
There are also a number of methods that combine different representations. One common strategy is to parallelize multiple networks processing different representations and to combine features at different levels. SPVNAS [23], Cylinder3D [41] and DRINet [33] share the concept of parallel voxel-point architectures. SPVNAS [23] introduces a sparse voxel convolution and combines voxel-wise and point-wise features at different stages. Cylinder3D [41] imposes a point refinement module at the end of the network, which sums voxel-wise and point-wise features followed by three fully-connected layers. DRINet [33] introduces a voxel-point iteration module to iteratively interact between the two features. RPVNet [31] consists of three branches, _i.e_. range-view, point-wise and voxel branches. A gated attention module generates coefficients for a linear combination that point-wisely combines information from the three branches. These methods combine features either by a simple addition or a point-wise combination, but fall short of incorporating features from neighbouring points. AMVNet [16] addresses this issue by training a small assertion-based network and feeding information from neighbours into it to generate final predictions. However, in this small network, only semantic predictions, _i.e_. class-wise probability scores, are considered. Hence, deeper features with richer contextual information are ignored. In conclusion, hybrid methods leverage prior knowledge from different representations to enhance features and achieve better performance. Nevertheless, the fusion module is often naive and the information from neighbouring points is ignored.
Therefore, in this paper, we propose a simple yet effective fusion module that takes both contextual and geometric features as input and exploits positional relationships among neighbouring points to generate descriptive features. In contrast to previous methods, our approach effectively incorporates information from neighboring points and achieves better performance on urban-level point cloud segmentation tasks.
## 3 Methodology
In this section, we first present the problem statement. Then, we discuss the different components of our APNet, _i.e_. the dual-encoder and the GAF. Finally, we explain the segmentation heads and define loss functions.
**Problem statement.** Given a colored point cloud \(\mathbf{P}=\{(p_{k},c_{k})\}_{k=1}^{N}\) with \(N\) point coordinates \(p_{k}=(x_{k},y_{k},z_{k})\in\mathbb{R}^{3}\) and colors \(c_{k}=(r_{k},g_{k},b_{k})\in\mathbb{R}^{3}\), the aim is to compute the corresponding semantic labels \(\mathbf{L}=\{(l_{k})\}_{k=1}^{N}\) for every point. We train a deep learning model \(h(\cdot|\theta)\) with parameter \(\theta\) by minimizing the difference between the prediction \(\mathbf{L}=h(\mathbf{P}|\theta)\) and corresponding ground truth label set \(\mathbf{\hat{L}}\). The urban-level point cloud datasets are obtained by UAVs.
### Dual-encoder
The key idea of our approach is to split up the label prediction into two different domains: an aerial (A)-branch and a point-based (P)-branch to leverage the advantages of using different spatial contexts that corresponding 2D vs. 3D network architectures have. The output of both branches is then fused within a geometry-aware fusion (GAF) module as illustrated in Fig. 2. Rather than fusing the label predictions of each branch \(\mathbf{L}^{a}\) and \(\mathbf{L}^{p}\), the GAF operates on intermediate feature representations \(\mathbf{F}^{a}\) and \(\mathbf{F}^{p}\) for a more informed label decision process. We detail both branches in the following paragraphs.
**Aerial image branch.** To obtain a pseudo aerial image of a point cloud, we first project it to an aerial view by an orthographic projection. Assuming that the gravity direction is aligned with the z-axis, each point \(p_{k}=(x_{k},y_{k},z_{k})\) is converted to a pixel \(p_{i}=(u_{i},v_{i})\) via a mapping \(\rho:\mathbb{R}^{3}\mapsto\mathbb{R}^{2}\), as defined by
\[(u_{i},v_{i})^{T}=\rho(p_{k})=\left(\left\lfloor\frac{x_{k}}{s}\right\rfloor, \left\lfloor\frac{y_{k}}{s}\right\rfloor\right)^{T}\ \, \tag{1}\]
where \(i\) is the index of a pixel and \(s\) is the quantization unit, _i.e_. pixel size. By aggregating all 3D points into pixels, we obtain the initial aerial image \(\mathbf{I}^{init}\in\mathbb{R}^{H\times W\times 3}\). Note that the mapping function \(\rho\) is a many-to-one function and we only preserve the properties, _e.g_. color and label, of the highest point in the final image. Moreover, due to the sparsity of LiDAR points, a pseudo image created from the projection of a point cloud must be completed because, unlike a genuine aerial image, it contains both valid and null pixels. A pixel is considered valid if it covers a minimum
of one LiDAR point and is regarded as null otherwise. During the completion, valid pixels are dilated. When the eight neighbouring pixels of a null pixel have more than two distinct values, its value is updated with the value that occurs most frequently among its neighbouring pixels. After the completion, we obtain the input aerial image \(\mathbf{I}\in\mathbb{R}^{H\times W\times 3}\). The same projection and completion are applied to the labels.
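A minimal sketch of this projection step is given below. It implements the many-to-one mapping of Eq. (1) with a simple z-buffer that keeps the highest point per pixel; function names are ours, the coordinates are assumed to be patch-local, and the majority-vote completion of null pixels is only indicated in a comment since the paper's exact rule may differ:

```python
import numpy as np

def project_to_aerial(points, colors, s=0.04, H=512, W=512):
    """Orthographic aerial projection of Eq. (1). Assumes x, y are already
    patch-local (non-negative); keeps the attributes of the highest point
    per pixel via write ordering (a simple z-buffer)."""
    u = np.floor(points[:, 0] / s).astype(int)
    v = np.floor(points[:, 1] / s).astype(int)
    keep = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, c = u[keep], v[keep], points[keep, 2], colors[keep]

    order = np.argsort(z)              # highest points are written last and win
    img = np.zeros((H, W, 3), dtype=colors.dtype)
    valid = np.zeros((H, W), dtype=bool)
    img[v[order], u[order]] = c[order]
    valid[v[order], u[order]] = True
    # Completion (not shown): dilate valid pixels, assigning each null pixel
    # the most frequent value among its eight valid neighbours.
    return img, valid
```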
Due to the simplicity of our method, A-branch can be any end-to-end 2D semantic segmentation network. Its output is defined as follows:
\[\mathbf{F}^{a}=h^{a}(\mathbf{I}|\theta^{a})\enspace, \tag{2}\]
where \(h^{a}(\cdot|\theta^{a})\) is the A-branch network and \(\mathbf{F}^{a}\in\mathbb{R}^{H\times W\times C}\).
**Point cloud branch.** The original point cloud provides precise geometric information and is of importance in the ultimate evaluation. However, the spatial distribution of a point cloud is not uniform and local points with the same semantics tend to contain homogeneous information. To ensure the points are sampled uniformly and to increase the network's receptive field, grid-downsampling is frequently used [26, 10]. We follow KPConv [26] to perform grid-downsampling on the original point cloud, which creates a barycenter point for each non-empty grid, with the average values of all points within the same grid serving as the new properties of the barycenter point. The downsampled points are denoted as
\[\mathbf{P}^{d}=\{(p_{k},c_{k})\}_{k=1}^{N^{d}}\enspace.\]
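The grid-downsampling itself reduces to averaging all point attributes within each non-empty cell. A possible numpy sketch (function name and attribute layout are our own) is:

```python
import numpy as np

def grid_downsample(points, colors, grid=0.2):
    """Barycenter grid-downsampling sketch: average the attributes of all
    points falling into the same grid cell (KPConv-style subsampling)."""
    cells = np.floor(points / grid).astype(np.int64)
    _, inv, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
    feats = np.concatenate([points, colors], axis=1)
    sums = np.zeros((counts.size, feats.shape[1]))
    np.add.at(sums, inv, feats)                    # accumulate per-cell sums
    bary = sums / counts[:, None]
    return bary[:, :3], bary[:, 3:]                # barycenter coords and colors
```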
Similar to the flexibility of the A-branch, the P-branch can be easily replaced by any point-based network and is denoted by \(h^{p}(\cdot|\theta^{p})\). By passing downsampled points to the P-branch, a point-wise feature representation is obtained:
\[\mathbf{F}^{p}=h^{p}(\mathbf{P}^{d}|\theta^{p})\enspace, \tag{3}\]
where \(\mathbf{F}^{p}\in\mathbb{R}^{N^{d}\times C}\).
For both the P-branch and the A-branch, instead of using the final semantic predictions of the two base models, we use the high-dimensional features from their intermediate layers.
### Geometry-aware Fusion Module
In point cloud segmentation, many methods [26, 10, 31] commonly employ a preprocessing step to achieve a uniform point density. This is typically achieved through grid-downsampling, wherein the point cloud is transformed into a grid-based representation. During the training and validation stages, only the newly generated points are processed within the network. The postprocessing, namely upsampling, only occurs during the testing phase, where the labels of the original points are determined based on the predictions of their nearest neighbouring points. However, this pipeline fails to include the features of other neighbouring points and the geometric information of the original point cloud throughout the training process. To address this, we employ a skip connection to convey the geometric information of the original point cloud to the fusion module and utilize a point convolution to gather features of neighbouring points. Our GAF module includes two parts, namely feature extraction and fusion, as illustrated in Fig. 3.
The feature extraction is performed at the downsampled point level to reduce the computational complexity. For a given point belonging to downsampled points \(p_{k}^{d}\in\mathbf{P}^{d}\), its features are computed from the outputs of two branches.
Figure 2: **Architecture overview of APNet.** The network consists of a dual-encoder, a geometry-aware fusion module and three segmentation heads that operate in different domains. The two representations of a sample, _i.e_. aerial image and down-sampled point cloud, are fed into the dual-encoder. Their outputs are passed to the fusion module for feature aggregation. Finally, the features are sent to the segmentation head for point-wise segmentation.
Specifically, the output of the P-branch, which is in the form of a point-wise feature and thus ready to use, is denoted as \(f_{k}^{d}\) for a specific point \(p_{k}^{d}\). On the other hand, for the pixel feature, unlike the quantization operation used in generating the aerial image, bilinear interpolation and the precise 2D coordinates of the point \(p_{k}^{d}\), _i.e_. \((u_{k},v_{k})=(x_{k}^{d}/s,y_{k}^{d}/s)\), are used to obtain the pixel feature:
\[f_{k}^{a}=\sum_{u,v\in\delta(k)}\phi(u_{k},v_{k},u,v)\mathbf{F}^{a}(u,v)\enspace, \tag{4}\]
where \(\delta(k)\) is the set of the four neighboring pixels of point \(k\) and \(\phi(\cdot)\) computes the bilinear weights.
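For reference, a direct numpy sketch of the bilinear sampling in Eq. (4) could look as follows (clipping at the image border is omitted for brevity):

```python
import numpy as np

def bilinear_sample(F, u, v):
    """Bilinearly interpolate a feature map F (H x W x C) at continuous
    pixel coordinates (u, v) of the projected points, as in Eq. (4)."""
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = u0 + 1, v0 + 1
    du, dv = u - u0, v - v0
    w00 = (1 - du) * (1 - dv); w01 = (1 - du) * dv
    w10 = du * (1 - dv);       w11 = du * dv
    return (w00[:, None] * F[v0, u0] + w01[:, None] * F[v1, u0]
            + w10[:, None] * F[v0, u1] + w11[:, None] * F[v1, u1])
```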
The process of feature fusion involves the concatenation of the features derived from the two branches and a point convolution. A point convolution, KPConv [26], is defined as follows,
\[f_{k}=\mathcal{G}(p_{k})=\sum_{p_{l}\in\mathcal{N}_{p_{k}}}g(p_{k}-p_{l})f_{l}\enspace, \tag{5}\]
where \(\mathcal{G}\) represents the point convolution, while \(g(\cdot)\) denotes the kernel function that computes the weights based on the vector from the target point \(p_{k}\) to one of its neighbouring points \(p_{l}\). \(f_{l}\) is the concatenated feature of point \(p_{l}\) from the feature extraction module and \(\mathcal{N}_{p_{k}}\) refers to the neighbouring points of point \(p_{k}\). In summary, the feature of a target point is obtained by a weighted sum of the features of its neighbouring points.
For each single point convolution, we use one point from a pre-defined query set \(\mathbf{P}^{q}\) as the target point and obtain its features based on its neighbouring points from a pre-defined support set \(\mathbf{P}^{s}\). Note that the neighbouring point set, denoted as \(\mathcal{N}_{p_{k}}\), is a subset of the support set \(\mathbf{P}^{s}\). This subset is generated by considering the distances between the neighbouring points and the target point \(p_{k}\). A common practice is to use a same point cloud, a downsampled point cloud, for both the query set and support set [26], which is denoted as the naive GAF module, as discussed in Sec. 4.3. In this way, the entire network works at the level of downsampled points. Nevertheless, our investigations indicate that the performance is negatively affected by disregarding the precise geometric information of the original points. To address this, we opt to utilise the original points \(\mathbf{P}\) instead of the downsampled points \(\mathbf{P}^{d}\) as the query set, which implies that we set \(\mathbf{P}^{q}=\mathbf{P}\). The fused feature \(\mathbf{f}_{k}^{\text{fused}}\) of point \(p_{k}\) is obtained by \(\mathbf{f}_{k}^{\text{fused}}=\mathcal{G}(p_{k})\) and the feature set is defined as \(\mathbf{F}^{\text{fused}}=\{\mathbf{f}_{k}^{\text{fused}}|k=1,2,..N\}\).
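To make the fusion step concrete, the sketch below implements a simplified rigid point convolution in the spirit of Eq. (5), using the linear-correlation kernel of KPConv; it is a naive \(O(N_{q}\times N_{s})\) reference with our own function names, not the optimized implementation used in the paper:

```python
import numpy as np

def point_conv(query, support, feats, kernel_pts, W, radius=0.5, sigma=0.24):
    """Simplified rigid point convolution (Eq. (5)): each kernel point x_k
    contributes a linear-correlation weight max(0, 1 - ||p_l - p_k - x_k||/sigma)
    to every support point p_l inside the query ball."""
    out = np.zeros((len(query), W.shape[2]))
    for i, q in enumerate(query):                  # original points as queries
        nbrs = np.where(np.linalg.norm(support - q, axis=1) < radius)[0]
        if nbrs.size == 0:
            continue
        rel = support[nbrs] - q                    # neighbour offsets
        for k, xk in enumerate(kernel_pts):
            corr = np.maximum(0.0, 1.0 - np.linalg.norm(rel - xk, axis=1) / sigma)
            out[i] += (corr[:, None] * feats[nbrs]).sum(axis=0) @ W[k]
    return out

# Shapes: query (Nq,3), support (Ns,3), feats (Ns,Cin),
# kernel_pts (K,3), W (K,Cin,Cout); the radii follow the supplementary values.
```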
In summary, the feature extraction operates at the level of downsampled points and the feature fusion incorporates the precise geometric information of the original points during the training stage, which enhances the accuracy.
### Segmentation Heads and Loss function
The segmentation heads are a set of convolutional layers with \(1\times 1\) kernel compressing the channel from a high dimension to a low one, namely the number of categories. The final output of the model is defined by:
\[\mathbf{Pred}^{\text{rep}}=\mathrm{Conv}_{1\times 1}^{m}(\mathbf{F}^{\text{rep}})\enspace, \tag{6}\]
where \(\mathbf{Pred}^{\text{rep}}\in\mathbb{R}^{1\times N_{\text{classes}}}\) is the probabilistic prediction based on the feature \(f^{\text{rep}}\), and \(rep\in\{a,p,fused\}\) stands for the aerial, point-wise or fused representation. \(\mathrm{Conv}_{1\times 1}^{m}\) means a \(1\times 1\) convolutional layer repeated \(m\) times.
Two class-balanced loss functions are used, _i.e_. the weighted cross-entropy (WCE) with inverse frequency [4] and the Lovasz-softmax loss [2]. The WCE loss is applied between the outputs of the three segmentation heads and the corresponding ground-truths:
\[\mathcal{L}_{1}^{\text{rep}}=\mathcal{L}_{\text{WCE}}(\mathbf{Pred}^{\text{ rep}},\mathbf{\hat{L}})\enspace, \tag{7}\]
Note that although the three representations share the same segmentation head and loss function, the ground-truths \(\mathbf{\hat{L}}\) are different. The pixel-wise labels, the grid-downsampled point labels and the labels for the raw points are applied to the aerial, point-wise and fused predictions, respectively. The Lovasz-softmax loss is only applied to the fused representation:

\[\mathcal{L}_{2}=\mathcal{L}_{\text{Lovasz}}(\mathbf{Pred}^{\text{fused}},\mathbf{\hat{L}})\enspace. \tag{8}\]
Eventually, the overall loss is calculated as:
\[\mathcal{L}_{\text{all}}=\sum_{\text{rep}=\{a,p,\text{fused}\}}\alpha^{ \text{rep}}\mathcal{L}_{1}^{\text{rep}}+\beta\mathcal{L}_{2}\enspace. \tag{9}\]
where \(\alpha\) and \(\beta\) are the factors to adjust the scale of loss functions.
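As a hedged sketch of the WCE term, the snippet below computes inverse-frequency class weights and the weighted cross-entropy in PyTorch; for simplicity the frequencies are estimated from the current batch (the paper may compute them over the training set), and the Lovasz-softmax term is assumed to come from an existing public implementation:

```python
import torch
import torch.nn.functional as F

def weighted_ce(logits, labels, n_classes):
    """WCE term of Eq. (7) with inverse-frequency class weights
    (batch-estimated frequencies; an illustrative assumption)."""
    freq = torch.bincount(labels.flatten(), minlength=n_classes).float()
    w = 1.0 / (freq + 1e-6)
    w = w / w.sum() * n_classes                # normalize the weights
    return F.cross_entropy(logits, labels, weight=w)

# Overall loss of Eq. (9), with lovasz(...) from any public implementation:
# L_all = sum(alpha[r] * weighted_ce(pred[r], gt[r], C) for r in reps) \
#         + beta * lovasz(pred["fused"], gt["fused"])
```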
Figure 3: **Geometry-aware fusion module** includes feature extraction and fusion. Given support points and a contextual feature map, point-wise contextual features are extracted and concatenated with point-wise geometric features. The concatenated features and geometric information of both query points and support points are fed into a point convolution to aggregate geometric context information for generating the fused output features.
## 4 Experiments
In this section, we introduce the implementation details of our APNet in Sec. 4.1. Then we compare the proposed model with SOTA models on the SensatUrban dataset [8] in Sec. 4.2. Finally, the effectiveness of all components is analyzed in Sec. 4.3.
### Experimental setup
**SensatUrban Dataset.** SensatUrban [8] is an urban-level photogrammetric point cloud dataset collected by a UAV. It covers a total of 7.64 square kilometers in three UK cities, Birmingham, Cambridge and York, and provides annotations for 13 semantic categories. Its average density is 473 points per square meter. For easier processing, the data are cut into 43 blocks with a maximum size of 400 meters by 400 meters. We follow the official split, which consists of training/validation/testing sets with 33/4/6 blocks. During training and evaluation, the data from the different cities are used jointly. We use the training set for training and report ablation studies on the validation set. We also report results on the testing set by submitting the predictions to the leaderboard, where the ground truths are unpublished, for a fair comparison. The grid size for down-sampling is set to 0.2 meters, resulting in 92% of the original points being filtered out. The pixel size for projection is set to 0.04 meters and the image size is set to \(512\times 512\), resulting in a coverage of \(20.48m\times 20.48m\).
**Metrics.** Following the official recommendations [8], the main metric for per-category evaluation is the intersection-over-union (IoU) and its mean value (mIoU) over all classes. The IoU is formulated as follows:
\[\text{IoU}_{c}=\frac{TP_{c}}{TP_{c}+FP_{c}+FN_{c}}\enspace, \tag{10}\]
where \(TP_{c},FP_{c}\) and \(FN_{c}\) indicate true positive, false positive and false negative predictions for class \(c\). The mIoU is the average IoU over all classes:
\[\text{mIoU}=\frac{1}{N_{\text{cla}}}\sum_{c=1}^{N_{\text{cla}}}\text{IoU}_{c} \tag{11}\]
where \(N_{\text{cla}}\) stands for the number of classes. Additionally, the overall accuracy is also reported. It is defined as follows:
\[OA=\frac{\sum_{c=1}^{N_{\text{cla}}}TP_{c}}{N_{\text{points}}}\enspace, \tag{12}\]
where \(N_{\text{points}}\) is the total number of points.
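These metrics are conveniently computed from a single confusion matrix, as in the following sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt, n_classes):
    """Per-class IoU, mIoU and OA (Eqs. (10)-(12)) from a confusion matrix.
    pred and gt are flat integer arrays of point-wise labels."""
    cm = np.bincount(n_classes * gt + pred, minlength=n_classes**2)
    cm = cm.reshape(n_classes, n_classes)          # rows: gt, cols: pred
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)
    return iou, iou.mean(), tp.sum() / cm.sum()    # IoU_c, mIoU, OA
```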
**Implementation details.** HRNet [27] with object contextual representation [35] and a variant of RandLA-Net [10] are chosen as the backbones for A-branch and P-branch, respectively. These are detailed in the supplementary materials. AdamW optimizer [17] is used with a weight decay of 0.01 and a default learning rate of 0.001, while the learning rate of the P-branch is multiplied by a factor of 5. The learning rate decreases by 5% after each epoch. The network is trained for 200 epochs for SensatUrban, with a batch size of 32. During the training procedure, random rotation along z-axis, random flip along y-axis and random scale are performed for both grid-downsampled points and aerial images while the correspondences are preserved. For more efficient training, the data in the training set and validation set are cropped into 100m \(\times\) 100m patches approximately.
### Comparison with existing methods
**Quantitative results.** The comparison of our method and other existing methods on SensatUrban benchmark [8] are shown in Table 1. Remarkably, APNet surpasses all other methods, achieving an OA of 94.0% and a mIoU of 65.2%. Notably, APNet outperforms its backbone, RandLA-Net [10], by an impressive margin of 12.5%, affirming the beneficial impact of the A-branch on segmentation. Furthermore, APNet excels in specific categories, ranking first in both the traffic road and the footpath categories. Additionally, APNet attains a top-three position in 8 out of 13 categories, further validating its superior performance.
**Qualitative results.** Fig. 4 is a high-level visualization to qualitatively compare the prediction of APNet and the ground truth. As indicated by the OA, APNet predicts most of the points correctly and performs excellently in the two \(400m\times 400m\) blocks. Nevertheless, the primary source of inaccuracy in this figure is from the footpath, which presents problems due to its contextual and physical resemblance to the traffic road. Fig. 5 showcases a visual assessment of APNet against PushBoundary [5]. The middle column, the results of PushBoundary with the red dashed boxes, is taken directly from the original paper. Even though the target regions are chosen by other authors, our method shows comparable or superior performance compared to PushBoundary.
### Ablation studies
**Branch ablations.** We first compare the A-branch, the P-branch and APNet. For the single-branch networks, the GAF strategy is not applied as the features are obtained from a single representation. The output feature from the A/P-branch is directly passed to the segmentation head and generates an intermediate prediction. For the A-branch, the final prediction is generated through bilinear interpolation based on the four neighbouring pixels. For the P-branch, the final prediction is obtained by copying the prediction from the nearest neighbouring point within the downsampled point set. As shown in Table 2, the combined network outperforms every single branch on OA, mIoU and most of the IoUs of all categories. In cases where APNet performs worse than the single-branch networks, the difference is negligible. Notably, the P-branch outperforms the A-branch on OA, although the opposite is observed for most categories. This is because of the imbalanced distribution of categories in the dataset. Over 50% of the points are attributed to three categories - ground, vegetation, and building - so that a higher accuracy on these dominant categories masks shortcomings in other categories in an overall metric.
**Fusion strategy.** We compare the GAF module with two simple fusion strategies and the naive version of GAF in Table 3. Addition is the most intuitive way to combine two features. Concatenation increases the complexity slightly because a subsequent MLP is necessary to reduce the number of channels. These two combinations are point-wise and thus no neighbouring features are considered. Nevertheless, they outperform both single-branch networks. The naive GAF enhances the local adaptive capabilities by involving neighbouring features at the downsampled-point level. The proposed GAF improves on the naive GAF by using the original points as query points and achieves the best performance on both OA and mIoU. Our GAF module yields enhanced outcomes, surpassing the simple fusion strategies, _i.e_. addition and concatenation, by 1% OA and 2.5% mIoU. The ablation studies illustrate the effectiveness and necessity of each component of the proposed method.
## 5 Conclusion
We presented a semantic segmentation method that exploits the advantages of both point cloud-based and aerial image-based methods in a single network architecture with two separate domain branches. The reasoning about which branch is more effective for which class category and spatial location is learned by a geometry-aware fusion network that combines the output of both branches into a single estimate. Ablation studies and comparisons to state-of-the-art methods show clear benefits of the proposed architecture.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline Method & OA & mIoU & \multicolumn{13}{c}{Per-class IoU} \\ \hline A-branch & 89.8 & 55.2 & 71.8 & 91.6 & 94.3 & 70.0 & 22.9 & 47.2 & **46.6** & 65.4 & 45.4 & 81.1 & **20.4** & 0.0 & 61.0 \\ P-branch & 90.2 & 52.1 & 75.0 & 95.4 & 93.3 & 52.4 & **27.4** & 40.7 & 23.3 & 59.3 & 34.3 & 80.6 & 18.2 & **12.4** & 65.2 \\ APNet (Ours) & **92.3** & **59.2** & **80.5** & **97.4** & **96.7** & **73.0** & 21.8 & **52.3** & 43.4 & **66.1** & **50.7** & **84.8** & 19.9 & 12.3 & **70.9** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation studies on branches.** This table compares the semantic labeling performance of the aerial image branch and the point cloud branch against the output of the geometry-aware fusion module. The benefit of the fusion module is apparent as is mostly yields better class-wise performances then the individual branches separately.
Figure 5: **Qualitative comparison with PushBoundary [5] on the SensatUrban [8] test set (No GT available).** The figures of PushBoundary with the red dashed boxes are directly taken from the original paper. APNet performs on par with PushBoundary in the first example (top row) and outperforms it in the second example (bottom row).
\begin{table}
\begin{tabular}{l l c c} \hline \hline Encoder & Fusion strategy & OA & mIoU \\ \hline A-branch & N/A & 89.8 & 55.2 \\ P-branch & N/A & 90.2 & 52.1 \\ \hline \multirow{4}{*}{Dual-encoder} & Addition & 91.3 & 56.7 \\ & Concatenation & 90.7 & 56.7 \\ \cline{1-1} & Naive GAF & 91.5 & 57.5 \\ \cline{1-1} & GAF & **92.3** & **59.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Ablation studies on geometry-aware fusion (GAF) module.** Compared to the simpler point-wise fusion approaches (addition, concatenation), the geometry-aware fusion includes spatial context into the reasoning yielding improved performance.
## 6 Acknowledgements
This work is financially supported by TomTom, the University of Amsterdam and the allowance of Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy. Fatemeh Karimi Nejadasl is financed by the University of Amsterdam Data Science Centre.
## Appendix A Implementation details
**A-branch.** We adopt HRNet [27] with object-contextual representations (OCR) [35], denoted as HRNet-OCR, as the backbone for A-branch. During training, the OCR loss is preserved while the original 2D segmentation head is removed. The intermediate features, also known as augmented representations as defined in the original paper, from HRNet-OCR are compressed to a total of 128 channels, thereby ensuring alignment with the output of the P-branch.
**P-branch.** We employ RandLA-Net [10] as the backbone for the P-branch and follow its official configuration for the SemanticKITTI dataset[1] with the following two modifications: Firstly, we double all feature channels in the RandLA-Net to accommodate the additional color features. Furthermore, we double the output channel for the last layer to ensure compatibility with the A-branch. Consequently, the encoder produces outputs with channel dimensions of 64, 128, 256, and 512, respectively. Secondly, we input the same point cloud to RandLA-Net twice and sum up the output features. Although the network does not change, the down-sampling within the network is random, leading to different features for the same point cloud in the end. This technique promotes the consistency of RandLA-Net.
**GAF module.** We adopt KPConv [26] as the point convolution in the GAF module and adhere to the configuration of the rigid KPConv. Accordingly, a single rigid KPConv encompasses a sphere with a radius of 0.5 meters centered at the query point. Each kernel point exerts an influence on all support points within a sphere of radius 0.24 meters centered on the kernel point.
|
2302.14719 | Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction | Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental
task in opinion mining that aims to extract the targets (or aspects) on which
opinions have been expressed. Recent work focus on cross-domain OTE, which is
typically encountered in real-world scenarios, where the testing and training
distributions differ. Most methods use domain adversarial neural networks that
aim to reduce the domain gap between the labelled source and unlabelled target
domains to improve target domain performance. However, this approach only
aligns feature distributions and does not account for class-wise feature
alignment, leading to suboptimal results. Semi-supervised learning (SSL) has
been explored as a solution, but is limited by the quality of pseudo-labels
generated by the model. Inspired by the theoretical foundations in domain
adaptation [2], we propose a new SSL approach that opts for selecting target
samples whose model output from a domain-specific teacher and student network
disagree on the unlabelled target data, in an effort to boost the target domain
performance. Extensive experiments on benchmark cross-domain OTE datasets show
that this approach is effective and performs consistently well in settings with
large domain shifts. | Kai Sun, Richong Zhang, Samuel Mensah, Nikolaos Aletras, Yongyi Mao, Xudong Liu | 2023-02-28T16:31:17Z | http://arxiv.org/abs/2302.14719v1 | # Self-training through Classifier Disagreement for Cross-Domain Opinion Target Extraction
###### Abstract.
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining that aims to extract the targets (or aspects) on which opinions have been expressed. Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios, where the testing and training distributions differ. Most methods use domain adversarial neural networks that aim to reduce the domain gap between the labelled source and unlabelled target domains to improve target domain performance. However, this approach only aligns feature distributions and does not account for class-wise feature alignment, leading to suboptimal results. Semi-supervised learning (SSL) has been explored as a solution, but is limited by the quality of pseudo-labels generated by the model. Inspired by the theoretical foundations in domain adaptation [2], we propose a new SSL approach that opts for selecting target samples whose model output from a domain-specific teacher and student network disagree on the unlabelled target data, in an effort to boost the target domain performance. Extensive experiments on benchmark cross-domain OTE datasets show that this approach is effective and performs consistently well in settings with large domain shifts.
domain adaptation, self-training, opinion mining
## 1. Introduction

Cross-domain OTE is typically tackled through unsupervised domain adaptation, which aims to reduce the domain shift between a labelled source and unlabelled target domain.
One typical line of work aims to reduce domain shifts via domain adversarial neural networks (DANN) [10]. Given labelled source and unlabelled target domain data, DANNs attempt to learn representations that are discriminative on the source domain and invariant to the domain distribution. However, DANNs align the feature distributions of the source and target inputs (i.e., the marginal distributions), neglecting feature alignment at the class level [32]. As a consequence, the resulting target features are non-discriminative with respect to the class labels, leading to suboptimal target domain performance.
Semi-supervised learning (SSL) [4] has been explored to learn target discriminative features by generating pseudo-labels from the unlabelled target data. While SSL approaches have been heavily employed to boost domain adaptation in vision tasks [17, 33, 40], they have received little attention in cross-domain OTE [43, 45]. The state-of-the-art method Adaptive Hybrid Framework (AHF) [45] adapts a mean teacher (i.e., teacher and student networks) [33] to the task. The teacher is modelled as a feedforward network while the student is a DANN (i.e., developed by augmenting the feedforward network with a discriminator). Here, knowledge about the target-domain outputs of the teacher and student networks is shared between the networks to learn target-discriminative features. Although AHF demonstrates the importance of SSL, the fundamental weakness of the mean teacher cannot be ignored. Specifically, Ke et al. [17] provided theoretical and empirical proof that the weights of the teacher quickly converge to those of the student as training progresses, which leads to a performance bottleneck.
These findings motivate us to decouple the student-teacher networks and optimize them through independent paths to prevent the networks from collapsing into each other [17]. We propose a novel SSL approach, Self-training through Classifier Disagreement (SCD), which effectively exploits the outputs of the student and teacher networks on the unlabelled target domain. SCD is inspired by the theory of domain adaptation [2], which allows us to detect high-quality pseudo-labelled target samples in the student feature space with which to self-train the student for cross-domain OTE. As demonstrated in Fig. 1, SCD achieves this by comparing the two target distributions induced separately by the student and teacher networks. The high-quality pseudo-labelled target samples are those whose predictions disagree with their counterparts in the teacher feature space. We perform extensive experiments and find that SCD not only achieves impressive performance but also performs consistently well under large domain shifts in cross-domain OTE.
Our contribution can be summarized as follows:
* We develop a novel SSL approach for cross-domain OTE, referred to as Self-training through Classifier Disagreement (SCD) which leverages high-quality pseudo-labelled target samples in the student feature space to improve target performance in cross-domain OTE.
* We show how classifier disagreement between the teacher and student networks can be exploited to learn target discriminative features, a key direction in domain adaptation research.
* We perform extensive experiments and show that SCD achieves state-of-the-art results in nine out of ten transfer pairs for the cross-domain OTE task.
## 2. Related Work
There is a growing literature on OTE [21, 22, 23, 39, 41, 26], but most of it focuses on single-domain learning. However, in real-world scenarios, the training distribution used by a classifier may differ from the test distribution, which poses a major challenge for single-domain learning methods.
Cross-domain learning has been explored for the OTE task. Traditional methods use hand-crafted domain-independent features with Conditional Random Fields (CRFs) [7, 16, 20]. While hand-crafted features are useful, they are manually engineered and require human experts, making them time-consuming and expensive to obtain. So far, several neural models have been proposed for cross-domain OTE [6, 9, 11, 24, 38, 45, 43]. The common paradigm in prior work is to reduce the domain shift between the source and target domains. Among recent work, Ding et al. [9] proposed a hierarchical network trained with joint training (Hier-Joint). This method uses domain-independent rules to generate auxiliary labels and a recurrent neural network to learn a domain-invariant hidden representation for each word. However, the manually defined rules have limited coverage. A similar method, the Recursive Neural Structural Correspondence Network (RNSCN) [38], introduces opinion word extraction as an auxiliary task based on the critical assumption that associative patterns exist between aspect terms and opinion words irrespective of the domain. It uses syntactic relations in dependency trees as the pivot to bridge the domain gap for cross-domain OTE. However, the external linguistic resources used are derived from traditional feature-engineered NLP systems, which may propagate errors. More recent methods include the Aspect Boundary Selective Adversarial Learning model (AD-SAL) [24], which uses an adversarial network with attention mechanisms to learn domain-invariant features. Gong et al. [11] proposed BERT\({}_{\text{E}}\)-UDA, which integrates BERT fine-tuned on domain information for the task.
Figure 1. Illustrative example of source and target distributions induced by a teacher and student network (Best viewed in color). Target samples that change class due to adversarial learning by the student network are selected to self-train the student.
Chen and Qian (Chen and Qian, 2015) proposed a Semantic Bridge network (SemBridge) which constructs syntactic and semantic bridges to transfer common knowledge across domains.
While significant progress has been made, the majority of the proposed models neglect feature distribution alignment at the class level. Hence, their performance cannot be guaranteed because they do not learn target discriminative features. Recently, a Cross-Domain Review Generation model based on BERT (BERT\({}_{\text{E}}\)-CDRG) (Zhou et al., 2017) generated target domain data with fine-grained annotations aiming to learn target discriminative features. AHF (Zhou et al., 2017) is perhaps the first to use SSL for the task. AHF adapts a mean teacher, in which the teacher and student networks are found to be tightly coupled during training, leading to a performance bottleneck (Kang et al., 2018). Elsewhere, researchers have carefully designed SSL approaches that allow individual models to iteratively learn from each other, thus preventing these models from collapsing into each other (Chen and Qian, 2015; Kang et al., 2018; Kang et al., 2018). Such approaches have demonstrated substantial improvements over the mean teacher.
## 3. Preliminaries
Our method is inspired by the theory of domain adaptation proposed by Ben-David et al. [2], which provides an upper bound on the target error in terms of the source error and the domain divergence. Suppose \(h\in\mathcal{H}\) is a hypothesis. Ben-David et al. [2] theorized that the target error \(\epsilon_{\mathcal{T}}(h)\) (which can also be viewed as the target performance) is bounded by the source error \(\epsilon_{\mathcal{S}}(h)\) (i.e., the source performance) and the symmetric difference hypothesis divergence (\(\mathcal{H}\Delta\mathcal{H}\)-divergence) between the source \(\mathcal{S}\) and target \(\mathcal{T}\) distributions, denoted \(d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S},\mathcal{T})\) (i.e., a measure of the domain shift). Formally,
\[\forall h\in\mathcal{H},\epsilon_{\mathcal{T}}(h)\leq\epsilon_{\mathcal{S}}(h )+\frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S},\mathcal{T})+\beta \tag{1}\]
where \(\beta\) is the optimal joint error on the source and target domains, which should be small for domain adaptation. Note that \(\beta\) is a constant independent of \(h\). To obtain a better estimate of \(\epsilon_{\mathcal{T}}(h)\), a learner can reduce the source error \(\epsilon_{\mathcal{S}}(h)\) and/or the divergence \(d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S},\mathcal{T})\), which can be estimated from finite samples of the source and target domains [2].
## 4. Problem Statement
The OTE task is formulated as a sequence labeling problem. Given the \(j\)-th input sentence \(\mathbf{x}_{j}=\{x_{ij}\}_{i=1}^{n}\) with \(n\) words, the word \(x_{ij}\) is represented as a feature vector. The goal is to predict the label sequence \(\mathbf{y}_{j}=\{y_{ij}\}_{i=1}^{n}\), with \(y_{ij}\in\mathcal{Y}=\{\mathrm{B},\mathrm{I},\mathrm{O}\}\), denoting the **B**eginning, **I**nside and **O**utside of an opinion target or aspect term.
In this paper, we focus on the cross-domain setting which is typically tackled through unsupervised domain adaptation (UDA). Particularly, UDA aims to transfer knowledge from a labelled source domain to an unlabelled target domain, whose data distribution has a considerable shift from that of the source domain. Formally, suppose a labelled source domain dataset with \(N_{\mathcal{S}}\) sentence and label pairs \(D_{\mathcal{S}}=\{(\mathbf{x}_{j}^{\mathcal{S}},\mathbf{y}_{j}^{\mathcal{S}}) \}_{j=1}^{N_{\mathcal{S}}}\), and an unlabeled dataset in a target domain with \(N_{\mathcal{T}}\) unlabelled sentences \(D_{\mathcal{T}}=\{(\mathbf{x}_{j}^{\mathcal{T}})\}_{j=1}^{N_{\mathcal{T}}}\). Our goal is to predict labels of testing samples in the target domain using a model trained on \(D_{\mathcal{S}}\cup D_{\mathcal{T}}\). 1
Footnote 1: Hereinafter, subscripts or superscripts are omitted for clarity, and the term “aspect” will be used instead of “opinion target” to avoid confusion with the target domain.
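As an illustration of this sequence-labelling formulation, the following Python snippet (our own example, not from the paper) shows a BIO-tagged sentence and how aspect-term spans are recovered from the tags:

```python
# Illustration only: OTE as BIO sequence labelling.
# B = beginning of an aspect term, I = inside, O = outside.
sentence = ["The", "pasta", "primavera", "was", "great", "."]
labels   = ["O",   "B",     "I",         "O",   "O",     "O"]

def extract_aspects(words, tags):
    """Recover aspect-term spans from a BIO tag sequence."""
    aspects, current = [], []
    for word, tag in zip(words, tags):
        if tag == "B":                    # start a new aspect term
            if current:
                aspects.append(" ".join(current))
            current = [word]
        elif tag == "I" and current:      # continue the current term
            current.append(word)
        else:                             # "O" (or a stray "I") closes any open term
            if current:
                aspects.append(" ".join(current))
            current = []
    if current:
        aspects.append(" ".join(current))
    return aspects

print(extract_aspects(sentence, labels))  # ['pasta primavera']
```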
## 5. Methodology
Our method is based on a teacher-student network structure. Teacher \(A\) learns on the source data \(D_{\mathcal{S}}\); and Student \(B\) learns on both the source \(D_{\mathcal{S}}\) and target domain data \(D_{\mathcal{T}}\). Both trained networks generate pseudo-labelled target samples on the unlabelled target domain, which are then compared to detect high quality pseudo-labelled target samples to self-train the student for cross-domain OTE.
### Teacher Network
The teacher network \(A=\{A_{e},A_{l}\}\) is a neural network consisting of a feature encoder \(A_{e}\) and a label classifier \(A_{l}\). In our work, \(A_{e}\) is modelled using a BiLSTM (Kang et al., 2018) or BERT (Chen and Qian, 2015), since both are widely used approaches for sequence labelling problems. \(A_{l}\), on the other hand, is modelled using a softmax function. Although the CRF (Kang et al., 2018) is a typical choice to model the label classifier for sequence labelling problems, the softmax offers comparable performance in cross-domain OTE (Kang et al., 2018). Hence, given the sentence \(\mathbf{x}_{j}=\{x_{ij}\}_{i=1}^{n}\), \(A_{e}\) extracts the context features \(\mathbf{f}_{j}^{A_{e}}=\{f_{ij}^{A_{e}}\}_{i=1}^{n}\). Then, for each word-level feature \(f_{ij}^{A_{e}}\), the label classifier \(A_{l}\) is applied to output the prediction probability \(P(\hat{y}_{ij}^{A_{l}})\) over the tag set \(\mathcal{Y}\). As the teacher is trained over the source data only, the classification loss of the teacher network is given by:

\[\mathcal{L}_{y}^{A}=\frac{1}{N_{\mathcal{S}}}\sum_{j=1}^{N_{\mathcal{S}}}\sum_{i=1}^{n}\ell(P(\hat{y}_{ij}^{A_{l}}),y_{ij}) \tag{2}\]

where \(P(\hat{y}_{ij}^{A_{l}})\) is the probability prediction for the word \(x_{ij}\in\mathbf{x}_{j}^{\mathcal{S}}\) and \(y_{ij}\in\mathbf{y}_{j}^{\mathcal{S}}\) is the ground-truth of \(x_{ij}\). \(\ell\) is the cross-entropy loss function.
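As a concrete illustration of Eqn. (2), here is a minimal PyTorch-style sketch of the token-level cross-entropy over source sentences; the tensor shapes and the padding convention are our assumptions, not the paper's:

```python
import torch
import torch.nn.functional as F

def classification_loss(logits, gold_tags):
    """Token-level cross-entropy as in Eqn. (2).

    logits:    (batch, max_len, |Y|) scores from the label classifier
    gold_tags: (batch, max_len) gold BIO tag indices, with -100 at padding
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten to (num_tokens, |Y|)
        gold_tags.reshape(-1),                # flatten to (num_tokens,)
        ignore_index=-100,                    # skip padded positions
    )
```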
Now suppose \(\mathbf{F}_{\mathcal{S}}^{A_{e}}\) and \(\mathbf{F}_{\mathcal{T}}^{A_{e}}\) are fixed representations of the respective source and target domain data produced by the trained teacher encoder \(A_{e}\). The upper bound on the target error \(\epsilon_{\mathcal{T}}(A_{l})\) of the label classifier \(A_{l}\) can be expressed as:

\[\epsilon_{\mathcal{T}}(A_{l})\leq\epsilon_{\mathcal{S}}(A_{l})+\frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(\mathbf{F}_{\mathcal{S}}^{A_{e}},\mathbf{F}_{\mathcal{T}}^{A_{e}})+\beta \tag{3}\]

It is easy to see that the teacher network simply reduces the source error \(\epsilon_{\mathcal{S}}(A_{l})\) via (2), while the domain shift \(d_{\mathcal{H}\Delta\mathcal{H}}(\mathbf{F}_{\mathcal{S}}^{A_{e}},\mathbf{F}_{\mathcal{T}}^{A_{e}})\) remains large since the network has no component to reduce it. This leads to a suboptimal estimate for the bound on the target error \(\epsilon_{\mathcal{T}}(A_{l})\).
### Student Network
As we have seen in the previous section, the teacher applies domain-specific knowledge (i.e., the source domain) for inference, which may underperform on the target domain due to difference in the data distribution. Ideally, the network should have the ability to perform in different domains. We introduce the student network as a solution.
The student network is analogous to a student who learns several subjects simultaneously in order to perform well in those subjects.
This is different from teachers, who are normally experts in a single subject. This implies that the student network not only desires to be as excellent as the domain-specific teacher on the source data but also aims to perform well on the target data. To this end, the student network is developed by augmenting a teacher network with a discriminator (or domain classifier), following DANN [10]. Accordingly, the student network \(B=\{B_{e},B_{l},B_{d}\}\) consists of a feature encoder \(B_{e}\); a label classifier \(B_{l}\); and a domain classifier \(B_{d}\), which determines whether a sample comes from the source or target domain. \(B_{e}\) extracts the context features \(\mathbf{f}_{j}^{B_{e}}\) from the sentence \(\mathbf{x}_{j}\in D_{\mathcal{S}}\cup D_{\mathcal{T}}\) and feeds them to \(B_{l}\) to learn discriminative features on the source domain, with a classification loss similar to Eqn. (2). Formally, the classification loss is defined as:
\[\mathcal{L}_{y}^{B}=\frac{1}{N_{\mathcal{S}}}\sum_{j=1}^{N_{\mathcal{S}}}\sum_{i=1}^{n}\ell(P(\hat{y}_{ij}^{B_{l}}),y_{ij}) \tag{4}\]
where \(P(\hat{y}_{ij}^{B_{l}})\) is the probability prediction for the word \(x_{ij}\in\mathbf{x}_{j}^{\mathcal{S}}\) and \(y_{ij}\in\mathbf{y}_{j}^{\mathcal{S}}\) is the ground-truth. At the same time, \(\mathbf{f}_{j}^{B_{e}}\) is fed to a domain classifier \(B_{d}\) to learn domain-invariant features through a gradient reversal layer (GRL) [10]. Formally, the GRL \(R_{\lambda}(\cdot)\) acts as an identity function in the forward pass, i.e., \(R_{\lambda}(\mathbf{f}_{j}^{B_{e}})=\mathbf{f}_{j}^{B_{e}}\), and backpropagates the negation of the gradient in the backward pass, i.e., \(\partial R_{\lambda}(\mathbf{f}_{j}^{B_{e}})/\partial\mathbf{f}_{j}^{B_{e}}=-\lambda I\). Consequently, \(B_{e}\) maximizes the domain-classification loss \(\mathcal{L}_{d}^{B}\) through the GRL while \(B_{d}\) minimizes \(\mathcal{L}_{d}^{B}\), making \(\mathbf{f}_{j}^{B_{e}}\) domain-invariant. The domain classification loss \(\mathcal{L}_{d}^{B}\) is defined as follows:
\[\mathcal{L}_{d}^{B}=\sum_{j=1}^{N}d_{j}\log(P(\hat{d}_{j}^{B_{d}}))+(1-d_{j}) \log(1-P(\hat{d}_{j}^{B_{d}})) \tag{5}\]
where \(d_{j}=1\) indicates that the \(j\)-th sentence comes from the source domain, otherwise \(d_{j}=0\); \(P(\hat{d}_{j}^{B_{d}})\) is the domain probability prediction of the sentence-level feature \(\mathbf{x}_{j}\); \(N=N_{\mathcal{S}}+N_{\mathcal{T}}\).
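To make the GRL mechanics concrete, here is a minimal PyTorch-style sketch; the paper provides no code, so the class and function names are our own:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer R_lambda: identity forward, -lambda * grad backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # forward pass: R(x) = x

    @staticmethod
    def backward(ctx, grad_output):
        # dR/dx = -lambda * I: the encoder receives the negated gradient of
        # the domain loss, so it learns to fool the domain classifier.
        return -ctx.lam * grad_output, None

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: domain_logits = domain_classifier(grl(encoder(x), lam))
```

In the student, the encoder output would be passed through `grl` before the domain classifier, so that minimizing Eqn. (5) for \(B_{d}\) simultaneously trains \(B_{e}\) adversarially.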
Suppose \(\mathbf{F}_{\mathcal{S}}^{B_{e}}\) and \(\mathbf{F}_{\mathcal{T}}^{B_{e}}\) are fixed representations of the respective source and target domain data produced by the trained student encoder \(B_{e}\). The upper bound on the student label classifier \(B_{l}\) can be expressed as:
\[\epsilon_{\mathcal{T}}(B_{l})\leq\epsilon_{\mathcal{S}}(B_{l})+\frac{1}{2}d_{ \mathcal{H}\Delta\mathcal{H}}(\mathbf{F}_{\mathcal{S}}^{B_{e}},\mathbf{F}_{ \mathcal{T}}^{B_{e}})+\beta \tag{6}\]
The source error \(\epsilon_{\mathcal{S}}(B_{l})\) is comparable with \(\epsilon_{\mathcal{S}}(A_{l})\) since the student and teacher are trained on the source data using the same network pipeline (compare Eqns. (2) and (4); also empirically demonstrated in Table 5). But the student network has been shown to reduce the domain divergence with a theoretical guarantee via the GRL [10]. This means \(d_{\mathcal{H}\Delta\mathcal{H}}(\mathbf{F}_{\mathcal{S}}^{B_{e}},\mathbf{F}_{\mathcal{T}}^{B_{e}})\) is relatively small, i.e., \(d_{\mathcal{H}\Delta\mathcal{H}}(\mathbf{F}_{\mathcal{S}}^{B_{e}},\mathbf{F}_{\mathcal{T}}^{B_{e}})\leq d_{\mathcal{H}\Delta\mathcal{H}}(\mathbf{F}_{\mathcal{S}}^{A_{e}},\mathbf{F}_{\mathcal{T}}^{A_{e}})\), and therefore leads to a better estimate of \(\epsilon_{\mathcal{T}}(B_{l})\). In other words, the student performs better than the domain-specific teacher on the target data due to the mitigation of the domain shift.
### Self-training through Classifier Disagreement
The student network improves target performance by aligning the source and target data distributions. However, it aligns only the marginal data distribution without considering alignment at the class level [32], leading to suboptimal performance. This situation arises from the lack of labelled target data with which to learn target discriminative features, and the fundamental challenge is precisely that we do not have access to such labels.
To this end, we introduce a strikingly simple approach to collect high-quality pseudo-labelled target samples to improve the class-level alignment of the student network. Fig. 2 shows an overview of our approach, which we refer to as Self-training through Classifier Disagreement (SCD). Suppose the trained student and teacher networks (i.e., trained by Eqns. (2), (4) and (5)) assign pseudo-labels to the unlabelled target data. Eqns. (3) and (6) indicate that the increase in target performance by the student can be explained by the target samples that have shifted toward the domain-invariant feature space (i.e., the student feature space). Our goal is to self-train the student network by leveraging the target samples responsible for the performance improvement in the target domain.
This strategy is only beneficial if the domain shift is large since this will lead to a large set of high-quality pseudo-labelled target samples. Otherwise, both networks will have comparable performance on the unlabelled target domain and the performance gain is minimal. To extend the approach to problems with close similarity between domains, we split the self-training learning problem by paying attention to: 1) \(D_{d}\), the target samples in the student feature space that _disagree_ with their counterpart in the teacher feature space; and 2) \(D_{a}\), the target samples in the student feature space that _agree_ with their counterpart in the teacher feature space.
Formally, let us suppose the student and teacher networks are already trained (i.e., without self-training). As we aim to self-train the student network, we can rewrite the classification loss expressed in (4) as \(\mathcal{L}_{y}^{B(0)}\) to represent the initial classification loss of the student network. Now, let us suppose the teacher and student networks assign the pseudo-labels \(\bar{\mathbf{y}}_{j}^{A_{l}}=\{\bar{y}_{ij}^{A_{l}}\}_{i=1}^{n}\) and \(\bar{\mathbf{y}}_{j}^{B_{l}}=\{\bar{y}_{ij}^{B_{l}}\}_{i=1}^{n}\) for
Figure 2. Overview of our SSL Approach. Both Teacher and Student networks have been earlier trained by Eqn. (2), (4) and (5). The Student network alone is further self-trained through classifier disagreement on the target domain. This figure is best viewed in color.
each sentence \(\mathbf{x}_{j}^{\mathcal{T}}\in D_{\mathcal{T}}\), respectively. Self-training is formulated as training the student network on the set \(D_{\mathcal{S}}\cup D_{d}\cup D_{a}\), where the sets \(D_{d}\) and \(D_{a}\) are defined as follows:
\[\begin{split} D_{d}&:=\{(\mathbf{x}_{j}^{\mathcal{T}},\bar{\mathbf{y}}_{j}^{B_{l}})\mid\exists\,x_{ij}\in\mathbf{x}_{j}^{\mathcal{T}}\text{ s.t. }\bar{y}_{ij}^{B_{l}}\neq\bar{y}_{ij}^{A_{l}}\}\\ D_{a}&:=\{(\mathbf{x}_{j}^{\mathcal{T}},\bar{\mathbf{y}}_{j}^{B_{l}})\mid\forall\,x_{ij}\in\mathbf{x}_{j}^{\mathcal{T}},\ \bar{y}_{ij}^{B_{l}}=\bar{y}_{ij}^{A_{l}}\}\end{split} \tag{7}\]
Here, \(\bar{y}_{ij}^{A_{l}}\in\bar{\mathbf{y}}_{j}^{A_{l}}\) is the teacher network's pseudo-label assignment on \(x_{ij}\in\mathbf{x}_{j}^{\mathcal{T}}\). Let \(r\) index the self-training round. Then the self-training loss for the student network at a specific self-training round \(r\) can be formulated as follows:
\[\begin{split}\bar{\mathcal{L}}_{y}^{B(r)}&=\mathcal{L}_{y}^{B(r)}+\frac{1}{|D_{d}^{(r)}|}\sum_{(\mathbf{x}_{j}^{\mathcal{T}},\bar{\mathbf{y}}_{j}^{B_{l}})\in D_{d}^{(r)}}\sum_{x_{ij}\in\mathbf{x}_{j}^{\mathcal{T}}}\ell(P(\hat{y}_{ij}^{B_{l}}),\bar{y}_{ij}^{B_{l}})\\ &\qquad+\eta\,\frac{1}{|D_{a}^{(r)}|}\sum_{(\mathbf{x}_{j}^{\mathcal{T}},\bar{\mathbf{y}}_{j}^{B_{l}})\in D_{a}^{(r)}}\sum_{x_{ij}\in\mathbf{x}_{j}^{\mathcal{T}}}\ell(P(\hat{y}_{ij}^{B_{l}}),\bar{y}_{ij}^{B_{l}})\end{split} \tag{8}\]
where \(r\geq 1\), \(\eta\in[0,1]\) is a variable to control the weight of the loss on \(D_{a}^{(r)}\). Since the similarity between source and target domains can only be measured empirically, \(\eta\) is treated as a hyper-parameter to be tuned. \(\eta\) is expected to be large when the source and target domains are similar, otherwise small. Notice that when \(\eta=1\), \(\bar{\mathcal{L}}_{y}^{B(r)}\) becomes a special case of the pseudo-labelling loss function expressed in Eq. 15 of [19] with \(\alpha(t)=1\), which we refer to as a standard pseudo-labelling method.
The total loss function \(\mathcal{L}\) for SCD can now be formulated as
\[\mathcal{L}=\mathcal{L}_{d}^{B}+\mathcal{L}_{y}^{B(0)}+\sum_{r\geq 1}\bar{ \mathcal{L}}_{y}^{B(r)} \tag{9}\]
In each self-training round, \(D_{pl}^{(r)}=D_{d}^{(r)}\cup D_{a}^{(r)}\) is generated using the currently trained student network. Self-training stops when \(D_{pl}^{(r)}\) remains approximately unchanged across successive rounds.
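To make the procedure concrete, the following Python sketch implements one possible reading of Eqns. (7)-(9); `predict` and `fit` are assumed interfaces (one BIO tag per word, and loss-weighted training, respectively), not the paper's actual code:

```python
def self_train(student, teacher, source_data, target_sents, eta, max_rounds=10):
    """One possible reading of the SCD self-training loop (Eqns. (7)-(9))."""
    teacher_labels = [teacher.predict(s) for s in target_sents]  # teacher is fixed
    prev = None
    for r in range(1, max_rounds + 1):
        student_labels = [student.predict(s) for s in target_sents]
        if student_labels == prev:  # pseudo-labels stable across rounds: stop
            break
        prev = student_labels
        D_d, D_a = [], []  # Eqn. (7): token-level disagreement vs. full agreement
        for sent, y_t, y_s in zip(target_sents, teacher_labels, student_labels):
            (D_d if y_t != y_s else D_a).append((sent, y_s))
        # Eqn. (8): full weight on D_d, weight eta on D_a, plus the source data.
        student.fit(source_data + D_d, weight=1.0)
        student.fit(D_a, weight=eta)
    return student
```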
## 6 Experiments and Results
### Experimental Setup
#### 6.1.1 Comparison Methods
We evaluate SCD as well as our BERT-based version, BERT-SCD, in this section. Comparison methods include CRF [16], FEMA [42], Hier-Joint [9], RNSCN [38], AD-SAL [24], AHF [45], as well as the BERT-based models \(\text{BERT}_{\text{E}}\)-UDA [11] and \(\text{BERT}_{\text{E}}\)-CDRG [43]. Two strong single-domain OTE models, \(\text{BERT}_{\text{B}}\) and \(\text{BERT}_{\text{E}}\) [11], are trained only on the source domain to investigate the capacity of BERT without domain adaptation. SemBridge [6] is excluded from our comparison since its dataset setup differs from that used in the compared works.
#### 6.1.2 Datasets
We use benchmark datasets from four domains following previous work [24, 38]. The Laptop dataset consists of reviews in the laptop domain taken from the SemEval ABSA challenge 2014 [30]. The Restaurant dataset is the set of all restaurant reviews in the SemEval ABSA challenges 2014, 2015 and 2016 [28, 29, 30]. The Device dataset, originally provided by [15], contains reviews in the device domain. The Service dataset, introduced by [35], contains reviews related to the web service domain. We use the preprocessed data provided by [24]. Dataset statistics are shown in Table 3.
#### 6.1.3 Evaluation Protocol
We follow prior work [11, 24] and evaluate on 10 transfer pairs \(D_{\mathcal{S}}\xrightarrow{}D_{\mathcal{T}}\) from the datasets. We use the test set of the source domain as a development set to tune our models. The test set of the target domain is used for evaluation purposes. We evaluate using exact match2 and compute the Micro-F1 score. Reported results are the average over 5 runs.
Footnote 2: Exact Match: the predicted label sequence should exactly match the gold label sequence
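To make the protocol concrete, here is a small sketch of exact-match Micro-F1 over aspect-term spans (one common reading of the footnote above; the function and example are our own, and spans can be recovered with a BIO decoder like `extract_aspects` shown earlier):

```python
def micro_f1(gold_spans_per_sent, pred_spans_per_sent):
    """Micro-F1 over aspect-term spans; a span counts only on an exact match."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_spans_per_sent, pred_spans_per_sent):
        gold, pred = set(gold), set(pred)
        tp += len(gold & pred)   # exactly matched spans
        fp += len(pred - gold)   # spurious predictions
        fn += len(gold - pred)   # missed gold spans
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# One sentence, gold {pasta primavera, veggies}, predicted {pasta primavera}:
# precision = 1.0, recall = 0.5, F1 ~ 0.67
print(round(micro_f1([["pasta primavera", "veggies"]],
                     [["pasta primavera"]]), 2))
```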
#### 6.1.4 Implementation Details
Following Zhou et al. [45], we use 100-dim fixed pretrained Word2Vec embeddings [27] or BERT-Mini embeddings for word features.3 We use Adam with a \(1e^{-3}\) learning rate, 100 epochs for both Teacher and Student networks, and 50 epochs during self-training; word embedding dropout rate in \([0.3,0.5,0.7]\); BiLSTM dimensions in \([100,200,300]\); adaptation rate \(\lambda\in[1.0,0.7,0.5,0.3]\); batch size in \([32,64,128]\); and \(\eta\in[0.0,0.1,\ldots,0.9,1.0,1e^{-2},1e^{-3}]\). Each batch contains half labelled source and half unlabelled target data. All sentences are padded to a max length \(n\). During self-training, we adopt repeated sampling on the labelled source data with the same size as the pseudo-labelled target data in each epoch.
Footnote 3: We use BERT-Mini implementation from [https://github.com/google-research/bert](https://github.com/google-research/bert)
### Main Results
Table 1 summarizes our main results. We find that neural methods, including RNSCN and Hier-Joint, surpass the hand-crafted feature methods FEMA and CRF, highlighting the importance of leveraging neural networks for the task. We also find that adversarial methods such as AD-SAL and AHF outperform both Hier-Joint and RNSCN, indicating that adversarial learning is effective in mitigating the domain shift. However, by learning target discriminative features, the SOTA method AHF outperforms AD-SAL by about 5.45 F1 on average. We see a similar effect for the SOTA BERT-based model \(\text{BERT}_{\text{E}}\)-CDRG, which also learns target discriminative features: it outperforms the previous SOTA \(\text{BERT}_{\text{E}}\)-UDA by about 4.47 F1 on average. This clearly shows the importance of learning target discriminative features. However, AHF relies on a mean teacher and \(\text{BERT}_{\text{E}}\)-CDRG on a generation model to learn these features. In contrast, we learn an adversarial model (i.e., the Student) via self-training through classifier disagreement. Our results suggest the effectiveness of this approach: we outperform AHF and \(\text{BERT}_{\text{E}}\)-CDRG by an average F1 of 5.92 and 7.08, respectively. In particular, we obtain SOTA results on nine out of 10 transfer pairs, with relative stability compared to AHF.
### Ablation Study
We study the contribution of model components. Table 2 presents our results. The upper portion of the table shows the performance of different ablated models. The lower portion is the Maximum Mean Discrepancy (MMD) [12], which measures the distance between source and target domain distributions.4
First, we note that the Teacher and Student networks have comparable performance on the source domain (see results in Table 5). This means the Student's gain over the Teacher is due to the reduced divergence (measured by MMD). Since Student(MMD) is lower than Teacher(MMD) for all transfer pairs, it is not surprising to see the Student network outperforming the Teacher network. Recall that SCD(\(\eta=1.0\)) is simply standard pseudo-labelling. Although it improves performance, we find that SCD(\(\eta=0.0\)) offers comparable average F1 by learning only on pseudo-labelled samples whose predictions disagree with the Teacher network. Interestingly, on pairs such as \(\mathbb{S}\rightarrow\mathbb{L}\) and \(\mathbb{S}\rightarrow\mathbb{D}\), Teacher(MMD) is already low. Although Student(MMD) becomes smaller due to adversarial learning, SCD(\(\eta=0.0\)) cannot leverage sufficient pseudo-labelled samples to achieve satisfactory performance, because the Student can shift only a few samples to the domain-invariant distribution to bring about a prediction disagreement. But we see the benefit of prediction disagreement on pairs such as \(\mathbb{D}\rightarrow\mathbb{R}\), where Teacher(MMD) is large and the corresponding Student(MMD) is low, improving the Student network from 56.52 to 61.85 (i.e., the performance of SCD(\(\eta=0.0\))).
These results indicate that the pseudo-labelled samples help learn discriminative features, achieving better performance compared to recent works.
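For reference, a minimal sketch of an MMD estimate between source and target feature matrices follows; the paper does not state which kernel it uses, so the linear kernel here is an assumption:

```python
import numpy as np

def linear_mmd(Fs, Ft):
    """Squared MMD with a linear kernel: the distance between the mean
    feature embeddings of the source (Fs) and target (Ft) domains,
    each a (num_samples, feature_dim) array."""
    delta = Fs.mean(axis=0) - Ft.mean(axis=0)
    return float(delta @ delta)

# Identical distributions give a value near 0; shifted ones give a clearly
# positive value.
rng = np.random.default_rng(0)
print(linear_mmd(rng.normal(0, 1, (500, 8)), rng.normal(0, 1, (500, 8))))
print(linear_mmd(rng.normal(0, 1, (500, 8)), rng.normal(1, 1, (500, 8))))
```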
### Sensitivity of Hyperparameter \(\eta\)
We now study the sensitivity of our model to the hyperparameter \(\eta\). At \(\eta=0\), we focus learning on the pseudo-labelled samples from the student network that disagree with those produced by the Teacher network. At \(\eta=1\), we are simply performing standard pseudo-labelling. We study the sensitivity of \(\eta\) particularly on pairs that have a high or low MMD on the Teacher network, that is, the respective \(\mathbb{D}\rightarrow\mathbb{R}\) and \(\mathbb{S}\rightarrow\mathbb{D}\) pairs. With low MMD, the source and target domains are similar; with high MMD, they diverge. The idea is to understand how the domain divergence affects \(\eta\).
Figure 3 shows the results of this experiment, where we report the F1 performance for different values of \(\eta\) on the two pairs. We find that on \(\mathbb{S}\rightarrow\mathbb{D}\), the learning problem moves toward standard pseudo-labelling, since the best performance is achieved at \(\eta=1.0\). However, on \(\mathbb{D}\rightarrow\mathbb{R}\), the best performance is achieved at \(\eta=0\). These results suggest the importance of how attention is placed on the learning of these pseudo-labelled samples. In particular, we observe that when the domain divergence is high, it is beneficial to learn on pseudo-labelled samples that disagree with the Teacher network. On the other hand, when the source and target domains are similar, pseudo-labelling
\begin{table}
\begin{tabular}{l|c c c|c c c|c c|c c|c} \hline Model & \(\mathbb{S}\rightarrow\mathbb{R}\) & \(\mathbb{L}\rightarrow\mathbb{R}\) & \(\mathbb{D}\rightarrow\mathbb{R}\) & \(\mathbb{R}\rightarrow\mathbb{S}\) & \(\mathbb{L}\rightarrow\mathbb{S}\) & \(\mathbb{D}\rightarrow\mathbb{S}\) & \(\mathbb{R}\rightarrow\mathbb{L}\) & \(\mathbb{S}\rightarrow\mathbb{L}\) & \(\mathbb{R}\rightarrow\mathbb{D}\) & \(\mathbb{S}\rightarrow\mathbb{D}\) & AVG \\ \hline CRF & 17.00 & 17.00 & 2.50 & 8.80 & 8.60 & 4.50 & 10.90 & 11.60 & 9.00 & 9.70 & 9.96 \\ FEMA & 37.60 & 35.00 & 20.70 & 10.80 & 14.80 & 8.80 & 26.60 & 15.00 & 22.90 & 18.70 & 21.09 \\ Hier-Joint & 52.00 & 46.70 & 50.40 & 19.80 & 23.40 & 23.50 & 31.70 & 30.00 & 32.00 & 33.40 & 34.29 \\ RNSCN & 48.99 & 52.19 & 50.39 & 30.41 & 31.21 & 35.50 & 47.23 & 34.03 & 46.16 & 32.41 & 40.84 \\ AD-SAL & 52.05 & 56.12 & 51.55 & 39.02 & 38.26 & 36.11 & 45.05 & 35.99 & 43.76 & 41.21 & 43.91 \\ AHF & 54.98 & 58.67 & 61.11 & 40.33 & 47.17 & 45.78 & 56.58 & 36.62 & 48.24 & 44.16 & 49.36\(\pm\)3.23 \\ \hline SCD & **59.52** & **71.40** & **61.85** & **48.30** & **48.67** & **52.58** & **59.68** & **42.40** & **54.45** & **54.01** & **55.28\(\pm\)1.07** \\ \hline \hline BERT\({}_{\text{B}}\) & 54.29 & 46.74 & 44.63 & 22.31 & 30.66 & 33.33 & 37.02 & 36.88 & 32.03 & 38.06 & 37.60 \\ BERT\({}_{\text{E}}\) & 57.56 & 50.42 & 45.71 & 26.50 & 25.96 & 30.40 & 44.18 & 41.78 & 35.98 & 35.13 & 39.36 \\ BERT\({}_{\text{E}}\)-UDA & 59.07 & 55.24 & 56.40 & 34.21 & 30.68 & 38.25 & 54.00 & 44.25 & 42.40 & 40.83 & 45.53 \\ BERT\({}_{\text{E}}\)-CDRG & 59.17 & **68.62** & 58.85 & 47.61 & **54.29** & 42.20 & 55.56 & 41.77 & 35.43 & 36.53 & 50.00 \\ \hline BERT-SCD & **64.10** & 67.61 & **64.75** & **55.83** & 51.33 & **58.92** & **55.64** & **49.76** & **49.62** & **53.29** & **57.08\(\pm\)1.17** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of F1 performance. Best performance is in bold format.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c} \hline Model & \(\mathbb{S}\rightarrow\mathbb{R}\) & \(\mathbb{L}\rightarrow\mathbb{R}\) & \(\mathbb{D}\rightarrow\mathbb{R}\) & \(\mathbb{R}\rightarrow\mathbb{S}\) & \(\mathbb{L}\rightarrow\mathbb{S}\) & \(\mathbb{D}\rightarrow\mathbb{S}\) & \(\mathbb{R}\rightarrow\mathbb{L}\) & \(\mathbb{S}\rightarrow\mathbb{L}\) & \(\mathbb{R}\rightarrow\mathbb{D}\) & \(\mathbb{S}\rightarrow\mathbb{D}\) & AVG \\ \hline SCD & **59.52** & **71.40** & **61.85** & **48.30** & **48.67** & **52.58** & **59.68** & **42.40** & **54.45** & **54.01** & **55.28\(\pm\)1.07** \\ SCD(\(\eta=0.0\)) & 59.18 & **71.40** & **61.85** & 48.22 & 48.52 & 52.25 & 57.81 & 40.13 & 52.78 & 45.95 & 53.80\(\pm\)1.91 \\ SCD(\(\eta=1.0\)) & 57.76 & 67.49 & 59.06 & 47.83 & 46.13 & 51.03 & 55.62 & **42.40** & 53.80 & **54.01** & 53.51\(\pm\)0.96 \\ Student & 55.39 & 63.69 & 56.52 & 47.19 & 45.48 & 50.69 & 52.66 & 41.22 & 52.39 & 44.28 & 50.95\(\pm\)1.23 \\ Teacher & 52.10 & 57.46 & 48.02 & 24.88 & 28.48 & 33.09 & 48.08 & 40.92 & 50.75 & 45.35 & 42.87\(\pm\)1.10 \\ \hline \hline Student(MMD) & 0.041 & 0.040 & 0.046 & 0.035 & 0.094 & 0.080 & 0.054 & 0.042 & 0.045 & 0.043 & 0.052\(\pm\)0.009 \\ Teacher(MMD) & 0.215 & 0.197 & 0.415 & 0.364 & 0.170 & 0.263 & 0.198 & 0.134 & 0.158 & 0.106 & 0.222\(\pm\)0.023 \\ \hline \end{tabular}
\end{table}
Table 2: Ablation Study: F1 Performance of different ablated models (top). Student(MMD) (or Teacher(MMD)) is an estimate of the discrepancy between the learned source and target distributions by the Student (or Teacher).
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Dataset & Domain & Sentence & Train & Test \\ \hline \(\mathbb{L}\) & Laptop & 1869 & 1458 & 411 \\ \(\mathbb{R}\) & Restaurant & 3900 & 2481 & 1419 \\ \(\mathbb{D}\) & Device & 1437 & 954 & 483 \\ \(\mathbb{S}\) & Service & 2153 & 1433 & 720 \\ \hline \end{tabular}
\end{table}
Table 3: Dataset statistics.
seems sufficient for the problem. This model behaviour guides the selection of \(\eta\).
### Quality of Pseudo-Labels
We perform additional experiments to study the quality of pseudo-labels generated by our method. Figure 5 shows the results, where we report the F1 performance of different models for the pairs under study: \(\mathbb{D}\rightarrow\mathbb{R}\) (left) and \(\mathbb{S}\rightarrow\mathbb{D}\) (right). Since SCD(\(\eta=0.0\)) and SCD have equivalent performance on \(\mathbb{D}\rightarrow\mathbb{R}\), and SCD(\(\eta=1.0\)) and SCD have equivalent performance on \(\mathbb{S}\rightarrow\mathbb{D}\), we omit the curves of SCD to clearly show the benefit of pseudo-labelled samples under different strategies. Other compared methods include AHF.
On \(\mathbb{D}\rightarrow\mathbb{R}\), we find that both SCD(\(\eta=0.0\)) and SCD(\(\eta=1.0\)) improve steeply but become unstable after the fifth and eighth epochs, respectively. However, the improvement of SCD(\(\eta=0.0\)) over SCD(\(\eta=1.0\)) is highly notable. This observation points to the fact that, with high Teacher(MMD), prediction disagreement offers high-quality pseudo-labelled samples, particularly in the early rounds of training, to improve performance. However, when Teacher(MMD) is low, as on \(\mathbb{S}\rightarrow\mathbb{D}\), we are not able to take advantage of pseudo-labelled samples with prediction disagreement. Hence, standard pseudo-labelling can outperform prediction disagreement, as seen in the figure. AHF, on the other hand, underperforms, indicating that our SSL approach is effective compared to the mean teacher.
### Feature Visualization
Fig. 4 depicts the t-SNE [36] visualization of features learned using the Teacher, Student and SCD models on the transfer pair \(\mathbb{D}\rightarrow\mathbb{R}\) (1000 instances sampled randomly in each domain). As there are three class labels, namely BIO labels, an ideal model should clearly align the source and target data into three clusters. For the Teacher network, we can observe that the distribution of source samples is relatively far from the distribution of the target samples. Through domain adaptation, the Student network improves the alignment of the source and target samples. However, by learning target discriminative features through SCD, we gradually observe three clusters forming. The results indicate that SCD improves the class-level alignment.
### Case Study
To test the effectiveness of our approach, some case examples from the transfer pair with the largest domain divergence (\(\mathbb{D}\rightarrow\mathbb{R}\)) are selected for demonstration. Table 4 shows the aspect term extraction results on these case examples.
In the first case, we find that the Teacher, Student and SCD are all capable of identifying the aspect terms "service" and "space". As these aspect terms appear in both Device and Restaurant domains, domain adaptation is not necessary to extract the aspect terms. It is therefore not surprising to observe that all models identify the aspect terms in the Restaurant domain.
In the second case example, the aspect terms "ambience", "food" and "catfish" are found in the Restaurant domain and not the Device domain. However, the Teacher was able to extract the aspect terms "ambience" and "food". Introspecting further, we found that 81% of aspect terms extracted by the Teacher in the Restaurant domain are accompanied by opinion words (e.g., "great") that are also present in the Device domain. Hence, the Teacher was able to learn the correspondences between opinion words and aspect terms in the Device domain and use that knowledge to locate "ambience" and "food" in the Restaurant domain. However, both Teacher and Student networks fail to extract the aspect term "catfish". This highlights the importance of learning target discriminative features, as the correspondence between the word "delicious" and an aspect term can be learned only in the Restaurant domain, not in the Device domain. SCD solves this problem by collecting high-quality pseudo-labelled samples in the Restaurant domain. As a result, SCD is able to extract the aspect term "catfish".
In the third case example, we found that the Teacher network failed to identify the aspect terms "pasta primavera" and "veggies", as they do not exist in the Device domain. However, by reducing the domain shift between the two domains, the Student network is able to extract "pasta primavera" but not "veggies". Upon investigation, we found that the opinion word "fresh", which expresses an opinion on "veggies", appears 83 times in the Restaurant dataset and 0 times in the Device dataset. Ideally, by learning target discriminative features, we can learn the correspondences that exist between "fresh" and aspect terms. Such knowledge, as learned by SCD, offers supervisory training signals, enabling SCD to detect the aspect term "veggies".
Finally, in the fourth case example, both the Teacher and Student networks completely failed to detect the aspect term "martinis". While it is no surprise that the Teacher network fails (i.e., "martinis" is not seen during training), the failure of the Student network highlights the limitations of simply reducing the domain shift and suggests the importance of learning target discriminative features for successful cross-domain OTE.
### Performance Comparison on Source Domain
We argued that the difference between the target errors (or F1 performance) of the teacher and student networks can be explained by the \(\mathcal{H}\Delta\mathcal{H}\) divergence when the source errors of these networks are approximately equal. According to Ben-David et al. [2], the source error as well as the divergence can be estimated from finite samples of the source and target domains, under the assumption of uniform convergence theory [37]. Table 5 therefore reports the F1 performance on the source test set. We discover that for each transfer pair, the F1 performance of the Teacher and the Student is approximately equal. This suggests that adversarial learning
performed by the Student to reduce the domain shift has little to no effect on classification on the source data. Most importantly, the results suggest that the difference between the Teacher and Student on the target data is due to the target samples shifted into the domain-invariant space within the Student feature space.
## 7. Conclusion
We have proposed Self-training through Classifier Disagreement (SCD) for cross-domain OTE. We demonstrated that by simultaneously training a Teacher and a Student network, we can benefit from the information that comes from their predictions on the unlabelled target domain. Specifically, by leveraging pseudo-labelled samples on which the Teacher and Student networks disagree, the Student network is significantly improved, even under large domain divergence. This model behaviour, however, leads to a potential limitation. In cases of small domain shifts, the model tends to favor pseudo-labelling (Shou et al., 2018), an SSL approach that risks confirmation bias (Shou et al., 2018) (i.e., prediction errors are fit by the network). Nevertheless, small domain shifts are of little interest in cross-domain learning, since the source and target domains can be considered similar. In the future, we will consider data augmentation strategies to mitigate the confirmation bias brought by pseudo-labelling in such situations (Beng et al., 2018). We believe our model is generic and can be applied to other cross-domain tasks such as cross-domain named entity recognition.
## Acknowledgements
This work was supported in part by the National Key R&D Program of China under Grant 2021ZD0110700, in part by the Fundamental Research Funds for the Central Universities, in part by the State Key Laboratory of Software Development Environment. In addition, SM and NA received support from the Leverhulme Trust under Grant Number: RPG#2020#148.
|
2309.11513 | Gene Expression Patterns of CsZCD and Apocarotenoid Accumulation during
Saffron Stigma Development | Crocus sativus L., otherwise known as saffron, is a highly prized plant due
to its unique triploid capability and elongated stigmas, contributing to its
status as the costly spice globally. The color and taste properties of saffron
are linked to carotenoid elements including cis- and trans-crocin, picrocrocin,
and safranal. In the research carried out, we dedicated our attention to the
gene CsZCD, an important player in the formation of apocarotenoids. Through the
application of real-time polymerase chain reaction to RNA purified from saffron
stigmas at various growth phases, it was determined that the peak expression of
the CsZCD gene coincided with the red stage, which is associated with the
highest concentration of apocarotenoids. The data showed a 2.69-fold
enhancement in CsZCD gene expression during the red phase, whereas a 0.90-fold
and 0.69-fold reduction was noted at the stages characterized by orange and
yellow hues, respectively. A noteworthy observation was that CsZCD's expression
was three times that of the CsTUB gene. Additionally, relative to CsTUB, CsLYC
displayed 0.7-fold and 0.3-times expression. Our investigation provides insight
into the governance of CsZCD during stigma maturation and its possible
influence on the fluctuation in apocarotenoid content. These discoveries carry
significance for the industrial production of saffron spice and underscore the
importance of additional studies on pivotal genes participating in the
synthesis of apocarotenoids. | Zohreh Shams | 2023-09-15T16:10:41Z | http://arxiv.org/abs/2309.11513v1 | Gene Expression Patterns of _CsZCD_ and Apocarotenoid Accumulation during Saffron Stigma Development
###### Abstract
_Crocus sativus_ L., otherwise known as saffron, is a highly prized plant due to its unique triploid capability and elongated stigmas, contributing to its status as the most costly spice globally. The color and taste properties of saffron are linked to carotenoid elements including cis- and trans-crocin, picrocrocin, and safranal. In the research carried out, we dedicated our attention to the gene _CsZCD_, an important player in the formation of apocarotenoids. Through the application of real-time polymerase chain reaction to RNA purified from saffron stigmas at various growth phases, it was determined that the peak expression of the _CsZCD_ gene coincided with the red stage, which is associated with the highest concentration of apocarotenoids. The data showed a 2.69-fold enhancement in _CsZCD_ gene expression during the red phase, whereas a 0.90-fold and 0.69-fold reduction was noted at the stages characterized by orange and yellow hues, respectively. A noteworthy observation was that _CsZCD_'s expression was three times that of the _CsTUB_ gene. Additionally, relative to _CsTUB_, _CsLYC_ displayed 0.7-fold and 0.3-times expression. Our investigation provides insight into the governance of _CsZCD_ during stigma maturation and its possible influence on the fluctuation in apocarotenoid content. These discoveries carry significance for the industrial production of saffron spice and underscore the importance of additional studies on pivotal genes participating in the synthesis of apocarotenoids.
apocarotenoid gene expression, gene regulation, MVA pathway, red gold.

Journal of Pharmaceutical and Bio-Medical Science, ISSN (print): 2767-827X, ISSN (online): 2767-830X, Volume 03, Issue 09, September 2023, Page No: 460-466. DOI: [https://doi.org/10.47191/iipbms/v3-i9-04](https://doi.org/10.47191/iipbms/v3-i9-04), Impact Factor: 6.858
## 1 Introduction
Saffron (Crocus sativus) is a fragrant herb believed to have descended from _C. cartwrightianus_ through natural evolution. Its cultivation for thousands of years has produced a valuable crop known for its triploid nature and long stigmas, making it the world's most expensive spice (Freeman et al., 1999; Livak et al., 2001; Bouvier et al., 2001). Saffron's value lies chiefly in carotenoid components such as cis- and trans-crocin, picrocrocin and safranal, which give it its characteristic flavoring and coloring properties (Rubio et al., 2008; Mir et al., 2012b).
Clonal selection has emerged as an important strategy for improving saffron cultivation, as other methods such as mutagenesis and chromosome doubling have had limited success. Identification and propagation of the best clones with improved stigma properties and enhanced carotenoid biosynthetic capacity are important for the improvement of saffron crops (Sanchez et al., 2013b). These strategies should be deployed all the more now that global warming and drought stress put plant survival in jeopardy (Mir et al., 2012b; Jamshidi et al., 2022).
The intricate process involved in the development of saffron's distinctive color and flavor components is a result of the bio-oxidative cleavage of zeaxanthin. This process is orchestrated by critical enzymes encoded by genes such as PSY, LYC, CCD, BCH, and ZCD. The primary color compound, crocin, and the aromatic substance, safranal, are hypothesized to result from the bio-oxidative cleavage of zeaxanthin through a 7,8 (7',8') cleavage reaction (Pfander and Schurtenberger, 1982). The apocarotenoid biosynthetic pathway encompasses numerous enzymes that catalyze its reactions and are encoded by key genes such as LYC, PSY, BCH, ZCD, and CCD. Lycopene \(\beta\)-cyclase (LYC) facilitates the cyclization of lycopene, leading to the formation of \(\beta\)-carotene with two rings. The hydroxylation of \(\beta\)-carotene in the MVA pathway is facilitated by \(\beta\)-carotene hydroxylase, encoded by the BCH gene, resulting in the production of zeaxanthin (Castillo et al., 2005). The production of color and aroma in saffron results from the bio-oxidative cleavage of zeaxanthin (Rubio-
Moraga et al., 2009; Gomez-Gomez et al., 2010) at the 7,8 (7',8') sites by the zeaxanthin cleavage dioxygenase (_CsZCD_), producing crocetin dialdehyde and picrocrocin. The terminal phase in C. sativus stigmas involves the glucosylation of the cleavage products of zeaxanthin by the glucosyltransferase 2 enzyme, which is encoded by the _CsUGT2_ gene, within the chromoplast of stigmas. These products are then stored in the central vacuole of the fully matured stigmas (Bouvier et al., 2003). This understanding of the biosynthetic procedure can shed light on the maturation process of saffron (Baghalian et al., 2010; Ahrazem et al., 2010).
Soil electrical conductivity (EC) plays an important role in saffron cultivation, as lower EC levels are associated with higher yields (Mirbakhsh and Hosseinzadeh, 2013; Ahrazem et al., 2015; Abdulhabip et al., 2017). Examining the effect of soil electrical conductivity on bulb and flower properties, as well as the effect of saffron tissue culture on gene expression and physiological activity, could improve saffron production to meet global demand (Baba et al., 2017; Bagheri et al., 2017; Mirbakhsh et al., 2023).
Given the many medicinal and commercial uses of saffron, conventional production may not be adequate. For this reason, biotechnological methods such as tissue culture are being investigated for propagation. Culture-induced stigma-like structures (SLS) have become an important step in saffron production, and studies on their effects have helped refine the process (Frusciante et al., 2014; Jain et al., 2016; Gomez-Gomez et al., 2017). Saffron's fascinating history, complex chemical composition, cultivation practices and propagation methods make it the world's most important spice. The aim of this study is to characterize the expression patterns of essential genes involved in apocarotenoid synthesis and to evaluate apocarotenoid accumulation by comparing _CsZCD_ gene expression across three stages of stigma development (Figure 1).
## MATERIAL AND METHODS
### Plant preparation
A total of twenty-five different saffron clones collected from the saffron germplasm of Mashhad province, Iran, were used in this study. The clones were carefully preserved at the Central Temperate Pasteur Institute in Tehran, Iran. To preserve their integrity, freshly harvested stigmas were quickly frozen in liquid nitrogen and stored at −80°C for RNA isolation.
### Extraction procedure
#### RNA extraction
Stigmas in their three developmental phases (yellow, orange, and scarlet) were frozen and subsequently ground into a fine powder using a chilled, sterilized mortar and pestle. This finely ground material was then subjected to the extraction of total RNA employing a kit designed for RNA isolation, provided by Roche Applied Sciences, Penzberg, Germany. This process was conducted in strict accordance with the guidelines provided by the manufacturer in the kit.
### cDNA preparation
Each specimen was processed with 5 \(\upmu\)g of total RNA serving as a template, followed by first-strand cDNA synthesis. This procedure utilized an 18-bp oligo-dT primer and a first-strand cDNA synthesis kit supplied by Roche Applied Science, located in Penzberg, Germany, strictly adhering to the manufacturer's instructions. The synthesized cDNA was subsequently preserved at −20°C for future use in gene expression research.
### Assessing the quality of the strands
The quality of the extracted RNA was evaluated by determining the absorbance at 260 and 280 nm with a NanoDrop spectrophotometer. RNA showing an optical density (OD) 260/280 ratio between 1.2 and 1.5 was selected for cDNA synthesis. The first-strand cDNA was synthesized using 5 \(\upmu\)g of the total RNA template and an 18-bp oligo-dT primer in conjunction with a cDNA synthesis kit supplied by Roche Applied Sciences, Penzberg, Germany, as per the supplier's guidelines. The resulting cDNA was preserved at −20°C for later utilization. The Beer-Lambert law, which provides a correlation between absorbance and concentration, was employed to ascertain the RNA concentration: an A260 reading of 1.0 corresponds to approximately 40 \(\upmu\)g/ml of RNA, and absorbance at 260 nm was therefore used to gauge the RNA concentration in solution. The purity of the RNA preparation was assessed based on the 260/280 absorbance ratio, where pure RNA has an A260/A280 ratio of about 2.1. The NanoDrop® ND-1000 UV-Vis Spectrophotometer, which eliminates the need for cuvettes and capillaries and thereby reduces the required sample volume, was utilized for efficient analysis of small samples. To eliminate carotenoids, saffron stigmas were treated with methanol followed by Tris-HCl (pH 7.5; containing 1 M NaCl). The precipitate was collected via centrifugation, ground again in acetone, and centrifuged. This procedure was repeated until the pellet was devoid of color. The supernatants were then combined and evaporated, and the resulting dried residue was stored at −80°C for future use.
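Given the stated conversion (an A260 of 1.0 corresponding to roughly 40 µg/ml of RNA), the concentration estimate reduces to a one-line calculation; a minimal Python sketch follows, with the dilution factor added as an assumption:

```python
def rna_concentration_ug_per_ml(a260, dilution_factor=1.0):
    """Beer-Lambert-based estimate: A260 of 1.0 ~ 40 ug/ml for RNA."""
    return a260 * 40.0 * dilution_factor

def purity_ratio(a260, a280):
    """A260/A280 ratio; values near 2 indicate pure RNA."""
    return a260 / a280

print(rna_concentration_ug_per_ml(0.75))   # 30.0 ug/ml
print(round(purity_ratio(0.75, 0.36), 2))  # 2.08
```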
### Real Time-PCR
Real-Time PCR analysis was undertaken using Roche Diagnostics' Light-Cycler 480 real-time PCR instrument and Light-Cycler 480 SYBR Green I Master kit, structured in 96-well plates. The SYBR Green I dye in the reaction mixture has specificity for double-stranded DNA. At each DNA synthesis stage, this dye binds to the amplified PCR products, which emit fluorescence upon binding, enabling detection of the amplicon.
To enhance PCR yield, sensitivity, and specificity, we utilized Hot Start protocols in conjunction with the Light-Cycler 480 SYBR Green I Master kit. These protocols employ the FastStart Taq DNA Polymerase, which has certain amino acid residues blocked with heat-labile groups. These groups inactivate the enzyme at ambient temperatures, thereby preventing nonspecific binding during the primer
annealing phase. Pre-incubation at +95\({}^{\circ}\)C for 5 minutes activates the FastStart Taq DNA Polymerase, removing these groups and enabling DNA elongation during amplification. Each reaction was conducted in triplicate and included 5 \(\upmu\)l SYBR Green I Master, 2 \(\upmu\)l PCR-grade water, 2 \(\upmu\)l cDNA, and 0.5 \(\upmu\)l of each of the 10 \(\upmu\)M forward and reverse gene-specific primers, resulting in a total volume of 10 \(\upmu\)l. The reactions underwent a thermal cycling program involving an initial denaturation step at 95\({}^{\circ}\)C for 5 minutes, followed by 40 cycles of denaturation at 95\({}^{\circ}\)C for 15 seconds, annealing at 56.2\({}^{\circ}\)C for 15 seconds, and extension at 72\({}^{\circ}\)C for 20 seconds. We utilized the intercalating SYBR green assay as the reporter system in our study. The SYBR green intercalates between adjacent base pairs of double-stranded DNA and upon light excitation emits a fluorescent signal when bound to DNA. As the PCR cycles advance, the fluorescence signal intensifies, corresponding to the accumulation of amplicons. To ensure that the fluorescence signal only stemmed from the target templates and not from any nonspecific PCR product formation, we performed post-PCR dissociation curve analysis (melting curve analysis) ranging from 60 to 95\({}^{\circ}\)C. The fluorescence data was captured using the Light-Cycler 480 software (version 1.5; Roche Diagnostics). To enable advanced relative quantification across the genotypes and the three stages of stigma development, we employed the 2-\(\Delta\Delta\)Ct method proposed by Livak and Schmittgen in 2001. This method enables the comparison of gene expression levels based on Ct (cycle threshold) values and normalization against reference genes.
The amplified genes along with their forward and reverse primer sequences are displayed in Table 1. The CsTUB gene was amplified as an internal control, using the AMVRT cDNA kit from Roche Applied Science, Penzberg, Germany, as per the instructions in the user manual. For precision, the experiments were conducted twice. Following this, 5 \(\upmu\)l of the PCR products were loaded onto a 1.2% (w/v) agarose gel (Sigma-Aldrich, St. Louis, MO, USA).
## 4 Results and Discussion
### CsZCD expression during stigma development
Using RNA isolated from stigmas at distinct stages, this study employed reverse transcription and real-time PCR techniques. It focused primarily on the CsZCD gene, tracking its mRNA levels throughout the stigma's maturation process (refer to Figure 1). Notably, CsZCD expression peaked during the scarlet phase, echoing the observations made by Castillo et al. (2005). Additionally, an examination of CsZCD's expression at each developmental point (as seen in Figure 1) suggests a tangible link with the concentration of apocarotenoids at those stages, pointing to a coordination between the gene's expression and apocarotenoid build-up, a notion also proposed by Bustin (2000).
Expanding on this, real-time PCR was used to amplify both CsZCD and tubulin genes during the yellow, orange, and scarlet phases. The data revealed a 2.69-fold increase in CsZCD gene expression relative to the tubulin gene in the scarlet phase. In contrast, the orange and yellow stages showed expression levels of 0.90-fold and 0.69-fold relative to tubulin, respectively. Moreover, the PCR data indicated an 8% rise in CsZCD activity from the yellow to the orange stage and a noteworthy 33% jump from orange to scarlet. With the yellow phase expressing merely 25% of what was observed in the scarlet phase, there is a marked amplification in CsZCD activity in the transition from orange to scarlet. These observations mirror the findings from earlier reverse transcription PCR studies in saffron by Castillo et al. (2005) and Rubio et al. (2009).
However, this research distinguishes itself by reporting fold changes in CsZCD gene activity relative to an internal control during stigma development. Furthermore, it is the first to highlight a relationship between CsZCD activity and apocarotenoid accumulation. Notably, no previous attempts to quantify the relative expression of CsZCD via real-time PCR in saffron appear in the literature. The implications of this study are substantial, given its potential to steer commercial saffron spice production towards a desired apocarotenoid concentration. Understanding the dynamics of key genes in apocarotenoid synthesis during stigma development is pivotal to maximizing biotechnological avenues for boosting saffron yield. Moving forward, it is important to validate reference genes and to investigate other significant genes such as lycopene cyclase and \(\beta\)-carotene hydroxylase, setting the stage for comparable data in subsequent studies.
### Mevalonate pathway (MVA) genes expression
RT-PCR was employed to investigate the semi-quantitative expression of CsZCD and CsLYC genes during the scarlet stage of stigma maturation, with CsTUB serving as the reference control. Notably, there were slight differences between the two genotypes; however, the CsZCD gene was more predominantly expressed than the CsLYC gene, as
\begin{table}
\begin{tabular}{l|l l l} _Primer_ & _Forward primer_ & _Reverse primer_ & _Amplicon size (bp)_ \\ \hline _CsZCD_ & GTCTCTCCCCGACATCCAGATC & CTCTATCCGGCCTG & 241 \\ _CsLYC_ & AGATGGTCTCTCATGGATTGGAG & ATCACACACCTCTCATCCTTC & 247 \\ _CsBCH_ & TCGAGTCGGCATCACATC & GCAATACCAAACAGCGTGATC & 495 \\ _CsGT2_ & GATCTGCCCGGTTCGATAAC & GATGCAGAGTTGGGGCCTTG & 400 \\ _CsTUB_ & TGATTCCAACTCGACCAGTGTC & ATACTCATCACCCTGCATC & 225 \\ \end{tabular}
\end{table}
Table 1: **Sequence and amplicon size of primers used for real time PCR analysis.**
illustrated in Figure 2. Even though reverse transcription PCR primarily offers semi-quantitative data about gene expression, its importance in drawing comparative conclusions about gene expression levels cannot be overstated. The method boasts impressive sensitivity and specificity, proving essential in pinpointing rare transcripts or in cases with limited sample availability.
Additionally, quantitative real-time PCR (Q-PCR) techniques were also evaluated. This advanced method recognizes and measures target templates by observing the PCR product's growth, as evidenced by an accompanying fluorescence increase with every PCR cycle. This approach allows for accurate gene or transcript counts during the exponential phase of PCR amplification, correlating them directly with the initial counts of target sequences present. Unlike traditional "end-point" PCR, which only assesses amplicons after the PCR process is completed, measuring during the exponential phase mitigates potential issues (as discussed by Smith and Osborn, 2009). In C. sativus, the developmental journey of the stigma coincides with the switch from amyloplasts to chromoplasts, as well as the creation and storage of apocarotenoids. These are intrinsically connected to the expression trends of the CsZCD and CsLYC genes (as cited by Bouvier et al., 2003; Rubio-Moraga et al., 2009). This research scrutinized the accumulation trends of apocarotenoids, including key compounds like crocetin, picrocrocin, and various forms of crocin. This was done in mature saffron stigmas, noting variations in crocin concentrations and the lengths of stigmas, detailed in Table 2.
The CsZCD gene plays a central role in orchestrating the synthesis of crocetin glucosides and picrocrocin, products that result from the action of zeaxanthin cleavage dioxygenase (Rubio-Moraga et al., 2009). Even though the build-up and makeup of carotenoids throughout stigma maturation are tightly regulated by the synchronized transcriptional activation of the genes related to carotenoid biosynthesis, there was an apparent discrepancy. The
\begin{table}
\begin{tabular}{l|l} _Selections_ & _Stigma length (cm)_ \\ \hline _CITH-S-125_ & 3.74\(\pm\)0.04 \\ _CITH-S-123_ & 4.38\(\pm\)0.06 \\ _CITH-S-124_ & 3.86\(\pm\)0.05 \\ _CITH-S-122_ & 3.98\(\pm\)0.04 \\ _CITH-S-12_ & 3.44\(\pm\)0.05 \\ _CITH-S-121_ & 4.14\(\pm\)0.05 \\ _CITH-S-107_ & 4.84\(\pm\)0.02 \\ _CITH-S-120_ & 3.86\(\pm\)0.05 \\ _CITH-S-104_ & 3.72\(\pm\)0.04 \\ _CITH-S-117_ & 3.3\(\pm\)0.03 \\ _CITH-S-112_ & 3\(\pm\)0.06 \\ _CITH-S-113_ & 3.16\(\pm\)0.05 \\ _CITH-S-119_ & 2.98\(\pm\)0.04 \\ _CITH-S-118_ & 3.22\(\pm\)0.07 \\ _CITH-S-10_ & 2.9\(\pm\)0.05 \\ _CITH-S-103_ & 3.04\(\pm\)0.04 \\ _CITH-S-43_ & 3.16\(\pm\)0.05 \\ _CITH-S-114_ & 3.3\(\pm\)0.11 \\ _CITH-S-115_ & 3.2\(\pm\)0.05 \\ _CITH-S-105_ & 3.08\(\pm\)0.04 \\ _CITH-S-106_ & 3.34\(\pm\)0.09 \\ _CITH-S-102_ & 3.14\(\pm\)0.08 \\ _CITH-S-108_ & 3.4\(\pm\)0.03 \\ _CITH-S-11_ & 3.3\(\pm\)0.03 \\ _CITH-S-116_ & 2.86\(\pm\)0.03 \\ _CITH-S-13_ & 3.42\(\pm\)0.05 \\ _CITH-S-101_ & 3.7\(\pm\)0.03 \\ _CITH-S-3_ & 3.3\(\pm\)0.03 \\ _CITH-S-111_ & 3.12\(\pm\)0.07 \\ _CITH-S-110_ & 2.92\(\pm\)0.04 \\ _CITH-S-76_ & 3.2\(\pm\)0.03 \\ \end{tabular}
\end{table}
Table 2: Variability in stigma length of different saffron (Crocus sativus) selections.
expression trends of the CsZCD and CsLYC genes did not mirror the storage patterns of apocarotenoid compounds. This indicates that the production of these molecules might be overseen by a different mechanism, possibly linked to the expression of carotenoid cleavage dioxygenase (referenced in Rubio-Moraga et al., 2008; Baghalian et al., 2010).
In saffron's apocarotenoid production process, \(\beta\)-carotene and zeaxanthin act as pivotal forerunners through the mevalonate (MVA) route. During the scarlet phase of stigma development, real-time PCR amplified the CsZCD, CsLYC, and Tubulin genes for both genotypes (as illustrated in Figure 2). Remarkably, CsZCD displayed an expression rate three times greater than that of the CsTUB gene. On the other hand, CsLYC showed expression levels of 0.7-fold and 0.3-fold compared to CsTUB.
These findings align well with past studies, especially the one conducted by Mir et al. in 2012. That study observed a 2.69-fold increase in CsZCD gene expression compared to the tubulin gene during the scarlet phase, alongside 0.90-fold and 0.69-fold decreases at the orange and yellow stages, respectively. The pronounced expression of the CsZCD gene, coupled with the simultaneous surge in apocarotenoid concentration as the stigma matures, highlights its influential role in both apocarotenoid formation and saffron stigma development (as referenced in Mir et al., 2012a, b). This data suggests that the difference in apocarotenoid content between the two genotypes could be attributed to variations in expression patterns of the CsZCD and CsLYC genes.
## Conclusion
This study aimed to delve into the expression dynamics of the CsZCD gene as the saffron stigma matures, especially its linkage to apocarotenoid production. Throughout this developmental journey, the CsZCD gene exhibited significant expression variations, reaching its peak during the scarlet phase, emphasizing its critical role therein. A strong correlation was found between the gene's expression and apocarotenoid build-up at each developmental milestone, further substantiating its key regulatory role. Real-time PCR provided a detailed quantitative picture, showcasing an 8% rise in CsZCD activity from the yellow to the orange stage and a 33% rise from the orange to the scarlet stage.
Figure 1: At three different developmental stages of saffron stigmas, gene expression levels of CsZCD (Lane 4–6) and the reference gene Tubulin (Lane 1–3) were assessed.
Figure 2: Semi-quantitative analysis through reverse transcription PCR (RT-PCR) was performed to compare the gene expressions of CsTUB (Lane 1 and 2), CsLYC (Lane 3 and 4), and CsZCD (Lane 5 and 6) in the PAM-S-116 and CITH-S-107 saffron genotypes, respectively. |
2309.11275 | Open-endedness induced through a predator-prey scenario using modular
robots | This work investigates how a predator-prey scenario can induce the emergence
of Open-Ended Evolution (OEE). We utilize modular robots of fixed morphologies
whose controllers are subject to evolution. In both species, robots can send
and receive signals and perceive the relative positions of other robots in the
environment. Specifically, we introduce a feature we call a tagging system: it
modifies how individuals can perceive each other and is expected to increase
behavioral complexity. Our results show the emergence of adaptive strategies,
demonstrating the viability of inducing OEE through predator-prey dynamics
using modular robots. Such emergence, nevertheless, seemed to depend on
conditioning reproduction to an explicit behavioral criterion. | Dimitri Kachler, Karine Miras | 2023-09-20T12:58:51Z | http://arxiv.org/abs/2309.11275v1 | # Open-endedness induced through a predator-prey scenario using modular robots
###### Abstract
This work investigates how a predator-prey scenario can induce the emergence of Open-Ended Evolution (OEE). We utilize modular robots of fixed morphologies whose controllers are subject to evolution. In both species, robots can send and receive signals and perceive the relative positions of other robots in the environment. Specifically, we introduce a feature we call a _tagging system_: it modifies how individuals can perceive each other and is expected to increase behavioral complexity. Our results show the emergence of adaptive strategies, demonstrating the viability of inducing OEE through predator-prey dynamics using modular robots. Such emergence, nevertheless, seemed to depend on conditioning reproduction to an explicit behavioral criterion.
Open-Ended Evolution, Predator-Prey, Evolutionary Robotics, Modular Robots
## I Introduction
The longest evolutionary experiment has been continually running on planet Earth for the past 3.7 billion years [1]: natural life. Throughout this long period, organisms have only been preying on each other for the last 1.2 billion years. Evolutionary Computation (EC), on the other hand, has only existed for the last 70 years [2] and has spawned a variety of different approaches. Nevertheless, the dominant paradigm of EC has been the inversion of the concept of fitness from a metric measured _a posteriori_ to a metric measured _a priori_: from being considered fit in case your traits allow you to survive and reproduce to being allowed to survive and reproduce in case you possess certain traits.
While effective in diverse domains, this paradigm is limited because it lacks crucial aspects that would allow the emergence of complexity [3]: not all beneficial processes translate into a numerical gain or are adequately represented by a singular scalar value.
To address the aforementioned challenges, a different paradigm has been explored in the literature, which is closer to natural evolution: Open-Ended Evolution (OEE) [4, 5]. Two core axioms unique to Open-Ended Evolution state that concepts of fitness and generations are applied implicitly rather than explicitly [6]. Firstly, there is no actual fitness function to judge an individual through selection. As a result, the selection process for inheriting genes must not directly discriminate against solutions; discrimination may only arise through indirect organic means - mechanisms that arise as a result of the system dynamics.
Specifically within EC, attempts at OEE started with the Artificial Life (ALife) community, using artificial worlds such as Tierra [7], where programs could self-replicate and compete for computation power and memory space. Another example was Polyworld [8], where agents could eat or mate with each other. Polyworld exhibited predator-prey dynamics with open-ended characteristics, but the underlying physical representation for the agent bodies was 2D polygons. Beyond 2D worlds, OEE has also been applied to wheeled robots in hardware [9] and even to modular robots [10], which are more challenging to work with than wheeled ones. However, we are unaware of any work combining modular robots and predator-prey dynamics in the context of OEE.
Therefore, the present work investigates how a predator-prey scenario can induce the emergence of OEE. Specifically, we introduce a novel systemic feature that we call _tagging system_: this system modifies how individuals can perceive each other and is expected to promote behavioral complexity.
Because the need for a _minimum criterion_ has been suggested before in the literature [3], we hypothesize that: _in the current predator-prey scenario, the emergence of OEE depends on the existence of an explicit behavioral criterion to allow reproduction_.
## II Methodology
The code to reproduce all experiments is available on GitHub1. All experiments were repeated 10 times for statistical significance. All parameters were chosen empirically. Each experiment was run for 6000 seconds with a timestep of 0.0012. A video showing robots during one of the experiments is available2.
Footnote 1: [https://github.com/NanoNero1/revolve2-multi](https://github.com/NanoNero1/revolve2-multi)
Footnote 2: [https://youtu.be/cxv-cwoAk0g](https://youtu.be/cxv-cwoAk0g)
### _Robot Body and Environment_
The experiments are simulated using the Mujoco physics engine, wrapped by a robot framework called Revolve [11, 12]. The robot bodies are modular bodies [13] constructed by connecting blocks and joints (motors). For an initial proof of
concept, the current experiments utilize a fixed body configuration (Fig. 1), but the long-term view of future work envisions allowing these configurations to evolve. In all experiments, the robots evolve in a square, flat plane surrounded by walls.
### _Robot Controller_
The robot controller comprises two components: the _targeted steering_ and the _cognitive brain_. The targeted steering influences the active behavior of the robot by controlling the motors. The cognitive brain influences both active and passive behaviors: it steers the gait that is generated with the targeted steering (active) and also changes a phenotypic trait that does not directly affect robot behavior (passive).
#### Iii-B1 Targeted Steering
The targeted steering controller allows a robot to locomote to a specific target. In this case, one single controller was evolved in pre-experiments, and every robot received one independent copy of it.
This controller is a combination of a Central Pattern Generator (CPG) [14] with a steering mechanism that adjusts the outputs of the CPG. CPGs are networks capable of producing coordinated rhythmic activity patterns without sensory feedback inputs. Given a timestep, the CPGs generate values used to set the rotations of the motors.
The usual approach in studies with predator-prey dynamics is to give predators direct access to the location of their closest prey and vice versa [15, 16]. In opposition, we provide individuals with only the relative angle of a nearby individual of the opposite species. The steering mechanism [17] adjusts the rotation of certain motors initially produced by the CPG using a target angle \(\alpha\) derived from this relative angle - this \(\alpha\) indicates where the robot 'wants' to go. For example, if a robot has a positive \(\alpha\), it wants to go to the right. Thus, the robot should slow down the motors on its right side by a scaling factor \(\delta\). In the current experiments, \(\delta\) is calculated with Eq. 1, and the \(\alpha\) is generated by the cognitive brain. To define what is left and right, an axis of symmetry is drawn diagonally (45°) through the robot head. The determination of which side is designated as right or left adheres to a predefined frame of reference that establishes the front orientation.
\[\delta=(\frac{\pi-|\alpha|}{\pi})^{2} \tag{1}\]
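A minimal sketch of this scaling in code (the motor-naming scheme and dictionary interface are our assumptions for illustration, not the Revolve API):

```python
import math

def steering_scale(alpha: float) -> float:
    """Scaling factor delta from Eq. (1), for a target angle alpha in radians."""
    return ((math.pi - abs(alpha)) / math.pi) ** 2

def steer(cpg_targets: dict, alpha: float) -> dict:
    """Slow down the motors on the side the robot 'wants' to turn towards."""
    delta = steering_scale(alpha)
    side = "right_" if alpha > 0 else "left_"
    return {name: rot * delta if name.startswith(side) else rot
            for name, rot in cpg_targets.items()}

# A positive alpha (turn right) scales down the right-side motors only
print(steer({"left_hip": 0.4, "right_hip": 0.4}, alpha=0.7))
```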
The values of the CPG parameters were produced with Compositional Pattern Producing Networks (CPPNs) [18, 19] evolved using the following parameters: 20 generations, 20 parents, 20 children, 20 population size, round-robin selection tournament, and 50 seconds of simulation time. The fitness function was the displacement towards the negative x-axis.
#### Iii-B2 Tagging System
We introduce a tagging system that limits the ability of robots to perceive other robots. Individuals may only perceive each other if they have the same tag. One useful analogy for the concept of a tag is that an individual may change passive phenotypic traits perceivable by their adversary, e.g., change their skin color to a color visible or invisible to the eyes of the adversary.
The tagging system allows each individual to choose their tag from either -1 or 1. To avoid erratic behavior, there is a cool-down of 50 seconds before a robot may switch its tag again. The motivation behind this system is to add complexity to the hunting process: it introduces a challenge to both predators and prey. For example, it may create situations such as the following: a predator is hunting down a prey, but mid-chase, the prey changes its tag, rendering itself invisible to the predator.
#### Iii-B3 Cognitive Brain
The cognitive brain is a Fully Connected Neural Network that decides the tag of the robot and the target angle to use with the targeted steering (Fig. 2). The activation function utilized is the hyperbolic tangent function.
_Inputs._ All inputs are bound inside [-1, 1]. Inputs _Angle_ and _Distance_ concern the closest adversary of an individual, whereas the input _Tag Ratio_ concerns the population. The term adversary will be recurring in later sections, and we define it as the closest observable robot of the opposite species. Therefore, the adversary of a prey is the closest predator within the same tag. There might be a predator even closer, but on a different tag: this is not the adversary.
The _Angle_ input is a value set to -1 if the adversary is on the left side of the robot and 1 if it is on the right.
Fig. 1: A _spider-shaped_ robot body simulated in Mujoco.
Fig. 2: Architecture of the Cognitive Brain: a fully connected network.
The _Distance_ input is the distance to the adversary divided by the maximum terrain bounds.
The _Tag Ratio_ input is defined with Eq. 2 and calculates the ratio of how many robots are tagged as 1 relative to the total amount of robots. This variable informs individuals about the balance among the different tags within the population. We anticipate that this can be beneficial for a robot in making adaptive decisions regarding when to alter its tag. For instance, knowing that an excessive number of individuals share the same tag holds significance for a predator, as this disparity might indicate an overabundance of other predators within the same tag, resulting in heightened competition.
\[TR=\frac{P-\frac{N}{2}}{N} \tag{2}\]
where \(P\) is the number of robots tagged as 1 and \(N\) is the total number of robots.
_Outputs._ The _Target Angle_ output is set to 0.7 radians if its output neuron is positive and -0.7 radians if it is negative - this value is provided to the targeted steering. The _Tag_ output is set to 1 if its output neuron is positive and -1 if it is negative.
Furthermore, the cognitive brain is not queried for outputs at every timestep but every 2 seconds. Smaller intervals caused angle switches to be too erratic, while the chosen value produced smoother locomotion.
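A minimal sketch of this controller (the hidden-layer width is an assumption; the paper fixes the three inputs, the two thresholded outputs, and the tanh activation, but not the layer sizes):

```python
import numpy as np

class CognitiveBrain:
    """Sketch of the fully connected cognitive brain of Fig. 2."""

    def __init__(self, hidden: int = 4, rng=None):
        rng = rng or np.random.default_rng()
        self.w1 = rng.uniform(-1.0, 1.0, (hidden, 3))  # Angle, Distance, Tag Ratio
        self.w2 = rng.uniform(-1.0, 1.0, (2, hidden))  # Target Angle, Tag

    def query(self, angle: float, distance: float, tag_ratio: float):
        h = np.tanh(self.w1 @ np.array([angle, distance, tag_ratio]))
        out = np.tanh(self.w2 @ h)
        target_angle = 0.7 if out[0] > 0 else -0.7  # radians, fed to the steering
        tag = 1 if out[1] > 0 else -1
        return target_angle, tag

brain = CognitiveBrain()
print(brain.query(angle=1.0, distance=0.3, tag_ratio=-0.1))
```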
### _Birth_
There is a fixed number of robot bodies in the environment: thirty robots. At birth, a new controller is attributed to a robot body already in the environment: this is possible when the controller previously inhabiting that body dies. Robots can be born in different ways, as described below.
_Initialization._ When the experiment starts, 30 cognitive brain controllers are initialized with entirely randomized weights between -1 and 1 drawn from a uniform distribution. Each controller is attributed to one of the available bodies: 16 are prey, and 14 are predators.
_Reproduction._ There are two forms of creating new controllers. First, there is a 1/3 chance of creating random controllers. This introduces new solutions to the gene pool, improving diversity. Second, there is a 2/3 chance of a new controller being implicitly sourced from an existing genotype. Implicit means that we do not use any explicit fitness function to evaluate individuals. When a predator catches a prey, the predator reproduces: the prey dies, and the offspring takes over the body avatar of the prey. As for prey reproduction, it happens when a predator dies: if any prey is within a certain minimum distance of the predator, the closest prey to this predator reproduces. Similarly, the offspring of the prey takes over the body avatar of the predator. The offspring resulting from reproduction undergoes mutation by perturbing the network weights with values drawn from a normal distribution, as sketched below.
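A minimal sketch of a birth event, reusing the CognitiveBrain sketch above (the mutation scale `sigma` is an assumption; the paper only states that weights are perturbed with normally distributed values):

```python
import copy

import numpy as np

def new_controller(parent_brain, sigma=0.1, p_random=1/3, rng=None):
    """1/3 chance of a fresh random brain, 2/3 chance of a mutated parent copy."""
    rng = rng or np.random.default_rng()
    if rng.random() < p_random:
        return CognitiveBrain(rng=rng)   # random controller: adds diversity
    child = copy.deepcopy(parent_brain)  # implicit inheritance from the parent
    child.w1 += rng.normal(0.0, sigma, child.w1.shape)
    child.w2 += rng.normal(0.0, sigma, child.w2.shape)
    return child
```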
### _Death_
_Prey Death._ A prey dies when a predator catches it. This happens when they find themselves within one unit of each other, regardless of whether they are on the same tag (prey is caught despite not being seen). To avoid 'spawn-killing', a newborn prey must first move a certain minimum distance away from any predators before it becomes active and is eligible to be caught. Before this condition is met, the prey wanders around the map in a state of inactivity and is invisible to all predators. By 'spawn-killing' we mean that the prey might have been born and placed in the environment (spawned) too close to predators. Additionally, there is an alternative mechanism by which prey may die: if the number of predators is nearing extinction, i.e., there are fewer than 7 predators, a prey is chosen to be sacrificed entirely at random. Similarly, no prey can die if there are only 7 prey in the population. These two constraints guarantee that neither species will become extinct.
_Predator Death._ Conversely, predators die based on a measure of hunger: the number of timesteps that have passed since the predator was born or since it last caught a prey. It is only possible for predators to die (death procedure) on certain timesteps, and the predator with the highest hunger is chosen to die.
The timesteps in which the death procedure should occur are defined using an interval \(\Delta\) (Eq. 3): at timestep 0, a \(\Delta\) is calculated based on the number of predators, and each next death procedure occurs after \(\Delta\) timesteps. Before the death procedure starts, the measure of hunger is updated, and the \(\Delta\) is updated after the death procedure ends. The \(\Delta\) depends on the number of predators: the more predators, the lower the interval. This is meant to tackle overcrowding. Conversely, under-crowding is tackled because \(\Delta\) also sets a limit for the death procedure to occur.
\[\Delta=25-p \tag{3}\]
where \(p\) is the number of predators. This equation guarantees that the \(\Delta\) is never below 2 (timesteps) because there must be a minimum of 7 prey in the population and therefore, a maximum of 23 predators.
Additionally, there is an exception to dying from a high hunger measure: if a predator is within a certain minimum distance of an observable prey, it will not die. In this case, the next oldest predator dies. This measure was implemented for situations when a predator might need just a little more time when it is very close to catching a prey. Interestingly, if the prey being hunted suddenly switches tags, it could create a scenario where the predator instantly dies because it no longer falls within this exception.
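A minimal sketch of the predator death procedure (the `min_dist` value and the dictionary layout of the robots are our assumptions; selecting by hunger approximates the 'next oldest' rule, since hunger counts timesteps since birth or the last catch):

```python
import math

def death_interval(num_predators: int) -> int:
    """Delta from Eq. (3): more predators means more frequent death procedures."""
    return 25 - num_predators

def select_predator_to_die(predators, prey, min_dist=2.0):
    """Pick the hungriest predator, skipping any within `min_dist` of an
    observable (same-tag) prey. Each robot is a dict with keys
    'hunger', 'pos' and 'tag'."""
    def near_observable_prey(pred):
        return any(p["tag"] == pred["tag"]
                   and math.dist(p["pos"], pred["pos"]) < min_dist
                   for p in prey)

    for pred in sorted(predators, key=lambda p: p["hunger"], reverse=True):
        if not near_observable_prey(pred):
            return pred
    return None  # every predator is near a prey; nobody dies this round
```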
### _Metrics for system dynamics_
We utilize multiple metrics to analyze the system dynamics.
_Attribution._ This metric measures the performance of a species by calculating how much of the success or failure of the species can be attributed to selection pressure, as opposed to just an effect of randomness. The Attribution for the prey and predators is calculated differently. For prey, it means _failure_ in avoiding the predator; it is calculated through Eq. 4.
\[a=\frac{p_{i}}{p_{t}} \tag{4}\]
where \(p_{i}\) is the number of prey who were caught and had an inherited genotype, and \(p_{t}\) is the total number of prey that were caught.
For predators, it means _success_ in catching the prey; it is calculated through Eq. 5.
\[a=\frac{d_{i}}{d_{t}} \tag{5}\]
where \(d_{i}\) is the number of predators who caught any prey and had an inherited genotype, and \(d_{t}\) is the total number of predators that ever caught a prey.
_Velocity._ This metric tracks whether a robot moves in a way to get closer (predator chases) or further away (prey evades) from its adversary. We measure the Velocity by first calculating the distance between the position of a robot (P1) and the position of its adversary (A1). We verify whether the robot moves closer to or further away from the adversary position by checking the new position of the robot (P2) after 6 timesteps (12 seconds). If the distance between P2 and A1 is smaller than it was before, the robot moved in a way to get closer to its adversary, and vice versa. The distance difference is then divided by the time delta, 12 seconds, to obtain the final value representing the Velocity (Eq. 6).
\[v=\frac{dist(P_{1},A_{1})-dist(P_{2},A_{1})}{12} \tag{6}\]
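In code, the metric amounts to the following (a minimal sketch; positions are 2D tuples):

```python
import math

def velocity_metric(p1, a1, p2, dt=12.0):
    """Velocity from Eq. (6): positive if the robot moved closer to its
    adversary's old position A1, negative if it moved away."""
    return (math.dist(p1, a1) - math.dist(p2, a1)) / dt

# A robot at (0, 0) whose adversary is at (10, 0) moves to (6, 0) within 12 s:
print(velocity_metric((0, 0), (10, 0), (6, 0)))  # 0.5 (approaching)
```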
_Sticking to Walls Ratio._ Due to the terrain being enclosed by four walls and the agents being unaware of their surroundings, robots may get stuck or move closely along the walls. We deem being stuck to a wall as being within one unit of distance of any of the four walls. For each timestep, we take the fraction of robots stuck to walls for each species. As an example, at time = 1250s, the prey had 10 out of 15 of their robots stuck to a wall. Therefore, their Sticking to Walls Ratio was \(10/15\).
_Tag Symmetry._ The Tag Symmetry is of interest because it may support the existence of adaptive/reactive behavior. For instance, if predators have a consistently high tag average while the prey have a low tag average (or vice versa), this might mean that species are reacting to each other. For instance, the prey are trying to be invisible to the predator, so their tag is on average different from the average of the predators. It is important to highlight that this metric is unable to isolate active behavior from system dynamics. For example, we do not know if a certain value for this metric is due to the prey trying to be invisible or because all prey visible to the predator have been captured.
The average tag is the value calculated from either prey or predators by considering the mean value of their tags. For example, if we had four predators with tags (-1,1,1,1), the average tag would be \((-1+1+1+1)/4=0.5\). The more positive this value, the more individuals are tagged as +1.
The Tag Symmetry is calculated in every timestep by summing the average tags of the two species (including only inherited genotypes), e.g., for tag averages 0.75 and -0.71: \(|0.75+(-0.71)|=0.04\). Values close to zero indicate higher symmetry. To establish a baseline, we contrast this symmetry score with a score obtained from 100,000 pairs of values uniformly distributed at random. This yielded a symmetry score of 0.66, equivalent to the average distance between two points on a line segment with a length of L=2 (from -1 to 1), which is L/3.
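A minimal sketch of the metric together with a Monte Carlo check of the random baseline quoted above:

```python
import numpy as np

def tag_symmetry(prey_tags, predator_tags):
    """|mean prey tag + mean predator tag|: values near 0 indicate symmetry."""
    return abs(np.mean(prey_tags) + np.mean(predator_tags))

print(tag_symmetry([1, 1, -1, 1], [-1, -1, 1, -1]))  # 0.0, perfectly symmetric

# Baseline: average |u + v| for u, v uniform on [-1, 1] is 2/3, i.e. L/3 for L = 2
rng = np.random.default_rng(0)
u, v = rng.uniform(-1, 1, (2, 100_000))
print(np.mean(np.abs(u + v)))  # ~0.666
```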
## III Results
### _Attribution_
This performance metric produces somewhat surprising results. Each robot, generated either through the random search or through reproduction, could succeed (in catching or evading) by chance. Since two thirds of new controllers are sourced from existing genotypes, roughly 66% of the successful individuals of a species would carry inherited genotypes even by chance alone - without any evidence that selection pressure took place towards a lineage of successful individuals. Therefore, we utilize 66% as our baseline. This baseline is shown as the red line in Fig. 3. To present increased performance, the predators would need to score higher than this baseline, while the prey would have to score lower. In the case of the predators, they have an average Attribution of 0.725. Therefore, the predators outperform the randomized solutions. On the contrary, prey performed indistinguishably from random solutions, with an Attribution of 0.65.
### _Velocity_
A very effective strategy for predators is to move faster toward their prey and for prey to move faster away from predators. While it was expected that both species would have evolved to become better at chasing or avoiding each other, this happened much more successfully for the predators: predators move faster than prey towards the expected direction (Fig. 4). Prey move on average at 0.67 cm/s from their adversaries,
Fig. 3: Differences in Attribution compared to a random baseline: the red line represents the random baseline. Higher values are better for predators, while lower values are better for prey.
while predators approach their adversaries on average at 1.95 cm/s - nearly three times as fast. Furthermore, the evolving predators (generated through reproduction) become better than randomly generated predators after less than 1000 seconds and maintain this superiority until the end. On the other hand, the prey is, at multiple points in time, no better than random.
Note that although the targeted steering is pre-evolved, a better cognitive brain can, to some extent, increase velocity through a more assertive angle control towards the expected direction.
### _Sticking to Walls Ratio_
Generally, prey favored sticking to nearby walls more than predators did (Fig. 5). The average ratio for predators, marked by the purple line, is 0.39. On the other hand, the average ratio for prey, marked by the blue line, is 0.60. This is about 50% more.
While the discrepancy in their Sticking to Walls Ratio is a concrete phenomenon, it is unclear whether this behavior is beneficial. Perhaps the difference in behavior is merely a downstream consequence of the following chain: predators learn to track down prey; the prey end up at the wall while trying to evade; and the prey remain unable to dodge the wall.
### _Tag Symmetry_
The Tag Symmetry distribution across all ten runs is shown in Fig. 6. The obtained average tag symmetry is 0.43, which is about 65% of the random baseline of 0.66. Additionally, the minimum value is 0.33 and the maximum is 0.53, so that 0.66 falls completely outside the range: we can confidently conclude that the tag averages are more symmetric than random. However, an average of 0.43 is still not very high. Therefore, we do not use this as evidence to support the idea that species are reacting to each other.
## IV Conclusion
This work has demonstrated how open-ended evolution can take place in a predator-prey scenario using modular robots. We presented evidence to support the emergence of evolved behavior beneficial to the survival of a species: the predators evolved towards higher effectiveness in capturing the prey. This was achieved without directly appealing to explicit selection mechanisms. On the other hand, although the adaptive process of the predators was supported by clear evidence, the same did not occur for the prey. The ability of the prey to evade the predator was not significantly better than random.
Notably, evading or chasing an adversary requires multiple partial behaviors, e.g., changing tags duly and moving away/toward the adversary (velocity). Therefore, it is possible that a species fails in accomplishing the behavior as a whole but succeeds in accomplishing sub-behaviors. Nevertheless, while the prey species presented some evidence of behavioral improvement regarding their velocity in evading the predator, this improvement was half of the time not significantly better than random.
One possible explanation for this shortcoming is the reproduction criterion utilized by the prey. To recapitulate, although there was no explicit goal applied through any selection mechanisms, the reproduction of the predators was conditioned on an explicit behavior: catching the prey. The reproduction of the prey, on the other hand, depended on an implicit behavior: being close to a predator when that predator happens to die. These implicit versus explicit behaviors might have created different levels of selection pressure, so that there was more pressure for the predators to improve than for the prey: it is hard to determine whether a prey reproduced because it had the ability to stay close enough to a predator without being caught, or whether it was close enough to the predator because it was unable to evade it. Additionally, the lack of an aging process for prey death might have influenced prey adaptation by creating less selection pressure for the prey.
At this point, it is important to delineate two relevant concepts: reactive behaviors - the behavior of a species A changes in reaction to a change in the behavior of a species B, but without the behavior of A necessarily becoming superior/dominant to the behavior of B; and co-evolution - an arms race in which the behaviors of species A and B become alternately superior to each other. Critically, while the predators did improve their success, the prey did not improve comparably, and therefore we cannot claim that co-evolution was achieved. As for reactive behavior, co-evolution is not necessarily required for it to occur. Nonetheless, the Tag Symmetry metric, whose purpose was to explore whether reactive behavior occurred via tag symmetry, did not yield convincing evidence either.
Future work should explore more pressure-creating conditions for prey reproduction through a reproduction criterion defined by explicit behaviors. This is expected to promote prey success and foster an arms-race. Furthermore, the locomotion abilities in the current experiments were pre-evolved, and not a result of OEE: future work should also include targeted locomotion as a behavior subject to open-ended emergence. Finally, while the current experiments used a fixed modular morphology, future experiments should allow morphological evolution.
To conclude, we have presented evidence that the initial hypothesis regarding _minimum criterion_ is true in the current system: considering that there are no explicit selection mechanisms, the emergence of OEE depended on including an explicit behavioral criterion to allow reproduction. At the same time, the experimental setup does not allow isolating the effects of reproduction mechanisms from the lack of a prey aging process.
|
2308.00051 | The Arc-Floer conjecture for plane curves | In arXiv:1911.08213 it was conjectured that the compactly supported
cohomology of the $m$-th restricted contact locus of an isolated hypersurface
singularity coincides, up to a shift, with the Floer cohomology of the $m$-th
iterate of the monodromy of the Milnor fiber. In this paper we give an
affirmative answer to this conjecture in the case of plane curves. | Javier de la Bodega, Eduardo de Lorenzo Poza | 2023-07-31T18:14:13Z | http://arxiv.org/abs/2308.00051v1 | # The arc-Floer conjecture for plane curves
###### Abstract.
In [4] it was conjectured that the compactly supported cohomology of the \(m\)-th restricted contact locus of an isolated hypersurface singularity coincides, up to a shift, with the Floer cohomology of the \(m\)-th iterate of the monodromy of the Milnor fiber. In this paper we give an affirmative answer to this conjecture in the case of plane curves.
## 1. Introduction
Let \(f\in\mathbb{C}\{z_{0},\ldots,z_{n}\}\) be a convergent power series that defines an isolated hypersurface singularity at the origin. The celebrated Monodromy Conjecture [7, §2.4] aims to establish a connection between the _contact loci_ of \(f\) (of algebraic nature) and the _Milnor fiber_ of \(f\) (of topological nature). In this paper we follow the philosophy of [4] to strengthen the connection between these two invariants from a different point of view.
First, let us recall what these objects are. For every integer \(m\geq 1\), we define the \(m\)_-th restricted contact locus_ of \(f\), denoted by \(\mathcal{X}_{m}\), to be the set of \(m\)-jets in \(\mathbb{C}^{n+1}\) centered at \(0\) with intersection multiplicity \(m\) with \(f\) and angular component \(1\). This family of arcs defines a complex affine algebraic variety (possibly singular).
On the other hand, it is a classical result by Milnor [15, Ch. 5] that there exist \(0<\delta\ll\varepsilon\ll 1\) such that
\[f:f^{-1}(\partial\mathbb{D}_{\delta})\cap\mathbb{B}_{\varepsilon}\longrightarrow \partial\mathbb{D}_{\delta}\]
is a \(C^{\infty}\)-locally trivial fibration, called the _Milnor fibration_. The fiber of this fibration is known as the _Milnor fiber_ of \(f\), denoted by \(\mathbb{F}\). A key observation is that \(\mathbb{F}\) has a natural symplectic manifold structure. More precisely, it is a Liouville domain when it is endowed with the restriction of the \(1\)-form
\[\lambda_{\text{std}}=\frac{1}{2}\sum_{i=0}^{n}(x_{i}dy_{i}-y_{i}dx_{i})\]
of \(\mathbb{C}^{n+1}\), where \(z_{i}=x_{i}+\mathrm{i}y_{i}\). Moreover, it admits a compactly supported exact monodromy \(\varphi:\mathbb{F}\to\mathbb{F}\) for the Milnor fibration, see [14, §3], [11, §5]. In this setting Seidel [18, §4], [14, §4] introduced a cohomology theory known as the _Floer cohomology of \(\varphi\)_, denoted by \(\operatorname{HF}^{*}(\varphi,+)\).
In [4] the authors conjectured that, up to a shift, the cohomology with compact support of the restricted \(m\)-th contact locus coincides with the Floer cohomology of \(\varphi^{m}\). The conjecture was made based on the following evidence:
1. As explained in [14, §4], the Euler characteristic of \(\operatorname{HF}^{*}(\varphi^{m},+)\) is \((-1)^{n}\Lambda_{\varphi^{m}}\), where \(\Lambda_{\varphi^{m}}\) denotes the Lefschetz number of \(\varphi^{m}\). Hence the conjecture would imply that the Euler characteristic of \(\mathcal{X}_{m}\) is \(\Lambda_{\varphi^{m}}\). Indeed, this was proven by Denef and Loeser [8].
2. Using an \(m\)-separating log resolution, McLean [14] constructed a spectral sequence converging to \(\operatorname{HF}^{\bullet}(\varphi^{m},+)\). In turn, Budur, Fernandez de Bobadilla, Le and Nguyen [4] constructed an analogous spectral sequence converging to \(H^{\bullet}_{c}(\mathcal{X}_{m},\mathbb{Z})\).
3. Also in [4], the authors showed the conjecture holds when \(m\) equals the multiplicity of \(f\).
However, there was no family of singularities supporting the conjecture. The main purpose of this paper is to provide the first such example. Namely, we show the conjecture holds for \(n=1\), i.e. the case of plane curves:
**Theorem 1.1**.: _Let \(f\in\mathbb{C}\{x,y\}\) a reduced convergent power series such that \(f(0,0)=0\). For every integer \(m\geq 1\) there is an isomorphism_
\[H_{c}^{\bullet+2m+1}(\mathcal{X}_{m},\mathbb{Z}/2\mathbb{Z})\cong\operatorname{HF}_{\bullet}(\varphi^{m},+).\]
As the reader may have noticed, the statement of Theorem 1.1 involves a homological version of Seidel's Floer cohomology. Indeed, based on the work of Uljarevic [20, §2], Fernandez de Bobadilla and Pelka [11, §6.2.3] introduced the _Floer homology_ of \(\varphi\), denoted by \(\operatorname{HF}_{\bullet}(\varphi,+)\).
**Remark 1.2**.: The shift in Theorem 1.1 differs from the shift of the conjecture in [4]. As explained in [11, Remark 7.4], this is due to a mistake in the formula [14, Theorem 5.41(3)].
In view of Theorem 1.1 the conjecture can be restated in terms of Floer homology, with the shift updated as explained in Remark 1.2:
**Conjecture 1.3** (Arc-Floer conjecture).: Let \(f\in\mathbb{C}\{z_{0},\dots,z_{n}\}\) be a convergent power series such that \(f(\mathbf{0})=0\) that defines an isolated hypersuface singularity at the origin. For every \(m\geq 1\) there is an isomorphism
\[H_{c}^{\bullet+n(2m+1)}(\mathcal{X}_{m},\mathbb{Z}/2\mathbb{Z})\cong \operatorname{HF}_{\bullet}(\varphi^{m},+).\]
This paper is devoted to the proof of Theorem 1.1, which is based on a direct computation of both (co)homologies. The structure of the paper is as follows. In Section 2 we establish the notation and basic numerical and combinatorial invariants of plane curve singularities which will be used throughout the paper. In Section 3 we determine the connected components of the contact loci in terms of the resolution graph of \(C\), and we study their topology. This essentially solves the embedded Nash problem [3, §1.3], [10, Remark 2.8] for plane curves, generalizing [3, Theorem 1.21]. In Section 4 we study the degeneration properties of the McLean spectral sequence [11, Proposition 6.3], [14, Appendix C] of \(\varphi^{m}\), which converges to \(\operatorname{HF}_{\bullet}(\varphi^{m},+)\). We perform a deformation \(\overline{\varphi^{m}}\) of the monodromy iterate \(\varphi^{m}\) in a way that makes the McLean spectral sequence of \(\overline{\varphi^{m}}\) degenerate at the first page. Since the Floer homology is invariant under isotopies, this computes the Floer homology of \(\varphi^{m}\). Finally, in Section 5 we prove Theorem 1.1 by comparing the results of Sections 3 and 4.
### Acknowledgements
We are grateful to Javier Fernandez de Bobadilla for proposing the problem to us. We are also grateful to Nero Budur, Javier Fernandez de Bobadilla, Tomasz Pelka and Pablo Portilla for useful discussions and for answering our questions during the development of the paper. J. de la Bodega was supported by PRE2019-087976 from the Ministry of Science of Spain and partially supported by G097819N, G0B3123N from FWO. E. de Lorenzo Poza was supported by 1187423N from FWO, Research Foundation, Flanders.
## 2. Preliminaries on plane curves
Let \(f=f^{[1]}\cdots f^{[b]}\in\mathbb{C}\llbracket x,y\rrbracket\) be the decomposition into irreducible factors of a reduced formal power series. Let \((C,0)=V(f)\) be the associated formal plane curve germ, which has an isolated singularity since \(f\) is reduced. Similarly, let \((C^{[j]},0)=V(f^{[j]}),j=1,\ldots,b\) be the irreducible components of \((C,0)\), also known as the _branches_ of \(C\).
**Notation 2.1**.: We will use a superscript \([j],j=1,\ldots,b\) to distinguish the data associated to the branch \((C^{[j]},0)\), and no superscript for the data of the curve \((C,0)\).
### Puiseux data and resolution data
For each \(j=1,\ldots,b\), choose a coordinate system adapted to \((C^{[j]},0)\) as follows. Let \(L_{0}^{[j]}\) be a smooth curve through the origin that has the same tangent direction as \((C^{[j]},0)\), and let \(L_{0}^{\prime[j]}\) be a smooth curve through the origin that meets \(L_{0}^{[j]}\) transversely. By the formal inverse function theorem, there is a coordinate system \((x_{0}^{[j]},y_{0}^{[j]})\) around the origin such that \(L_{0}^{[j]}=\{y_{0}^{[j]}=0\}\) and \(L_{0}^{\prime[j]}=\{x_{0}^{[j]}=0\}\). Without loss of generality, we may assume that branches with the same tangent direction are assigned the same coordinate system, i.e. that \((x_{0}^{[j]},y_{0}^{[j]})\) depends only on the tangent direction of \(C^{[j]}\).
By the Newton-Puiseux theorem, there exists a power series \(\varphi^{[j]}(\tau)=\sum\varphi_{k}^{[j]}\tau^{k}\in\mathbb{C}\llbracket\tau\rrbracket\) such that
\[f^{[j]}(x_{0}^{[j]},y_{0}^{[j]})=u^{[j]}(x_{0}^{[j]},y_{0}^{[j]})\prod_{\xi\in \boldsymbol{\mu}_{\text{mult}(f^{[j]})}}\left(y_{0}^{[j]}-\varphi^{[j]}(\xi( x_{0}^{[j]})^{1/\operatorname{mult}(f^{[j]})})\right), \tag{2.2}\]
where \(u^{[j]}\) is a unit in \(\mathbb{C}\llbracket x_{0}^{[j]},y_{0}^{[j]}\rrbracket\) and \(\boldsymbol{\mu}_{n}\) is the group of \(n\)-th roots of unity in \(\mathbb{C}\), see [6, Theorem 5.1.7]. In other words, we may parametrize \(C^{[j]}\) as \((\tau^{\operatorname{mult}(f^{[j]})},\varphi^{[j]}(\tau))\) in the coordinates \((x_{0}^{[j]},y_{0}^{[j]})\). We fix one such power series \(\varphi^{[j]}\) for each \(j=1,\ldots,b\) for the rest of this paper.
Let \(k_{1}^{[j]},\ldots,k_{g^{[j]}}^{[j]}\) be the characteristic exponents of \((C^{[j]},0)\), that is
\[r_{1}^{[j]}=\operatorname{mult}(f^{[j]}),\quad k_{1}^{[j]}=\min\left\{k\mid \varphi_{k}^{[j]}\neq 0\text{ and }r_{1}^{[j]}\text{ does not divide }k\right\},\]
and inductively
\[r_{i+1}^{[j]}=\gcd(k_{i}^{[j]},r_{i}^{[j]}),\quad k_{i+1}^{[j]}=\min\left\{k \mid\varphi_{k}^{[j]}\neq 0\text{ and }r_{i+1}^{[j]}\text{ does not divide }k\right\}.\]
The number \(g^{[j]}\) is the first integer such that \(k_{g^{[j]}+1}=\infty\), and the irreducibility of \(f^{[j]}\) implies that \(r_{g^{[j]}+1}^{[j]}=1\). We also set \(\kappa_{1}^{[j]}=k_{1}^{[j]},\;\kappa_{i}^{[j]}=k_{i}^{[j]}-k_{i-1}^{[j]}\) for \(i=2,\ldots,g^{[j]}\) and \((\hat{\kappa}_{i}^{[j]},\hat{r}_{i}^{[j]})=(\kappa_{i}^{[j]}/r_{i+1}^{[j]},r_{i}^{[j]}/r_{i+1}^{[j]})\). The pairs \((\hat{\kappa}_{1}^{[j]},\hat{r}_{1}^{[j]}),\ldots,(\hat{\kappa}_{g^{[j]}}^{[j]},\hat{r}_{g^{[j]}}^{[j]})\) are called the _Newton pairs_ of the branch \((C^{[j]},0)\), see [5, p. 134]. Note that the condition that \(L_{0}^{[j]}\) has the same tangent direction as \(C^{[j]}\) implies \(\kappa_{1}^{[j]}>r_{1}^{[j]}\), but this is not necessarily true for \((\kappa_{i}^{[j]},r_{i}^{[j]})\), \(i\geq 2\).
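For concreteness, consider the cusp \(f=y^{2}-x^{3}\), parametrized by \((\tau^{2},\tau^{3})\) (a standard example, included here only for illustration). Then \(r_{1}^{[1]}=\operatorname{mult}(f)=2\), and the first exponent \(k\) with \(\varphi_{k}^{[1]}\neq 0\) not divisible by \(2\) is \(k_{1}^{[1]}=3\), so \(r_{2}^{[1]}=\gcd(3,2)=1\) and \(g^{[1]}=1\). The only Newton pair is
\[(\hat{\kappa}_{1}^{[1]},\hat{r}_{1}^{[1]})=(\kappa_{1}^{[1]}/r_{2}^{[1]},\ r_{1}^{[1]}/r_{2}^{[1]})=(3,2).\]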
Recall that a log resolution is a proper birational morphism \(\mu:(Y,\mathbf{E})\to(\mathbb{C}^{2},0)\) from a smooth variety \(Y\) such that \(\mu^{-1}(C)\) is a simple normal crossing divisor. Such a resolution may be obtained as a finite sequence of blow-ups. Let \(\mathscr{E}\) be the set of irreducible components of the exceptional divisor \(\mathbf{E}\coloneqq\mu^{-1}(0)\) and let \(\tilde{C}\coloneqq\mu^{-1}_{*}(C)\) be the strict transform of \(C\) by the resolution \(\mu\). We denote by \(\mathscr{S}=\{\tilde{C}^{[1]},\ldots,\tilde{C}^{[b]}\}\) the set of irreducible components of \(\tilde{C}\). For any \(E\in\mathscr{E}\cup\mathscr{S}\), let \(N_{E}\coloneqq\operatorname{ord}_{E}(f)\) and \(\nu_{E}\coloneqq\operatorname{ord}_{E}(K_{Y/\mathbb{C}^{2}})+1\) be the _multiplicity_ and _log discrepancy_ at \(E\) associated to \(C\). Analogously, we define \(N_{E}^{[j]}\coloneqq\operatorname{ord}_{E}(f^{[j]})\) for each \(j=1,\ldots,b\).
Fix an integer \(m\geq 1\). If \(N_{E}\) divides \(m\) we say that \(E\) is an \(m\)_-divisor_ and denote \(m_{E}\coloneqq m/N_{E}\). The log resolution \(\mu\) is said to be \(m\)_-separating_ if \(N_{E}+N_{F}>m\) for any \(E,F\in\mathscr{E}\cup\mathscr{S}\) such that \(E\cap F\neq\varnothing\). We denote by \(\mu:(Y,\mathbf{E})\to(\mathbb{C}^{2},0)\) the _minimal_ \(m\)-separating log resolution of \((C,0)\) -- this is the one obtained from the minimal resolution of the curve by repeatedly blowing up the intersection point of adjacent divisors which do not satisfy the \(m\)-separating condition, see [4, Lemma 2.9].
### Describing the resolution
As we mentioned above, both the minimal log resolution and the minimal \(m\)-separating log resolution can be obtained as a sequence of blow-ups. Some of these blow-ups are done iteratively on the point of intersection of exceptional components of previous blow-ups. The following result gives a convenient way to control this kind of divisors. Its proof is an easy computation, but we include it as we will be using this result repeatedly throughout the rest of the paper.
**Definition 2.3**.: Let \(L,L^{\prime}\) be prime divisors in a smooth surface intersecting transversely at a point \(P\). The _divisors between \(L\) and \(L^{\prime}\)_ are (the strict transforms of) the exceptional divisors that result from blowing up \(P\) and, inductively, further intersection points of \(L,L^{\prime}\) and previous exceptional divisors.
**Proposition 2.4**.: _Let \(L,L^{\prime}\) be prime divisors in a smooth surface intersecting transversely at a point \(P\). Set \(E_{(1,0)}=L,E_{(0,1)}=L^{\prime}\) and inductively define \(E_{(\kappa+\kappa^{\prime},r+r^{\prime})}\) to be the exceptional divisor of the blow-up of the intersection of \(E_{(\kappa,r)}\) and \(E_{(\kappa^{\prime},r^{\prime})}\). This establishes a bijection between pairs of coprime numbers \((\kappa,r)\) with \(\kappa,r\geq 1\), and divisors between \(L\) and \(L^{\prime}\)._
_Furthermore, if \(x,y\) are local coordinates around \(P\) for which \(L=\{y=0\}\) and \(L^{\prime}=\{x=0\}\) then a local equation for the minimal composition of blow-ups that makes \(E_{(\kappa,r)}\) appear is \((x,y)=(\tilde{x}^{r}\tilde{y}^{a},\tilde{x}^{\kappa}\tilde{y}^{b})\), where \(a,b\) are the unique integers such that \(a\kappa-br=(-1)^{n},0\leq a\leq r,0\leq b\leq\kappa\), and \(n\) is the number of divisions in the Euclidean algorithm to compute the greatest common divisor of \(\kappa\) and \(r\). In these coordinates \(E_{(\kappa,r)}=\{\tilde{x}=0\}\)._
Proof.: The fact that all pairs of coprime numbers can be obtained inductively by combining previous pairs \((\kappa,r)\) and \((\kappa^{\prime},r^{\prime})\) into \((\kappa+\kappa^{\prime},r+r^{\prime})\) is a classical construction known as the Stern-Brocot tree, see [12]. It can also be found in the literature under the name of Farey sequence or Farey sums. This establishes the first part of the theorem, since this is by definition how divisors between \(L\) and \(L^{\prime}\) are constructed.
Consider \((0,1)\) and \((1,0)\) to be the left and right generating nodes of the Stern-Brocot tree respectively, see [12, Figure 1]. The sequence of integers corresponding to the pair \((\kappa,r)\) is given by the quotients in the Euclidean algorithm of \(\kappa\) and \(r\) (or equivalently by the numbers defining the continued fraction expression for \(\frac{\kappa}{r}\)):
\[c_{j-1}=q_{j}c_{j}+c_{j+1},\text{ with }c_{0}=\kappa,c_{1}=r\text{ and }0\leq c_{j+1}<c_{j},\text{ until }c_{n-1}=q_{n}c_{n}.\]
We convene that \(c_{n+1}=0\). Recall that the Euclidean algorithm may be written in matrix form as follows:
\[\begin{pmatrix}c_{j-1}\\ c_{j}\end{pmatrix}=\begin{pmatrix}q_{j}&1\\ 1&0\end{pmatrix}\begin{pmatrix}c_{j}\\ c_{j+1}\end{pmatrix},\quad\text{for all }j=1,\ldots,n.\]
Under the bijection between coprime pairs and divisors, stepping right in the Stern-Brocot tree corresponds to taking local coordinates in which the blow-up is given by \((x,y)=(x^{\prime},x^{\prime}y^{\prime})\), and stepping left corresponds to taking local coordinates in which the blow-up is given by \((x,y)=(x^{\prime}y^{\prime},y^{\prime})\). Let us define the following matrix notation for the
exponents of a composition of blow-ups in this type of coordinates:
\[(x,y)=(\tilde{x}^{\alpha}\tilde{y}^{\beta},\tilde{x}^{\gamma}\tilde{y}^{\delta}) \quad\text{corresponds to the matrix}\quad\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix}.\]
The composition of \(q_{1}\) blow-ups of the form \((x,y)=(x^{\prime},x^{\prime}y^{\prime})\), followed by \(q_{2}\) blow-ups of the form \((x,y)=(x^{\prime}y^{\prime},y^{\prime})\), and so on corresponds to the following product of matrices:
\[M=\begin{pmatrix}1&0\\ q_{1}&1\end{pmatrix}\begin{pmatrix}1&q_{2}\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ q_{3}&1\end{pmatrix}\cdots\begin{pmatrix}1&0\\ q_{n}&1\end{pmatrix},\]
where the last matrix has to be transposed if \(n\) is even. Note that if \(n\) is odd, the equation of the last exceptional divisor is \(\tilde{x}=0\), whereas if \(n\) is even the equation is \(\tilde{y}=0\). To avoid this ambiguity, we convene that if \(n\) is even then we will swap the variables after the last blow-up, so that the equation of the last exceptional divisor is always \(\tilde{x}=0\). This alters the matrix of exponents by swapping the columns, i.e. by a multiplication on the right by the \(2\times 2\) matrix which has \(1\)'s on the anti-diagonal and \(0\)'s on the diagonal. Taking this into account, the matrix of exponents of the minimal composition of blow-ups that makes \(E_{(\kappa,r)}\) appear, with the convention of swapping the last variables to ensure that the equation of the last exceptional divisor is always \(\tilde{x}=0\), is
\[M\begin{pmatrix}0&1\\ 1&0\end{pmatrix}^{n+1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}q_{1}&1\\ 1&0\end{pmatrix}\begin{pmatrix}q_{2}&1\\ 1&0\end{pmatrix}\cdots\begin{pmatrix}q_{n}&1\\ 1&0\end{pmatrix}=\begin{pmatrix}r&a\\ \kappa&b\end{pmatrix},\]
where the second equality comes from the Euclidean algorithm in matrix form, and \(a,b\) are simply integers coming from the matrix product. Taking determinants shows \(a\kappa-br=(-1)^{n}\), which determines \(a\) and \(b\) uniquely by the Bezout identity.
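To illustrate the proposition with a concrete computation (our own example, included as a sanity check), take \((\kappa,r)=(3,2)\). The Euclidean algorithm gives \(3=1\cdot 2+1\) and \(2=2\cdot 1\), so \(q_{1}=1\), \(q_{2}=2\) and \(n=2\). The unique integers with \(3a-2b=(-1)^{2}=1\), \(0\leq a\leq 2\), \(0\leq b\leq 3\) are \(a=b=1\), hence the composition of blow-ups reads \((x,y)=(\tilde{x}^{2}\tilde{y},\tilde{x}^{3}\tilde{y})\) and \(E_{(3,2)}=\{\tilde{x}=0\}\). For the cusp \(f=y^{2}-x^{3}\) this yields
\[f(\tilde{x}^{2}\tilde{y},\tilde{x}^{3}\tilde{y})=\tilde{x}^{6}\tilde{y}^{2}(1-\tilde{y}),\]
so \(N_{E_{(3,2)}}=\operatorname{ord}_{E_{(3,2)}}(f)=6\), in agreement with the well-known multiplicities \(2,3,6\) along the minimal resolution of the cusp.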
Let us recall how the resolution process of the plane curve \((C,0)\) is carried out, while we introduce notation that we will use later on. Fix a \(j=1,\dots,b\) and let \(E_{(1,0)}=L_{0}^{[j]},E_{(0,1)}=L_{0}^{\prime[j]}\), labeling the divisors coming from successive blow-ups by \(E_{(\kappa,r)}\) as in Proposition 2.4. Note that we have a different labeling for each tangent direction of \(C\), so we cannot talk about "the divisor \(E_{(\kappa,r)}\)" unless we specify a branch \(C^{[j]}\) whose coordinates we are using. Also define \(R_{0}^{[j]}\coloneqq L_{0}^{\prime[j]}\) and \(Q_{0}^{[j]}\) to be the origin of \(\mathbb{C}^{2}\).
If \(C\) has just one branch, and said branch is smooth, then we are finished. Otherwise we blow-up the origin. The exceptional divisor of this first blow-up gets the label \((1,1)\) regardless of the coordinate system. After blowing up the intersection points of divisors of the form \(E_{(\kappa,r)}\) (note that there is a strip of such divisors for each tangent direction of \((C,0)\)) a finite amount of times, we obtain a birational map \(\mu_{1}:Y_{1}\to\mathbb{C}^{2}\) such that the strict transform of each branch \(C^{[j]}\) intersects precisely one exceptional component, which we denote by \(R_{1}^{[j]}\), at a point that we call \(Q_{1}^{[j]}\). If \(C^{[j]}\) was not smooth, \(R_{1}^{[j]}\) is the divisor between \(L_{0}^{[j]}\) and \(L_{0}^{\prime[j]}\) corresponding to the pair \((\hat{\kappa}_{1}^{[j]},\hat{r}_{1}^{[j]})\).
The dual graph of the exceptional divisor of \(\mu_{1}\) is star-shaped in the following sense. There is a "central" vertex corresponding to the divisor \(E_{(1,1)}\), with valency equal to the number of tangent directions of \(C\) at the origin. Coming out of this vertex there are simple paths whose vertices --which correspond to the exceptional divisors of the successive blow-ups-- are labeled by coprime pairs \((\kappa,r)\). Along each of these paths the quotient \(\frac{\kappa}{r}\) strictly increases as we move away from the root.
Once we are in this setting, the resolution process starts once again at the points of intersection of the strict transform of \(C\) with the exceptional divisor. Let \(i\geq 2\) and suppose we have carried out the above process \(i-1\) times, so that we have a birational morphism \(\mu_{i-1}:Y_{i-1}\to\mathbb{C}^{2}\) such that the strict transform \(\mu_{i-1,*}^{-1}(C^{[j]})\) of each branch
intersects precisely one exceptional component, which we denote by \(R_{i-1}^{[j]}\), and we call the intersection point \(Q_{i-1}^{[j]}\). If all strict transforms are smooth and do not intersect other strict transforms, we are done. Otherwise do the following for the branches \(C^{[j]}\) that do not satisfy these conditions.
Let \(L_{i-1}^{\prime[j]}=R_{i-1}^{[j]}\) and, if \(\mu_{i-1,*}^{-1}(C^{[j]})\) is not tangent to \(R_{i-1}^{[j]}\), let \(L_{i-1}^{[j]}\) be a smooth curve through \(Q_{i-1}^{[j]}\) that has maximal contact with \(\mu_{i-1,*}^{-1}(C^{[j]})\). Otherwise let \(L_{i-1}^{[j]}\) be any smooth curve transversal to \(L_{i-1}^{\prime[j]}\). After finitely many blow-ups, the strict transform of \(C^{[j]}\) intersects precisely one exceptional component, which we denote by \(R_{i}^{[j]}\). Since either \(L_{i-1}^{[j]}\) or \(L_{i-1}^{\prime[j]}\) had maximal contact with the strict transform of \(C^{[j]}\), if the strict transform of \(C^{[j]}\) at the end of the previous step was not smooth, then \(R_{i}^{[j]}\) is the divisor between \(L_{i-1}^{[j]}\) and \(L_{i-1}^{\prime[j]}\) corresponding to the pair \((\hat{\kappa}_{i}^{[j]},\hat{r}_{i}^{[j]})\), see [2, p. 512-515].
To obtain the dual graph of \(\mu_{i}^{-1}(C)\), we start by constructing new star-shaped graphs similarly to what we did in the first step. For each point \(P\) of the form \(Q_{i-1}^{[j]}\), draw a "central" vertex for the divisor \(E_{(1,1)}\), which is the exceptional divisor of the blow-up of \(P\). For each tangent direction of the strict transform of \(C\) at \(Q_{i-1}^{[j]}\) there is a simple path coming out of the central vertex. Unlike in step \(i=1\), it is possible that one of the paths --the one corresponding to branches tangent to \(R_{i}^{[j]}\), if any-- may have labels \((\kappa,r)\) with \(\frac{\kappa}{r}<1\), and the value of this quotient _decreases_ as we move away from \(E_{(1,1)}\). At the end of this path (or connected directly to \(E_{(1,1)}\) if there is no such path) we add a node corresponding to the divisor \(E_{(0,1)}=R_{i}^{[j]}\). The dual graph of \(\mu_{i}\) is obtained by gluing the vertex corresponding to \(R_{i}^{[j]}\) in the dual graph of \(\mu_{i-1}\) with the vertex corresponding to \(R_{i}^{[j]}\) in the graph we have just constructed.
From Proposition 2.4 we may obtain explicit formulas for the compositions of blow-ups that were done in the \(i\)-th step of the above process. Indeed, if \((x_{i}^{[j]},y_{i}^{[j]})\) are coordinates around \(Q_{i}^{[j]}\) such that \(L_{i}^{[j]}=\{y_{i}^{[j]}=0\}\) and \(L_{i}^{\prime[j]}=\{x_{i}^{[j]}=0\}\), then they are related with the coordinates in the previous step by the formula
\[(x_{i-1}^{[j]},y_{i-1}^{[j]})=\left((x_{i}^{[j]})^{\hat{r}_{i}^{[j]}}(A_{i}^{[j]}+B_{i}^{[j]}x_{i}^{[j]}+y_{i}^{[j]})^{a_{i}^{[j]}},\ (x_{i}^{[j]})^{\hat{\kappa}_{i}^{[j]}}(A_{i}^{[j]}+B_{i}^{[j]}x_{i}^{[j]}+y_{i}^{[j]})^{b_{i}^{[j]}}\right). \tag{2.5}\]
Here \(a_{i}^{[j]},b_{i}^{[j]}\) are the unique integers with
\[a_{i}^{[j]}\hat{\kappa}_{i}^{[j]}-b_{i}^{[j]}\hat{r}_{i}^{[j]}=(-1)^{n_{i}^{[j ]}},\quad 0\leq a_{i}^{[j]}<\hat{r}_{i}^{[j]},\quad 0\leq b_{i}^{[j]}<\hat{ \kappa}_{i}^{[j]},\]
and \(n_{i}^{[j]}\) is the number of divisions in the Euclidean algorithm of \((\hat{\kappa}_{i}^{[j]},\hat{r}_{i}^{[j]})\). The constants \(A_{i}^{[j]}\in\mathbb{C}^{\times},B_{i}^{[j]}\in\mathbb{C}\) denote the \(y\)-coordinate and the direction at which \(L_{i}^{[j]}\) intersects \(R_{i}^{[j]}\) in the partial resolution. More precisely, if we considered the coordinates that we would get by applying Proposition 2.4 directly, the curve \(L_{i}^{[j]}\) would have a parametrization \((t+O(t^{2}),-A_{i}^{[j]}-B_{i}^{[j]}t+O(t^{2}))\). Thus the effect of the translation by \(A_{i}^{[j]}+B_{i}^{[j]}x_{i}^{[j]}\) is to set \(Q_{i}^{[j]}\) as the origin and make \(L_{i}^{[j]}\) tangent to the \(x_{i}^{[j]}\)-axis.
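For instance, for a branch whose first Newton pair is \((\hat{\kappa}_{1}^{[j]},\hat{r}_{1}^{[j]})=(3,2)\) we have \(n_{1}^{[j]}=2\) and \(a_{1}^{[j]}=b_{1}^{[j]}=1\), so that (2.5) reads

\[(x_{0}^{[j]},y_{0}^{[j]})=\left((x_{1}^{[j]})^{2}(A_{1}^{[j]}+B_{1}^{[j]}x_{1}^{[j]}+y_{1}^{[j]}),\ (x_{1}^{[j]})^{3}(A_{1}^{[j]}+B_{1}^{[j]}x_{1}^{[j]}+y_{1}^{[j]})\right),\]

consistent with \(\operatorname{ord}_{\tau}x_{0}^{[j]}(\tau)=r_{1}^{[j]}=2\) and \(\operatorname{ord}_{\tau}y_{0}^{[j]}(\tau)=\kappa_{1}^{[j]}=3\) along the parametrization of the branch.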
After finitely many iterations, this process finishes, and we obtain the minimal embedded resolution of \((C,0)\), see for instance [5, §1.4], [6, §5.3, §5.4] or [2, p. 522-531]. The extra divisors that appear in the minimal \(m\)-separating log resolution come from further blowing up the intersection of exceptional divisors and strict transforms in the minimal resolution, so they can also be described in terms of the \((\kappa,r)\) pairs associated to the coordinates that we already have.
Now that we have described the resolution, let us define some concepts that will be useful in the following sections.
**Definition 2.6**.: Let \(E\in\mathscr{E}\) be a divisor in the minimal \(m\)-separating log resolution \(\mu:Y\to\mathbb{C}^{2}\) (or more generally any divisor that appears after a finite sequence of blow-ups \(\mu:Y\to\mathbb{C}^{2}\)). Choose a smooth curve \(\tilde{D}\subset Y\) that intersects \(E\) transversely and meets no other divisor. The blow-down \(D\) of \(\tilde{D}\) via \(\mu\) is an irreducible germ of plane curve, and hence it has some Newton pairs associated to it (see the beginning of this section). These pairs do not depend on the choice of \(\tilde{D}\); we call them the _Newton pairs_ associated to \(E\) and denote them by \((\hat{\kappa}_{i}^{E},\hat{r}_{i}^{E}),i=1,\ldots,g^{E}\). We also define \(r_{i}^{E},\kappa_{i}^{E},k_{i}^{E}\) and \(R_{i}^{E}\) for \(i\leq g^{E}\), and \(Q_{i}^{E}\) for \(i\leq g^{E}-1\), to be equal to the corresponding data of \(D\).
Note that \(R_{g^{E}}^{E}=E\). We fix the notation \((\hat{\kappa}^{E},\hat{r}^{E})\coloneqq(\hat{\kappa}_{g^{E}}^{E},\hat{r}_{g^{ E}}^{E})\), since we will use those numbers frequently. Finally, the resolution of \(D\) gives rise to coordinates around each \(Q_{i}^{E}\) which we denote by \((x_{i}^{E},y_{i}^{E}),i=1,\ldots,g^{E}-1\), and they are related with each other by equation (2.5). For the coordinates \((x_{g^{E}}^{E},y_{g^{E}}^{E})\) we ignore the shift, since that is an arbitrary choice depending on \(D\). In other words,
\[(x_{g^{E}-1}^{E},y_{g^{E}-1}^{E})=\left((x_{g^{E}}^{E})^{\hat{r}^{E}}(y_{g^{E }}^{E})^{a_{g^{E}}^{E}},\ (x_{g^{E}}^{E})^{\hat{\kappa}^{E}}(y_{g^{E}}^{E})^{b_{g^{E}}^{E}} \right). \tag{2.7}\]
Note that the Newton pairs associated to \(E\in\mathscr{E}\) are _not_ enough to distinguish \(E\) from the other divisors -- in order to do that we need the extra information of the points that we are blowing up and the coordinate axes we are using to define the \((\kappa,r)\) coordinates. Nevertheless, the Newton pairs are enough to compute the data we need.
**Definition 2.8**.:
1. We say \(E\in\mathscr{E}\) is a _rupture divisor_ if it intersects at least three other divisors in \(\mathscr{E}\cup\mathscr{S}\), i.e. if the corresponding vertex in the dual graph of \(\mu\) has valency greater or equal than \(3\). We denote by \(\mathscr{R}\subset\mathscr{E}\) the set of rupture divisors. Among these, we denote by \(\mathscr{R}_{1}\subset\mathscr{R}\) the set of rupture divisors \(R\) such that \((\hat{\kappa}^{R},\hat{r}^{R})=(1,1)\), and \(\mathscr{R}_{0}\coloneqq\mathscr{R}\setminus\mathscr{R}_{1}\).
2. Following a simple path in the dual graph that starts at a rupture divisor \(R\in\mathscr{R}\), we may encounter two situations. (a) If the path reaches a vertex of valency \(1\) which is not a strict transform (sometimes called a _leaf_), we say that the path is an _end_ of the dual graph, and denote by \(\mathscr{F}_{R}\subset\mathscr{E}\) the set of divisors in the path between the rupture divisor \(R\) and the divisor of valency \(1\), _including_ both \(R\) and the leaf. The notation \(\mathscr{F}_{R}\) is well defined because, other than one exception that we cover below, it follows from the resolution process that each rupture divisor has at most one end coming out of it. If there are no ends attached to \(R\) we simply set \(\mathscr{F}_{R}=\{R\}\). The exceptional case happens when all branches \(C^{[j]}\) share the same first rupture divisor, that is \(R_{1}^{[j]}=R_{1}^{[j^{\prime}]}\) for all \(j,j^{\prime}=1,\ldots,b\). In this case, the divisor \(R=R_{1}^{[j]}\) has two ends coming out of it. We will write \(\mathscr{F}_{R,1}\) and \(\mathscr{F}_{R,2}\) for the divisors in each of the ends when we need to distinguish between them, and we let \(\mathscr{F}_{R}\coloneqq\mathscr{F}_{R,1}\cup\mathscr{F}_{R,2}\). (b) If the path reaches another rupture divisor or a strict transform, i.e. an element \(S\in\mathscr{R}\cup\mathscr{S}\), then we say that the path is a _trunk_ of the dual graph, and denote by \(\mathscr{T}_{R,S}\subset\mathscr{E}\) the set of divisors in the path, _not_ including either \(R\) or \(S\). If \(R,S\in\mathscr{R}\cup\mathscr{S}\) are not connected by a path of divisors of valency \(2\), we set \(\mathscr{T}_{R,S}=\varnothing\). We denote by \(\mathscr{N}_{R}\) the set of \(S\in\mathscr{R}\cup\mathscr{S}\) such that \(R\) is connected to \(S\) by a path of divisors of valency \(2\), and call the \(S\in\mathscr{N}_{R}\) the _neighbors_ of \(R\).
3. If some divisors in an end \(\mathscr{F}_{R}\) are \(m\)-divisors, there is one of them which is closest to \(R\) (which could be \(R\) itself if it is an \(m\)-divisor). We denote said divisor by \(E_{\operatorname{dom}}(m,R)\). This is also well defined in the special case where all branches of \(C\) share the same divisor \(R_{1}^{[j]}\), since in this case either \(R_{1}^{[j]}\) is an \(m\)-divisor or there are \(m\)-divisors in at most one of the two ends, see [3, Proposition 7.16] or Proposition 2.17. We denote by \(\mathscr{F}_{m,R}\) the divisors that are "at least as far away" from \(R\) as \(E_{\operatorname{dom}}(m,R)\), that is, the divisors in the path from \(E_{\operatorname{dom}}(m,R)\) to the divisor of valency \(1\). If \(R\) is an \(m\)-divisor then \(\mathscr{F}_{m,R}=\mathscr{F}_{R}\) (also in the exceptional case of a shared \(R_{1}^{[j]}\)). If there are no \(m\)-divisors in \(\mathscr{F}_{R}\), we set \(\mathscr{F}_{m,R}=\varnothing\).
**Definition 2.9**.: Let \(R,S\in\mathscr{R}\cup\mathscr{S}\) be neighbors and let \(E,F\in\mathscr{T}_{R,S}\cup\{R,S\}\) be two adjacent divisors, that is, such that \(E\cap F\neq\varnothing\). We define \(d(\mathscr{T}_{R,S})\) to be the greatest common divisor of \(N_{E}\) and \(N_{F}\). This does not depend on the choice of \(E\) and \(F\) but only on the trunk \(\mathscr{T}_{R,S}\), see the proof of Proposition 4.2 below.
Similarly, if \(E,F\in\mathscr{F}_{R}\) are two adjacent divisors we define \(d(\mathscr{F}_{R})\coloneqq\gcd(N_{E},N_{F})\), which does not depend on \(E\) or \(F\) either, see _loc. cit._ In the exceptional case where \(R\) has two ends attached to it, we define \(d(\mathscr{F}_{R,1})\) and \(d(\mathscr{F}_{R,2})\) analogously.
Finally, for each rupture divisor \(R\in\mathscr{R}\) we define \(d(R)\) to be the greatest common divisor of \(N_{R}\) and all the \(N_{E}\), where \(E\) runs through the divisors adjacent to \(R\).
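To have a concrete example at hand, consider the cusp \(C=\{y^{2}=x^{3}\}\). Its minimal resolution has exceptional divisors \(E_{(1,1)},E_{(3,2)},E_{(2,1)}\) with multiplicities \(N=2,6,3\), the only rupture divisor being \(R=E_{(3,2)}\), which has two ends attached (one through \(E_{(1,1)}\) and one through \(E_{(2,1)}\)) and the strict transform \(\tilde{C}\) as its third neighbor. Then \(d(\mathscr{F}_{R,1})=\gcd(6,2)=2\), \(d(\mathscr{F}_{R,2})=\gcd(6,3)=3\), \(d(\mathscr{T}_{R,\tilde{C}})=\gcd(6,1)=1\) and \(d(R)=\gcd(6,2,3,1)=1\).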
**Definition 2.10**.: Given an exceptional divisor \(E\in\mathscr{E}\), we split the strict transforms of \(C\) into groups as follows. Let \(\Gamma\) be the dual graph associated to the resolution \(\mu\), and denote by \(\Gamma\setminus E\) the graph obtained by deleting the vertex corresponding to \(E\).
* If \(E\) is not the root of \(\Gamma\) (i.e. the vertex corresponding to the exceptional divisor of the first blow-up), then we call the connected component of \(\Gamma\setminus E\) that contains the root the _left_ component of \(\Gamma\setminus E\). We say that the \(\tilde{C}^{[j]}\) that lie on the left component of \(\Gamma\setminus E\) are _to the left_ of \(E\). We denote by \(\mathscr{S}_{\leftarrow}^{E}\) the set of \(j\in\{1,\dots,b\}\) such that \(\tilde{C}^{[j]}\) lies to the left of \(E\).
* If \(E\not\in\mathscr{R}_{1}\), then there is a unique component of \(\Gamma\setminus E\) containing divisors \(F\) with \(\kappa_{g^{E}}^{F}/r_{g^{E}}^{F}>\kappa_{g^{E}}^{E}/r_{g^{E}}^{E}\). We call this component the _right_ component of \(\Gamma\setminus E\). The \(\tilde{C}^{[j]}\) that lie on the right component of \(\Gamma\setminus E\) are said to be _to the right_ of \(E\). We denote by \(\mathscr{S}_{\rightarrow}^{E}\) the set of \(j\in\{1,\dots,b\}\) such that \(\tilde{C}^{[j]}\) lies to the right of \(E\).
* Let \(U^{E}\) be the set of connected components of \(\Gamma\setminus E\) different from the left and the right components that we just defined. Clearly \(U^{E}\neq\varnothing\) if and only if \(E\) is a rupture divisor. We denote by \(\mathscr{S}_{u}^{E}\) the set of \(j\in\{1,\dots,b\}\) such that \(\tilde{C}^{[j]}\) lies in the component \(u\in U^{E}\). If \(j\in\mathscr{S}_{u}^{E}\) for some \(u\in U^{E}\) we say that \(\tilde{C}^{[j]}\) lies _above_ \(E\).
### Computing the resolution data
A key ingredient in our computations is to combine the above explicit description of an embedded resolution with the Puiseux decomposition. The following lemma, which is part of the proof of [3, Proposition 7.10], gives a formula for the lift of the Puiseux factors \(y_{0}^{[j]}-\varphi^{[j]}(\xi(x_{0}^{[j]})^{1/\operatorname{mult}(f^{[j]})})\) to different steps of the resolution.
**Notation 2.11**.: Let \(h\in\mathbb{C}[\![x,y]\!]\) be a power series and let \(\gamma(t)=(\alpha(t),\beta(t))\) be an arc or parametrized curve in the plane. We will denote by \(h(\gamma(t))\), or even \(h(t)\) when the arc is understood from context, the result of substituting the series \(\alpha(t),\beta(t)\) in \(h\). The parameter \(t\) will be used for arcs, and the parameter \(\tau\) will be reserved for the curve \(C\).
**Lemma 2.12**.: _Let \((C^{[j]},0)\) be an irreducible component of \((C,0)\). We omit the superindices \([j]\) from the notation for clarity. Let \(\varphi\in\mathbb{C}[\![t]\!]\) be the series in the Puiseux
decomposition (2.2) and for each \(\xi\in\boldsymbol{\mu}_{r_{1}}\) let \(P_{\xi,0}\coloneqq y_{0}-\varphi(\xi x_{0}^{1/r_{1}})\), which is a power series in \(x_{0}^{1/r_{1}}\) and \(y_{0}\). Use (2.5) to see \(x_{i},y_{i}\) as polynomials in \(x_{i^{\prime}},y_{i^{\prime}}\) for all \(0\leq i\leq i^{\prime}\leq g\). Then for each \(i=1,\ldots,g\) and each \(\xi\in\boldsymbol{\mu}_{r_{i}}\) we may express_
\[P_{\xi,0}=y_{0}\cdots y_{i-1}P_{\xi,i},\]
_where \(P_{\xi,i}\) is a power series in \(x_{i}^{1/r_{i+1}},y_{i}\). If \(\xi\not\in\boldsymbol{\mu}_{r_{i+1}}\) then \(P_{\xi,i}(0,0)\neq 0\). Otherwise, if \(\xi\in\boldsymbol{\mu}_{r_{i+1}}\), then the only vertices in the Newton polygon of \(P_{\xi,i}\) are those corresponding to \(x_{i}^{\kappa_{i+1}/r_{i+1}}\) and \(y_{i}\)._
Proof.: We proceed by induction on \(i\), where the base case \(i=0\) is the trivial fact that for any \(\xi\in\boldsymbol{\mu}_{r_{1}}\) the only vertices of the Newton polygon of \(P_{\xi,0}\) are the ones corresponding to \(x_{0}^{\kappa_{1}/r_{1}}\) and \(y_{0}\). Suppose that \(\xi\in\boldsymbol{\mu}_{r_{i}},1\leq i\leq g\). By the induction hypothesis \(P_{\xi,0}=y_{0}\cdots y_{i-2}P_{\xi,i-1}\), where \(P_{\xi,i-1}\) is a power series in \(x_{i-1}^{1/r_{i}},y_{i-1}\) and the only vertices in the Newton polygon of \(P_{\xi,i-1}\) are those corresponding to \(x_{i-1}^{\kappa_{i}/r_{i}},y_{i-1}\).
From (2.5) it follows that
\[\frac{x_{i-1}^{\kappa_{i}/r_{i}}}{y_{i-1}}=\frac{x_{i}^{\hat{r}_{i}\kappa_{i}/r_{i}}(A_{i}+B_{i}x_{i}+y_{i})^{a_{i}\kappa_{i}/r_{i}}}{x_{i}^{\hat{\kappa}_{i}}(A_{i}+B_{i}x_{i}+y_{i})^{b_{i}}}=(A_{i}+B_{i}x_{i}+y_{i})^{(-1)^{n_{i}}r_{i+1}/r_{i}}, \tag{2.13}\]
which is a power series in \(x_{i},y_{i}\) by the binomial theorem. By the condition on the vertices of the Newton polygon of \(P_{\xi,i-1}\), any monomial that appears in \(P_{\xi,i-1}\) with a nonzero coefficient is a multiple of either \(x_{i-1}^{\kappa_{i}/r_{i}}\) or \(y_{i-1}\). Therefore we may divide \(P_{\xi,i-1}\) by \(y_{i-1}\) and conclude that \(P_{\xi,i-1}=y_{i-1}P_{\xi,i}\) with \(P_{\xi,i}\) a power series in \(x_{i}^{1/r_{i+1}},y_{i}\).
Consider the parametrization \((x_{i}(\tau),y_{i}(\tau))\) of the strict transform of \(C^{[j]}\) at the moment of the resolution in which it intersects \(R_{i}\), which results from lifting the original parametrization, \((x_{0}(\tau),y_{0}(\tau))=(\tau^{r_{1}},\varphi(\tau))\), through the blow-ups (2.5). Let \(P_{\xi,i}(\tau)\) be the result of substituting said parametrization into \(P_{\xi,i}\), see Notation 2.11. Then,
\[P_{\xi,0}(\tau)=\varphi(\tau)-\varphi(\xi\tau)=\sum_{k\geq k_{1}}(1-\xi^{k}) \varphi_{k}\tau^{k}.\]
Note that \(\boldsymbol{\mu}_{r_{i}}=\boldsymbol{\mu}_{r_{1}}\cap\boldsymbol{\mu}_{k_{1} }\cap\cdots\cap\boldsymbol{\mu}_{k_{i-1}}\). If \(\xi\not\in\boldsymbol{\mu}_{r_{i+1}}\) then \(\operatorname{ord}_{\tau}P_{\xi,0}(\tau)=k_{i}\). It follows from the resolution algorithm [2, p. 512-516] that \(\operatorname{ord}_{\tau}x_{j}(\tau)=r_{j+1},\operatorname{ord}_{\tau}y_{j}( \tau)=\kappa_{j+1}\) for all \(j=0,\ldots,g\). Thus
\[\operatorname{ord}_{\tau}P_{\xi,i}(\tau)=\operatorname{ord}_{\tau}P_{\xi,0}( \tau)-\sum_{j=0}^{i-1}\operatorname{ord}_{\tau}y_{j}(\tau)=k_{i}-(\kappa_{1} +\cdots+\kappa_{i})=0,\]
meaning \(P_{\xi,i}\) has nonzero constant term. On the other hand, if \(\xi\in\boldsymbol{\mu}_{r_{i+1}}\) then \(\operatorname{ord}_{\tau}P_{\xi,0}\geq k_{i+1}\), and therefore \(\operatorname{ord}_{\tau}P_{\xi,i}\geq k_{i+1}-(\kappa_{1}+\cdots+\kappa_{i})= \kappa_{i+1}\). The only monomials in \(P_{\xi,i}\) that may attain order \(\kappa_{i+1}\) when we substitute the parametrization of the strict transform of \(C^{[j]}\) are \(x_{i}^{\kappa_{i+1}/r_{i+1}}\) and \(y_{i}\).
Now we prove that the coefficients of \(x_{i}^{\kappa_{i+1}/r_{i+1}}\) and \(y_{i}\) are nonzero. More generally, for every \(0\leq i^{\prime}\leq n\leq g\) denote by \(c_{\xi,x}(i^{\prime},n)\) the coefficient of \(x_{i^{\prime}}^{(k_{n}-k_{i^{\prime}})/r_{i^{\prime}+1}}\) in \(P_{\xi,i^{\prime}}\) and by \(c_{\xi,y}(i^{\prime})\) the coefficient of \(y_{i^{\prime}}\) in \(P_{\xi,i^{\prime}}\). Here we convene that \(k_{0}=0\). Then, from equation (2.5) we obtain the relations
\[c_{\xi,x}(0,n)=-\varphi_{k_{n}}\xi^{k_{n}},\qquad c_{\xi,x}(i^{\prime}+1,n)=c_{\xi,x}(i^{\prime},n)A_{i^{\prime}+1}^{a_{i^{\prime}+1}(k_{n}-k_{i^{\prime}})/r_{i^{\prime}+1}-b_{i^{\prime}+1}},\]
\[c_{\xi,y}(0)=1,\qquad c_{\xi,y}(i^{\prime}+1)=c_{\xi,x}(i^{\prime},i^{\prime}+1)\,a_{i^{\prime}+1}\kappa_{i^{\prime}+1}A_{i^{\prime}+1}^{(-1)^{n_{i^{\prime}+1}}r_{i^{\prime}+2}/r_{i^{\prime}+1}-1}/r_{i^{\prime}+1}. \tag{2.14}\]
From these formulas it is clear that all of these coefficients are nonzero. In particular, \(x_{i}^{\kappa_{i+1}/r_{i+1}}\) and \(y_{i}\) are the only vertices in the Newton polygon, concluding the induction and the proof.
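To illustrate Lemma 2.12 in the simplest case, take the cusp parametrized by \((\tau^{2},\tau^{3})\), so that \(r_{1}=2\), \(\varphi(t)=t^{3}\) and \(\boldsymbol{\mu}_{r_{1}}=\{\pm 1\}\). Then \(P_{\pm 1,0}=y_{0}\mp x_{0}^{3/2}\), whose Newton polygon has vertices precisely at \(x_{0}^{3/2}\) and \(y_{0}\), and \(P_{1,0}P_{-1,0}=y_{0}^{2}-x_{0}^{3}\) recovers \(f\).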
**Proposition 2.15**.: _Let \(E\in\mathscr{E}\) be an exceptional divisor with Newton pairs \((\hat{\kappa}_{i}^{E},\hat{r}_{i}^{E}),i=1,\dots,g^{E}\). Then_
\[\nu_{E}=k_{g^{E}}^{E}+r_{1}^{E}.\]
Proof.: Let \(X\to\mathbb{C}^{2}\) be a composition of blow-ups and suppose \(E_{(1,0)}\) and \(E_{(0,1)}\) are two divisors on \(X\) intersecting transversely at a point \(Q\in X\), as in Proposition 2.4, with log discrepancies \(\nu_{(1,0)}\) and \(\nu_{(0,1)}\) respectively. Then the log discrepancy of \(E_{(\kappa,r)}\) is
\[\nu_{(\kappa,r)}=\kappa\nu_{(1,0)}+r\nu_{(0,1)}.\]
We prove this by induction on the pair \((\kappa,r)\). The formula is clearly true for the base cases \(E_{(1,0)}\) and \(E_{(0,1)}\). Now suppose that the formula holds true for divisors \(E_{(\kappa^{\prime},r^{\prime})}\) and \(E_{(\kappa^{\prime\prime},r^{\prime\prime})}\) over \(Q\) intersecting transversely at a point. We are going to prove that the formula also holds for \(E_{(\kappa,r)}=E_{(\kappa^{\prime}+\kappa^{\prime\prime},r^{\prime}+r^{ \prime\prime})}\), the divisor that appears when we blow up the point \(E_{(\kappa^{\prime},r^{\prime})}\cap E_{(\kappa^{\prime\prime},r^{\prime \prime})}\). Let \(W\to X\to\mathbb{C}^{2}\) be the minimal composition of blow-ups that makes \(E_{(\kappa^{\prime},r^{\prime})}\) and \(E_{(\kappa^{\prime\prime},r^{\prime\prime})}\) appear (i.e. such that the centers of the valuations associated to \(E_{(\kappa^{\prime},r^{\prime})}\) and \(E_{(\kappa^{\prime\prime},r^{\prime\prime})}\) are divisors), and let \(\sigma:\widetilde{W}\to W\) be the blow-up of the intersection point of \(E_{(\kappa^{\prime},r^{\prime})}\) and \(E_{(\kappa^{\prime\prime},r^{\prime\prime})}\). Then,
\[\operatorname{ord}_{E_{(\kappa,r)}}K_{\widetilde{W}/\mathbb{C}^{ 2}} =\operatorname{ord}_{E_{(\kappa,r)}}(\sigma^{*}K_{W/\mathbb{C}^{ 2}}+E_{(\kappa,r)})\] \[=\operatorname{ord}_{E_{(\kappa^{\prime},r^{\prime})}}K_{W/ \mathbb{C}^{2}}+\operatorname{ord}_{E_{(\kappa^{\prime\prime},r^{\prime \prime})}}K_{W/\mathbb{C}^{2}}+1\]
and hence
\[\nu_{E_{(\kappa,r)}}=\nu_{E_{(\kappa^{\prime},r^{\prime})}}+\nu_{E_{(\kappa^{ \prime\prime},r^{\prime\prime})}}=(\kappa^{\prime}+\kappa^{\prime\prime})\nu _{(1,0)}+(r^{\prime}+r^{\prime\prime})\nu_{(0,1)}=\kappa\nu_{(1,0)}+r\nu_{(0,1 )}.\]
Applying this formula we see that
\[\nu_{R_{i}^{E}}=\hat{\kappa}_{i}^{E}+\hat{r}_{i}^{E}\nu_{R_{i-1}^{E}}\quad \text{for all }i=1,\dots,g^{E},\]
since \(R_{i}^{E}\) is the divisor corresponding to the pair \((\hat{\kappa}_{i}^{E},\hat{r}_{i}^{E})\) between a non-exceptional curve (which therefore has log discrepancy \(1\)) and \(R_{i-1}^{E}\). In the case \(i=1\), a non-exceptional curve plays the role of "\(R_{0}^{E}\)". The result follows by recalling that \(E=R_{g^{E}}^{E}\) and expanding the inductive formula.
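For instance, in the resolution of the cusp the divisors \(E_{(1,1)},E_{(2,1)},E_{(3,2)}\) lie in the first strip, where the additive formula gives \(\nu_{(\kappa,r)}=\kappa+r\), so their log discrepancies are \(2,3\) and \(5\); the last value also agrees with Proposition 2.15, since \(k_{1}^{E}+r_{1}^{E}=3+2=5\) for \(E=E_{(3,2)}\).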
**Definition 2.16**.: Given \(j=1,\dots,b\) and \(E\in\mathscr{E}\), we denote by \(\omega=\omega(E,j)\) the maximal integer \(0\leq i\leq g^{E}-1\) such that \(Q_{i}^{E}=Q_{i}^{[j]}\). If \(R_{\omega+1}^{E}\) is a divisor between \(L_{\omega}^{[j]}\) and \(L_{\omega}^{\prime[j]}\), we say that \(E\)_belongs to the resolution of \(C^{[j]}\)_. The reason for this is that under this condition the divisor \(E\) may be obtained by successively blowing up intersection points of exceptional divisors from the minimal log resolution of the branch \(C^{[j]}\).
**Proposition 2.17**.: _Let \(j=1,\dots,b\) and let \(E\in\mathscr{E}\) be an exceptional divisor with Newton pairs \((\hat{\kappa}_{i}^{E},\hat{r}_{i}^{E}),i=1,\dots,g^{E}\). If \(E\) belongs to the resolution of \(C^{[j]}\), then_
\[N_{E}^{[j]}=\kappa_{1}^{E}r_{1}^{[j]}+\dots+\kappa_{\omega}^{E}r_{\omega}^{[j] }+\min\left\{\kappa_{\omega+1}^{[j]}r_{\omega+1}^{E},\kappa_{\omega+1}^{E}r_{ \omega+1}^{[j]}\right\}.\]
_Otherwise, if \(E\) does not belong to the resolution of \(C^{[j]}\), then_
\[N_{E}^{[j]}=\kappa_{1}^{E}r_{1}^{[j]}+\dots+\kappa_{\omega}^{E}r_{\omega}^{[j] }+r_{\omega+1}^{E}r_{\omega+1}^{[j]}.\]
Proof.: Let \(D\) be a curve in \(\mathbb{C}^{2}\) whose lift \(\tilde{D}\) to the resolution \(\mu\) intersects the divisor \(E\) transversely at a point different from the intersection points of \(E\) with other exceptional divisors or with the strict transform of \(C\). We may express \(N_{E}^{[j]}\) as an intersection number:
\[N_{E}^{[j]}=N_{E}^{[j]}(E\cdot\tilde{D})=\mu^{*}(f^{[j]})\cdot\tilde{D}=f^{[j]} \cdot D.\]
In turn, we may compute this intersection multiplicity as a sum of _fractional intersection multiplicities_,
\[f^{[j]}\cdot D=\sum_{\xi\in\boldsymbol{\mu}_{r_{1}^{[j]}}}P_{\xi,0}^{[j]} \cdot D.\]
These were introduced in [3, Definition 7.5]. For our current purposes we just need to know that these are a well-defined list of rational numbers that add up to the usual intersection multiplicity, and that they are computed by substituting a parametrization of \(D\) into the series \(P_{\xi,0}^{[j]},\ \xi\in\boldsymbol{\mu}_{r_{1}^{[j]}}\) (which involves choosing a root of a power series). The idea of using fractional arcs is not new, see for instance [9] and [16].
Let \(t\) be a parameter for \(D\), see Notation 2.11. Since \(\tilde{D}\) is transversal to \(E\) (and by definition of the coordinates \((x_{g^{E}}^{E},y_{g^{E}}^{E})\)), we have
\[\operatorname{ord}_{t}x_{g^{E}}^{E}(t)=1,\quad\operatorname{ord}_{t}y_{g^{E}}^ {E}(t)=0.\]
Applying first (2.7) and then (2.5) repeatedly, we find
\[\operatorname{ord}_{t}x_{i}^{E}(t)=r_{i+1}^{E},\quad\operatorname{ord}_{t}y_{i }^{E}(t)=\kappa_{i+1}^{E},\quad\text{for all }i=0,\dots,g^{E}-1.\]
Now substitute these into the Puiseux series decomposition using Lemma 2.12. By definition of \(\omega\), the coordinates \((x_{i}^{[j]},y_{i}^{[j]})\) and \((x_{i}^{E},y_{i}^{E})\) coincide for all \(i=1,\dots,\omega-1\), and they coincide for \(i=\omega\) if and only if \(E\) is a divisor between \(L_{\omega}^{[j]}\) and \(L_{\omega}^{\prime[j]}\), i.e. if \(E\) belongs to the resolution of \(C^{[j]}\). Hence if \(\xi\in\boldsymbol{\mu}_{r_{i}^{[j]}}\setminus\boldsymbol{\mu}_{r_{i+1}^{[j]}}\) for some \(i=1,\dots,\omega\), then
\[\operatorname{ord}_{t}P_{\xi,0}^{[j]}(t)=\operatorname{ord}_{t}y_{0}^{E}(t)+ \dots+\operatorname{ord}_{t}y_{i-1}^{E}(t)+\operatorname{ord}_{t}P_{\xi,i}^{[j ]}(t)=\kappa_{1}^{E}+\dots+\kappa_{i}^{E}+0=k_{i}^{E}.\]
On the other hand, if \(\xi\in\boldsymbol{\mu}_{r_{\omega+1}^{[j]}}\), then the only vertices in the Newton polygon of \(P_{\xi,\omega}^{[j]}\) are those corresponding to \((x_{\omega}^{[j]})^{\kappa_{\omega+1}^{[j]}/r_{\omega+1}^{[j]}}\) and \(y_{\omega}^{[j]}\). Thus,
\[\operatorname{ord}_{t}P_{\xi,\omega}^{[j]}(t)=\min\left\{\frac{\kappa_{\omega +1}^{[j]}}{r_{\omega+1}^{[j]}}\operatorname{ord}_{t}x_{\omega}^{[j]}(t), \operatorname{ord}_{t}y_{\omega}^{[j]}(t)\right\}.\]
Summing over all possible values of \(\xi\in\boldsymbol{\mu}_{r_{1}^{[j]}}\) we obtain
\[N_{E}^{[j]}=f^{[j]}\cdot D=\operatorname{ord}_{t}f^{[j]}(t)=\sum_{\xi\in\boldsymbol{\mu}_{r_{1}^{[j]}}}\operatorname{ord}_{t}P_{\xi,0}^{[j]}(t)=\]
\[=(r_{1}^{[j]}-r_{2}^{[j]})k_{1}^{E}+\dots+(r_{\omega}^{[j]}-r_{\omega+1}^{[j]})k_{\omega}^{E}+r_{\omega+1}^{[j]}\left(k_{\omega}^{E}+\operatorname{ord}_{t}P_{\xi,\omega}^{[j]}(t)\right)=\]
\[=\kappa_{1}^{E}r_{1}^{[j]}+\dots+\kappa_{\omega}^{E}r_{\omega}^{[j]}+\min\left\{\kappa_{\omega+1}^{[j]}\operatorname{ord}_{t}x_{\omega}^{[j]}(t),r_{\omega+1}^{[j]}\operatorname{ord}_{t}y_{\omega}^{[j]}(t)\right\}.\]
If \(E\) belongs to the resolution of \(C^{[j]}\), then
\[\operatorname{ord}_{t}x_{\omega}^{[j]}(t)=\operatorname{ord}_{t}x_{\omega}^{E} (t)=r_{\omega+1}^{E},\quad\operatorname{ord}_{t}y_{\omega}^{[j]}(t)= \operatorname{ord}_{t}y_{\omega}^{E}(t)=\kappa_{\omega+1}^{E}.\]
Otherwise, if \(E\) does not belong to the resolution of \(C^{[j]}\), we must have \(\hat{\kappa}_{\omega+1}^{E}>\hat{r}_{\omega+1}^{E}\) and \(\hat{\kappa}_{\omega+1}^{[j]}>\hat{r}_{\omega+1}^{[j]}\) -- if either of these two inequalities did not hold, then we could change \(L_{\omega}^{E}\) or \(L_{\omega}^{[j]}\) so that \(E\) belongs to the resolution of \(C^{[j]}\). Hence
\[\operatorname{ord}_{t}x_{\omega}^{[j]}(t)=\operatorname{ord}_{t}y_{\omega}^{[j] }(t)=\min\left\{\kappa_{\omega+1}^{E},r_{\omega+1}^{E}\right\}=r_{\omega+1}^ {E}.\]
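As a quick check of Proposition 2.17, let \(C=C^{[1]}\) be the cusp, with \(g=1\) and \((\kappa_{1}^{[1]},r_{1}^{[1]})=(3,2)\). The divisors \(E_{(1,1)},E_{(2,1)},E_{(3,2)}\) all belong to the resolution of \(C^{[1]}\) with \(\omega=0\), and the first formula gives

\[N_{E_{(1,1)}}=\min\{3\cdot 1,1\cdot 2\}=2,\quad N_{E_{(2,1)}}=\min\{3\cdot 1,2\cdot 2\}=3,\quad N_{E_{(3,2)}}=\min\{3\cdot 2,3\cdot 2\}=6,\]

matching the multiplicities obtained by blowing up directly.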
## 3. Topology of the contact loci
We begin by recalling the basic definitions and results concerning contact loci. We restrict the definitions to our setting of plane curves; for the general definitions and basic properties of contact loci the reader may consult [10].
**Definition 3.1**.:
1. For any positive integer \(m\) we define the _\(m\)-th jet space_ of \(\mathbb{C}^{2}\) to be the scheme \(\mathscr{L}_{m}(\mathbb{C}^{2})\coloneqq\operatorname{Spec}\mathbb{C}[\gamma _{1,0},\ldots,\gamma_{1,m},\gamma_{2,0},\ldots,\gamma_{2,m}]\), which is characterized by the property that \(\mathscr{L}_{m}(\mathbb{C}^{2})(A)\cong\mathbb{A}^{2}_{\mathbb{C}}(A[t]/(t^{ m+1}))\) for all \(\mathbb{C}\)-algebras \(A\).
2. The _arc space_ of \(\mathbb{C}^{2}\) is the scheme \(\mathscr{L}_{\infty}(\mathbb{C}^{2})\coloneqq\operatorname{Spec}\mathbb{C}[ \gamma_{1,0},\gamma_{1,1},\ldots,\gamma_{2,0},\gamma_{2,1},\ldots]\), characterized by the property that \(\mathscr{L}_{\infty}(\mathbb{C}^{2})(A)\cong\mathbb{A}^{2}_{\mathbb{C}}(A[t])\) for all \(\mathbb{C}\)-algebras \(A\). There are natural _truncation morphisms_\(\pi_{m}:\mathscr{L}_{\infty}(\mathbb{C}^{2})\to\mathscr{L}_{m}(\mathbb{C}^{2})\) for all \(m\in\mathbb{Z}_{>0}\).
Here we only work with the closed points of arc spaces and jet spaces. In particular for us an arc is a morphism \(\gamma:\operatorname{Spec}\mathbb{C}[\![t]\!]\to\mathbb{C}^{2}\).
**Definition 3.2**.: Let \(f\in\mathbb{C}[\![x,y]\!]\) and let \(m\) be a positive integer. The _\(m\)-th restricted contact locus_ of \(f\) at the origin is the space
\[\mathcal{X}_{m}\coloneqq\mathcal{X}_{m}(f,0)\coloneqq\{\gamma\in\mathscr{L}_{ m}(\mathbb{C}^{2})\mid\gamma(0)=0,\ f(\gamma(t))=t^{m}\mod t^{m+1}\}.\]
Its lift to the arc space will be denoted by \(\mathcal{X}_{m}^{\infty}=\pi_{m}^{-1}(\mathcal{X}_{m})\). The coefficient of the leading term of a power series \(\psi\in\mathbb{C}[\![t]\!]\) is called its _angular component_, and denoted by \(\operatorname{ac}(\psi)\). The condition for an arc \(\gamma\) to be in the restricted \(m\)-contact locus splits in two:
\[\operatorname{ord}_{t}f(\gamma(t))=m,\quad\operatorname{ac}f(\gamma(t))=1.\]
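For instance, for the cusp \(f=y^{2}-x^{3}\) and \(m=6\), any arc of the form \(\gamma(t)=(at^{2}+\cdots,bt^{3}+\cdots)\) satisfies \(f(\gamma(t))=(b^{2}-a^{3})t^{6}+O(t^{7})\), so for these arcs the two conditions amount to

\[b^{2}-a^{3}=1,\]

the equation of a smooth affine curve of genus \(1\) with one point at infinity; this curve will reappear in Theorem 3.8 below.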
**Definition 3.3**.: Let \(f\in\mathbb{C}[\![x,y]\!]\) and let \(m\) be a positive integer. Let \(\mu:(Y,\mathbf{E})\to(\mathbb{C}^{2},0)\) be the minimal \(m\)-separating log resolution of \(C=V(f)\) and recall that we are denoting by \(\mathscr{E}\) the set of irreducible components of the exceptional divisor \(\mathbf{E}\). Every arc \(\gamma\in\mathcal{X}_{m}^{\infty}\) can be lifted to a unique arc \(\tilde{\gamma}:\operatorname{Spec}\mathbb{C}[\![t]\!]\to Y\) in the resolution by the valuative criterion of properness.
1. For each exceptional component \(E\in\mathscr{E}\) we define \(\mathcal{X}_{m,E}^{\infty}\) to be the set of arcs in \(\mathcal{X}_{m}^{\infty}\) whose lift to the resolution intersects \(E\) and no other divisor, that is, \[\mathcal{X}_{m,E}^{\infty}\coloneqq\{\gamma\in\mathcal{X}_{m}^{\infty}\mid\tilde{\gamma}(0)\in E^{\circ}\}\,,\] where \(E^{\circ}\coloneqq E\setminus\cup_{E\neq F\in\mathscr{E}\cup\mathscr{S}}F\).
2. For each rupture divisor \(R\in\mathscr{R}\) we define \(\mathfrak{Z}_{m,R}^{\infty}\) to be the set of arcs in \(\mathcal{X}_{m}^{\infty}\) that lift to the end \(\mathscr{F}_{R}\) associated to \(R\), see Definition 2.8. That is, \[\mathfrak{Z}_{m,R}^{\infty}\coloneqq\bigcup_{E\in\mathscr{F}_{R}}\mathcal{X}_ {m,E}^{\infty}.\]
Note that the \(m\)-separating condition on \(\mu\) already implies that \(\tilde{\gamma}(0)\) cannot lie in the intersection of two or more exceptional components for \(\gamma\in\mathcal{X}_{m}^{\infty}\). Furthermore, \(\mathcal{X}_{m,E}^{\infty}\) is nonempty if and only if \(E\) is an \(m\)-divisor, i.e. if \(N_{E}\) divides \(m\).
### Decomposing the contact locus
Even though each \(\mathfrak{Z}_{m,R}^{\infty}\) is a disjoint union of sets of the form \(\mathcal{X}_{m,E}^{\infty}\), its topology (as a subspace of \(\mathcal{X}_{m}^{\infty}\)) is not the disjoint union topology. In fact, it may happen that for two different divisors \(E,F\in\mathscr{E}\) we have \(\overline{\mathcal{X}_{m,E}^{\infty}}\cap\overline{\mathcal{X}_{m,F}^{\infty}}\neq\varnothing\), in which case we say that arcs can _jump_ between \(E\) and \(F\). The following result, which already appeared in [3, Theorem 1.21] in the case where \(C\) is irreducible, gives a decomposition of \(\mathcal{X}_{m}^{\infty}\) into disjoint components, saying precisely which jumps are possible.
**Theorem 3.4**.: _Let \(f\in\mathbb{C}[\![x,y]\!]\) define a reduced plane curve with \(m\)-separating log resolution \(\mu:(Y,\mathbf{E})\to(\mathbb{C}^{2},0)\) whose dual graph we divide into parts as in Definition 2.8. Then \(\mathcal{X}_{m}^{\infty}\) decomposes as a topologically disjoint union_
\[\mathcal{X}_{m}^{\infty}=\left(\bigsqcup_{R\in\mathscr{R}}\mathfrak{Z}_{m,R} ^{\infty}\right)\sqcup\left(\bigsqcup_{R,S\in\mathscr{R}\cup\mathscr{S}} \bigsqcup_{E\in\mathscr{T}_{R,S}}\mathcal{X}_{m,E}^{\infty}\right).\]
_Moreover, each \(\mathfrak{Z}_{m,R}^{\infty}\) is nonempty if and only if there are \(m\)-divisors in the end \(\mathscr{F}_{R}\), and in that case \(\mathfrak{Z}_{m,R}^{\infty}=\overline{\mathcal{X}_{m,E_{\mathrm{dom}}(m,R)}^{ \infty}}\), where the closure is taken inside \(\mathcal{X}_{m}^{\infty}\)._
Proof.: Let \(E,F\in\mathscr{E}\) be \(m\)-divisors. Suppose \(\gamma_{s},\,s\in\mathbb{C}\) is a continuous family of arcs such that \(\gamma_{s}\in\mathcal{X}_{m,E}^{\infty}\) for \(s\neq 0\) and \(\gamma_{0}\in\mathcal{X}_{m,F}^{\infty}\). Note that
\[m=\gamma_{s}\cdot C=\sum_{j=1}^{b}\gamma_{s}\cdot C^{[j]}\quad\text{for all $s\in\mathbb{C}$}.\]
For each \(j=1,\ldots,b\), the value \(m^{[j]}\coloneqq\gamma_{s}\cdot C^{[j]}\) must be constant for all \(s\in\mathbb{C}\) because of the upper semicontinuity of the intersection product. Therefore the family of arcs \(\gamma_{s}\) lies in the \(m^{[j]}\)-th contact locus of \(f^{[j]}\) for each \(j=1,\ldots,b\).
Suppose that \(E\) and \(F\) do not belong to a common end \(\mathscr{F}_{R}\). In that case there exists a \(j=1,\ldots,b\) such that the centers of \(E\) and \(F\) in the minimal \(m^{[j]}\)-separating log resolution of \(C^{[j]}\) lie in different trunks or ends, or in different divisors of the same trunk. The results in [3, §7.3] show that it is not possible for arcs in the \(m^{[j]}\)-th contact locus of the irreducible curve \(f^{[j]}\) to jump between such divisors (i.e. between divisors in different trunks or ends, or divisors in the same trunk) -- the idea of the proof is to use the semicontinuity of the fractional intersection multiplicities that we computed in Proposition 2.17. Hence the family \(\gamma_{s}\) cannot exist, and it is not possible to jump between \(E\) and \(F\).
To finish the proof we just need to show that all arcs in \(\mathcal{X}_{m}^{\infty}\) that lift to an end \(\mathscr{F}_{R}\) can be reached jumping from \(E_{\mathrm{dom}}(m,R)\), i.e. that \(\mathfrak{Z}_{m,R}^{\infty}=\overline{\mathcal{X}_{m,E_{\mathrm{dom}}(m,R)}^{\infty}}\). To do this, fix \(\mathscr{F}_{R}\) and observe that \(N_{E}^{[j]}/\hat{r}^{E}\) is an integer independent of \(E\) for all \(E\in\mathscr{F}_{R}\) by Proposition 2.17. (Importantly, this is not true for divisors in a trunk, since in that case there is at least one \(j\) for which \(\hat{\kappa}^{E}\) appears in the formula for \(N_{E}^{[j]}\) instead of \(\hat{r}^{E}\), and therefore the argument does not apply.) Since the independence holds for all \(j=1,\ldots,b\), the integer \(N_{E}/\hat{r}^{E}\) is also independent of \(E\) for \(E\in\mathscr{F}_{R}\). Let \(\gamma\in\mathfrak{Z}_{m,R}^{\infty}\) be an arc lifting to \(E\in\mathscr{F}_{R}\). Then
\[m^{[j]}\coloneqq\gamma\cdot C^{[j]}=N_{E}^{[j]}(\tilde{\gamma}\cdot E)=\frac{N_{E}^{[j]}}{N_{E}}N_{E}(\tilde{\gamma}\cdot E)=\frac{N_{E}^{[j]}}{N_{E}}(\gamma\cdot C)=\frac{N_{E}^{[j]}}{N_{E}}m=\frac{N_{E}^{[j]}/\hat{r}^{E}}{N_{E}/\hat{r}^{E}}m\]
is independent of \(E\in\mathscr{F}_{R}\). Note that there is at least one \(j=1,\ldots,b\) such that \(R\) is also a rupture divisor in the minimal log resolution \(\mu^{[j]}\) of \(f^{[j]}\). Since \(m^{[j]}\) is independent of \(E\in\mathscr{F}_{R}\), we see that \(\mathfrak{Z}_{m,R}^{\infty}\) is precisely the set of arcs in the \(m^{[j]}\)-th contact locus of \(f^{[j]}\) that lift to the end associated to \(R\) in \(\mu^{[j]}\). Therefore the problem reduces to the case of
the irreducible curve \(C^{[j]}\). Once again, the irreducible case was already proved in [3, §7.3] -- the idea of the proof is to use approximate roots to reduce the situation to a curve with one Puiseux pair, and then apply [13, Lemma 3.11] for the divisorial valuations, which are now toric.
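For instance, for the cusp and \(m=6\) the minimal resolution is already \(6\)-separating (recall that \(m\)-separating means \(N_{E}+N_{F}>m\) whenever \(E\cap F\neq\varnothing\); here the sums over adjacent pairs are \(8,9,7>6\)), all three exceptional divisors are \(6\)-divisors, and they all lie in the two ends attached to the rupture divisor \(R=E_{(3,2)}\), so the decomposition consists of the single piece \(\mathcal{X}_{6}^{\infty}=\mathfrak{Z}_{6,R}^{\infty}\).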
Theorem 3.4 provides information about the topology of the lift of the contact loci to the arc space. To bring the result back down to the jet space, note that the truncation map \(\pi_{m}:\mathscr{L}_{\infty}(\mathbb{C}^{2})\to\mathscr{L}_{m}(\mathbb{C}^{2})\) is the projection of infinite-dimensional affine space to some of its coordinates. Therefore \(\mathcal{X}_{m}=\pi_{m}(\mathcal{X}_{m}^{\infty})\), and disjoint unions are preserved.

**Corollary 3.5**.: _Define \(\mathcal{X}_{m,E}\coloneqq\pi_{m}(\mathcal{X}_{m,E}^{\infty}),\ \mathfrak{Z}_{m,R}\coloneqq\pi_{m}(\mathfrak{Z}_{m,R}^{\infty})\). Then the \(m\)-th contact locus \(\mathcal{X}_{m}\) of \(f\) decomposes as a topologically disjoint union_

\[\mathcal{X}_{m}=\left(\bigsqcup_{R\in\mathscr{R}}\mathfrak{Z}_{m,R}\right)\sqcup\left(\bigsqcup_{R,S\in\mathscr{R}\cup\mathscr{S}}\bigsqcup_{E\in\mathscr{T}_{R,S}}\mathcal{X}_{m,E}\right).\]

_Moreover, each \(\mathfrak{Z}_{m,R}\) is nonempty if and only if there are \(m\)-divisors in the end \(\mathscr{F}_{R}\), and in that case \(\mathfrak{Z}_{m,R}=\overline{\mathcal{X}_{m,E_{\mathrm{dom}}(m,R)}}\), where the closure is taken inside \(\mathcal{X}_{m}\)._

This shows that in order to compute the topology of the contact locus \(\mathcal{X}_{m}\) we only need to understand the topology of each of the sets in the decomposition. We are going to give explicit equations for each of these sets, and finally we will use [4, Lemma 2.6] to count the number of free variables.
Figure 1. Resolution graph of an \(m\)-separating log resolution of an irreducible curve. All arcs lifting to divisors in the same gray region belong to the same component of the decomposition of \(\mathcal{X}_{m}^{\infty}\) given in Theorem 3.4.

**Lemma 3.6**.: _Let \(\gamma(t)\) be an arc in \(\mathbb{C}^{2}\) such that \(\operatorname{ord}_{t}f(t)=m\), and whose lift to the resolution \(\mu\) intersects the exceptional divisor \(E\in\mathscr{E}\) with Newton pairs \((\hat{\kappa}_{i}^{E},\hat{r}_{i}^{E}),i=1,\dots,g^{E}\). In the \((x_{g^{E}}^{E},y_{g^{E}}^{E})\) coordinates, see Definition 2.6, this lift is parametrized by_

\[x_{g^{E}}^{E}(t)=\alpha t^{m_{E}}+O(t^{m_{E}+1}),\quad y_{g^{E}}^{E}(t)=\beta+O(t),\]

_where \(\alpha\in\mathbb{C}^{\times}\) and \(\beta\in\mathbb{C}\) avoids the values that would make \(\tilde{\gamma}\) intersect other exceptional divisors or strict transforms. Using these coordinates, the leading coefficient of \(f^{[j]}(t)\) is_
\[\operatorname{ac}f^{[j]}(t)=\begin{cases}\star\alpha^{N^{[j]}_{E}}\beta^{N^{[j]}_{F}}&\text{if $\tilde{C}^{[j]}$ lies to the left of $E$},\\ \star\alpha^{N^{[j]}_{E}}\beta^{N^{[j]}_{F}}&\text{if $\tilde{C}^{[j]}$ lies to the right of $E$},\\ \star\alpha^{N^{[j]}_{E}}\beta^{N^{[j]}_{F}}\left(z_{u}+\beta\right)^{r^{[j]}_{\omega+2}}&\text{if $\tilde{C}^{[j]}$ lies in $u\in U^{E}$ above $E\in\mathscr{R}_{0}$},\\ \star\alpha^{N^{[j]}_{E}}\left(z_{u}+\beta\right)^{r^{[j]}_{\omega+1}}&\text{if $\tilde{C}^{[j]}$ lies in $u\in U^{E}$ above $E\in\mathscr{R}_{1}$}.\end{cases}\]
_Here the stars \(\star\) represent nonzero complex numbers which do not depend on \(\alpha,\beta\); \(z_{u}\) is the \(y\)-coordinate at which the divisors in \(u\in U^{E}\) meet \(E\), see Definition 2.10; and \(F\) is the divisor between \(L^{E}_{g^{E}-1}\) and \(L^{\prime E}_{g^{E}-1}\) corresponding to the pair \((b^{E}_{g^{E}},a^{E}_{g^{E}})\)._
Proof.: The computation is essentially the same as the one in the proof of Proposition 2.17, but keeping track of the angular components instead of the orders. For a cleaner notation, let \((b^{E},a^{E})\coloneqq(b^{E}_{g^{E}},a^{E}_{g^{E}})\). Using (2.7) and (2.5) we get
\[\operatorname{ac}x^{E}_{g^{E}-1}(t)=\alpha^{\hat{r}^{E}}\beta^{a^{E}},\quad\operatorname{ac}y^{E}_{g^{E}-1}(t)=\alpha^{\hat{\kappa}^{E}}\beta^{b^{E}},\]
\[\operatorname{ac}x^{E}_{i}(t)=\star\alpha^{r^{E}_{i+1}}\beta^{r^{E}_{i+1}a^{E}/\hat{r}^{E}},\quad\operatorname{ac}y^{E}_{i}(t)=\star\alpha^{\kappa^{E}_{i+1}}\beta^{\kappa^{E}_{i+1}a^{E}/\hat{r}^{E}},\quad\text{for all }i=1,\dots,g^{E}-2.\]
The stars denote some products of powers of the constants \(A^{E}_{i}\), which are nonzero and do not depend on \(\alpha,\beta\). For each \(j=1,\dots,b\) we know the vertices of the Newton polygon of \(P^{[j]}_{\xi,\omega}\) by Lemma 2.12, and hence we know that
\[\operatorname{ac}P^{[j]}_{\xi,\omega}(t)=\operatorname{ac}\left(\star x^{[j]}_{\omega}(t)^{\kappa^{[j]}_{\omega+1}/r^{[j]}_{\omega+1}}+\star y^{[j]}_{\omega}(t)\right).\]
The angular component of a sum of power series is the angular component of the one with smaller order, or the sum of angular components if both of them have the same order. Note that which of them has the smaller order will depend on (i) whether \(\omega<g^{E}-1\) or \(\omega=g^{E}-1\); (ii) whether \(E\) belongs to the resolution of \(C^{[j]}\) or not; and (iii) whether \(\kappa^{[j]}_{\omega+1}/r^{[j]}_{\omega+1}\) is greater than, smaller than, or equal to \(\kappa^{E}_{\omega+1}/r^{E}_{\omega+1}\). It is straightforward to see which of the cases given by these three conditions correspond to \(\tilde{C}^{[j]}\) lying to the left, right, or above \(E\).
Since the full proof would involve doing the same computation over and over again, we only show here how to tackle the case where \(E\in\mathscr{R}_{0}\) and \(\tilde{C}^{[j]}\) lies above \(E\), which contains the difficulties that appear in all other cases. This corresponds to the case where (i) \(\omega=g^{E}-1\), (ii) \(E\) belongs to the resolution of \(C^{[j]}\), and (iii) we have \(\kappa^{[j]}_{\omega+1}/r^{[j]}_{\omega+1}=\kappa^{E}_{\omega+1}/r^{E}_{\omega +1}\), in terms of the possibilities that we described. Note that we have \((x^{[j]}_{i},y^{[j]}_{i})=(x^{E}_{i},y^{E}_{i})\) for all \(i=1,\dots,\omega\) and also
\[(x^{[j]}_{\omega+1},y^{[j]}_{\omega+1})=(x^{E}_{\omega+1},y^{E}_{\omega+1}+z_ {u}+\star x^{E}_{\omega+1})\]
where \(z_{u}\) is the \(y\)-coordinate of the point in \(E\) that has to be blown up to continue the resolution of \(C^{[j]}\) and \(\star\) is a complex number that we don't care about. Therefore \(\operatorname{ord}_{t}x^{[j]}_{\omega+1}(t)>0\) and \(\operatorname{ord}_{t}y^{[j]}_{\omega+1}(t)=0\), and thus
\[\operatorname{ac}P^{[j]}_{\xi,\omega+1}(t)=\star\operatorname{ac}y^{[j]}_{ \omega+1}(t)=\star(z_{u}+\beta).\]
Let \(F\) be the divisor between \(L^{E}_{g^{E}-1}\) and \(L^{\prime E}_{g^{E}-1}\) corresponding to the pair \((b^{E},a^{E})\). Then we have \(\kappa^{F}_{i}=\kappa^{E}_{i}a^{E}/\hat{r}^{E}\) for all \(i=1,\dots,\omega\) and \(\kappa^{F}_{\omega+1}=b^{E}\). Using the decomposition
given in Lemma 2.12 and the formula of Proposition 2.17, we compute
\[\operatorname{ac}f^{[j]}(t)=\operatorname{ac}\left(y_{0}^{[j]}(t)^{r_{1}^{[j]}}\cdots y_{\omega-1}^{[j]}(t)^{r_{\omega}^{[j]}}\,y_{\omega}^{[j]}(t)^{r_{\omega+1}^{[j]}}\prod_{\xi\in\boldsymbol{\mu}_{r_{\omega+2}^{[j]}}}P_{\xi,\omega+1}^{[j]}(t)\right)=\]
\[=\star\alpha^{\kappa_{1}^{E}r_{1}^{[j]}+\cdots+\kappa_{\omega}^{E}r_{\omega}^{[j]}+\kappa_{\omega+1}^{E}r_{\omega+1}^{[j]}}\,\beta^{\frac{a^{E}}{\hat{r}^{E}}\left(\kappa_{1}^{E}r_{1}^{[j]}+\cdots+\kappa_{\omega}^{E}r_{\omega}^{[j]}\right)+b^{E}r_{\omega+1}^{[j]}}\left(z_{u}+\beta\right)^{r_{\omega+2}^{[j]}}=\star\alpha^{N_{E}^{[j]}}\beta^{N_{F}^{[j]}}(z_{u}+\beta)^{r_{\omega+2}^{[j]}},\]
as we wanted to show. The other cases are similar.
**Theorem 3.7**.: _Let \(E\) be an exceptional divisor in the trunk \(\mathscr{T}_{R,S}\) for some \(R,S\in\mathscr{R}\cup\mathscr{S}\). Then,_

\[\mathcal{X}_{m,E}\cong\bigsqcup^{d(\mathscr{T}_{R,S})}\mathbb{C}^{\times}\times\mathbb{C}^{2m-m_{E}\nu_{E}}.\]
Proof.: Let \(\mu_{\infty}:\mathscr{L}_{\infty}(Y)\to\mathscr{L}_{\infty}(\mathbb{C}^{2})\) be the map induced on arcs by the resolution. Note that \(\mu_{\infty}^{-1}(\mathcal{X}_{m,E}^{\infty})\) is the set of arcs \(\tilde{\gamma}(t)\) in the \((x_{g^{E}}^{E},y_{g^{E}}^{E})\) plane such that
\[x_{g^{E}}^{E}(t)=\alpha t^{m_{E}}+O(t^{m_{E}+1}),\quad y_{g^{E}}^{E}(t)=\beta +O(t)\quad\text{and}\quad\operatorname{ac}(f\circ\mu)(t)=1.\]
Since \(E\) is in a trunk, \(E\) is not a rupture component. Therefore any \(\tilde{C}^{[j]}\) lies either to the left or to the right of \(E\). In this situation, Lemma 3.6 says that
\[\operatorname{ac}(f\circ\mu)(t)=\prod_{j=1}^{b}\operatorname{ac}(f^{[j]}\circ\mu)(t)=\alpha^{N_{E}}\beta^{N_{F}},\]
where \(F\) is the divisor between \(L_{g^{E}-1}^{E}\) and \(L_{g^{E}-1}^{\prime E}\) corresponding to the pair \((b_{g^{E}}^{E},a_{g^{E}}^{E})\). In the language of the proof of Proposition 2.4, the divisor \(E\) corresponds to a certain sequence \((\mu_{1},\ldots,\mu_{n})\) in the Stern-Brocot tree, which indicates the sequence of blow-ups making \(E\) appear starting from \(L_{g^{E}-1}^{E}\) and \(L_{g^{E}-1}^{\prime E}\). The proof of Proposition 2.4 shows that \(F\) is then the divisor corresponding to the sequence \((\mu_{1},\ldots,\mu_{n-1})\). In particular, \(E\) is the exceptional divisor of the blow-up of the intersection of \(F\) and another divisor. In other words, \(E\) and \(F\) have been adjacent at some point of the resolution process. Therefore \(d(\mathscr{T}_{R,S})=\gcd(N_{E},N_{F})\). Note that the equation \(\alpha^{N_{E}}\beta^{N_{F}}=1\) defines \(\gcd(N_{E},N_{F})\) disjoint copies of \(\mathbb{C}^{\times}\).
Now we have to bring this topology down from arcs to jets, and from the resolution to \(\mathbb{C}^{2}\). The second proof of [4, Lemma 3.1] shows that, provided we look at jets with constant order of contact with an exceptional divisor, the map induced by a single blow-up on jets is just the truncation of some of the higher-order variables of one of the components. Since all \(\tilde{\gamma}\in\mu_{\infty}^{-1}(\mathcal{X}_{m,E}^{\infty})\) intersect \(E\) with order \(m_{E}\), we may apply this result for all blow-ups of the resolution. Note that the only coefficients of \(\tilde{\gamma}\) that have to satisfy an equation are the initial ones, \(\alpha\) and \(\beta\), which never get truncated. The coefficients of the higher order terms are free.
More precisely, denoting by \(\mu_{m}:\mathscr{L}_{m}(Y)\to\mathscr{L}_{m}(\mathbb{C}^{2})\) the map induced by the resolution on jets, _loc. cit._ says that \(\mu_{m}^{-1}(\mathcal{X}_{m,E})=\pi_{m}(\mu_{\infty}^{-1}(\mathcal{X}_{m,E}^{\infty}))\), and that the map \(\mu_{m}\) just truncates some of the free variables. Therefore \(\mathcal{X}_{m,E}\) is isomorphic to \(\sqcup^{d(\mathscr{T}_{R,S})}\mathbb{C}^{\times}\times\mathbb{C}^{M}\) for some \(M\in\mathbb{N}\). Now we just need to know the dimension of \(\mathcal{X}_{m,E}\), which can be found in [4, Lemma 2.6]. We conclude that \(M=2(m+1)-m_{E}\nu_{E}-2\).
**Theorem 3.8**.: _Let \(R\in\mathscr{R}\) be a rupture divisor._
1. _If_ \(R\) _is an_ \(m\)_-divisor, then_ \[\mathfrak{Z}_{m,R}\cong\bigsqcup^{d(R)}T\times\mathbb{C}^{2m-m_{R}\nu_{R}},\] _where_ \(T\) _is a compact Riemann surface with_ \(\sum_{S\in\mathscr{N}_{R}}d(\mathscr{T}_{R,S})\) _points removed and Euler characteristic_ \[\chi(T)=\begin{cases}\frac{1}{d(R)}N_{R}(2-\operatorname{val}(R))&\text{if $R$ has no end attached},\\ \frac{1}{d(R)}(N_{R}(2-\operatorname{val}(R))+d(\mathscr{F}_{R}))&\text{if $R$ has one end attached},\\ \frac{1}{d(R)}(N_{R}(2-\operatorname{val}(R))+d(\mathscr{F}_{R,1})+d(\mathscr{F}_{R,2}))&\text{if $R$ has two ends attached}.\end{cases}\]
2. _If_ \(R\) _is not an_ \(m\)_-divisor but there are_ \(m\)_-divisors in_ \(\mathscr{F}_{R}\)_, then_ \[\mathfrak{Z}_{m,R}\cong\bigsqcup^{d(\mathscr{F}_{R})}\mathbb{C}^{2m-m_{E_{\operatorname{dom}}(m,R)}\nu_{E_{\operatorname{dom}}(m,R)}}.\] _In the exceptional case in which_ \(R\) _has two ends attached, the disjoint union is taken over_ \(d(\mathscr{F}_{R,i})\)_, where_ \(i=1,2\) _is such that_ \(\mathscr{F}_{R,i}\) _is the end containing_ \(E_{\operatorname{dom}}(m,R)\)_._
3. _Otherwise, if there are no_ \(m\)_-divisors in_ \(\mathscr{F}_{R}\)_, then_ \(\mathfrak{Z}_{m,R}=\varnothing\)_._
Proof.: By the same arguments as in the proof of the previous Theorem 3.7, for each \(E\in\mathscr{F}_{R}\) the leading term \(\operatorname{ac}(f\circ\mu)(t)\) is a polynomial in \(\alpha,\beta\) that we can compute using Lemma 3.6, and the space \(\mathcal{X}_{m,E}\) is isomorphic to \(\{(\alpha,\beta)\in\mathbb{C}^{2}\mid\operatorname{ac}(f\circ\mu)(t)=1\}\times\mathbb{C}^{M}\). The number \(M\) can be computed using [4, Lemma 2.6], so all we need to do is study the curve given by \(\{\operatorname{ac}(f\circ\mu)(t)=1\}\).
Since the computations for \(R\) in \(\mathscr{R}_{0}\) and \(\mathscr{R}_{1}\) are completely analogous, let us focus on the case \(R\in\mathscr{R}_{0}\). By Lemma 3.6,
\[\operatorname{ac}(f\circ\mu)(t)=\alpha^{N_{R}}\beta^{N_{F}}\prod_{u\in U^{R}}(z_{u}+\beta)^{\sum_{j\in\mathscr{S}_{u}^{R}}r_{\omega+2}^{[j]}},\]
where \(F\) is the divisor between \(L^{R}_{g^{R}-1}\) and \(L^{\prime R}_{g^{R}-1}\) corresponding to the pair \((b^{R}_{g^{R}},a^{R}_{g^{R}})\). The curve \(\{\operatorname{ac}(f\circ\mu)(t)=1\}\) is smooth (which can be checked by differentiating with respect to \(\alpha\)) and has as many connected components as the greatest common divisor of the exponents in the equation above, which we compute now.
For each \(u\in U^{R}\) and each \(j\in\mathscr{S}_{u}^{R}\), consider the divisor \(E_{(\kappa,r)}\) between \(L^{[j]}_{\omega+1}\) and \(L^{\prime[j]}_{\omega+1}=R\). Using Proposition 2.17 we get the formula
\[N_{E_{(\kappa,r)}}^{[j]}=rN_{R}^{[j]}+\kappa r_{\omega+2}^{[j]}.\]
The divisor in \(u\) which is adjacent to \(R\), call it \(E_{u}\), necessarily has \(\kappa_{u}=1\) in its corresponding coprime pair \((\kappa_{u},r_{u})\). In particular it is the same divisor for all \(j\in\mathscr{S}_{u}^{R}\). Since for \(j\not\in\mathscr{S}_{u}^{R}\) we have \(N_{E_{(\kappa,r)}}^{[j]}=rN_{R}^{[j]}\), we conclude
\[N_{E_{u}}=r_{u}N_{R}+\sum_{j\in\mathscr{S}_{u}^{R}}r_{\omega+2}^{[j]},\]
and therefore \(\gcd(N_{R},\sum_{j\in\mathscr{S}_{u}^{R}}r_{\omega+2}^{[j]})=\gcd(N_{R},N_{E_{u}})=d(\mathscr{T}_{R,S})\), where \(S\in\mathscr{N}_{R}\) is the neighbor of \(R\) living in the component \(u\in U^{R}\). On the other hand, just like in the proof of Theorem 3.7 we have that \(F\) is adjacent to \(R\) (although we don't know whether \(F\) lies in the left or in the right component of \(\Gamma\setminus R\)). Therefore the gcd of the exponents in \(\operatorname{ac}(f\circ\mu)(t)\) is equal to the gcd of \(N_{R}\) and all the \(N_{E}\), where \(E\) runs through all divisors adjacent to \(R\), _except for one_. Nevertheless, the sum of all \(N_{E}\), where \(E\) runs through all divisors adjacent to \(R\), is a multiple of \(N_{R}\). Hence removing one of the \(N_{E}\) from the
computation of the gcd does not change the result, and we conclude that \(\{\operatorname{ac}(f\circ\mu)(t)=1\}\) has \(d(R)\) connected components.
The equation of each connected component is simply
\[\alpha^{N_{R}/d(R)}\beta^{N_{F}/d(R)}\prod_{u\in U^{R}}(z_{u}+\beta)^{\left(\sum_{j\in\mathscr{S}_{u}^{R}}r_{\omega+2}^{[j]}\right)/d(R)}=\zeta,\]
where \(\zeta\) runs over the \(d(R)\)-th roots of unity. Projecting to the \(\beta\) coordinate we obtain a covering of degree \(N_{R}/d(R)\) over \(\mathbb{C}\setminus(\{0\}\cup\{z_{u}\mid u\in U^{R}\})\), i.e. over \(\mathbb{P}^{1}\) with \(\operatorname{val}(R)\) points removed, from which we can obtain the Euler characteristic using the Riemann-Hurwitz formula.
The ramification index at the point \(z_{u}\) is the greatest common divisor of the exponents of \(\alpha\) and \((z_{u}+\beta)\), which we have already shown to be \(d(\mathscr{T}_{R,S})/d(R)\), where \(S\in\mathscr{N}_{R}\) is the neighbor of \(R\) living in the component \(u\in U^{R}\). The ramification index at \(0\) is the greatest common divisor of \(N_{R}/d(R)\) and \(N_{F}/d(R)\), which is \(1/d(R)\) times the greatest common divisor associated to the part of the resolution graph (trunk or end) where \(F\) lives. Finally, the ramification index at infinity is the greatest common divisor of \(N_{R}/d(R)\) and the sum of all other exponents, which is \(1/d(R)\) times the greatest common divisor associated to the part of the resolution graph _opposite_ to where \(F\) lies (once again, this is because the sum of all \(N_{E}\) where \(E\) runs through the divisors adjacent to \(R\) is a multiple of \(N_{R}\)). This way we have counted the points that \(\{\operatorname{ac}(f\circ\mu)(t)=1\}\) is missing in order to be compact.
Recall from Corollary 3.5 that \(\mathfrak{Z}_{m,R}=\overline{\mathcal{X}_{m,E_{\operatorname{dom}}(m,R)}}\). The effect of this closure on the topology is to "fill in" the missing points of \(\{\operatorname{ac}(f\circ\mu)(t)=1\}\) that lie above the point which corresponds to the intersection with the end \(\mathscr{F}_{R}\). This can be seen for instance by repeating the computation of Lemma 3.6 but considering the coefficients in the \((x_{g^{R}-1}^{R},y_{g^{R}-1}^{R})\) coordinates. From here the result follows for \(R\in\mathscr{R}_{0}\). The computation for \(R\in\mathscr{R}_{1}\) is totally analogous.
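Let us illustrate Theorem 3.8 with the cusp and \(m=6\). The rupture divisor \(R=E_{(3,2)}\) is a \(6\)-divisor with \(N_{R}=6\), \(m_{R}=1\), \(\nu_{R}=5\), \(d(R)=1\), \(\operatorname{val}(R)=3\) and two ends attached, so

\[\chi(T)=6(2-3)+\gcd(6,2)+\gcd(6,3)=-1,\]

and \(T\) has \(d(\mathscr{T}_{R,\tilde{C}})=1\) point removed; hence \(T\) is a once-punctured torus and \(\mathfrak{Z}_{6,R}\cong T\times\mathbb{C}^{2\cdot 6-5}=T\times\mathbb{C}^{7}\). Since this is the only nonempty piece of the decomposition, \(\mathcal{X}_{6}\cong T\times\mathbb{C}^{7}\), in agreement with the affine curve \(b^{2}-a^{3}=1\) found after Definition 3.2.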
## 4. Floer homology of the Milnor fibration
In this section assume \(f\) is a reduced convergent power series. As explained in the Introduction, the Milnor fiber \(\mathbb{F}\) is naturally a Liouville domain, and moreover the Milnor fibration admits a monodromy \(\varphi:\mathbb{F}\to\mathbb{F}\) so that \((\mathbb{F},\lambda,\varphi)\) is a graded abstract contact open book. The goal of this section is to compute the Floer homology groups \(\operatorname{HF}_{*}(\varphi^{m},+)\) for every \(m\geq 1\).
The main tools we use to do so are the symplectic monodromy at radius zero of Fernandez de Bobadilla and Pelka [11], which enhances A'Campo's model of the Milnor fiber [1] with a symplectic structure and a symplectic monodromy with good dynamical properties; and the McLean spectral sequence [14, §4], [11, Proposition 6.3], which makes use of those dynamical properties to compute the Floer homology.
### The A'Campo representative of the monodromy
We begin by recalling A'Campo's description of the monodromy in terms of an embedded resolution of singularities [1] in the case of a plane curve, see also [21, Ch. 9]. Fix a log resolution \(\mu:(Y,\mathbf{E})\to(\mathbb{C}^{2},0)\) of \(f\), which we will later on assume to be \(m\)-separating, see Section 2. Recall that \(\mathscr{E}\) is the set of irreducible components of \(\mathbf{E}\), and \(\mathcal{S}\) is the set of irreducible components of the strict transform \(\tilde{C}=\mu_{*}^{-1}(C)\). The Milnor fiber \(\mathbb{F}\) admits a decomposition
\[\mathbb{F}=\bigcup_{E\in\mathscr{E}\cup\mathscr{S}}\mathbb{F}_{E}\cup\bigcup_{E,F\in\mathscr{E}\cup\mathscr{S}}\mathbb{F}_{E,F}, \tag{4.1}\]
where the pieces of the decomposition are described as follows.
For each \(E\in\mathscr{E}\cup\mathscr{S}\) there is a branched covering \(\tilde{E}\to E\) of degree \(N_{E}\) whose branch locus is contained in the set of points of intersection of \(E\) with other divisors in \(\mathscr{E}\cup\mathscr{S}\), i.e. \(E\setminus E^{\circ}\) where \(E^{\circ}\coloneqq E\setminus\cup_{E\neq F\in\mathscr{E}\cup\mathscr{S}}F\), see [7, Lemma 3.2.3]. The point \(E\cap F\) has \(N_{E}/\gcd(N_{E},N_{F})\) preimages. The set \(\mathbb{F}_{E}\subset\mathbb{F}\) is the real oriented blow-up of \(\tilde{E}\) at the preimages of the points in \(E\setminus E^{\circ}\), i.e. \(\mathbb{F}_{E}\) is the result of replacing each point of \(\tilde{E}\) lying over \(E\setminus E^{\circ}\) by a copy of \(\mathbb{S}^{1}\).
It follows that for any \(E,F\in\mathscr{E}\cup\mathscr{S}\) such that \(E\cap F\neq\varnothing\), the spaces \(\mathbb{F}_{E}\) and \(\mathbb{F}_{F}\) have (possibly many) boundary components above this point, each of them isomorphic to \(\mathbb{S}^{1}\). The set \(\mathbb{F}_{E,F}\) is a disjoint union of cylinders (i.e. copies of \([0,1]\times\mathbb{S}^{1}\)), each connecting one of the boundary components of \(\mathbb{F}_{E}\) with one of the boundary components of \(\mathbb{F}_{F}\).
For a proof of the decomposition (4.1) see [1] or [11]. Figure 2 shows a schematic picture of the decomposition for the Milnor fiber of an irreducible plane curve. The following result gives the topology of each piece of the decomposition, using the description of the dual graph given in Definition 2.8.
**Proposition 4.2**.: _The topology of each \(\mathbb{F}_{E}\) can be described as follows:_
1. _For every_ \(j=1,\ldots,b\)_, the space_ \(\mathbb{F}_{\tilde{C}^{[j]}}\) _is a cylinder._
2. _If_ \(E\in\mathscr{E}\) _has valency 1, then it belongs to some end_ \(\mathscr{F}_{R}\) _(resp._ \(\mathscr{F}_{R,i}\) _in the exceptional case, see Definition_ 2.8_), and_ \(\mathbb{F}_{E}\) _is a disjoint union of_ \(d(\mathscr{F}_{R})\) _(resp._ \(d(\mathscr{F}_{R,i})\)_) disks._
3. _If_ \(E\) _has valency 2, then it belongs either to a trunk_ \(\mathscr{T}_{R,S}\) _or an end_ \(\mathscr{F}_{R}\) _(or_ \(\mathscr{F}_{R,i}\) _in the exceptional case), and_ \(\mathbb{F}_{E}\) _is a disjoint union of_ \(d(\mathscr{T}_{R,S})\) _or_ \(d(\mathscr{F}_{R})\) _(or_ \(d(\mathscr{F}_{R,i})\)_) cylinders, respectively._
4. _If_ \(E\) _has valency_ \(\geq 3\)_, then it is a rupture component_ \(E=R\in\mathscr{R}\)_, and_ \(\mathbb{F}_{R}\) _is a disjoint union of_ \(d(R)\) _compact orientable surfaces, each of them with Euler characteristic_ \(\frac{N_{R}}{d(R)}(2-\operatorname{val}(R))\) _and_ \(d(\mathscr{F}_{R})+\sum_{S\in\mathscr{N}_{R}}d(\mathscr{T}_{R,S})\) _boundary components (if_ \(R\) _has no end attached to it, then_ \(d(\mathscr{F}_{R})=0\)_; and in the exceptional case where_ \(R\) _has two ends attached to it, the term_ \(d(\mathscr{F}_{R})\) _is replaced by_ \(d(\mathscr{F}_{R,1})+d(\mathscr{F}_{R,2})\) _in the formula for the boundary components)._
Proof.: Note that each strict transform \(\tilde{C}^{[j]}\) is homeomorphic to a closed disk, as it is the intersection of a smooth curve with a closed ball. Therefore \(\mathbb{F}_{\tilde{C}^{[j]}}\) is the real oriented blow-up of a covering of \((\tilde{C}^{[j]})^{\circ}\) of degree \(N_{\tilde{C}^{[j]}}=1\), and hence it is topologically a cylinder.
If \(E\) has valency 1, then \(E^{\circ}\) is isomorphic to \(\mathbb{P}^{1}\setminus\{\text{1 point}\}\cong\mathbb{C}\), which is simply connected. Hence the covering space \(\tilde{E}\) is a disjoint union of copies of \(E^{\circ}\), and the real oriented blow-up replaces each missing point by a copy of \(\mathbb{S}^{1}\). Thus \(\mathbb{F}_{E}\) is homeomorphic to a disjoint union of disks.
If \(E\) has valency 2, then \(E^{\circ}\) is isomorphic to \(\mathbb{P}^{1}\setminus\{\text{2 points}\}\cong\mathbb{C}^{\times}\), whose connected covering spaces of finite degree are once again copies of \(\mathbb{C}^{\times}\) (given by \(z\mapsto z^{d}\)). After attaching an \(\mathbb{S}^{1}\) to each of the missing points, we obtain that \(\mathbb{F}_{E}\) is a disjoint union of cylinders.
In both cases we can find out the number of connected components of \(\mathbb{F}_{E}\) by counting the boundary components. Indeed, when \(\mathbb{F}_{E}\) is a union of disks there are as many connected components as boundary components; and when \(\mathbb{F}_{E}\) is a union of cylinders, there is a connected component for every two boundary components. The number of boundary components is in turn equal to the number of connected components of \(\mathbb{F}_{E,F}\), where \(F\) ranges over the divisors intersecting \(E\) (of which there are only one or two). This is because every boundary component has a connecting cylinder attached to it. Note that
in the case where \(E\) meets two divisors \(F,F^{\prime}\), both \(\mathbb{F}_{E,F}\) and \(\mathbb{F}_{E,F^{\prime}}\) will necessarily have the same number of components, since cylinders have precisely one boundary component on each side. Hence, regardless of whether \(E\) has valency \(1\) or \(2\), the number of connected components of \(\mathbb{F}_{E}\) is precisely the number of connected components of \(\mathbb{F}_{E,F}\) for any divisor \(F\) with \(E\cap F\neq\varnothing\).
Let \((z_{E},z_{F})\) be local coordinates around the intersection point of \(E\) and \(F\) in which \(f(z_{E},z_{F})=uz_{E}^{N_{E}}z_{F}^{N_{F}}\), where \(u\) is a nonvanishing function. Passing to polar coordinates \(z_{E}=r_{E}e^{2\pi i\theta_{E}}\), \(z_{F}=r_{F}e^{2\pi i\theta_{F}}\), the equation \(f(z_{E},z_{F})=\delta\) that defines \(\mathbb{F}\) becomes
\[|u|r_{E}^{N_{E}}r_{F}^{N_{F}}=\delta\in\mathbb{R},\quad\arg(u)+N_{E}\theta_{E} +N_{F}\theta_{F}=0\in\mathbb{R}/\mathbb{Z}.\]
For a small enough neighborhood \(U\), the value of \(u\) does not wind around the origin, so the number of connected components of \(U\cap\{f=\delta\}\) is \(\gcd(N_{E},N_{F})\). From here one can conclude that \(\mathbb{F}_{E,F}\) has \(\gcd(N_{E},N_{F})\) connected components. Note that this, together with the discussion from the previous paragraph, explains why \(d(\mathscr{F}_{R,S})\) and \(d(\mathscr{F}_{R})\) do not depend on the choice of divisors used to take the greatest common divisor, which had been left open in Definition 2.9. Of course, one could also check it numerically using for example Proposition 2.17, but this topological argument suffices.
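One way to organize this count (our own elaboration, not needed in the sequel): in the angular coordinates, the second equation cuts out a translate of the kernel \(K\) of the surjective homomorphism \((\theta_{E},\theta_{F})\mapsto N_{E}\theta_{E}+N_{F}\theta_{F}\) from \((\mathbb{R}/\mathbb{Z})^{2}\) to \(\mathbb{R}/\mathbb{Z}\). The exact sequence

\[\pi_{1}\big((\mathbb{R}/\mathbb{Z})^{2}\big)\xrightarrow{(p,q)\,\mapsto\,N_{E}p+N_{F}q}\pi_{1}(\mathbb{R}/\mathbb{Z})\longrightarrow\pi_{0}(K)\longrightarrow 0,\]

together with \(N_{E}\mathbb{Z}+N_{F}\mathbb{Z}=\gcd(N_{E},N_{F})\mathbb{Z}\), gives \(\pi_{0}(K)\cong\mathbb{Z}/\gcd(N_{E},N_{F})\mathbb{Z}\), i.e. \(K\) is a disjoint union of \(\gcd(N_{E},N_{F})\) circles, one for each connected component of \(\mathbb{F}_{E,F}\).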
If \(E\) has valency \(\geq 3\), then \(E=R\in\mathscr{R}\) is a rupture divisor. In this case \(R^{\circ}\cong\mathbb{P}^{1}\setminus\{\text{val}(R)\text{ points}\}\), and the same study in local coordinates as above shows that the covering \(\tilde{R}\to R^{\circ}\) has \(d(R)=\gcd(N_{R},\{N_{F}\mid F\in\mathscr{E}\cup\mathscr{S}\text{ adjacent to }R\})\) connected components. Each connected component gives a covering of degree \(N_{R}/d(R)\), which proves the formula for the Euler characteristic, and the ramification index at the point of intersection \(R\cap F\) is \(\gcd(N_{R},N_{F})\), which by definition takes the value \(d(\mathscr{F}_{R,S})\) or \(d(\mathscr{F}_{R})\) or \(d(\mathscr{F}_{R,i})\) depending on whether \(F\) lies in a trunk or an end.
Figure 2. Schematic picture of the partition of the Milnor fiber of an irreducible curve according to (4.1).

The monodromy iterates \(\varphi^{m}:\mathbb{F}\to\mathbb{F}\) act on each \(\mathbb{F}_{E}\) as the covering transformations. That is, locally around each intersection \(E\cap F\), the maps \(\varphi^{m},\ m\in\mathbb{Z}_{\geq 0}\), cyclically permute the boundary components of \(\mathbb{F}_{E}\) that intersect \(\mathbb{F}_{E,F}\). When \(\gcd(N_{E},N_{F})\) divides \(m\), each boundary component is mapped to itself rotated by a certain angle. The \(m\)-th iterate of the monodromy action induces a cyclic permutation on the set of connecting tubes \(\mathbb{F}_{E,F}\), and when \(\gcd(N_{E},N_{F})\) divides \(m\) each tube is mapped to itself with a twist that interpolates between the rotations of the boundary components. More precisely, for each component \(K\) of \(\mathbb{F}_{E,F}\) there is a parametrization \((v,\theta):K\to[0,1]\times\mathbb{S}^{1}\) such that \(v^{-1}(0)=K\cap\mathbb{F}_{E}\), \(v^{-1}(1)=K\cap\mathbb{F}_{F}\) and such that, if \(\gcd(N_{E},N_{F})\) divides \(m\), then the map \(\varphi^{m}\) in these coordinates is
\[(v,\theta)\mapsto\left(v,\theta+\frac{ma_{F}}{N_{E}}(1-v)-\frac{ma_{E}}{N_{F}} v\right),\]
where \(a_{E},a_{F}\) are the coefficients in the Bézout identity \(\gcd(N_{E},N_{F})=a_{E}N_{E}+a_{F}N_{F}\), see [11, Ex. 4.16], [11, Figure 4(b)], [11, Eq. (123)].
In [11], Fernández de Bobadilla and Pełka constructed a symplectic representative of A'Campo's monodromy. More specifically, they constructed a smooth manifold with boundary that they call the _A'Campo space_ \(A\), and a smooth map \(f_{A}:A\to\mathbb{C}_{\log}\) such that the restriction \(f_{A}|_{\partial A}:\partial A\to\partial\mathbb{C}_{\log}\cong\mathbb{S}^{1}\) is a locally trivial fibration isomorphic to the Milnor fibration in the tube. Here \(\mathbb{C}_{\log}\cong\mathbb{R}_{\geq 0}\times\mathbb{S}^{1}\) is the real oriented blow-up of \(\mathbb{C}\) at the origin. The space \(A\) is endowed with a smooth \(1\)-form \(\lambda_{A}\) that makes \(f_{A}|_{\partial A}\) a Liouville fibration, and from there one obtains a graded abstract contact open book \((\mathbb{F},\lambda,\varphi)\) (in particular \(\varphi:\mathbb{F}\to\mathbb{F}\) is exact with respect to \(\lambda\)) isotopic to the one associated to the Milnor fibration in the tube, see [11, §7.2].
The dynamical properties of \((\mathbb{F},\lambda,\varphi)\) are the same as the dynamical properties of A'Campo's classical description of the monodromy which we have just recalled in the case of a plane curve. More specifically, there is a smooth map \(\pi_{A}:A\to Y\) and the pieces in decomposition (4.1) are realized as
\[\mathbb{F}_{E}=\overline{\pi_{A}^{-1}(E^{\circ})}\cap f_{A}^{-1}(0,0),\quad \mathbb{F}_{E,F}=\pi_{A}^{-1}(E\cap F)\cap f_{A}^{-1}(0,0).\]
**Proposition 4.3** ([11, Prop 7.2]).: _Let \(m\geq 1\) be an integer and let \(\mu:(Y,\mathbf{E})\to(\mathbb{C}^{2},0)\) be an \(m\)-separating log resolution of \(f\). Then, the fixed points of \(\varphi^{m}\) decompose as_
\[\operatorname{Fix}(\varphi^{m})=\bigsqcup_{\begin{subarray}{c}E\in\mathscr{E}\cup\mathscr{S}\\ N_{E}|m\end{subarray}}\mathbb{F}_{E},\]
_and each \(\mathbb{F}_{E}\) is a disjoint union of codimension zero families of fixed points of \(\varphi^{m}\). These sets satisfy the following properties:_
1. _Close to any point in a boundary component of_ \(\mathbb{F}_{E}\)_, the monodromy_ \(\varphi^{m}\) _is the time-one flow associated to a time-independent non-negative Hamiltonian. In the notation of_ _[_11_, Eq. (95)]__, this means_ \(\partial^{-}\mathbb{F}_{E}=\varnothing\) _for all_ \(E\in\mathscr{E}\cup\mathscr{S}\)_._
2. _The Conley-Zehnder index of each point in_ \(\mathbb{F}_{E}\) _equals_ \(\operatorname{CZ}(\mathbb{F}_{E})=2m(\frac{\nu_{E}}{N_{E}}-1)\)_._
### Deforming the monodromy
After endowing the A'Campo monodromy with a symplectic structure, Fernández de Bobadilla and Pełka use the McLean spectral sequence to deduce some properties of its Floer homology. This is good enough for their purposes, but not for the explicit computation that we want to carry out. Indeed, it will follow from Theorem 1.1 that the McLean spectral sequence associated to the graded abstract contact open book \((\mathbb{F},\lambda,\varphi^{m})\) does not degenerate at the first page in general. Our strategy is to deform \(\varphi^{m}\) to obtain an isotopic abstract contact open book (which will therefore have the same Floer homology) whose associated McLean spectral sequence does degenerate at the first page.
Recall from Definition 2.8 that, given a rupture divisor \(R\in\mathscr{R}\), if there are any \(m\)-divisors in \(\mathscr{F}_{R}\) then there is one which is closest to \(R\), and we denote it by \(E_{\operatorname{dom}}(m,R)\).
Let \(B_{m,R}\) be the piece of the Milnor fiber corresponding to divisors in \(\mathscr{F}_{R}\) which are "at least as far away" from \(R\) as \(E_{\mathrm{dom}}(m,R)\), that is
\[B_{m,R}\coloneqq\bigcup_{E\in\mathscr{F}_{m,R}}\mathbb{F}_{E}\cup\bigcup_{E,F \in\mathscr{F}_{m,R}}\mathbb{F}_{E,F}.\]
We may describe the topology of \(B_{m,R}\) in terms of Proposition 4.2. Indeed, if there are no \(m\)-divisors in \(\mathscr{F}_{R}\), then \(B_{m,R}=\varnothing\). If there are \(m\)-divisors in \(\mathscr{F}_{R}\) but \(R\) itself is not an \(m\)-divisor, then \(E_{\mathrm{dom}}(m,R)\) has valency \(2\) and each connected component of \(B_{m,R}\) is a finite chain of cylinders with a disk attached at the end, corresponding to the final divisor of valency \(1\). In other words, \(B_{m,R}\) is a disjoint union of \(d(\mathscr{F}_{R})\) closed disks (or \(d(\mathscr{F}_{R,i})\) disks in the exceptional case where \(R\) has two ends attached to it, where \(\mathscr{F}_{R,i}\) is the end that contains \(E_{\mathrm{dom}}(m,R)\)). Finally, if \(R\) is an \(m\)-divisor then \(B_{m,R}\) is the result of capping \(d(\mathscr{F}_{R})\) boundary components of \(\mathbb{F}_{R}\) with the disks coming from the divisors in the end (or \(0\) components if \(R\) has no ends attached, or \(d(\mathscr{F}_{R,1})+d(\mathscr{F}_{R,2})\) components in the exceptional case). From there one easily computes the number of boundary components and the Euler characteristic of each connected component starting from the analogous information about \(\mathbb{F}_{R}\).
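For instance, in the case where \(R\) is an \(m\)-divisor with one end attached (our own spelled-out version of this computation): capping a boundary circle with a disk raises the Euler characteristic by one, and the \(d(\mathscr{F}_{R})\) capped boundary circles are distributed evenly among the \(d(R)\) components of \(\mathbb{F}_{R}\) (the covering group permutes the components transitively), so by Proposition 4.2 each component of \(B_{m,R}\) has

\[\chi=\frac{N_{R}}{d(R)}\big(2-\operatorname{val}(R)\big)+\frac{d(\mathscr{F}_{R})}{d(R)}=\frac{1}{d(R)}\big(N_{R}(2-\operatorname{val}(R))+d(\mathscr{F}_{R})\big),\]

matching the middle case recorded in Proposition 4.5(iii) below.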
The restriction of \(\varphi^{m}\) to \(B_{m,R}\) is isotopic to the identity. Indeed, if \(B_{m,R}\neq\varnothing\) then the set \(\mathbb{F}_{E_{\mathrm{dom}}(m,R)}\) is a neighborhood of the boundary of \(B_{m,R}\) such that \(\varphi^{m}|_{\mathbb{F}_{E_{\mathrm{dom}}(m,R)}}=\mathrm{id}\) and \(B_{m,R}\setminus\mathbb{F}_{E_{\mathrm{dom}}(m,R)}\) is homeomorphic to a disjoint union of disks. Since the restriction of \(\varphi^{m}\) to each of the connected components of \(B_{m,R}\setminus\mathbb{F}_{E_{\mathrm{dom}}(m,R)}\) is a diffeomorphism of the \(2\)-disk that fixes the boundary, it is smoothly isotopic to the identity by a theorem of Smale, see [19]. In fact the following result shows that the smooth isotopy \(\psi_{t}\) between the identity and \(\varphi^{m}\) may be chosen to be an exact symplectomorphism for all \(t\).
**Lemma 4.4**.: _Let \((M,\lambda)\) be a \(2\)-dimensional Liouville domain. Then the space of compactly supported exact symplectomorphisms (see [11, Definition 5.2]), denoted \(\mathrm{Exact}_{c}(M,\lambda)\), is a deformation retract of the space of orientation-preserving compactly supported diffeomorphisms, denoted \(\mathrm{Diff}_{+,c}(M)\)._
Proof.: First we show a version of Moser's trick for compactly supported exact symplectomorphisms: Let \(\lambda_{t}\) be a family of Liouville forms on the surface \(M\) such that \(\lambda_{t}\) is independent of \(t\) in a neighborhood of the boundary \(\partial M\), and let \(\omega_{t}=d\lambda_{t}\). We can find a compactly supported isotopy \(\psi_{t}\) such that \(\psi_{t}^{*}\lambda_{t}-\lambda_{0}\) is an exact \(1\)-form for all \(t\).
Indeed, the isotopy \(\psi_{t}\) is going to be the flow of a family of vector fields \(X_{t}\), that is, \(\psi_{0}=\mathrm{id}\) and \(\frac{d}{dt}\psi_{t}=X_{t}\circ\psi_{t}\). Using \(\frac{d}{dt}\psi_{t}^{*}\alpha_{t}=\psi_{t}^{*}(\frac{d}{dt}\alpha_{t}+\mathcal{L}_{X_{t}}\alpha_{t})\) together with Cartan's formula \(\mathcal{L}_{X_{t}}=X_{t}\mathbin{\lrcorner}d+d\circ(X_{t}\mathbin{\lrcorner}\,\cdot\,)\), note that
\[\frac{d}{dt}(\psi_{t}^{*}\lambda_{t}-\lambda_{0})=\psi_{t}^{*}\left(\frac{d}{ dt}\lambda_{t}+X_{t}\mathbin{\lrcorner}d\lambda_{t}+d(X_{t}\mathbin{ \lrcorner}\lambda_{t})\right),\]
so choosing \(X_{t}\) to be the unique vector field such that \(\frac{d}{dt}\lambda_{t}+X_{t}\mathbin{\lrcorner}\omega_{t}=0\) makes the \(1\)-form \(\frac{d}{dt}(\psi_{t}^{*}\lambda_{t}-\lambda_{0})\) exact. Note that \(\frac{d}{dt}\lambda_{t}=0\) in a neighborhood of \(\partial M\), and hence \(X_{t}=0\) for all \(t\) in said neighborhood. This is necessary so that the vector fields can be integrated to a (compactly supported) isotopy \(\psi_{t}\). Since \(\psi_{t}^{*}\lambda_{t}-\lambda_{0}\) is obviously exact for \(t=0\) and exact forms are a linear subspace of \(\Omega^{1}(M)\), we conclude that \(\psi_{t}^{*}\lambda_{t}-\lambda_{0}\) is exact for all \(t\), as we wanted.
Now let us return to our Liouville surface \((M,\lambda)\) and let \(\varphi\) be any compactly supported orientation-preserving diffeomorphism of \(M\). Define \(\lambda_{t}\coloneqq(1-t)\lambda+t\varphi^{*}\lambda\), and notice that \(\lambda_{t}\) is independent of \(t\) in a neighborhood of \(\partial M\). Since \(\varphi\) is orientation preserving, \(\lambda_{t}\) is a Liouville form for all \(t\): the Liouville vector field stays pointing outwards, and the forms \(d\lambda_{t}\) are pointwise nondegenerate, since they interpolate between \(d\lambda\) and \(\varphi^{*}d\lambda\), which are area forms for the same orientation. Therefore by Moser's trick we obtain an isotopy \(\psi_{t}\) as above. Since \((\varphi\circ\psi_{1})^{*}\lambda-\lambda=\psi_{1}^{*}\lambda_{1}-\lambda_{0}\) is exact, we have found a path \(\varphi\circ\psi_{t}\) between \(\varphi\) and the exact symplectomorphism \(\varphi\circ\psi_{1}\). This shows that the space of compactly supported exact symplectomorphisms \(\operatorname{Exact}_{c}(M,\lambda)\) is a deformation retract of \(\operatorname{Diff}_{+,c}(M)\), the space of orientation-preserving compactly supported diffeomorphisms.
Applying this procedure to all ends \(\mathscr{F}_{R}\) (or \(\mathscr{F}_{R,i}\) in the exceptional case) we obtain an isotopy \(\psi_{t}:\mathbb{F}\to\mathbb{F}\) such that \(\psi_{0}=\varphi^{m}\), \(\psi_{t}\) is an exact symplectomorphism for all \(t\in[0,1]\), and \(\overline{\varphi^{m}}\coloneqq\psi_{1}\) has the following dynamical properties:
**Proposition 4.5**.: _The fixed points of \(\overline{\varphi^{m}}\) decompose as_
\[\operatorname{Fix}(\overline{\varphi^{m}})=\left(\bigsqcup_{R\in\mathscr{R}}B_{m,R}\right)\sqcup\left(\bigsqcup_{R,S\in\mathscr{R}\cup\mathscr{S}}\bigsqcup_{\begin{subarray}{c}E\in\mathscr{F}_{R,S}\\ N_{E}|m\end{subarray}}\mathbb{F}_{E}\right)\sqcup\left(\bigsqcup_{j=1}^{r}\mathbb{F}_{\tilde{C}^{[j]}}\right).\]
_and each \(B_{m,R},\mathbb{F}_{E},\mathbb{F}_{\tilde{C}^{[j]}}\) is a disjoint union of codimension zero families of fixed points of \(\overline{\varphi^{m}}\). These sets satisfy the following properties:_
1. _There is a neighborhood of any boundary component of_ \(B_{m,R}\)_,_ \(\mathbb{F}_{E}\) _or_ \(\mathbb{F}_{\tilde{C}^{[j]}}\) _in which the map_ \(\overline{\varphi^{m}}\) _is the time-one flow associated to a time-independent non-negative Hamiltonian._
2. _The Conley-Zehnder index at each family of fixed points equals_ \[\operatorname{CZ}(B_{m,R}) =2m\left(\frac{\nu_{E_{\operatorname{dom}}(m,R)}}{N_{E_{ \operatorname{dom}}(m,R)}}-1\right)\text{ for all }R\in\mathscr{R},\] \[\operatorname{CZ}(\mathbb{F}_{E}) =2m\left(\frac{\nu_{E}}{N_{E}}-1\right)\text{ for all }E\in \mathscr{E}\cup\mathscr{S}.\]
3. _The topology of these sets is the following:_ 1. _Each_ \(\mathbb{F}_{\tilde{C}^{[j]}}\) _is a cylinder, i.e. a copy of_ \(\mathbb{S}^{1}\times[0,1]\)_._ 2. _For each_ \(E\in\mathscr{F}_{R,S}\) _such that_ \(N_{E}\) _divides_ \(m\)_, the set_ \(\mathbb{F}_{E}\) _is a disjoint union of_ \(d(\mathscr{F}_{R,S})\) _cylinders._ 3. _For each_ \(R\in\mathscr{R}\)_, if there are no_ \(m\)_-divisors in_ \(\mathscr{F}_{R}\) _then_ \(B_{m,R}\) _is empty. If there are_ \(m\)_-divisors in_ \(\mathscr{F}_{R}\) _but_ \(N_{R}\) _does not divide_ \(m\)_, then_ \(B_{m,R}\) _is a disjoint union of_ \(d(\mathscr{F}_{R})\) _closed disks (or_ \(d(\mathscr{F}_{R,i})\) _disks in the exceptional case where_ \(R\) _has two ends attached to it, where_ \(\mathscr{F}_{R,i}\) _is the end that contains_ \(E_{\operatorname{dom}}(m,R)\)_). Finally, if_ \(R\) _is an_ \(m\)_-divisor, then_ \(B_{m,R}\) _is the disjoint union of_ \(d(R)\) _compact orientable surfaces, each with_ \(\sum_{S\in\mathscr{A}_{R}}d(\mathscr{F}_{R,S})\) _boundary components and Euler characteristic_ \(\chi\)_, where_ \[\chi=\begin{cases}\frac{1}{d(R)}N_{R}(2-\operatorname{val}(R))&\text{if $R$ has no end attached},\\ \frac{1}{d(R)}(N_{R}(2-\operatorname{val}(R))+d(\mathscr{F}_{R}))&\text{if $R$ has one end attached},\\ \frac{1}{d(R)}(N_{R}(2-\operatorname{val}(R))+d(\mathscr{F}_{R,1})+d( \mathscr{F}_{R,2}))&\text{if $R$ has two ends attached}.\end{cases}\]
Proof.: The decomposition of the fixed points follows from the construction. Items (i) and (ii) follow from Proposition 4.3, since the Conley-Zehnder index is locally constant in codimension \(0\) families of fixed points and there are points in \(\mathbb{F}_{E_{\operatorname{dom}}(m,R)}\subset B_{m,R}\) which have not been altered by the deformation. Item (iii) just collects the description of the \(B_{m,R}\) made in the construction above.
The family \(\{(\mathbb{F},\lambda_{\operatorname{std}},\psi_{t})\}_{t\in[0,1]}\) is an isotopy of abstract contact open books, see [11, Definition 5.2] (it follows from the proof of Lemma 4.4 that \(\psi_{t}\) depends smoothly on
\(t\)). Moreover, the isotopy can be used to transport the grading of \((\mathbb{F},\lambda_{\operatorname{std}},\varphi^{m})\) to every \((\mathbb{F},\lambda_{\operatorname{std}},\psi_{t})\), so that the isotopy becomes an isotopy of graded abstract contact open books. By [11, Proposition 6.2] the Floer homologies of both graded abstract contact open books coincide:
\[\operatorname{HF}_{\bullet}(\varphi^{m},+)\simeq\operatorname{HF}_{\bullet}(\overline{\varphi^{m}},+).\]
### Degeneration of the McLean spectral sequence
McLean [14, Appendix C] constructed a spectral sequence that converges to the Floer cohomology of a graded abstract contact open book with good dynamical properties. In this paper, we use a slightly more general version of it that appeared in [11] and converges to the Floer homology. Once again, for the definitions and conventions on Floer homology used in this section, we refer the reader to [11, §6].
**Proposition 4.6** ([11, Proposition 6.3]).: _Let \((M,\lambda,\phi)\) be a graded abstract contact open book, \(\omega:M\to\mathbb{R}\) an associated action and \(\dim M=2n\). Assume that_
\[\operatorname{Fix}\phi=\bigsqcup\mathscr{B},\]
_where each \(B\in\mathscr{B}\) is a codimension zero family of fixed points such that \(\partial^{+}B=\varnothing\) or \(\partial^{-}B=\varnothing\). Pick a map \(\iota:\mathscr{B}\to\mathbb{Z}\) such that:_
* _if_ \(\omega(B)=\omega(B^{\prime})\)_, then_ \(\iota(B)=\iota(B^{\prime})\)_,_
* _if_ \(\omega(B)<\omega(B^{\prime})\)_, then_ \(\iota(B)<\iota(B^{\prime})\)_._
_Then there is a spectral sequence_
\[E^{1}_{p,q}=\bigoplus_{\iota(B)=p}H_{n+p+q+\operatorname{CZ}_{\phi}(B)}(B, \partial^{+}B;\mathbb{Z}/2\mathbb{Z})\Longrightarrow\operatorname{HF}_{ \bullet}(\phi,+).\]
Our goal is to study the degeneration properties of the McLean spectral sequence in our case. For that purpose, we adapt the arguments of [17] to our setting.
We start by giving a sufficient topological condition which implies the degeneration of the McLean spectral sequence at the first page.
**Lemma 4.7**.: _Let \((M,\lambda,\phi)\) and \(\mathscr{B}\) be as in Proposition 4.6. Assume the following condition holds:_
\[\begin{array}{l}\text{If }\eta:[0,1]\to M\text{ is a path whose endpoints are fixed by }\phi\text{, and }\phi\circ\eta\text{ is homotopic}\\ \text{to }\eta\text{ relative to the endpoints, then both endpoints of }\eta\text{ lie on the same }B\in\mathscr{B}\text{.}\end{array} \tag{4.8}\]
_Then the McLean spectral sequence degenerates at the first page._
Proof.: Let \(\check{H}_{\delta,t}\) and \(J_{t}\) be the time-dependent Hamiltonian and almost complex structure considered in [11, §6.3.2]. As discussed in §6.3.3 in loc. cit., every \(\phi\)-twisted Hamiltonian loop is constant and equal to a point in some \(B\in\mathscr{B}\). We claim that if there is a Floer trajectory between two \(\phi\)-twisted Hamiltonian loops, then the loops are points lying on the same \(B\in\mathscr{B}\). Indeed, let \(u:\mathbb{R}^{2}\to M\) be a Floer trajectory between the constant loops \(p_{-}\) and \(p_{+}\). In particular, for every \((s,t)\in\mathbb{R}^{2}\) it satisfies
* \(\phi(u(s,t+1))=u(s,t)\),
* \(\lim_{s\to\pm\infty}u(s,t)=p_{\pm}\).
From \(u\) we can obtain a smooth homotopy \(\bar{u}:[0,1]\times\mathbb{R}\to M\) relative to the endpoints \(p_{-}\) and \(p_{+}\) such that \(\phi\circ\bar{u}(\cdot,1)=\bar{u}(\cdot,0)\), so by condition (4.8) there exists \(B\in\mathscr{B}\) such that \(p_{-},p_{+}\in B\), as desired.
Recall that the Floer complex \(\operatorname{CF}_{*}(\check{H}_{\delta,t},J_{t})\) is generated by the \(\phi\)-twisted Hamiltonian loops and graded by minus the Conley-Zehnder index. Its differential counts the number of Floer trajectories modulo two between \(\phi\)-twisted Hamiltonian loops. The fact that there are no Floer trajectories between different families of \(\mathscr{B}\) in particular implies that the complex \(\operatorname{CF}_{*}(\check{H}_{\delta,t},J_{t})\) can be expressed as a direct sum of complexes by grouping loops with the same value of the action functional. Moreover, the filtration inducing the McLean spectral sequence coincides with the natural filtration induced by the direct sum decomposition of \(\operatorname{CF}_{*}(\check{H}_{\delta,t},J_{t})\), so the spectral sequence degenerates at the first page.
**Remark 4.9**.: Observe that the analytic condition of a Floer trajectory (that is, the Floer equation) does not play any role in the proof.
**Lemma 4.10**.: _The abstract contact open book \((\mathbb{F},\lambda,\overline{\varphi^{m}})\) satisfies condition (4.8) in Lemma 4.7._
Proof.: Let \(\eta\) be a path between two fixed points of \(\overline{\varphi^{m}}\) such that \(\overline{\varphi^{m}}\circ\eta\) is homotopic to \(\eta\) relative to the endpoints. Let \(N\) be a positive integer divisible by \(N_{E}\) for every \(E\in\mathscr{E}\). Then \((\overline{\varphi^{m}})^{N}\) is the identity on every \(\mathbb{F}_{E}\), \(E\in\mathscr{E}\cup\mathscr{S}\), and also on every \(\mathbb{F}_{E,F}\), \(E,F\in\mathscr{F}_{m,R}\) with nonempty intersection, where \(R\in\mathscr{R}\). Therefore \((\overline{\varphi^{m}})^{N}\) is isotopic to a composition \(T\) of nontrivial Dehn twists such that every connected component of \(\operatorname{Fix}\overline{\varphi^{m}}\) is contained in a connected component of \(\operatorname{Fix}T\), and no two different connected components of \(\operatorname{Fix}T\) are contained in the same connected component of \(\operatorname{Fix}\overline{\varphi^{m}}\). Since, in turn, \((\overline{\varphi^{m}})^{N}\circ\eta\) is homotopic to \(\eta\), [17, Lemma 3(ii)] in particular implies that the endpoints of \(\eta\) lie on the same family of fixed points.
**Proposition 4.11**.: _The McLean spectral sequence associated to the decomposition of Proposition 4.5 degenerates at the first page._
Proof.: It follows from combining Lemma 4.7 and Lemma 4.10.
## 5. Comparing (co)homologies
Armed with the results of Sections 3 and 4, we are now able to compute the cohomology of the contact loci and the Floer homology of the monodromy iterates associated to the plane curve \(f\). Recall that we have decompositions
\[\mathcal{X}_{m}=\left(\bigsqcup_{R\in\mathscr{R}}\mathfrak{Z}_{m,R}\right)\sqcup\left(\bigsqcup_{R,S\in\mathscr{R}\cup\mathscr{S}}\bigsqcup_{E\in\mathscr{F}_{R,S}}\mathcal{X}_{m,E}\right)\]
\[\operatorname{Fix}(\overline{\varphi^{m}})=\left(\bigsqcup_{R\in\mathscr{R}}B_{m,R}\right)\sqcup\left(\bigsqcup_{R,S\in\mathscr{R}\cup\mathscr{S}}\bigsqcup_{\begin{subarray}{c}E\in\mathscr{F}_{R,S}\\ N_{E}|m\end{subarray}}\mathbb{F}_{E}\right)\sqcup\left(\bigsqcup_{j=1}^{r}\mathbb{F}_{\tilde{C}^{[j]}}\right)\]
coming from Corollary 3.5 and Proposition 4.5 respectively.
**Proposition 5.1**.: _For every \(m\in\mathbb{Z}_{>0}\) there are isomorphisms_
\[H_{c}^{\bullet+2m\left(2-\frac{\nu_{E_{\text{dom}}(m,R)}}{N_{E_{\text{dom}}(m,R)}}\right)}(\mathfrak{Z}_{m,R}) \cong H^{\bullet}(B_{m,R},\partial B_{m,R}) \text{for }R\in\mathscr{R},\]
\[H_{c}^{\bullet+2m\left(2-\frac{\nu_{E}}{N_{E}}\right)}(\mathcal{X}_{m,E}) \cong H^{\bullet}(\mathbb{F}_{E},\partial\mathbb{F}_{E}) \qquad\text{for }E\in\mathscr{F}_{R,S}\text{ such that }N_{E}\text{ divides }m.\]
Proof.: From 3.7, 3.8 and 4.5 we have homeomorphisms

\[\mathfrak{Z}_{m,R}\cong(B_{m,R}\setminus\partial B_{m,R})\times\mathbb{C}^{m\left(2-\frac{\nu_{E_{\text{dom}}(m,R)}}{N_{E_{\text{dom}}(m,R)}}\right)}\quad\text{and}\quad\mathcal{X}_{m,E}\cong(\mathbb{F}_{E}\setminus\partial\mathbb{F}_{E})\times\mathbb{C}^{m\left(2-\frac{\nu_{E}}{N_{E}}\right)}.\]

Since \(H_{c}^{\bullet}(Y\times\mathbb{C}^{k})\cong H_{c}^{\bullet-2k}(Y)\) and \(H_{c}^{\bullet}(X\setminus\partial X)\cong H^{\bullet}(X,\partial X)\) for a compact surface \(X\) with boundary, the claimed isomorphisms follow.
Proof of Theorem 1.1.: There are isomorphisms
\[H_{c}^{\bullet}(\mathcal{X}_{m}) \cong\left(\bigoplus_{R\in\mathscr{R}}H_{c}^{\bullet}(\mathfrak{Z}_{m,R})\right)\oplus\left(\bigoplus_{R,S\in\mathscr{R}\cup\mathscr{S}}\bigoplus_{E\in\mathscr{F}_{R,S}}H_{c}^{\bullet}(\mathcal{X}_{m,E})\right)\] \[\mathrm{HF}_{\bullet}(\varphi^{m},+) \cong\left(\bigoplus_{R\in\mathscr{R}}H_{\bullet+1+\mathrm{CZ}(B_{m,R})}(B_{m,R},\partial B_{m,R};\mathbb{Z}/2\mathbb{Z})\right)\oplus\] \[\oplus\left(\bigoplus_{R,S\in\mathscr{R}\cup\mathscr{S}}\bigoplus_{\begin{subarray}{c}E\in\mathscr{F}_{R,S}\\ N_{E}|m\end{subarray}}H_{\bullet+1+\mathrm{CZ}(\mathbb{F}_{E})}(\mathbb{F}_{E},\partial\mathbb{F}_{E};\mathbb{Z}/2\mathbb{Z})\right).\]
The first isomorphism is a standard fact about cohomology, while the second isomorphism follows from the degeneration of the McLean spectral sequence in Proposition 4.11. An important observation is that the pieces \(\mathbb{F}_{\tilde{C}^{[j]}}\) do not contribute to the Floer homology. Indeed, in this case \(\partial^{+}\mathbb{F}_{\tilde{C}^{[j]}}\) is just one of the two boundary components of the cylinder, and hence the relative homology is zero.
Note that we did not need to compute the value of the action associated to the abstract contact open book \((\mathbb{F},\lambda,\varphi^{m})\) at the sets \(B_{m,R},\mathbb{F}_{E}\). This is because if the value of the action at \(B_{m,R}\) (resp. \(\mathbb{F}_{E}\)) changes, then the relative homology of \(B_{m,R}\) (resp. \(\mathbb{F}_{E}\)) appears in a different column of the \(E^{1}\) page, but with the same total degree. Since we have \(E^{1}\) degeneration, such a shift does not affect the result. Nevertheless, the interested reader may find the value of the action (expressed in terms of a \(\mu\)-ample divisor) in [11, Proposition 7.2].
The universal coefficient theorem gives a noncanonical isomorphism
\[H_{c}^{\bullet}(\mathcal{X}_{m,E},\mathbb{Z}/2\mathbb{Z})\cong H_{c}^{\bullet }(\mathcal{X}_{m,E})\otimes\mathbb{Z}/2\mathbb{Z}\]
and the analogous one for \(\mathfrak{Z}_{m,R}\). Finally, Proposition 4.5 and Proposition 5.1 show that the shift in the degree is
\[1+\mathrm{CZ}(\mathbb{F}_{E})+2m\left(2-\frac{\nu_{E}}{N_{E}}\right)=1+2m \left(\frac{\nu_{E}}{N_{E}}-1+2-\frac{\nu_{E}}{N_{E}}\right)=2m+1.\]
**Remark 5.2**.: The isomorphism of Theorem 1.1 is not canonical because it involves the isomorphism of a vector space with its dual, but more importantly because we used a spectral sequence to compute the Floer homology, so we can only know the associated graded of \(\mathrm{HF}_{\bullet}(\varphi^{m},+)\), and in particular the dimension of the vector space at each degree.
|
2309.08724 | Merging two Hierarchies of Internal Contextual Grammars with Subregular
Selection | In this paper, we continue the research on the power of contextual grammars
with selection languages from subfamilies of the family of regular languages.
In the past, two independent hierarchies have been obtained for external and
internal contextual grammars, one based on selection languages defined by
structural properties (finite, monoidal, nilpotent, combinational, definite,
ordered, non-counting, power-separating, suffix-closed, commutative, circular,
or union-free languages), the other one based on selection languages defined by
resources (number of non-terminal symbols, production rules, or states needed
for generating or accepting them). In a previous paper, the language families
of these hierarchies for external contextual grammars were compared and the
hierarchies merged. In the present paper, we compare the language families of
these hierarchies for internal contextual grammars and merge these hierarchies. | Bianca Truthe | 2023-09-15T19:15:22Z | http://arxiv.org/abs/2309.08724v1 | # Merging two Hierarchies of Internal Contextual Grammars with Subregular Selection
###### Abstract
In this paper, we continue the research on the power of contextual grammars with selection languages from subfamilies of the family of regular languages. In the past, two independent hierarchies have been obtained for external and internal contextual grammars, one based on selection languages defined by structural properties (finite, monoidal, nilpotent, combinational, definite, ordered, non-counting, power-separating, suffix-closed, commutative, circular, or union-free languages), the other one based on selection languages defined by resources (number of non-terminal symbols, production rules, or states needed for generating or accepting them). In a previous paper, the language families of these hierarchies for external contextual grammars were compared and the hierarchies merged. In the present paper, we compare the language families of these hierarchies for internal contextual grammars and merge these hierarchies.
## 1 Introduction
Contextual grammars were introduced by S. Marcus in [18] as a formal model that might be used in the generation of natural languages. The derivation steps consist in adding contexts to given well formed sentences, starting from an initial finite basis. Formally, a context is given by a pair \((u,v)\) of words and inserting it externally into a word \(x\) gives the word \(uxv\) whereas inserting it internally gives all words \(x_{1}ux_{2}vx_{3}\) when \(x=x_{1}x_{2}x_{3}\). In order to control the derivation process, contextual grammars with selection were defined. In such contextual grammars, a context \((u,v)\) may be added only if the surrounded word \(x\) or \(x_{2}\) belongs to a language which is associated with the context. Language families were defined where all selection languages in a contextual grammar belong to some language family \(\mathcal{F}\). Such contextual grammars are said to be 'with selection in the family \(\mathcal{F}\)'. Contextual grammars have been studied where the family \(\mathcal{F}\) is taken from the Chomsky hierarchy (see [16, 21, 22] and references therein).
In [5], the study of external contextual grammars with selection in special regular sets was started. Finite, combinational, definite, nilpotent, regular suffix-closed, regular commutative languages and languages of the form \(V^{*}\) for some alphabet \(V\) were considered. The research was continued in [9, 10, 11, 17] where further subregular families of selection languages were considered and the effect of subregular selection languages on the generative power of external and internal contextual grammars was investigated. A recent survey can be found in [28] which presents for each type of contextual grammars (external and internal ones) two hierarchies, one based on selection languages defined by structural properties (finite, monoidal, nilpotent, combinational, definite, ordered, non-counting, power-separating, suffix-closed, commutative, circular, or union-free languages), the other one based on selection languages defined by resources (number of non-terminal symbols, production rules, or states needed for generating or accepting them). In [29], the language families of these hierarchies for external contextual grammars were compared and the hierarchies merged. In the present paper, we compare the language families of these hierarchies for internal contextual grammars and merge the hierarchies.
The internal case differs from the external one in two main ways regarding how words are derived. In the case of internal contextual grammars, the insertion of a context into a sentential form may be possible at more than one place, so the derivation becomes in some sense non-deterministic; in the case of external grammars, once a context is selected, there is at most one way to insert it: wrapped around the sentential form, provided that this word is in the selection language of the context. On the other hand, the outermost ends of an externally derived word are added in the last step of the derivation, whereas in an internally derived word they may have been present from the very beginning, since it is an inner part that gets 'pumped'. If a context can be added internally, then it can be added arbitrarily often (because the subword around which the context is wrapped does not change), which does not necessarily hold for external grammars.
In Section 2, we give the definitions and notation of the concepts used in this paper (languages, grammars, automata, subregular language families, inclusion relations between these families, contextual grammars, and inclusion relations between the families generated by internal contextual grammars where the selection languages belong to various subregular language families). In Section 3, we present our results where, first, several languages are presented which later serve as witness languages for proper inclusions or the incomparability of two language families and, later, these languages are used to prove relations between the various language families generated by internal contextual grammars with different types of selection. Finally, in Section 4, we state some problems which are left open and give some ideas for future research.
## 2 Preliminaries
Throughout the paper, we assume that the reader is familiar with the basic concepts of the theory of automata and formal languages. For details, we refer to [22]. Here we only recall some notation and the definition of contextual grammars with selection which form the central notion of the paper.
### Languages, grammars, automata
Given an alphabet \(V\), we denote by \(V^{*}\) and \(V^{+}\) the set of all words and the set of all non-empty words over \(V\), respectively. The empty word is denoted by \(\lambda\). By \(V^{k}\) and \(V^{\leq k}\) for some natural number \(k\), we denote the set of all words of the alphabet \(V\) with exactly \(k\) letters and the set of all words over \(V\) with at most \(k\) letters, respectively. For a word \(w\) and a letter \(a\), we denote the length of \(w\) by \(|w|\) and the number of occurrences of the letter \(a\) in the word \(w\) by \(|w|_{a}\). For a set \(A\), we denote its cardinality by \(|A|\).
A right-linear grammar is a quadruple
\[G=(N,T,P,S)\]
where \(N\) is a finite set of non-terminal symbols, \(T\) is a finite set of terminal symbols, \(P\) is a finite set of production rules of the form \(A\to wB\) or \(A\to w\) with \(A,B\in N\) and \(w\in T^{*}\), and \(S\in N\) is the start symbol. Such a grammar is called regular if all the rules are of the form \(A\to xB\) or \(A\to x\) with \(A,B\in N\) and \(x\in T\) or of the form \(S\to\lambda\). The language generated by a right-linear or regular grammar is the set of all words over the terminal alphabet which are obtained from the start symbol \(S\) by successive replacement of the non-terminal symbols according to the rules in the set \(P\). Every language generated by a right-linear grammar can also be generated by a regular grammar.
A deterministic finite automaton is a quintuple
\[\mathcal{A}=(V,Z,z_{0},F,\delta)\]
where \(V\) is a finite set of input symbols, \(Z\) is a finite set of states, \(z_{0}\in Z\) is the initial state, \(F\subseteq Z\) is a set of accepting states, and \(\delta\) is a transition function \(\delta:Z\times V\to Z\). The language accepted by such an automaton is the set of all input words over the alphabet \(V\) which lead letterwise by the transition function from the initial state to an accepting state.
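To make the acceptance condition concrete, the following minimal Python sketch (our own illustration; the tuple-and-dictionary encoding of an automaton is an assumption of ours, not part of the formal definition) runs a deterministic finite automaton on a word. We reuse it in some examples below.

```python
def accepts(dfa, word):
    # dfa is a triple (delta, z0, accepting) where delta maps pairs
    # (state, input symbol) to a state, z0 is the initial state, and
    # accepting is the set of accepting states.
    delta, z0, accepting = dfa
    state = z0
    for letter in word:
        state = delta[(state, letter)]  # delta is assumed total on the alphabet
    return state in accepting
```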
The set of all languages generated by some right-linear grammar coincides with the set of all languages accepted by a deterministic finite automaton. All these languages are called regular and form a family denoted by \(REG\). Any subfamily of this set is called a subregular language family.
### Resources restricted languages
We define subregular families by restricting the resources needed for generating or accepting their elements:
\[RL_{n}^{V} =\left\{\,L\mid L\text{ is generated by a right-linear grammar with at most $n$ non-terminal symbols}\,\right\},\] \[RL_{n}^{P} =\left\{\,L\mid L\text{ is generated by a right-linear grammar with at most $n$ production rules}\,\right\},\] \[REG_{n}^{Z} =\left\{\,L\mid L\text{ is accepted by a deterministic finite automaton with at most $n$ states}\,\right\}.\]
### Subregular language families based on the structure
We consider the following restrictions for regular languages. Let \(L\) be a language over an alphabet \(V\). With respect to the alphabet \(V\), the language \(L\) is said to be
* _monoidal_ if and only if \(L=V^{*}\),
* _nilpotent_ if and only if it is finite or its complement \(V^{*}\setminus L\) is finite,
* _combinational_ if and only if it has the form \(L=V^{*}X\) for some subset \(X\subseteq V\),
* _definite_ if and only if it can be represented in the form \(L=A\cup V^{*}B\) where \(A\) and \(B\) are finite subsets of \(V^{*}\),
* _suffix-closed_ (or _fully initial_ or _multiple-entry_ language) if and only if, for any two words \(x\in V^{*}\) and \(y\in V^{*}\), the relation \(xy\in L\) implies the relation \(y\in L\),
* _ordered_ if and only if the language is accepted by some deterministic finite automaton \[\mathcal{A}=(V,Z,z_{0},F,\delta)\] with an input alphabet \(V\), a finite set \(Z\) of states, a start state \(z_{0}\in Z\), a set \(F\subseteq Z\) of accepting states and a transition mapping \(\delta\) where \((Z,\preceq)\) is a totally ordered set and, for any input symbol \(a\in V\), the relation \(z\preceq z^{\prime}\) implies \(\delta(z,a)\preceq\delta(z^{\prime},a)\),
* _commutative_ if and only if it contains with each word also all permutations of this word,
* _circular_ if and only if it contains with each word also all circular shifts of this word,
* _non-counting_ (or _star-free_) if and only if there is a natural number \(k\geq 1\) such that, for any three words \(x\in V^{*}\), \(y\in V^{*}\), and \(z\in V^{*}\), it holds \(xy^{k}z\in L\) if and only if \(xy^{k+1}z\in L\),
* _power-separating_ if and only if there is a natural number \(m\geq 1\) such that, for any word \(x\in V^{*}\), either \(J_{x}^{m}\cap L=\emptyset\) or \(J_{x}^{m}\subseteq L\) where \(J_{x}^{m}=\left\{\,x^{n}\mid n\geq m\,\right\}\),
* _union-free_ if and only if \(L\) can be described by a regular expression which is built only from product (concatenation) and star.
We remark that monoidal, nilpotent, combinational, definite, ordered, and union-free languages are regular, whereas non-regular languages of the other types mentioned above exist. Here, we consider among the commutative, circular, suffix-closed, non-counting, and power-separating languages only those which are also regular.
Some properties of the languages of the classes mentioned above can be found in [23] (monoids), [13] (nilpotent languages), [15] (combinational and commutative languages), [20] (definite languages), [14] and [3] (suffix-closed languages), [24] (ordered languages), [4] (circular languages), [19] (non-counting languages), [25] (power-separating languages), [2] (union-free languages).
By \(FIN\), \(MON\), \(NIL\), \(COMB\), \(DEF\), \(SUF\), \(ORD\), \(COMM\), \(CIRC\), \(NC\), \(PS\), \(UF\), and \(REG\), we denote the families of all finite, monoidal, nilpotent, combinational, definite, regular suffix-closed, ordered, regular commutative, regular circular, regular non-counting, regular power-separating, union-free, and regular, languages, respectively.
As the set of all families under consideration, we set
\[\mathfrak{F}=\{FIN,MON,NIL,COMB,DEF,SUF,ORD,COMM,CIRC,NC,PS,UF\}\]
\[\cup\{\;RL_{n}^{V}\;|\;n\geq 1\;\}\cup\{\;RL_{n}^{P}\;|\;n\geq 1\;\}\cup\{\; REG_{n}^{Z}\;|\;n\geq 1\;\}.\]
### Hierarchy of subregular families of languages
We present here a hierarchy of the families of the aforementioned set \(\mathfrak{F}\) with respect to the set theoretic inclusion relation.
Figure 1: Hierarchy of subregular language families
**Theorem 2.1**: _The inclusion relations presented in Figure 1 hold. An arrow from an entry \(X\) to an entry \(Y\) depicts the proper inclusion \(X\subset Y\); if two families are not connected by a directed path, then they are incomparable._
For proofs and references to proofs of the relations, we refer to [27].
### Contextual grammars
Let \(\mathcal{F}\) be a family of languages. A contextual grammar with selection in \(\mathcal{F}\) is a triple
\[G=(V,\mathcal{S},A)\]
where
* \(V\) is an alphabet,
* \(\mathcal{S}\) is a finite set of selection pairs \((S,C)\) where \(S\) is a selection language over some subset \(U\) of the alphabet \(V\) which belongs to the family \(\mathcal{F}\) with respect to the alphabet \(U\), and where \(C\subset V^{*}\times V^{*}\) is a finite set of contexts with the condition that, for each context \((u,v)\in C\), at least one side is not empty: \(uv\neq\lambda\),
* \(A\) is a finite subset of \(V^{*}\) (its elements are called axioms).
Let \(G=(V,\mathcal{S},A)\) be a contextual grammar with selection. A direct internal derivation step in \(G\) is defined as follows: a word \(x\) derives a word \(y\) (written as \(x\Longrightarrow y\)) if and only if there are words \(x_{1}\), \(x_{2}\), \(x_{3}\) with \(x_{1}x_{2}x_{3}=x\) and there is a selection pair \((S,C)\in\mathcal{S}\) such that \(x_{2}\in S\) and \(y=x_{1}ux_{2}vx_{3}\) for some pair \((u,v)\in C\). Intuitively, we can only wrap a context \((u,v)\in C\) around a subword \(x_{2}\) of \(x\) if \(x_{2}\) belongs to the corresponding selection language \(S\).
By \(\Longrightarrow^{*}\), we denote the reflexive and transitive closure of the relation \(\Longrightarrow\). The language generated by \(G\) is defined as
\[L=\{\,z\mid x\Longrightarrow^{*}z\text{ for some }x\in A\,\,\}.\]
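The derivation relation can be explored mechanically. The following Python sketch (our own illustration, not from the paper) enumerates all words of length at most `max_len` that an internal contextual grammar generates; selection languages are passed as membership predicates, which covers all the subregular families considered here.

```python
def internal_steps(word, selection_pairs, max_len):
    """All words obtainable from `word` by one internal derivation step."""
    results = set()
    for i in range(len(word) + 1):
        for j in range(i, len(word) + 1):
            x2 = word[i:j]  # candidate subword x2 in x = x1 x2 x3
            for in_selection, contexts in selection_pairs:
                if in_selection(x2):
                    for u, v in contexts:
                        y = word[:i] + u + x2 + v + word[j:]
                        if len(y) <= max_len:
                            results.add(y)
    return results


def generated_language(axioms, selection_pairs, max_len):
    """All words of length <= max_len derivable from the axioms.

    Terminates because every context is non-empty, so each derivation
    step strictly increases the length of the word.
    """
    language = set(axioms)
    frontier = set(axioms)
    while frontier:
        new_words = set()
        for w in frontier:
            new_words |= internal_steps(w, selection_pairs, max_len)
        frontier = new_words - language
        language |= frontier
    return language
```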
By \(\mathcal{IC}(\mathcal{F})\), we denote the family of all languages generated internally by contextual grammars with selection in \(\mathcal{F}\). When a contextual grammar works in the internal mode, we call it an internal contextual grammar.
From previous research, we have the two hierarchies depicted in Figure 2. An arrow from an entry \(X\) to an entry \(Y\) depicts the inclusion \(X\subseteq Y\); a solid arrow indicates that the inclusion is proper, while the dashed arrow from \(\mathcal{IC}(ORD)\) to \(\mathcal{IC}(NC)\) indicates that it is not known so far whether this inclusion is proper or whether equality holds. The label at an edge shows in which paper the relation was proved.
If two families \(X\) and \(Y\) are not connected by a directed path, then \(X\) and \(Y\) are in most cases incomparable. The only exceptions are the relations of the family \(\mathcal{IC}(SUF)\) to the families \(\mathcal{IC}(ORD)\) and \(\mathcal{IC}(NC)\), where it is not known whether they are incomparable or whether \(\mathcal{IC}(SUF)\) is a subset of the respective family, and the relation of the family \(\mathcal{IC}(REG_{n+1}^{Z})\) to \(\mathcal{IC}(RL_{n}^{V})\) for \(n\geq 1\), where it is not known whether they are incomparable or whether \(\mathcal{IC}(REG_{n+1}^{Z})\) is a subset of \(\mathcal{IC}(RL_{n}^{V})\).
We note here that in [5, 9, 10, 17, 26, 25, 6] a slightly different definition was used than in [28, 12] and the present paper. This difference consists in the alphabet of the selection languages. In the early papers, the selection languages belong to some subfamily \(\mathcal{F}\) with respect to the whole alphabet \(V\) of the contextual grammar whereas in later papers, the selection languages belong to some subfamily \(\mathcal{F}\) with respect to some subalphabet \(U\subseteq V\) of the contextual grammar. The language \(\{a\}^{*}\{a\}^{5}\), for instance, is nilpotent with respect to the alphabet \(\{a\}\) but not with respect to the alphabet \(\{a,b\}\). For almost all
proofs in the mentioned papers, there is no difference between using one or the other definition. The only proof which relies on the definition is that of the relation \(L(G)\notin\mathcal{IC}(DEF)\) for
\[G=(V,\{(\mathit{Suf}(\{d\}^{*}\{b\}),\{(a,b)\}),(\{a,\lambda\},\{(c,d)\})\},\{ecadb \})\]
from [11, Lemma 21] (also used in [26, Theorem 3.5]). However, the proof is valid also with the subalphabet definition if one changes the axiom \(ecadb\) to the word \(dcadb\).
From the definition, it follows that the subset relation is preserved under the use of contextual grammars: if we allow more, we do not obtain less.
**Lemma 2.2**: _For any two language classes \(X\) and \(Y\) with \(X\subseteq Y\), we have the inclusion_
\[\mathcal{IC}(X)\subseteq\mathcal{IC}(Y).\]
In the following section, we relate the families of the two hierarchies mentioned above.
Figure 2: Hierarchies of the language families by internal contextual grammars with selection languages defined by structural properties (left) or restricted resources (right). An edge label refers to the paper where the respective inclusion is proved.
## 3 Results
When we speak about contextual grammars in this section, we mean internal contextual grammars (whose languages are generated in the internal mode).
First, we present languages which will serve later as witness languages for proper inclusions or incomparabilities.
**Lemma 3.1**: _Let \(V=\{a,b,c,d,e\}\) be an alphabet, \(G=(V,\{(S_{1},C_{1}),(S_{2},C_{2})\},\{c\})\) be a contextual grammar with_
\[\begin{array}{ll}S_{1}=\{b\}^{*}\{c\},&C_{1}=\{(ab,ab)\},\\ S_{2}=\{aa\}^{*},&C_{2}=\{(d,e)\},\end{array}\]
_and \(L=L(G)\) be the language generated. Then_
\[L\in(\mathcal{IC}(RL_{1}^{V})\cap\mathcal{IC}(RL_{2}^{P})\cap\mathcal{IC}(REG_ {2}^{Z}))\setminus\mathcal{IC}(PS).\]
_Proof._ The selection languages are generated by right-linear grammars with the following rules (and start symbol \(S\)):
\[\begin{array}{ll}S_{1}:&S\to bS,\ S\to c,\\ S_{2}:&S\to aaS,\ S\to\lambda.\end{array}\]
Since each of these grammars has only one non-terminal symbol and only two production rules, we obtain
\[L\in\mathcal{IC}(RL_{1}^{V})\cap\mathcal{IC}(RL_{2}^{P}).\]
Since the words of the language \(L\) contain only one letter \(c\) (the axiom has no more and the contexts do not contain \(c\)), the language \(L\) is also generated if \(S_{1}\) is replaced by the language \(S_{1}^{\prime}=(\{b\}^{*}\{c\})^{+}\) (the additional words cannot be used for selection).
The selection languages \(S_{1}^{\prime}\) and \(S_{2}\) are accepted by deterministic finite automata with two states each. For \(S_{1}^{\prime}\), take the states \(z_{0}\) (initial) and \(z_{1}\) (accepting) with the transitions \(\delta(z_{0},b)=z_{0}\), \(\delta(z_{0},c)=z_{1}\), \(\delta(z_{1},b)=z_{0}\), and \(\delta(z_{1},c)=z_{1}\); for \(S_{2}\), take the states \(z_{0}\) (initial and accepting) and \(z_{1}\) with the transitions \(\delta(z_{0},a)=z_{1}\) and \(\delta(z_{1},a)=z_{0}\). Hence, \(L\in\mathcal{IC}(REG_{2}^{Z})\).
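These two automata can be written down directly in the encoding sketched in Section 2 (again our own illustration):

```python
# DFA for S1' = ({b}* {c})^+ over the alphabet {b, c}
dfa_s1 = ({("z0", "b"): "z0", ("z0", "c"): "z1",
           ("z1", "b"): "z0", ("z1", "c"): "z1"},
          "z0", {"z1"})

# DFA for S2 = (aa)* over the alphabet {a}
dfa_s2 = ({("z0", "a"): "z1", ("z1", "a"): "z0"},
          "z0", {"z0"})

assert accepts(dfa_s1, "bbc") and not accepts(dfa_s1, "cb")
assert accepts(dfa_s2, "aaaa") and not accepts(dfa_s2, "aaa")
```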
Now assume that \(L\in\mathcal{IC}(PS)\) and let \(G^{\prime}=(V,\mathcal{S},A)\) be a contextual grammar with power-separating selection languages such that \(L(G^{\prime})=L\). Let \(m\geq 1\) be a natural number such that, for every selection pair \((S,C)\in\mathcal{S}\) and every word \(x\in V^{*}\), either \(J_{x}^{m}\cap S=\emptyset\) or \(J_{x}^{m}\subseteq S\) (such a number exists since \(\mathcal{S}\) is finite). Further, let \(k\) be the maximal length of the axioms and contexts plus \(m\):
\[k=\max\{\max\{\,|w|\mid w\in A\,\},\max\{\,|uv|\mid(u,v)\in C,(S,C)\in\mathcal{S} \,\}\}+m.\]
Consider the word \(w=da^{2k}eb^{2k}c(ab)^{2k}\) which belongs to the language \(L\) but not to the set \(A\) of axioms due to its length. Therefore, it is derived from another word \(w^{\prime}\in L\) by insertion of a context \((u,v)\) from a selection pair \((S,C)\in\mathcal{S}\). We now study the possibilities for \(u\) and, depending on this, also for \(v\). Let \(w^{\prime}_{1}\), \(w^{\prime}_{2}\), and \(w^{\prime}_{3}\) be the subwords of \(w^{\prime}\) which are separated by the insertion of \((u,v)\):
\[w^{\prime}=w^{\prime}_{1}w^{\prime}_{2}w^{\prime}_{3}\Longrightarrow w^{\prime}_{1}uw^{\prime}_{2}vw^{\prime}_{3}=w.\]
If \(u=d\), then \(v=e\). This case will be continued later.
If \(u=da^{n}\) for some \(n\) with \(1\leq n\leq k\), then \(v\) contains the letter \(e\) and would additionally have to supply \(n\) letters \(b\) before the letter \(c\) as well as \(n\) letters \(a\) and \(b\) after the \(c\), which is not possible since \(v\) is inserted at a single position.

If \(u=a^{n}\) for some number \(n\) with \(1\leq n\leq k\) and \(w^{\prime}_{1}=da^{p}\) for some \(p\) with \(0\leq p\leq 2k-n\), then \(v\) would likewise have to supply \(n\) letters \(b\) before the letter \(c\) as well as \(n\) letters \(a\) and \(b\) after the \(c\), which is not possible.
It is not possible that \(u\) contains the letter \(e\): the letters \(d\) and \(e\) are inserted at the same time (otherwise \(w^{\prime}\) would contain \(d\) but not \(e\) and hence not belong to \(L\)), so \(u\) would have to contain the whole factor \(da^{2k}e\), which is impossible due to the length of \(u\).
If \(w^{\prime}_{1}\) starts with \(da^{2k}e\) (if \(u\) as a subword of \(w\) starts after the letter \(e\)), then the word \(w^{\prime}\) does not have the correct form (does not belong to the language \(L\) which is a contradiction), since the number of letters \(a\) before \(c\) is already \(2k\) whereas the number of occurrences of \(b\) before \(c\) or the number of occurrences of \(ab\) after \(c\) is less (since \(|uv|>0\)).
Thus, the only possibility is that \((u,v)=(d,e)\) and \(w^{\prime}_{2}=a^{2k}\). We have \(2k>m\) and, therefore, \(a^{2k}\in J^{m}_{a}\). Hence, \(J^{m}_{a}\cap S\neq\emptyset\) and \(J^{m}_{a}\subseteq S\). Therefore, the word \(a^{2k+1}\) (which belongs to the set \(J^{m}_{a}\)) also belongs to the selection language \(S\). The language \(L\) also contains the word \(a^{2k+1}b^{2k+1}c(ab)^{2k+1}\). With the same selection pair \((S,C)\), the word \(da^{2k+1}eb^{2k+1}c(ab)^{2k+1}\) could be derived. But this does not belong to the language \(L\). This contradiction shows that our assumption was wrong and that \(L\notin\mathcal{JC}(PS)\) holds. \(\Box\)
**Lemma 3.2**: _Let \(L=\{\,c^{n}ac^{m}bc^{n+m}\mid n\geq 0,m\geq 0\,\}\cup\{\,c^{n}bc^{n}a\mid n\geq 0\,\}\). Then the relation_
\[L\in(\mathcal{IC}(RL^{V}_{1})\cap\mathcal{IC}(RL^{P}_{2})\cap\mathcal{IC}(REG^{Z}_{2}))\setminus(\mathcal{IC}(CIRC)\cup\mathcal{IC}(SUF))\]
_holds._
_Proof._ Let \(V=\{\,a,b,c\,\}\). The language \(L\) is generated by the contextual grammar
\[G=(V,\{(\{ab,b\},\{(c,c)\})\},\{ab,ba\}).\]
Since the selection language is finite with two words, it can be generated by a right-linear grammar with one non-terminal symbol and two rules only. Hence, \(L\in\mathcal{IC}(RL^{V}_{1})\cap\mathcal{IC}(RL^{P}_{2})\).
The language \(L\) is also generated by the contextual grammar

\[G^{\prime}=(V,\{(V^{*}\{b\},\{(c,c)\})\},\{ab,ba\})\]

with a combinational selection language only. Every combinational language is accepted by a deterministic finite automaton with two states (see Theorem 2.1 and Figure 1). Hence, \(L\in\mathcal{IC}(REG^{Z}_{2})\).
In [11, Lemma 18], it was shown that the language \(L\) can neither be generated by a contextual grammar with circular filters nor by one with suffix-closed filters. Hence, \(L\notin\mathcal{IC}(CIRC)\cup\mathcal{IC}(SUF)\). \(\Box\)
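As an illustration (ours, not part of the proof), the enumerator sketched in Section 2 lists the short words of \(L\) when applied to the first grammar above:

```python
# Selection language {ab, b} as a membership predicate, context set {(c, c)}.
selection_pairs = [(lambda w: w in {"ab", "b"}, [("c", "c")])]

words = generated_language({"ab", "ba"}, selection_pairs, max_len=6)
print(sorted(words, key=lambda w: (len(w), w)))
# Words such as 'ab', 'ba', 'cabc', 'acbc', 'cbca', 'ccabcc', ... appear,
# all of the forms c^n a c^m b c^(n+m) and c^n b c^n a.
```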
**Lemma 3.3**: _Let \(n\geq 1\) be a natural number and let_
\[A_{n}=\{a_{1},\ldots,a_{n}\},\quad B_{n}=\{b_{1},\ldots,b_{n}\},\quad C_{n}=\{c_ {1},\ldots,c_{n}\},\quad D_{n}=\{d_{1},\ldots,d_{n}\},\]
_as well as_
\[V_{n} =A_{n}\cup B_{n}\cup C_{n}\cup D_{n},\] \[P_{n} =\{\;(a_{i},c_{j})\;|\;1\leq i\leq n,1\leq j\leq n\;\},\] \[Q_{n} =\{\;(b_{i},d_{j})\;|\;1\leq i\leq n,1\leq j\leq n\;\},\] \[G_{n} =(V_{n},\{(B_{n}^{*},P_{n}),(C_{n}^{*},Q_{n})\},\{\;a_{i_{a}}b_{i _{b}}c_{i_{c}}d_{i_{d}}\;|\;1\leq i_{x}\leq n,\;x\in\{a,b,c,d\;\}\;\}),\]
_and \(L_{n}=L(G_{n})\). Then the relation \(L_{n}\in\mathcal{IC}(MON)\setminus\mathcal{IC}(RL_{n}^{P})\) holds._
_Proof._ Let \(n\geq 1\). The selection languages of \(G_{n}\) are monoidal. Thus, \(L_{n}\in\mathcal{IC}(MON)\). From [27, Lemma 3.30], we know that \(L_{n}\notin\mathcal{IC}(RL_{n}^{P})\). \(\square\)
**Lemma 3.4**: _Let \(V=\{a,b\}\) and \(L_{n}=\{\;a^{p_{0}}ba^{p_{1}}ba^{p_{2}}b\cdots a^{p_{n}}ba^{p_{0}}ba^{p_{1}}ba^ {p_{2}}b\cdots a^{p_{n}}\;|\;p_{i}\geq 1,\;0\leq i\leq n\;\}\) for \(n\geq 1\). Then_
\[L_{n}\in(\mathcal{IC}(COMM)\cap\mathcal{IC}(ORD))\setminus\mathcal{IC}(RL_{n }^{V}).\]
_Proof._ Let \(n\) be a natural number with \(n\geq 1\).
The language \(L_{n}\) is generated by the contextual grammar
\[G_{n}=(V,\{(S_{n},\{(a,a)\})\},\{(ab)^{2n+1}a\})\]
with the selection language \(S_{n}=(\{a\}^{*}\{b\}\{a\}^{*})^{n+1}\). This selection language is commutative; hence, we have \(L_{n}\in\mathcal{IC}(COMM)\).
The selection language is accepted by the deterministic finite automaton with the states \(z_{0},z_{1},\ldots,z_{n+2}\), the initial state \(z_{0}\), the single accepting state \(z_{n+1}\), and the transition function \(\delta\) given by \(\delta(z_{i},a)=z_{i}\) for \(0\leq i\leq n+2\), \(\delta(z_{i},b)=z_{i+1}\) for \(0\leq i\leq n+1\), and \(\delta(z_{n+2},b)=z_{n+2}\) (the state \(z_{i}\) with \(i\leq n+1\) records that \(i\) letters \(b\) have been read so far; \(z_{n+2}\) is a trap state).

This shows that the automaton is ordered (with \(z_{0}\prec z_{1}\prec\cdots\prec z_{n+2}\), it holds \(\delta(z_{i},x)\preceq\delta(z_{j},x)\) for any two states \(z_{i}\) and \(z_{j}\) with \(z_{i}\prec z_{j}\) and any \(x\in\{a,b\}\)). Hence, \(L_{n}\in\mathcal{IC}(ORD)\).
In [27, Lemma 3.29], the relation \(L_{n}\notin\mathcal{IC}(RL_{n}^{V})\) was proved. \(\square\)
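For small \(n\), both the acceptance behavior and the orderedness condition can be verified mechanically with a brute-force sketch like the following (our own illustration; `accepts` is the helper from Section 2):

```python
def ordered_dfa_for_selection(n):
    """Build the DFA described above for S_n = ({a}* {b} {a}*)^(n+1)."""
    states = list(range(n + 3))  # z_0, ..., z_{n+2}
    delta = {}
    for i in states:
        delta[(i, "a")] = i                  # reading a keeps the state
        delta[(i, "b")] = min(i + 1, n + 2)  # reading b counts up to the trap
    return delta, 0, {n + 1}

def is_ordered(delta, states, alphabet):
    """Check monotonicity of delta with respect to the natural order."""
    return all(delta[(i, x)] <= delta[(j, x)]
               for x in alphabet for i in states for j in states if i <= j)

delta, z0, accepting = ordered_dfa_for_selection(2)
assert is_ordered(delta, list(range(5)), "ab")
assert accepts((delta, z0, accepting), "abaabab")  # exactly three b's
```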
**Lemma 3.5**: _Let \(n\geq 2\) be a natural number, \(V_{n}=\{a_{1},a_{2},\ldots,a_{n}\}\) be an alphabet, and \(L_{n}\) be the language \(L_{n}=\{a_{1}a_{2}\ldots a_{n}\}^{+}\cup V_{n}^{n-1}\). Then the relation \(L_{n}\in\mathcal{IC}(FIN)\setminus\mathcal{IC}(REG_{n}^{Z})\) holds._
_Proof._ Let \(n\geq 2\). The language \(L_{n}\) is generated by the contextual grammar
\[G_{n}=(V_{n},\{(\{a_{1}a_{2}\ldots a_{n}\},\{(\lambda,a_{1}a_{2}\ldots a_{n})\}) \},V_{n}^{n-1}\cup\{a_{1}a_{2}\ldots a_{n}\})\]
with a finite selection language only. Thus, \(L_{n}\in\mathcal{IC}(FIN)\).
In [27, Lemma 3.31], it was shown that \(L_{n}\notin\mathcal{IC}(REG_{n}^{Z})\). \(\square\)
In a similar way, the following result is proved.
**Lemma 3.6**: _Let \(n\geq 2\) be a natural number, \(V_{n}=\{a_{1},a_{2},\ldots,a_{n}\}\) be an alphabet, and \(L_{n}\) be the language_
\[L_{n}=V_{n}^{\leq n-1}\cup\bigcup_{k\geq 1}V_{n}^{kn}.\]
_Then the relation \(L_{n}\in\mathcal{IC}(COMM)\setminus\mathcal{IC}(REG_{n}^{Z})\) holds._
_Proof._ Let \(n\geq 2\). The language \(L_{n}\) is generated by the contextual grammar
\[G_{n}=(V_{n},\{(V_{n}^{n},\{\;(\lambda,w)\;|\;w\in V_{n}^{n}\;\})\},V_{n}^{ \leq n-1}\cup V_{n}^{n})\]
with a commutative selection language only. Thus, \(L_{n}\in\mathcal{IC}(COMM)\).
In any contextual grammar generating the language \(L_{n}\), every context has a length which is divisible by \(n\) and can only be added to subwords of words of the language which have a length of at least \(n\). Since every subword of length less than \(n\) occurs in the language, the selected subwords must have a length of at least \(n\). However, every nonempty language accepted by a deterministic finite automaton with \(n\) states contains a word of length less than \(n\) (a shortest accepted word cannot visit a state twice, as otherwise it could be shortened), so no such automaton accepts a nonempty selection language all of whose words have length at least \(n\). \(\Box\)
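The pumping observation in the last step can itself be checked mechanically: a breadth-first search over the states of a deterministic finite automaton finds the length of a shortest accepted word, and this length is always below the number of states (illustrative sketch; the encoding matches the one from Section 2):

```python
from collections import deque

def shortest_accepted_length(delta, z0, accepting, alphabet):
    """Length of a shortest accepted word, or None if the language is empty."""
    seen = {z0}
    queue = deque([(z0, 0)])
    while queue:
        state, dist = queue.popleft()
        if state in accepting:
            return dist  # never exceeds (number of states) - 1
        for letter in alphabet:
            nxt = delta[(state, letter)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None
```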
We now prove the relations between the language families of contextual grammars where the selection languages are taken from subregular families of languages which have common structural properties and from families of regular languages defined by restricting the resources needed for generating or accepting them. We start with families which are defined by the number of production rules necessary for generating the selection languages.
**Lemma 3.7**: _The language families \(\mathcal{IC}(RL_{1}^{P})\) and \(\mathcal{IC}(FIN)\) coincide._
_Proof._ The inclusion \(\mathcal{IC}(RL_{1}^{P})\subseteq\mathcal{IC}(FIN)\) follows by Lemma 2.2 from the inclusion \(RL_{1}^{P}\subseteq FIN\) (see Theorem 2.1 and also Figure 1).
For the converse inclusion, let \(m\geq 1\) and
\[G=(V,\{\;(S_{i},C_{i})\;|\;1\leq i\leq m\;\},A)\]
be a contextual grammar where all selection languages \(S_{i}\) (\(1\leq i\leq m\)) are finite. Then we split up the selection languages into singleton sets and obtain the contextual grammar
\[G^{\prime}=(V,\{\;(\{w\},C_{i})\;|\;1\leq i\leq m,\;w\in S_{i}\;\},A)\]
which generates the same language as \(G\), and all selection languages belong to the family \(RL_{1}^{P}\). Hence, the inclusion \(\mathcal{IC}(FIN)\subseteq\mathcal{IC}(RL_{1}^{P})\) also holds, and together we obtain \(\mathcal{IC}(FIN)=\mathcal{IC}(RL_{1}^{P})\). \(\Box\)
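For illustration (a hypothetical example, not from the paper), the splitting turns
\[G=(\{a,b\},\{(\{b,bb\},\{(a,a)\})\},\{b\})\quad\text{into}\quad G^{\prime}=(\{a,b\},\{(\{b\},\{(a,a)\}),(\{bb\},\{(a,a)\})\},\{b\}),\]
and each singleton selection language \(\{w\}\) is generated by a right-linear grammar with the single production \(S\to w\), hence belongs to \(RL_{1}^{P}\).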
**Lemma 3.8**: _The language families \(\mathcal{IC}(RL_{n}^{P})\) for \(n\geq 2\) are incomparable to the families_
\[\mathcal{IC}(MON),\;\mathcal{IC}(NIL),\;\mathcal{IC}(COMB),\; \mathcal{IC}(DEF),\;\mathcal{IC}(ORD),\;\mathcal{IC}(NC),\;\mathcal{IC}(PS),\] \[\mathcal{IC}(SUF),\;\mathcal{IC}(COMM),\;\text{and}\;\mathcal{IC }(CIRC).\]
_Proof._ Due to the inclusion relations stated in Theorem 2.1, depicted in Figure 1, proofs of the following relations are sufficient:
1. \(\mathcal{IC}(RL_{2}^{P})\setminus\mathcal{IC}(PS)\neq\emptyset\),
2. \(\mathcal{IC}(RL_{2}^{P})\setminus\mathcal{IC}(CIRC)\neq\emptyset\),
3. \(\mathcal{IC}(MON)\setminus\mathcal{IC}(RL_{n}^{P})\neq\emptyset\) for every natural number \(n\) with \(n\geq 2\).
The first relation was proved in Lemma 3.1, the second relation in Lemma 3.2, and the third relation in Lemma 3.3. \(\Box\)
Regarding the families which are defined by the number of states necessary for accepting the selection languages, we obtain the following results.
**Lemma 3.9**: _The language families \(\mathcal{IC}(MON)\) and \(\mathcal{IC}(REG_{1}^{Z})\) coincide._
_Proof._ This follows from the fact that \(REG_{1}^{Z}=MON\cup\{\emptyset\}\) and that the empty set has no influence as a selection language. \(\Box\)
**Lemma 3.10**: _The relation \(\mathcal{IC}(COMB)\subset\mathcal{IC}(REG_{2}^{Z})\) holds._
_Proof._ From Theorem 2.1 (see Figure 1), we know that \(COMB\subset REG_{2}^{Z}\). By Lemma 2.2, we obtain that \(\mathcal{IC}(COMB)\subseteq\mathcal{IC}(REG_{2}^{Z})\) holds. By Lemma 3.1, this inclusion is proper. \(\Box\)
**Lemma 3.11**: _Every language family \(\mathcal{IC}(REG_{n}^{Z})\) where \(n\geq 2\) is incomparable to each of the families_
\[\mathcal{IC}(FIN),\ \mathcal{IC}(NIL),\ \mathcal{IC}(DEF),\ \mathcal{IC}( ORD),\ \mathcal{IC}(NC),\text{ and }\mathcal{IC}(PS).\]
_Proof._ Due to the inclusion relations stated in Theorem 2.1, depicted in Figure 1, proofs of the following relations are sufficient:
1. \(\mathcal{IC}(REG_{2}^{Z})\setminus\mathcal{IC}(PS)\neq\emptyset\),
2. \(\mathcal{IC}(FIN)\setminus\mathcal{IC}(REG_{n}^{Z})\neq\emptyset\) for every \(n\geq 2\).
The first relation was proved with Lemma 3.1, the second one with Lemma 3.5. \(\Box\)
**Lemma 3.12**: _Every language family \(\mathcal{IC}(REG_{n}^{Z})\) where \(n\geq 2\) is incomparable to each of the families \(\mathcal{IC}(COMM)\) and \(\mathcal{IC}(CIRC)\)._
_Proof._ Due to the inclusion relations stated in Theorem 2.1, depicted in Figure 1, proofs of the following relations are sufficient:
1. \(\mathcal{IC}(REG_{2}^{Z})\setminus\mathcal{IC}(CIRC)\neq\emptyset\),
2. \(\mathcal{IC}(COMM)\setminus\mathcal{IC}(REG_{n}^{Z})\neq\emptyset\) for every \(n\geq 2\).
The first relation was proved with Lemma 3.2, the second one with Lemma 3.6. \(\Box\)
Regarding the families which are defined by the number of non-terminal symbols necessary for generating the selection languages, we obtain the following results.
**Lemma 3.13**: _The relation \(\mathcal{IC}(DEF)\subset\mathcal{IC}(RL_{1}^{V})\) holds._
_Proof._ We first prove the inclusion \(\mathcal{IC}(DEF)\subseteq\mathcal{IC}(RL_{1}^{V})\).
Let \(n\geq 1\) and
\[G=(V,\{\ (S_{i},C_{i})\ |\ 1\leq i\leq n\ \},A)\]
be a contextual grammar where every selection language can be represented in the form \(S_{i}=A_{i}\cup V^{*}B_{i}\) with \(1\leq i\leq n\) for finite subsets \(A_{i}\) and \(B_{i}\) of \(V^{*}\). The same language \(L(G)\) is also generated by the contextual grammar
\[G^{\prime}=(V,\{\ (A_{i},C_{i})\ |\ 1\leq i\leq n\ \}\cup\{\ (V^{*}B_{i},C_{i})\ |\ 1 \leq i\leq n\ \},A).\]
Every such selection language \(A_{i}\) and \(V^{*}B_{i}\) for \(1\leq i\leq n\) can be generated by a right-linear grammar with one non-terminal symbol only:
\[G_{A_{i}}=(\{S\},V,\{\ S\to w\mid w\in A_{i}\ \},S)\]
for generating the language \(A_{i}\) and
\[G_{B_{i}}=(\{S\},V,\{\ S\to xS\mid x\in V\ \}\cup\{\ S\to w\mid w\in B_{i}\ \},S)\]
for generating the language \(V^{*}B_{i}\). Hence, \(\mathcal{IC}(DEF)\subseteq\mathcal{IC}(RL_{1}^{V})\).
With Lemma 3.1, it is proved that a language exists in the set \(\mathcal{IC}(RL_{1}^{V})\setminus\mathcal{IC}(PS)\). This language is also a witness language for the properness of the inclusion \(\mathcal{IC}(DEF)\subset\mathcal{IC}(RL_{1}^{V})\). \(\square\)
**Lemma 3.14**: _Every language family \(\mathcal{IC}(RL_{n}^{V})\) where \(n\geq 1\) is incomparable to the families_
\[\mathcal{IC}(ORD),\ \mathcal{IC}(NC),\ \mathcal{IC}(PS),\ \mathcal{IC}(COMM),\ and\ \mathcal{IC}(CIRC).\]
_Proof._ Due to the inclusion relations stated in Theorem 2.1, depicted in Figure 1, proofs of the following relations are sufficient:
1. \(\mathcal{IC}(RL_{1}^{V})\setminus\mathcal{IC}(PS)\neq\emptyset\),
2. \(\mathcal{IC}(RL_{1}^{V})\setminus\mathcal{IC}(CIRC)\neq\emptyset\),
3. \(\mathcal{IC}(COMM)\setminus\mathcal{IC}(RL_{n}^{V})\neq\emptyset\) for every \(n\geq 1\),
4. \(\mathcal{IC}(ORD)\setminus\mathcal{IC}(RL_{n}^{V})\neq\emptyset\) for every \(n\geq 1\).
The first relation is proved in Lemma 3.1, the second in Lemma 3.2, and the other two in Lemma 3.4. \(\square\)
The following theorem summarizes the results.
**Theorem 3.15**: _The relations depicted in Figure 3 hold. An arrow from an entry \(X\) to an entry \(Y\) denotes the proper inclusion \(X\subset Y\). If two families are not connected by a directed path then they are not necessarily incomparable._
If two families \(X\) and \(Y\) are not connected by a directed path, then \(X\) and \(Y\) are in most cases incomparable. The only exceptions are the relations of the family \(\mathcal{IC}(SUF)\) to the families \(\mathcal{IC}(ORD)\) and \(\mathcal{IC}(NC)\), to the families \(\mathcal{IC}(RL_{n}^{V})\) for \(n\geq 1\), and to the families \(\mathcal{IC}(REG_{n}^{Z})\) for \(n\geq 2\), where it is not known whether they are incomparable or whether \(\mathcal{IC}(SUF)\) is a subset of the other family, as well as the relation of the family \(\mathcal{IC}(REG_{n+1}^{Z})\) to \(\mathcal{IC}(RL_{n}^{V})\) for \(n\geq 1\), where it is not known whether they are incomparable or whether \(\mathcal{IC}(REG_{n+1}^{Z})\) is a subset of \(\mathcal{IC}(RL_{n}^{V})\).
## 4 Conclusions and Further Work
In [28], two independent hierarchies have been obtained for each type of contextual grammars, one based on selection languages defined by structural properties, the other one based on resources. In the present paper, these hierarchies have been merged for internal contextual grammars.
Some questions remain open:
* Let \(n\geq 1\). Is there a language \(L_{n}\in\mathcal{IC}(SUF)\setminus\mathcal{IC}(RL_{n}^{V})\)?
* Let \(n\geq 2\). Is there a language \(L_{n}\in\mathcal{IC}(SUF)\setminus\mathcal{IC}(REG_{n}^{Z})\)?
If the first question is answered affirmatively, then these languages \(L_{n}\) satisfy also \(L_{n}\notin\mathcal{IC}(REG_{n}^{Z})\) since \(\mathcal{IC}(REG_{n}^{Z})\subset\mathcal{IC}(RL_{n}^{V})\) for \(n\geq 1\) (Theorem 2.1, see Figure 1).
If such languages are found, then it is clear that every language family \(\mathcal{IC}(RL_{n}^{V})\) for \(n\geq 1\) and every language family \(\mathcal{IC}(REG_{n}^{Z})\) for \(n\geq 2\) is incomparable to the family \(\mathcal{IC}(SUF)\). So far, we only know that \(\mathcal{IC}(RL_{n}^{V})\not\subseteq\mathcal{IC}(SUF)\) for \(n\geq 1\) and that \(\mathcal{IC}(REG_{n}^{Z})\not\subseteq\mathcal{IC}(SUF)\) for \(n\geq 2\) (both shown in Lemma 3.1).
Recently, in [6, 12], strictly locally \(k\)-testable languages have been investigated as selection languages for contextual grammars. Also for the language families defined by those selection languages, it should be investigated where they are located in the presented hierarchy.
Figure 3: Hierarchy of language families by contextual grammars; an edge label refers to the corresponding lemma (where the relation was not already shown in Figure 2). The incomparabilities were proved in the Lemmas 3.8, 3.11, 3.12 and 3.14.
Additionally, other subfamilies of regular languages could be taken into consideration. Recently, in [7, 8], external contextual grammars have been investigated where the selection languages are ideals or codes. This research could be extended to internal contextual grammars with ideals or codes as selection languages.
|
2309.07417 | On the singular problem involving $g$-Laplacian | In this paper, we show the existence of a positive weak solution to the
equation $(-\Delta_g)^s u=f u^{-q(x)}\;\mbox{in}\; \Omega,$ where $\Omega$ is a
smooth bounded domain in $R^N$, $q\in C^1(\overline{\Omega})$, and
$(-\Delta_g)^s$ is the fractional $g$-Laplacian, with $g$ the antiderivative
of a Young function, and $f$ in a suitable Orlicz space, subject to a zero
Dirichlet condition. This includes the mixed fractional $(p,q)-$Laplacian as a
special case. The solution so obtained is also shown to be locally H\"older
continuous. | Kaushik Bal, Riddhi Mishra, Kaushik Mohanta | 2023-09-14T04:07:35Z | http://arxiv.org/abs/2309.07417v1 | # On the singular problem involving \(g\)-Laplacian
###### Abstract.
In this paper, we show the existence of a weak solution to the equation
\[(-\Delta_{g})^{s}u(x) =f(x)u(x)^{-q(x)}\text{ in }\Omega,\] \[u>0\text{ in }\ \Omega,\] \[u =0\text{ in }\mathbb{R}^{N}\setminus\Omega\]
where \(\Omega\) is a smooth bounded domain in \(\mathbb{R}^{N}\), \(q\in C^{1}(\overline{\Omega})\), and \((-\Delta_{g})^{s}\) is the fractional \(g\)-Laplacian, with \(g\) the antiderivative of a Young function and \(f\) in a suitable Orlicz space. This includes the mixed fractional \((p,q)\)-Laplacian as a special case. The solution so obtained is also shown to be locally Hölder continuous.
Key words and phrases: Singular problem; variable singularity; fractional \(g\)-Laplacian. 2020 Mathematics Subject Classification: 35R11, 35J62, 35A15
## 1. Introduction
Nonlocal problems have been a subject of immense interest in mathematics recently. Various studies have been published to verify whether results for the Laplace operator can be suitably generalized to problems involving the fractional Laplacian and its generalizations. Continuing in the spirit of recent developments in the study of nonlocal operators, in this article, we consider the following problem
\[(-\Delta_{g})^{s}u(x) =f(x)u(x)^{-q(x)}\text{ in }\Omega,\] \[u>0\text{ in }\ \Omega,\] \[u=0\text{ in }\ \mathbb{R}^{N}\setminus\Omega \tag{1.1}\]
with \(\Omega\) being a smooth bounded domain in \(\mathbb{R}^{N}\) and \(q\) is a non-negative \(C^{1}\) function in \(\overline{\Omega}\), and the fractional \(g\)-Laplacian operator is defined as
\[(-\Delta_{g})^{s}u(x):=\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}} \right)\frac{dy}{|x-y|^{N+s}}\]
where \(g:[0,\infty)\to\mathbb{R}\) is a right-continuous function satisfying the following assumptions:
* \(g(0)=0;\ g(t)>0\) for \(t>0\) and \(\lim_{t\to+\infty}g(t)=\infty\).
* \(g\) is convex on \((0,\infty)\).
* \(g^{\prime}\) is nondecreasing on \((0,\infty)\), and hence on \(\mathbb{R}\setminus\{0\}\).
Given \(g:\mathbb{R}\to\mathbb{R}\), we define \(G:[0,\infty)\to[0,\infty)\), called the N-function or Young's function, by
\[G(t):=\int_{0}^{t}g(\tau)d\tau.\]
We also assume the following additional conditions on \(G\) and \(g\):
* \(g=G^{\prime}\) is absolutely continuous, so it is differentiable almost everywhere.
* \(\int_{0}^{1}\frac{G^{-1}(\tau)}{\tau^{\frac{N+s}{N}}}d\tau<\infty\) and \(\int_{1}^{\infty}\frac{G^{-1}(\tau)}{\tau^{\frac{N+s}{N}}}d\tau=\infty\)
* There exist \(p^{+},p^{-}\) such that \[1<p^{-}-1\leq\frac{tg^{\prime}(t)}{g(t)}\leq p^{+}-1<\infty\quad\text{for all }t>0.\]
Note that we will always be assuming conditions \((H_{a})-(H_{g})\) on \(g\) and \(G\) throughout the whole paper unless otherwise specified. In the literature, \(G\) is known as a Young function or an \(N\)-function.
_Remark 1.1_.: The following examples of \(G\) fits our framework:
* \(G_{p}(t):=\frac{1}{p}t^{p}\), where \(p\geq 2\).
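For instance (a check not spelled out in the paper), for \(G_{p}\) one has \(g_{p}(t)=t^{p-1}\), so
\[\frac{tg_{p}^{\prime}(t)}{g_{p}(t)}=\frac{(p-1)t^{p-1}}{t^{p-1}}=p-1,\]
and \((H_{g})\) holds with \(p^{-}=p^{+}=p\).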
The infimum in the above definition is known to be achieved. The fractional Orlicz-Sobolev spaces are defined as
\[W^{s,G}(\Omega):=\left\{f\in L^{G}(\Omega)\ \Big{|}\ \exists\ \lambda>0\ \text{such that}\ M_{W^{s,G}(\Omega)}\left(\frac{f}{\lambda}\right)<\infty \right\}.\]
This space is equipped with the seminorm
\[\|f\|_{W^{s,G}(\Omega)}:=\inf\{\lambda>0\ |\ M_{W^{s,G}(\Omega)}\left(\frac{f}{ \lambda}\right)\leq 1\}.\]
However, we shall mainly be working with the spaces defined by
\[\hat{W}^{s,G}(\Omega):=\left\{f\in L^{G}_{loc}(\mathbb{R}^{N}):\ \exists\ U\Subset\Omega\ \text{such that}\ \|f\|_{s,G,U}+\int_{\mathbb{R}^{N}}g\left(\frac{|f(x)|}{1+|x|^{s}}\right)\frac{dx}{(1+|x|)^{N+s}}<\infty\right\}\]
and,
\[W^{s,G}_{0}(\Omega):=\left\{f\in W^{s,G}(\mathbb{R}^{N})\ \Big{|}\ f\equiv 0\ \ \text{on}\ \mathbb{R}^{N}\setminus\Omega\right\}.\]
\(W^{s,G}_{0}(\Omega)\) is equipped with the norm \(\|\cdot\|_{W^{s,G}(\mathbb{R}^{N})}\). Note that for \(G(t)=t^{p}\), \(1<p<\infty\), \(L^{G}(\Omega)\) and \(W^{s,G}(\Omega)\) are the well-known Lebesgue space \(L^{p}(\Omega)\) and the fractional Sobolev space \(W^{s,p}(\Omega)\), respectively (see [1, p. 524]).
We now discuss some properties of these spaces which we shall use in the next section. We start by observing that the assumption \((H_{g})\) implies
\[2<p^{-}\leq\frac{tg(t)}{G(t)}\leq p^{+}<\infty,\ \ \ t>0. \tag{2.1}\]
To see this, note that assumption \((H_{g})\) implies \((tg(t))^{\prime}=g(t)+tg^{\prime}(t)\leq p^{+}g(t)=p^{+}G^{\prime}(t)\) and, similarly, \((tg(t))^{\prime}\geq p^{-}G^{\prime}(t)\); integrating from \(0\) to \(t\) yields eq. (2.1). The following two lemmas will be used frequently in the rest of the article.
**Lemma 2.1**.: _Let \(G\) be an \(N\)-function, and let \(g=G^{\prime}\) satisfy \((H_{a})-(H_{g})\). Then_
\[\lambda^{p^{-}}G(t)\leq G(\lambda t)\leq\lambda^{p^{+}}G(t)\ \ \ \forall\ \lambda\geq 1,\ \forall t>0,\]
_where \(p^{+},p^{-}\) are the constants defined in \((H_{g})\). The above inequality is equivalent to_
\[\lambda^{p^{-}}G(t)\geq G(\lambda t)\geq\lambda^{p^{+}}G(t)\ \ \ \forall\ 0\leq \lambda\leq 1,\ \forall t>0.\]
Proof.: For any \(\lambda>1\),
\[\log(\lambda^{p^{-}})=\int_{t}^{\lambda t}\frac{p^{-}}{\tau}d\tau\leq\int_{t} ^{\lambda t}\frac{g(\tau)}{G(\tau)}d\tau\leq\int_{t}^{\lambda t}\frac{p^{+}}{ \tau}d\tau=\log(\lambda^{p^{+}}).\]
This implies
\[\log\left(\lambda^{p^{-}}\right)\leq\log\left(\frac{G(\lambda t)}{G(t)} \right)\leq\log\left(\lambda^{p^{+}}\right).\]
The lemma follows.
An immediate consequence of lemma 2.1 is the following
**Lemma 2.2**.: _When \(\|f\|_{W^{s,G}(\Omega)}\leq 1\),_
\[\|f\|_{W^{s,G}(\Omega)}^{p^{+}}\leq M_{W^{s,G}(\Omega)}(f)\leq\|f\|_{W^{s,G}( \Omega)}^{p^{-}},\]
_and when \(\|f\|_{W^{s,G}(\Omega)}\geq 1\),_
\[\|f\|_{W^{s,G}(\Omega)}^{p^{-}}\leq M_{W^{s,G}(\Omega)}(f)\leq\|f\|_{W^{s,G}( \Omega)}^{p^{+}}.\]
**Lemma 2.3**.: _Let \(G\) be an N-function satisfying \((H_{a})-(H_{g})\). For any two real numbers \(a\) and \(b\), we have_
\[(g(b)-g(a))(b-a)\geq C(G)\ G(|b-a|).\]
_for some constant \(C\) depending on the \(N-\)function \(G\)._
Proof.: By the symmetry of the inequality, it is enough to prove this lemma for the cases \(0<a\leq b\) and \(a<0<b\). In the first case, using Taylor's theorem with an integral form of the remainder, we have
\[G(|b-a|) =G(0)+g(0)|b-a|+\int_{0}^{b-a}g^{\prime}(t)(b-a-t)dt\] \[=(b-a)\int_{a}^{b}g^{\prime}(t-a)\frac{b-t}{b-a}dt\leq(b-a)\int_{a}^{b}g^{\prime}(t)dt\]
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}} \right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}\ dxdy=\int_{\Omega}f(x)u(x)^{-q( x)}\phi(x)\ dx. \tag{3.1}\]
The boundary condition is understood in the sense that
1. if \(q(x)\leq 1\) on \(\Omega_{\delta}:=\{x\in\Omega\ \Big{|}\ \text{dist}(x,\partial\Omega)<\delta\}\), then \(u\in W^{s,G}_{0}(\Omega)\).
2. Otherwise, one has \(\Phi(u)\in W^{s,G}_{0}(\Omega)\), where \[\Phi(t):=\int_{0}^{t}G^{-1}\left(G(1)\tau^{q^{*}-1}\right)d\tau.\]
Furthermore, we say that \(u\) is a subsolution (or supersolution) of eq. (1.1) if, for any \(\varphi\in C^{\infty}_{c}(\Omega)\) with \(\varphi\geq 0\),
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}} \right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}\;dxdy\leq\ (\text{or}\ \geq)\ \int_{\Omega}f(x)u(x)^{-q(x)}\varphi(x)\;dx. \tag{3.2}\]
**Theorem 3.2**.: _Let there exist \(\delta>0\) such that \(q(x)\leq 1\) on \(\Omega_{\delta}:=\{x\in\Omega\ \Big{|}\ \text{dist}(x,\partial\Omega)<\delta\}\) and \(f\in L^{\overline{G_{*}}}(\Omega)\). Then eq. (1.1) has a weak solution \(u\in W^{s,G}_{0}(\Omega)\) with \(\operatorname{ess\,inf}_{K}u>0\) for every compact set \(K\subseteq\Omega\)._
**Lemma 3.6**.: _Let \(g\) be sub-multiplicative, that is, there is a constant \(C>0\) for which \(Cg(t_{1}t_{2})\leq g(t_{1})g(t_{2})\) for any \(t_{1},t_{2}>0\). Let \(F\) and \(u\) be such that_
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}} \right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}\;dxdy=\int_{\Omega}F\varphi\; dx,\]
_for any \(\varphi\in W_{0}^{s,G}(\Omega)\). Then for any convex and Lipschitz function \(\Phi\), we have_
\[\int_{\Omega}F(x)g(\Phi^{\prime}(u(x)))\Phi(u)\;dx\geq C\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}}G\left(\frac{|\Phi(u(x))-\Phi(u(y))|}{|x-y|^{s}}\right) \frac{dxdy}{|x-y|^{N}}.\]
Proof.: First, note that, by a density argument, we can assume \(\Phi\) to be \(C^{1}\). Choose \(\varphi=g(\Phi^{\prime}(u))\psi\). Then we have
\[2\iint_{\{u(x)>u(y)\}} g\left(\frac{u(x)-u(y)}{|x-y|^{s}}\right)\frac{g(\Phi^{\prime}(u(x )))\psi(x)-g(\Phi^{\prime}(u(y)))\psi(y)}{|x-y|^{N+s}}dxdy\] \[=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y) }{|x-y|^{s}}\right)\frac{g(\Phi^{\prime}(u(x)))\psi(x)-g(\Phi^{\prime}(u(y))) \psi(y)}{|x-y|^{N+s}}dxdy\] \[=\int_{\Omega}F(x)g(\Phi^{\prime}(u(x)))\psi(x)dx.\]
Set \(u(x)=a\), \(u(y)=b\), \(\psi(x)=A\) and \(\psi(y)=B\). Then the integrand in the LHS becomes
\[g\left(\frac{a-b}{|x-y|^{s}}\right)\frac{g(\Phi^{\prime}(a))A-g(\Phi^{\prime} (b))B}{|x-y|^{N+s}}.\]
Using the convexity of \(\Phi\), we have
\[\Phi(a)-\Phi(b)\leq\Phi^{\prime}(a)(a-b)\text{ and }\Phi(a)-\Phi(b)\geq\Phi^{ \prime}(b)(a-b).\]
We then have
\[g\left(\frac{a-b}{|x-y|^{s}}\right)\frac{g(\Phi^{\prime}(a))A-g( \Phi^{\prime}(b))B}{|x-y|^{N+s}}\] \[\geq g\left(\frac{a-b}{|x-y|^{s}}\right)\frac{g\left(\frac{\Phi(a )-\Phi(b)}{a-b}\right)A-g\left(\frac{\Phi(a)-\Phi(b)}{a-b}\right)B}{|x-y|^{N+s}}\] \[=g\left(\frac{a-b}{|x-y|^{s}}\right)g\left(\frac{\Phi(a)-\Phi(b)} {a-b}\right)\frac{A-B}{|x-y|^{N+s}}\] \[\geq Cg\left(\frac{\Phi(a)-\Phi(b)}{|x-y|^{s}}\right)\frac{A-B}{| x-y|^{N+s}}.\]
This, after taking \(\psi=\Phi(u)\) (note that \(\Phi\) is assumed to be \(C^{1}\)), gives
\[\int_{\Omega}F(x)g(\Phi^{\prime}(u(x)))\Phi(u)dx\geq C\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}}G\left(\frac{|\Phi(u(x))-\Phi(u(y))|}{|x-y|^{s}}\right) \frac{dxdy}{|x-y|^{N}}.\]
**Lemma 3.7**.: _Let \(f\in L^{\infty}(\Omega)\) with \(f\geq 0\), and \(f\) is not identically zero. Then the problem_
\[\begin{cases}(-\Delta_{g})^{s}u=f,&\text{ in }\Omega,\\ u>0,\text{ in }\Omega,\\ u=0,\text{ in }\mathbb{R}^{N}\setminus\Omega\end{cases} \tag{3.5}\]
_has a unique solution \(u\in W_{0}^{s,G}(\Omega)\cap L^{\infty}(\Omega)\)._
Proof.: The existence, uniqueness, and continuity follow from [1, Theorem 6.16], lemma 2.8, and the fact that \(f\geq 0\), so that \((-\Delta_{g})^{s}u\geq 0\) on \(\Omega\), using lemma 3.5. It remains to show that \(u\in L^{\infty}(\Omega)\). For this, we shall assume, without loss of generality, that \(\Omega\subseteq B(0,1)\) and fix \(\alpha>1\).
Let us consider
\[v_{\alpha}(x)=\begin{cases}\alpha(1-|x|),&\text{when }|x|<1,\\ 0,&\text{otherwise}.\end{cases}\]
Note that since \(\alpha>1\), for any \(0<\lambda<1\) we have, using lemma 2.1 and eq. (2.1), the estimate \(g(\alpha\lambda t)>\frac{p^{-}\alpha^{p^{-}-1}\lambda^{p^{+}-1}G(t)}{t}\) when \(t>0\). Again, for \(x\in\Omega\subseteq B(0,1)\subseteq B(x,1+|x|)\) we get
\[(-\Delta_{g})^{s}v_{\alpha}(x) \geq\int_{|y|>1}g\left(\frac{v_{\alpha}(x)-v_{\alpha}(y)}{|x-y|^{s}}\right)\frac{dy}{|x-y|^{N+s}}\] \[=\int_{|y|>1}g\left(\frac{v_{\alpha}(x)}{|x-y|^{s}}\right)\frac{dy}{|x-y|^{N+s}}\] \[\geq p^{-}\alpha^{p^{-}-1}(1-|x|)^{p^{+}-1}\int_{|y|>1}G\left(\frac{1}{|x-y|^{s}}\right)\frac{dy}{|x-y|^{N}}\] \[\geq p^{-}\alpha^{p^{-}-1}(1-|x|)^{p^{+}-1}\int_{|y|>1}G\left(\frac{1}{(1+|y|)^{s}}\right)\frac{dy}{(1+|y|)^{N}}\to\infty\]
uniformly as \(\alpha\to\infty\). Thus, as \(f\) is bounded, we can choose \(\alpha\) large enough to get \((-\Delta_{g})^{s}v_{\alpha}>(-\Delta_{g})^{s}u\). Applying lemma 3.5, we get \(u\leq v_{\alpha}\) in \(\mathbb{R}^{N}\). Thus, \(u\) is bounded.
We consider the following approximation of eq. (1.1), where we use the notation \(f_{n}=\min\{f,n\}\) for all \(n\in\mathbb{N}\) and assume that \(q>0\) is \(C^{1}\),
\[(-\Delta_{g})^{s}u(x) =\frac{f_{n}(x)}{(u(x)+\frac{1}{n})^{q(x)}}\text{ in }\;\Omega,\] \[u >0\text{ in }\;\Omega,\] \[u =0\text{ in }\;\mathbb{R}^{N}\setminus\Omega. \tag{3.6}\]
**Lemma 3.8**.: _For a fixed \(n\in\mathbb{N}\), eq. (3.6) has a weak solution \(u_{n}\in C^{\alpha(n)}(\Omega)\) where \(\alpha(n)\in(0,1)\;\forall n\in\mathbb{N}\)._
Proof.: Note that \(\frac{f_{n}(x)}{(u^{+}(x)+\frac{1}{n})^{q(x)}}\in L^{\infty}(\Omega)\) for any fixed \(u\in W_{0}^{s,G}(\Omega)\). Hence, by lemma 3.7, there exists a unique solution \(w\in W_{0}^{s,G}(\Omega)\cap L^{\infty}(\Omega)\) to the problem
\[(-\Delta_{g})^{s}w(x) =\frac{f_{n}(x)}{(u^{+}(x)+\frac{1}{n})^{q(x)}}\text{ in }\;\Omega,\] \[w >0\text{ in }\;\Omega,\] \[w =0\text{ in }\;\mathbb{R}^{N}\setminus\Omega. \tag{3.7}\]
This allows us to define the operator \(S:W_{0}^{s,G}(\Omega)\to W_{0}^{s,G}(\Omega)\) by \(S(u)=w\), the solution of eq. (3.7). Multiplying both sides of eq. (3.7) by \(w\), we get
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{w(x)-w(y) }{|x-y|^{s}}\right)\frac{(w(x)-w(y))}{|x-y|^{N+s}}dxdy = \int_{\Omega}\frac{f_{n}(x)w(x)}{(u(x)^{+}+\frac{1}{n})^{q(x)}}dx \leq n^{1+\|q\|_{L^{\infty}(\Omega)}}\|w\|_{L^{1}(\Omega)}.\]
Applying eq. (2.1), we get
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|w(x)-w(y )|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}} \leq\frac{1}{p^{-}}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g \left(\frac{|w(x)-w(y)|}{|x-y|^{s}}\right)\frac{|w(x)-w(y)|}{|x-y|^{N+s}}dxdy\] \[=\frac{1}{p^{-}}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left( \frac{(w(x)-w(y))}{|x-y|^{s}}\right)\frac{(w(x)-w(y))}{|x-y|^{N+s}}dxdy\] \[\leq\frac{n^{1+\|q\|_{L^{\infty}(\Omega)}}}{p^{-}}\|w\|_{L^{1}( \Omega)}.\]
Assume \(\|w\|_{W_{0}^{s,G}(\Omega)}>1\),
\[\frac{1}{\|w\|_{W_{0}^{s,G}(\Omega)}^{p^{-}}}\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}G\left(\frac{|w(x)-w(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{ N}}\geq\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|w(x)-w(y)|}{\|w\|_{W_{0}^{s,G}(\Omega)}|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}=1.\]
So, we have
\[\|w\|_{W_{0}^{s,G}(\Omega)}^{p^{-}}\leq\frac{n^{1+\|q\|_{L^{\infty}(\Omega)}} }{p^{-}}\|w\|_{L^{1}(\Omega)},\]
and consequently, by lemma 2.7,
\[\|w\|_{W_{0}^{s,G}(\Omega)}^{p^{-}-1}\leq Cn^{1+\|q\|_{L^{\infty}(\Omega)}}\]
provided \(\|w\|_{W^{s,G}_{0}(\Omega)}>1\). Setting \(R:=\max\{1,\left(Cn^{1+\|q\|_{L^{\infty}(\Omega)}}\right)^{\frac{1}{p^{-}-1}}\}\), we see that \(S\) maps the ball of radius \(R\) of \(W^{s,G}_{0}(\Omega)\) into itself. The proof will now be complete if we show that \(S\) is continuous and compact.
**Proof of continuity of \(S\):** Assume that \(u_{i}\to u\) in \(W^{s,G}_{0}(\Omega)\). Set \(w_{i}=S(u_{i})\) and \(w=S(u)\), so that we have, for any \(\varphi\in W^{s,G}_{0}(\Omega)\),
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{w_{i}(x)-w_ {i}(y)}{|x-y|^{s}}\right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy=\int_ {\Omega}\frac{f_{n}(x)\varphi(x)}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}dx\quad \text{and} \tag{3.8}\] \[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{w(x)-w(y)} {|x-y|^{s}}\right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy=\int_{\Omega }\frac{f_{n}(x)\varphi(x)}{(u(x)^{+}+\frac{1}{n})^{q(x)}}dx. \tag{3.7}\]
We have to show that \(w_{i}\to w\) in \(W^{s,G}_{0}(\Omega)\). By lemma 2.7, passing to a subsequence, \(u_{i}\to u\) in \(L^{G_{*}}(\Omega)\) and \(u_{i}\to u\) a.e. in \(\Omega\). Set \(\overline{w_{i}}:=w_{i}-w\). Subtracting eq. (3.8) from eq. (3.7), with the choice \(\varphi=\overline{w_{i}}\), and then applying lemma 2.3 for \(a=\frac{w(x)-w(y)}{|x-y|^{s}}\) and, \(b=\frac{w_{i}(x)-w_{i}(y)}{|x-y|^{s}}\), we get
\[C(G)\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|\overline{w_{i}}(x)-\overline{w_{i}}(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\] \[\leq\int_{\Omega}f_{n}(x)\left(\frac{1}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}-\frac{1}{(u(x)^{+}+\frac{1}{n})^{q(x)}}\right)(w_{i}(x)-w(x))dx.\]
We apply lemma 2.2 on the left-hand side and Hölder's inequality on the right-hand side of this equation to get,
\[C(G) \min\left\{\|w_{i}-w\|_{W^{s,G}}^{p^{+}},\|w_{i}-w\|_{W^{s,G}}^{p^{-}}\right\}\] \[\leq C\left\|f_{n}(x)\left(\frac{1}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}-\frac{1}{(u(x)^{+}+\frac{1}{n})^{q(x)}}\right)\right\|_{L^{\overline{G_{*}}}}\|w_{i}-w\|_{L^{G_{*}}}\] \[\leq C\left\|f_{n}(x)\left(\frac{1}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}-\frac{1}{(u(x)^{+}+\frac{1}{n})^{q(x)}}\right)\right\|_{L^{\overline{G_{*}}}}\|w_{i}-w\|_{W^{s,G}},\]
where the last inequality follows from lemma 2.6. This gives
\[\min\left\{\|w_{i}-w\|_{W^{s,G}}^{p^{+}-1},\|w_{i}-w\|_{W^{s,G}}^{p^{-}-1}\right\}\leq C\left\|f_{n}(x)\left(\frac{1}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}-\frac{1}{(u(x)^{+}+\frac{1}{n})^{q(x)}}\right)\right\|_{L^{\overline{G_{*}}}}.\]
Now observe that
\[\left|f_{n}(x)\left(\frac{1}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}-\frac{1}{(u(x) ^{+}+\frac{1}{n})^{q(x)}}\right)\right|\leq 2n^{q(x)+1}\leq 2n^{\|q\|_{L^{ \infty}}+1}.\]
Hence, as \(u_{i}\to u\) pointwise a.e., by DCT it follows that \(w_{i}\to w\) in \(W^{s,G}_{0}\). Thus \(S\) is continuous.
**Proof of compactness of \(S\):** Assume that \(u_{i}\) is a bounded sequence in \(W^{s,G}_{0}(\Omega)\). As before, denote \(w_{i}:=S(u_{i})\). We wish to show that \(w_{i}\) has a convergent subsequence in \(W^{s,G}_{0}(\Omega)\). From eq. (3.7) and lemmas 2.2 and 2.5, we get
\[\min\left\{\|w_{i}\|_{W^{s,G}}^{p^{+}},\|w_{i}\|_{W^{s,G}}^{p^{-}}\right\} \leq C(G)\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{w _{i}(x)-w_{i}(y)}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\] \[\leq C(G)\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{w _{i}(x)-w_{i}(y)}{|x-y|^{s}}\right)\frac{(w_{i}(x)-w_{i}(y))}{|x-y|^{N+s}}dxdy\] \[=C(G)\int_{\Omega}\frac{f_{n}(x)w_{i}(x)}{(u_{i}(x)^{+}+\frac{1}{n} )^{q(x)}}dx\leq n^{1+\|q\|_{L^{\infty}(\Omega)}}\|w_{i}\|_{L^{1}(\Omega)}\] \[\leq Cn^{1+\|q\|_{L^{\infty}(\Omega)}}\|w_{i}\|_{W^{s,G}(\Omega)}.\]
This shows that \(w_{i}\) is a bounded sequence in \(W^{s,G}_{0}(\Omega)\). From the boundedness of the two sequences, \(u_{i},w_{i}\), we conclude that there exists \(u,w\in W^{s,G}_{0}(\Omega)\) such that \(u_{i}\rightharpoonup u\) and \(w_{i}\rightharpoonup w\) in \(W^{s,G}_{0}(\Omega)\). We now want to show \(S(u)=w\), that is for any \(\varphi\in C^{\infty}_{c}(\Omega)\),
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{w(x)-w(y)}{|x-y|^{s}} \right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy=\int_{\Omega}\frac{f_{n}( x)\varphi(x)}{(u(x)^{+}+\frac{1}{n})^{q(x)}}dx. \tag{3.9}\]
Note that we already know
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{w_{i}(x)-w_{i}(y)}{|x-y|^{s }}\right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy=\int_{\Omega}\frac{f_{ n}(x)\varphi(x)}{(u_{i}(x)^{+}+\frac{1}{n})^{q(x)}}dx. \tag{3.10}\]
By DCT, it is seen easily that the right-hand side of eq. (3.10) converges to the right-hand side of eq. (3.9). It remains to show the convergence of the left-hand side. Note that, by \((H_{g})\),
\[\overline{G}(g(t))=\int_{0}^{g(t)}g^{-1}(\tau)d\tau=\int_{0}^{t}\tau g^{\prime}(\tau)d\tau\leq(p^{+}-1)\int_{0}^{t}g(\tau)d\tau=(p^{+}-1)G(t).\]
Using this and the fact that \(w_{i}\)'s are bounded in \(W_{0}^{s,G}(\Omega)\), we have that \(g\left(\frac{|w_{i}(x)-w_{i}(y)|}{|x-y|^{s}}\right)\) is a bounded sequence in \(L^{\overline{G}}(\frac{1}{|x-y|^{N}},\mathbb{R}^{N}\times\mathbb{R}^{N})\) hence it has a weakly convergent subsequence. Thus we conclude that, up to a subsequence,
\[g\left(\frac{|w_{i}(x)-w_{i}(y)|}{|x-y|^{s}}\right)\rightharpoonup g\left( \frac{|w(x)-w(y)|}{|x-y|^{s}}\right)\]
weakly in \(L^{\overline{G}}(\frac{1}{|x-y|^{N}},\mathbb{R}^{N}\times\mathbb{R}^{N})\). Now, since \(\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{s}}\in L^{G}(\frac{1}{|x-y|^{N}},\mathbb{ R}^{N}\times\mathbb{R}^{N})\),
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{|w_{i}(x)-w_{i}(y)|}{| x-y|^{s}}\right)\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{N+s}}dxdy\to\int_{ \mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{|w(x)-w(y)|}{|x-y|^{s}} \right)\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{N+s}}dxdy\]
The solution so obtained satisfies \(u_{n}\in W_{0}^{s,G}(\Omega)\cap L^{\infty}(\Omega)\), and hence \(u_{n}\in C^{\alpha(n)}(\Omega)\) for some \(\alpha(n)\in(0,1)\), for every \(n\in\mathbb{N}\), by Theorem 1.1 of Bonder et al. [10].
**Lemma 3.9**.: _Assume \(g\) to be convex on \((0,1)\). The sequence of functions \(\{u_{n}\}_{n}\) found in lemma 3.8 satisfies_
\[u_{n}(x)\leq u_{n+1}(x),\quad\text{for almost every $x\in\Omega$},\]
_and for any compact set \(K\subseteq\Omega\), there exists a constant \(l=l(K)>0\) such that for any \(n\), large enough,_
\[u_{n}(x)\geq l\quad\text{for almost every $x\in K$}.\]
Proof.: Set \(w_{n}(x)=(u_{n}(x)-u_{n+1}(x))^{+}\). Then, since \(f_{n}(x)\leq f_{n+1}(x)\) for any \(x\in\Omega\), we note that
\[\int_{\Omega} \frac{f_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}w_{n}(x)dx-\int_{ \Omega}\frac{f_{n+1}(x)}{(u_{n+1}(x)+\frac{1}{n+1})^{q(x)}}w_{n}(x)dx\] \[=\int_{\Omega}\left(\frac{f_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)} }-\frac{f_{n+1}(x)}{(u_{n+1}(x)+\frac{1}{n+1})^{q(x)}}\right)w_{n}(x)dx\] \[=\int_{\Omega}\left(\frac{f_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)} }-\frac{f_{n+1}(x)}{(u_{n+1}(x)+\frac{1}{n+1})^{q(x)}}\right)(u_{n}(x)-u_{n+1} (x))^{+}dx\] \[\leq\int_{\{u_{n}(x)>u_{n+1}(x)\}}f_{n+1}(x)\left(\frac{(u_{n+1} (x)+\frac{1}{n+1})^{q(x)}-(u_{n}(x)+\frac{1}{n})^{q(x)}}{(u_{n}(x)+\frac{1}{n })^{q(x)}(u_{n+1}(x)+\frac{1}{n+1})^{q(x)}}\right)(u_{n}-u_{n+1})^{+}dx\] \[\leq 0.\]
Then the above calculation and eq. (3.6) imply
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}} g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y|^{s}}\right)\frac{w_{n}(x)-w_{n}(y)}{| x-y|^{N+s}}dxdy\] \[\leq\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u_{n+1} (x)-u_{n+1}(y)}{|x-y|^{s}}\right)\frac{w_{n}(x)-w_{n}(y)}{|x-y|^{N+s}}dxdy.\]
Now [10, Theorem 1.1] implies that both \(u_{n},u_{n+1}\) are Hölder continuous up to the boundary. So, we can apply lemma 3.5 to get \(u_{n}\leq u_{n+1}\) a.e. on \(\mathbb{R}^{N}\). This concludes the proof of the first part.
The second part follows from the continuity of \(u_{n}\) and lemma 2.8, which gives \(u_{n}>0\) on \(\Omega\).
Proof of theorem 3.2.: By lemma 3.8, eq. (3.6) has a weak solution \(u_{n}\). Let \(\varphi\in C_{c}^{\infty}(\Omega)\). We have
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y| ^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy=\int_{\Omega}\frac{f_{ n}(x)\varphi(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}dx. \tag{3.11}\]
First, we claim:
\[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left( \frac{u_{n}(x)-u_{n}(y)}{|x-y|^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+ s}}dxdy\\ =\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y) }{|x-y|^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy. \tag{3.12}\]
**Proof of the claim:** Set \(\omega_{\delta}:=\Omega\setminus\Omega_{\delta}\). Then by lemma 3.9, there exists a constant \(l>0\) such that \(u_{n}\geq l>0\) on \(\omega_{\delta}\). We get, using lemma 2.7 and choosing \(\varphi=u_{n}\),
\[C(G) \int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|u_{n}(x)-u_{n}(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\] \[\leq\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y|^{s}}\right)\frac{u_{n}(x)-u_{n}(y)}{|x-y|^{N+s}}dxdy\] \[=\int_{\Omega}\frac{f_{n}(x)u_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}dx\] \[=\int_{\Omega_{\delta}}\frac{f_{n}(x)u_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}dx+\int_{\omega_{\delta}}\frac{f_{n}(x)u_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}dx\] \[\leq\int_{\Omega_{\delta}\cap\{u_{n}\leq 1\}}f_{n}(x)dx+\int_{\Omega_{\delta}\cap\{u_{n}>1\}}f_{n}(x)u_{n}(x)dx+\int_{\omega_{\delta}}\frac{f_{n}(x)u_{n}(x)}{l^{q(x)}}dx\] \[\leq\|f\|_{L^{1}(\Omega)}+(1+\|l^{-q(\cdot)}\|_{L^{\infty}(\omega_{\delta})})\|f\|_{L^{\overline{G_{*}}}(\Omega)}\|u_{n}\|_{L^{G_{*}}(\Omega)}\] \[\leq\|f\|_{L^{1}(\Omega)}+C_{1}\|u_{n}\|_{W^{s,G}_{0}(\Omega)}.\]
Assuming \(\alpha:=\|u_{n}\|_{W^{s,G}_{0}(\Omega)}>1\), we get, using lemma 2.1,
\[1=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|u_{n}( x)-u_{n}(y)|}{\alpha|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\\ \leq\frac{1}{\alpha^{p^{-}}}\int_{\mathbb{R}^{N}}\int_{\mathbb{R} ^{N}}G\left(\frac{|u_{n}(x)-u_{n}(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}} \leq\frac{\|f\|_{L^{1}(\Omega)}}{\alpha^{p^{-}}}+C_{1}\frac{1}{\alpha^{p^{-}-1}}\]
This shows that \(\|u_{n}\|_{W^{s,G}_{0}(\Omega)}\) must be bounded. So \(u_{n}\rightharpoonup u\) weakly in \(W^{s,G}_{0}\). By lemma 2.7, \(u_{n}\to u\) strongly in \(L^{1}(\Omega)\), and hence \(u_{n}\to u\) pointwise a.e. up to a subsequence.
Now, using \(\overline{g}=g^{-1}\), we compute
\[\overline{G}(g(t))=\int_{0}^{g(t)}\overline{g}(\tau)d\tau=\int_{0}^{t}\overline{g}(g(\tau))g^{\prime}(\tau)d\tau=\int_{0}^{t}\tau g^{\prime}(\tau)d\tau.\]
Together with \((H_{g})\), this implies
\[(p^{-}-1)G(t)\leq\overline{G}(g(t))\leq(p^{+}-1)G(t). \tag{3.13}\]
This, along with lemma 2.2, shows that the sequence of functions \((x,y)\mapsto g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y|^{s}}\right)\) is bounded in \(L^{\overline{G}}\left(\mathbb{R}^{N}\times\mathbb{R}^{N},\frac{dxdy}{|x-y|^{N}}\right)\). So it has a weakly convergent subsequence; without loss of generality, we assume it to be the sequence itself. It is easy to check that the function \((x,y)\mapsto\frac{\varphi(x)-\varphi(y)}{|x-y|^{s}}\) is in \(L^{G}\left(\mathbb{R}^{N}\times\mathbb{R}^{N},\frac{dxdy}{|x-y|^{N}}\right)\). Hence eq. (3.12) follows and the claim is true.
Now, in order to complete the proof, taking into account eq. (3.11), we only need to show the convergence of the right-hand side of eq. (3.11). Note that
\[\left|\frac{f_{n}(x)\varphi(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}\right|\leq|l^{-q (x)}f(x)\varphi(x)|\in L^{1}(\Omega),\]
where we get \(l\) from applying lemma 3.9 on \(\operatorname{supp}(\varphi)\). Therefore, we can apply DCT to get
\[\lim_{n\to\infty}\int_{\Omega}\frac{f_{n}(x)\varphi(x)}{(u_{n}(x)+\frac{1}{n})^ {q(x)}}dx=\int_{\Omega}\frac{f(x)\varphi(x)}{u(x)^{q(x)}}dx.\]
Hence the proof is complete.
**Lemma 3.10**.: _For any \(a,b\in\mathbb{R}\), we have_
\[|g(a)-g(b)|\leq C\frac{|a-b|g(|a|+|b|)}{|a|+|b|}\leq Cg(|a|+|b|).\]
Proof.: \[g(b)-g(a)=\int_{0}^{1}g^{\prime}(a+(b-a)t)(b-a)dt.\]
Now, since \(g^{\prime}\) is increasing, one has, for \(t\in(0,1)\), \(|a+(b-a)t|\leq|a|+|b|\). So we get
\[|g(a)-g(b)|\leq|a-b|g^{\prime}(|a|+|b|)\]
The results now follow using the hypothesis \((H_{g})\), which gives \(g^{\prime}(|a|+|b|)\leq(p^{+}-1)\frac{g(|a|+|b|)}{|a|+|b|}\), together with \(|a-b|\leq|a|+|b|\).
**Lemma 3.11**.: _Let \(\Phi:(0,\infty)\to(0,\infty)\) be a strictly convex, \(C^{1}\)-function such that \(\Phi^{\prime}\) is increasing and there exist \(\theta_{1},\theta_{2}>0\) such that \(\theta_{1}\frac{\Phi(x)}{x}\leq\Phi^{\prime}(x)\leq\theta_{2}\frac{\Phi(x)}{x}\). For \(x,y\in\mathbb{R}\) and \(\varepsilon>0\), define \(S_{\varepsilon}^{x}:=\{x\geq\varepsilon\}\cap\{y\geq 0\}\), and \(S_{\varepsilon}^{y}:=\{x\geq 0\}\cap\{y\geq\varepsilon\}.\) Then for \((x,y)\in S_{\varepsilon}^{x}\cup S_{\varepsilon}^{y}\),_
\[|\Phi(x)-\Phi(y)|\geq C\Phi^{\prime}(\varepsilon)|x-y|\text{ with }C:=\min\left(1,\tfrac{1}{\theta_{2}}\right).\]
Proof.: By symmetry, without loss of generality, we can assume \(x>y\). Now for some \(\lambda\in(y,x)\), we have \(\Phi(x)-\Phi(y)=\Phi^{\prime}(\lambda)(x-y)\). If we assume \(x\geq y\geq\varepsilon>0\), then we have
\[|\Phi(x)-\Phi(y)|\geq\Phi^{\prime}(\lambda)|x-y|\geq\Phi^{\prime}(\varepsilon )|x-y|\]
For \(0\leq y<\varepsilon\leq x\), by the strict convexity of \(\Phi\), we get
\[\frac{\Phi(x)-\Phi(y)}{x-y}\geq\frac{\Phi(x)}{x}\geq\frac{1}{\theta_{2}}\Phi^{\prime}(x)\geq\frac{1}{\theta_{2}}\Phi^{\prime}(\varepsilon)\]
thus concluding the assertion.
**Lemma 3.12**.: _Let \(\Phi,\ H,\ f,\ q\) be as in theorem 3.3 and \(u_{n}\) be as in lemma 3.8. Then there is a constant \(C>0\), independent of \(n\), such that \(\|\Phi(u_{n})\|_{W^{s,G}_{0}(\Omega)},\ \|\Phi(u)\|_{W^{s,G}_{0}(\Omega)}\leq C\), where \(u\) is the pointwise limit of \(u_{n}\)._
Proof.: We have, for \(t>0\),
\[\Phi(t):=\int_{0}^{t}G^{-1}\left(G(1)\tau^{q^{*}-1}\right)d\tau,\]
that is
\[\Phi^{\prime}(t):=G^{-1}\left(G(1)t^{q^{*}-1}\right),\]
which gives, applying the fact that \(\Phi^{\prime}(t)\) is increasing and hence \(\Phi(t)\leq t\Phi^{\prime}(t)\),
\[g\left(\Phi^{\prime}(t)\right)\Phi(t)=\frac{\Phi^{\prime}(t)g\left(\Phi^{ \prime}(t)\right)}{G\left(\Phi^{\prime}(t)\right)}\frac{G\left(\Phi^{\prime}( t)\right)\Phi(t)}{\Phi^{\prime}(t)}\leq p^{+}G(1)t^{q^{*}-1}\frac{\Phi(t)}{\Phi^{ \prime}(t)}\leq p^{+}G(1)t^{q^{*}}\,. \tag{3.14}\]
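For orientation (a computation not carried out in the paper), in the model case \(G(t)=\frac{1}{p}t^{p}\) one has \(G^{-1}(s)=(ps)^{1/p}\) and \(G(1)=\frac{1}{p}\), hence
\[\Phi^{\prime}(t)=G^{-1}\left(G(1)t^{q^{*}-1}\right)=t^{\frac{q^{*}-1}{p}},\qquad\Phi(t)=\frac{p}{p+q^{*}-1}\,t^{\frac{p+q^{*}-1}{p}},\]
which is the classical power-type transformation used for singular \(p\)-Laplace problems.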
Using eq. (3.14) and lemma 3.6, and the fact that \(q^{*}>1\), we have
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}} G\left(\frac{|\Phi(u_{n}(x))-\Phi(u_{n}(y))|}{|x-y|^{s}}\right)\frac{ dxdy}{|x-y|^{N}}\] \[\leq C\int_{\Omega}\frac{f_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}} g(\Phi^{\prime}(u_{n}(x)))\Phi(u_{n}(x))dx\] \[=C\left(\int_{\Omega_{\delta_{i}},\ u_{n}<1}+\int_{\Omega_{\delta _{i}},\ u_{n}\geq 1}+\int_{\omega_{\delta_{i}},\ u_{n}<1}+\int_{\omega_{\delta_{i}},\ u_{n}\geq 1}\right) \frac{f_{n}(x)}{(u_{n}(x)+\frac{1}{n})^{q(x)}}g(\Phi^{\prime}(u_{n}(x)))\Phi(u_ {n}(x)) \tag{3.15}\] \[\leq C\int_{\Omega\cap\{u_{n}<1\}}f_{n}(x)+C\int_{\Omega\cap\{u_{ n}\geq 1\}}f_{n}(x)u_{n}(x)^{q^{*}}\,.\]
Set \(r:=\frac{p^{-}}{p^{-}+q^{*}-1}\), so that \(\frac{p^{-}(1-r)}{r}=q^{*}-1\). We have, for large enough \(t_{0}\) and for any \(t>t_{0}\),
\[t^{\frac{1}{r}} =\frac{1}{r}\int_{0}^{1}\tau^{\frac{1}{r}-1}d\tau+\frac{1}{r}\int_{1}^{t}\tau^{\frac{1}{r}-1}d\tau\leq\frac{2}{r}\int_{1}^{t}\tau^{\frac{1}{r}-1}G^{-1}(G(1))d\tau \tag{3.16}\] \[\leq\frac{2}{r}\int_{1}^{t}G^{-1}(G(1)\tau^{\frac{p^{-}(1-r)}{r}})d\tau=\frac{2}{r}\int_{1}^{t}G^{-1}(G(1)\tau^{q^{*}-1})d\tau\leq\frac{2}{r}\Phi(t).\]
Applying eq. (3.16) to eq. (3.15) and then using Hölder's inequality, and finally the fact that \(|f_{n}|\leq|f|\), we get
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}} G\left(\frac{|\Phi(u_{n}(x))-\Phi(u_{n}(y))|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\] \[\leq C\int_{\Omega\cap\{u_{n}<1\}}f_{n}(x)+C\int_{\Omega\cap\{u_{n}\geq 1\}}f_{n}(x)\Phi(u_{n}(x))^{rq^{*}}\] \[\leq C\|f_{n}\|_{L^{1}(\Omega)}+C\|f_{n}\|_{L^{\overline{H}}(\Omega)}\|\Phi(u_{n})^{rq^{*}}\|_{L^{H}(\Omega)}\] \[\leq C\|f\|_{L^{1}(\Omega)}+C\|f\|_{L^{\overline{H}}(\Omega)}\|\Phi(u_{n})^{rq^{*}}\|_{L^{H}(\Omega)}. \tag{3.17}\]
Observe that
\[\left\|\Phi^{rq^{*}}(u_{n})\right\|_{L^{H}(\Omega)} =\inf\left\{\lambda>0\ \Big{|}\ \int_{\Omega}H\left(\frac{\Phi(u_{n})^{rq^{*}}}{\lambda}\right)\leq 1\right\}\] \[=\inf\left\{\lambda^{rq^{*}}>0\ \Big{|}\ \int_{\Omega}H\left(\frac{\Phi(u_{n})^{rq^{*}}}{\lambda^{rq^{*}}}\right)\leq 1\right\}\] \[=\left(\inf\left\{\lambda>0\ \Big{|}\ \int_{\Omega}H\left(\frac{\Phi(u_{n})^{rq^{*}}}{\lambda^{rq^{*}}}\right)\leq 1\right\}\right)^{rq^{*}}\] \[=\left(\inf\left\{\lambda>0\ \Big{|}\ \int_{\Omega}G_{*}\left(\frac{\Phi(u_{n})}{\lambda}\right)\leq 1\right\}\right)^{rq^{*}}=\left\|\Phi(u_{n})\right\|_{L^{G_{*}}(\Omega)}^{rq^{*}},\]
To see the last line, recall that \(G_{*}(t):=H\left(t^{rq^{*}}\right)\). Combining this with eq. (3.17) gives
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|\Phi(u_{n}(x))-\Phi(u_{n}(y))|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\leq C\|f\|_{L^{1}(\Omega)}+C\|f\|_{L^{\overline{H}}(\Omega)}\|\Phi(u_{n})\|_{L^{G_{*}}(\Omega)}^{rq^{*}}.\]
From lemma 2.7, we can write
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{|\Phi(u_{n}(x))-\Phi(u_{n}(y))|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\leq C\|f\|_{L^{1}(\Omega)}+C\|f\|_{L^{\overline{H}}(\Omega)}\|\Phi(u_{n})\|_{W_{0}^{s,G}(\Omega)}^{rq^{*}}.\]
When \(\|\Phi(u_{n})\|_{W_{0}^{s,G}(\Omega)}>t_{0}\), using lemma 2.2, we get
\[\|\Phi(u_{n})\|_{W_{0}^{s,G}(\Omega)}^{p^{-}}\leq C\|f\|_{L^{1}(\Omega)}+C\|f\|_{L^{\overline{H}}(\Omega)}\|\Phi(u_{n})\|_{W_{0}^{s,G}(\Omega)}^{rq^{*}}.\]
From the hypothesis, we have \(rq^{*}<p^{-}\). This implies that the norm \(\|\Phi(u_{n})\|_{W_{0}^{s,G}(\Omega)}\) cannot be arbitrarily large. So, there exists a constant \(C>0\), independent of \(n\), such that \(\|\Phi(u_{n})\|_{W_{0}^{s,G}(\Omega)}\leq C\).
By lemma 3.9, \(u_{n}\) is a monotone increasing sequence. So, we can define \(u\) as the pointwise limit of \(u_{n}\). Direct application of Fatou's lemma and lemma 2.2 implies that \(\|\Phi(u)\|_{W_{0}^{s,G}(\Omega)}\leq C\).
Proof of theorem 3.3.: By lemma 3.9, \(u_{n}\) is a monotone increasing sequence. So, we can define \(u\) as the pointwise limit of \(u_{n}\). Next, we show that this \(u\) is the required solution.
We know from lemma 3.8 that there are \(u_{n}\) which satisfy
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y |^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy=\int_{\Omega}\frac{ f_{n}(x)\phi(x)}{\big{(}u_{n}(x)+\frac{1}{n}\big{)}^{q(x)}}dx.\]
Note that, on \(\operatorname{supp}(\phi)\), with \(l\) given by lemma 3.9 and as \(f\in L^{1}(\Omega)\),
\[\left|\frac{f_{n}(x)\phi(x)}{\big{(}u_{n}(x)+\frac{1}{n}\big{)}^{q(x)}}\right| \leq\|l^{-q(\cdot)}\|_{L^{\infty}}|f||\phi|\in L^{1}.\]
Hence, by the dominated convergence theorem, we get
\[\lim_{n\to\infty}\int_{\Omega}\frac{f_{n}(x)\phi(x)}{\big{(}u_{n}(x)+\frac{1}{ n}\big{)}^{q(x)}}=\int_{\Omega}\frac{f(x)\phi(x)}{u(x)^{q(x)}}.\]
So, we need to show that
\[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u_{n} (x)-u_{n}(y)}{|x-y|^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy= \int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}} \right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy.\]
We have \(\Phi(u)\in W_{0}^{s,G}(\Omega)\), and by lemma 2.6 it follows that \(\Phi(u)\in L^{G}(\Omega)\). Comparing the integrals on the set where \(u>1\), it follows that \(u\in L^{G}(\Omega)\). We see, using lemma 3.10,
\[\left|\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y|^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy-\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}}\right)\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+s}}dxdy\right|\] \[\leq\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\left|g\left(\frac{u_{n}(x)-u_{n}(y)}{|x-y|^{s}}\right)-g\left(\frac{u(x)-u(y)}{|x-y|^{s}}\right)\right|\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{N+s}}dxdy\] \[\leq C\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{|u_{n}(x)-u_{n}(y)|+|u(x)-u(y)|}{|x-y|^{s}}\right)\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{N+s}}dxdy\] \[=:C\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}I_{n}.\]
The proof will be complete if we can show that \(\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}I_{n}\to 0\). To do this, first, set
\[\mathcal{S}_{\phi}:=\text{supp}\phi,\quad\text{and}\quad\mathcal{Q}_{\phi}:=( \mathbb{R}^{N}\times\mathbb{R}^{N})\setminus(\mathcal{S}_{\phi}{}^{c}\times \mathcal{S}_{\phi}{}^{c}).\]
Now using Hölder's inequality with respect to the measure \(\frac{dxdy}{|x-y|^{N}}\), we get, for any compact set \(K\subseteq\mathbb{R}^{N}\times\mathbb{R}^{N}\),
\[\iint_{\mathbb{R}^{2N}\setminus K}I_{n} =\iint_{\mathcal{Q}_{\phi}\setminus K}I_{n}\] \[\leq C\left\|g\left(\frac{|u_{n}(x)-u_{n}(y)|+|u(x)-u(y)|}{|x-y|^ {s}}\right)\right\|_{L^{\overline{G}}(\mathcal{Q}_{\phi}\setminus K,\frac{dxdy }{|x-y|^{N}})}\left\|\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{s}}\right\|_{L^{G}( \mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}.\]
Now, if \(\left\|g\left(\frac{|u_{n}(x)-u_{n}(y)|+|u(x)-u(y)|}{|x-y|^{s}}\right)\right\| _{L^{\overline{G}}(\mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}\leq 1\), we get
\[\iint_{\mathbb{R}^{2N}\setminus K}I_{n}\leq C\left\|\frac{|\varphi(x)-\varphi( y)|}{|x-y|^{s}}\right\|_{L^{G}(\mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}.\]
Otherwise, we apply lemma 2.2 and eq. (3.13) to get
\[\iint_{\mathbb{R}^{2N}\setminus K}I_{n}\] \[\leq C\left(\iint_{\mathcal{Q}_{\phi}\setminus K}G\left(\frac{|u _{n}(x)-u_{n}(y)|+|u(x)-u(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\right)^ {\frac{1}{p^{-}}}\left\|\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{s}}\right\|_{L^{G} (\mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}\] \[\leq C\left[\iint_{\mathcal{Q}_{\phi}\setminus K}G\left(\frac{|u _{n}(x)-u_{n}(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}+\iint_{\mathcal{Q}_{ \phi}\setminus K}G\left(\frac{|u(x)-u(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^ {N}}\right]^{\frac{1}{p^{-}}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\times\left\|\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{s}} \right\|_{L^{G}(\mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}.\]
By lemma 3.9, there exists \(l=l(\mathcal{S}_{\phi})>0\) such that for \(n\) large enough, \(u_{n}(x)>l\). We now apply lemma 3.11 on the two integrands of the last line to get
\[\iint_{\mathbb{R}^{2N}\setminus K}I_{n} \leq C\left[\iint_{\mathcal{Q}_{\phi}\setminus K}G\left(\frac{|\Phi(u_{n})(x)-\Phi(u_{n})(y)|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\right.\] \[\left.+\iint_{\mathcal{Q}_{\phi}\setminus K}G\left(\frac{|\Phi(u(x))-\Phi(u(y))|}{|x-y|^{s}}\right)\frac{dxdy}{|x-y|^{N}}\right]^{\frac{1}{p^{+}}}\left\|\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{s}}\right\|_{L^{G}(\mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}.\]
By lemmas 2.2 and 3.12, it is clear that
\[\iint_{\mathbb{R}^{2N}\setminus K}I_{n}\leq C\left\|\frac{|\varphi(x)-\varphi( y)|}{|x-y|^{s}}\right\|_{L^{G}(\mathcal{Q}_{\phi}\setminus K,\frac{dxdy}{|x-y|^{N}})}.\]
Since \(\phi\in C_{c}^{\infty}(\Omega)\), for a fixed \(\varepsilon>0\), there exists a compact set \(K=K(\varepsilon)\) such that
\[\iint_{\mathbb{R}^{2N}\setminus K}I_{n}<\frac{\varepsilon}{2}.\]
We now have to estimate \(\iint_{K}I_{n}\). For this, we use Vitali's convergence theorem. Let \(E\subseteq K\). Arguing as above, we can get
\[\iint_{E}I_{n}\leq C\left\|\frac{|\varphi(x)-\varphi(y)|}{|x-y|^{s}}\right\|_{ L^{G}(E,\frac{dxdy}{|x-y|^{N}})}.\]
This shows that the integrand on the left-hand side is uniformly integrable, that is, \(\iint_{E}I_{n}\to 0\) as \(\mathcal{L}^{2N}(E)\to 0\). Applying Vitali's convergence theorem, we get, for large enough \(n\), \(\iint_{K}I_{n}<\frac{\varepsilon}{2}\). Combining the two estimates, we get \(\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}I_{n}\to 0\) as \(n\to\infty\), and hence the proof follows.
Proof of theorem 3.4.: Let \(u\) be a solution of eq. (1.1) obtained through theorems 3.2 and 3.3. Then \(u\) is the pointwise limit of a sequence of solutions, \(u_{n}\), of eq. (3.6). Also, by lemma 3.9, for any compact set \(K\subseteq\Omega\) there exists \(l(K)>0\) such that
\[u(x)\geq l(K)>0\quad\text{for almost all $x\in K$.}\]
This implies that there exists some \(C_{K}>0\) such that \(u(x)^{-q(x)}\leq C_{K}\) for all \(x\in K\). Fix \(x_{0}\in\Omega\) and \(r>0\) such that \(B:=B(x_{0},r)\subset\overline{B(x_{0},r)}\subset\Omega\). Again, since \(u\) is a weak solution of eq. (1.1), this implies that for any \(\varphi\in C_{c}^{\infty}(B(x_{0},r))\) with \(\varphi\geq 0\),
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y) }{|x-y|^{s}}\right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy=\int_{B}f(x) u(x)^{-q(x)}\phi(x)dx\\ \leq C_{B}\int_{B}f(x)\phi(x)dx=\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}g\left(\frac{v(x)-v(y)}{|x-y|^{s}}\right)\frac{(\varphi(x)- \varphi(y))}{|x-y|^{N+s}}dxdy, \tag{3.18}\]
where \(v\in W^{s,G}(B)\cap L^{\infty}(B)\) is a solution to the problem
\[\begin{cases}(-\Delta_{g})^{s}v=C_{B}f,\quad\text{ in }B,\\ v>0,\text{ in }B,\\ v=0,\text{ in }\mathbb{R}^{N}\setminus B\end{cases}\]
obtained through lemma 3.7. By using lemma 3.5, we can conclude that \(u\leq v\) in \(B\) if \(u\) is continuous on \(\mathbb{R}^{N}\). That is, \(u\in L^{\infty}_{loc}(\Omega)\) provided \(u\) is continuous on \(\mathbb{R}^{N}\).
Again, since we have, from eq. (3.18),
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y)}{|x-y|^{s}} \right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy\leq C\int_{B}\phi(x)dx,\]
defining the sets
\[U_{0} :=\left\{(x,y)\in\mathbb{R}^{N}\times\mathbb{R}^{N}\ \Big{|}\ \frac{|u(x)-u(y)|}{|x-y|^{s}}\geq 1 \right\},\] \[U_{j} :=\left\{(x,y)\in\mathbb{R}^{N}\times\mathbb{R}^{N}\ \Big{|}\ \frac{1}{j+1}\leq\frac{|u(x)-u(y)|}{|x-y|^{s}}<\frac{1}{j}\right\} \quad\text{ for }j\geq 1,\]
we get from lemma 2.1 that
\[C\int_{B}\phi(x)dx \geq\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x) -u(y)}{|x-y|^{s}}\right)\frac{(\varphi(x)-\varphi(y))}{|x-y|^{N+s}}dxdy\] \[=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}g\left(\frac{u(x)-u(y) }{|x-y|^{s}}\right)\frac{u(x)-u(y)}{|x-y|^{s}}\frac{(\varphi(x)-\varphi(y))}{( u(x)-u(y))|x-y|^{N}}dxdy\] \[\geq p^{-}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}G\left(\frac{ |u(x)-u(y)|}{|x-y|^{s}}\right)\frac{(\varphi(x)-\varphi(y))}{(u(x)-u(y))|x-y| ^{N}}dxdy\] \[=p^{-}\sum_{j=0}^{\infty}j^{p^{+}}G(\frac{1}{j+1})\iint_{U_{j}} \frac{|u(x)-u(y)|^{p^{+}-2}(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N+sp^{+}}}dxdy\] \[\geq p^{-}\sum_{j=0}^{\infty}\frac{j^{p^{+}}}{(j+1)^{p^{+}}}G(1) \iint_{U_{j}}\frac{|u(x)-u(y)|^{p^{+}-2}(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N +sp^{+}}}dxdy\]
\[\geq C\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p^{+}-2}(u(x)-u (y))(\phi(x)-\phi(y))}{|x-y|^{N+sp^{+}}}dxdy.\]
We can now apply Corollary 5.5 of [16] to conclude that there is some \(\alpha\in(0,1)\) such that \(u\in C^{\alpha}(B)\). This completes the proof.
## Acknowledgement
The first author was funded by MATRICS (DST, INDIA) project MTR/2020/000594.
The second and third authors were funded by the Academy of Finland grant Geometrinen Analyysi (21000046081).
## Author Contribution
All the authors have contributed equally to the article.
## Conflict of Interest
The authors have no competing interests to declare that are relevant to the content of this article.
|
2309.03671 | Dataset Generation and Bonobo Classification from Weakly Labelled Videos | This paper presents a bonobo detection and classification pipeline built from
commonly used machine learning methods. Such an application is motivated by
the need to test bonobos in their enclosure using touch screen devices without
human assistance. This work introduces a newly acquired dataset based on bonobo
recordings generated semi-automatically. The recordings are weakly labelled and
fed to a macaque detector in order to spatially detect the individual present
in the video. Handcrafted features coupled with different classification
algorithms and deep-learning methods using a ResNet architecture are
investigated for bonobo identification. Performance is compared in terms of
classification accuracy on the splits of the database using different data
separation methods. We demonstrate the importance of data preparation and how an
incorrect data separation can lead to deceptively good results. Finally, after a
meaningful separation of the data, the best classification performance is
obtained using a fine-tuned ResNet model and reaches 75% accuracy. | Pierre-Etienne Martin | 2023-09-07T12:19:51Z | http://arxiv.org/abs/2309.03671v1 | # Dataset Generation and Bonobo Classification from Weakly Labelled Videos
###### Abstract
This paper presents a bonobo detection and classification pipeline built from commonly used machine learning methods. Such an application is motivated by the need to test bonobos in their enclosure using touch-screen devices without human assistance. This work introduces a newly acquired dataset based on bonobo recordings generated semi-automatically. The recordings are weakly labelled and fed to a macaque detector in order to spatially detect the individual present in the video. Handcrafted features coupled with different classification algorithms and deep-learning methods using a ResNet architecture are investigated for bonobo identification. Performance is compared in terms of classification accuracy on the splits of the database using different data separation methods. We demonstrate the importance of data preparation and how an incorrect data separation can lead to deceptively good results. Finally, after a meaningful separation of the data, the best classification performance is obtained using a fine-tuned ResNet model and reaches 75% accuracy.
Keywords:Bonobos Classification, Machine Learning, Convolutional Neural Networks, Data Splitting, Automatic Dataset Generation
## 1 Introduction
Direct application of computer vision tools in a specific domain is rarely possible. Parameters often need to be fine-tuned to adapt to scene changes, model purpose or output size. Most applications also imply real-time processing, leading to a trade-off in the design of the pipeline between model performance, pipeline complexity, hardware capacity and processing time. It is with such a principle that many open-source projects, such as Scikit-learn [13], DeepLabCut [12], Detectron2 from Facebook [23] or MMDetection from OpenMMLab [2], came into being. Their goal is to deliver up-to-date computer vision tools for a wide range of applications by providing model architectures and their weights pre-trained on different datasets. Therefore, industry practitioners and academic researchers save time and computation resources by avoiding the training process of such methods. However, for a specific application such as bonobo classification, fine-tuning seems to remain a necessary step in such a process.
Great-ape individual classification is not a new topic in the field of behaviour analysis or computer vision. In 2012, three datasets tackling such a problem were presented: Gorillas, C-Zoo and C-Tai [10]. The two latter were then re-used and refined by different teams to improve state-of-the-art classification methods. In [9], the author hand-annotated the two mentioned datasets using Image Maker and provided a classification solution based on Gabor features and a local SURF descriptor. With the progress of deep learning methods in the following years, the authors of [3] presented a CNN-based model in order to predict different attributes such as identity, age, age group, and gender of chimpanzees. Because of the wide use of face recognition on humans in the same years, its transposition to apes and mammals in general was investigated [7], suggesting face features could be universal. It is with such a principle that [22] and [6] address similar classification problems for rhesus macaques and pandas, respectively.
Building on the many introduced datasets for primates, [4] compiles the Animal Face Dataset, gathering 41 primate species from wild and captive animals. By selecting the 17 most populated species in the dataset, they reached 93.6% individual classification accuracy. Similar work was conducted on chimpanzees only, with a CNN [15] applied to long-term video records in a 14-year dataset (10 million face images) of 23 individuals, reaching an overall accuracy of 92.5%.
Finally, [11] aims to provide behaviour analysis tools by offering segmentation, identification, pose estimation and behaviour classification from homecage cameras. Properly adapted, such a method can be a valuable resource for behaviour analysis in simple contexts [1] and can help in the automatic acquisition and annotation of video recordings.
This project aims to provide a pipeline for individual recognition of bonobos from a webcam located on an apparatus dedicated to data acquisition using a touch screen. The work was inspired by the ZACI project [14], whose overall goal is to automate data collection for cognitive studies in bonobos. This work differs from the previously mentioned projects in the complexity of the task and the dataset. The dataset is acquired automatically without single-image annotation, which may introduce errors. Furthermore, the classification procedure may not focus on the face, but only on the visible parts of the body such as the back, the top of the head or even just the limbs.
In section 2 we present the acquisition context, the annotation of the Bonobo dataset, and the different splitting methods used to evaluate the classification methods presented in section 3. The results of the classification methods on the different splits of the dataset are presented and discussed in section 4. Finally, we draw our conclusions and present planned future work in section 5.
## 2 Weakly Annotated Bonobo Dataset
The Bonobo dataset, the classification models and the training methods presented are available on the Project GitHub page1. In the following subsections, we describe the video acquisition, the weakly annotation procedure, the individual detector and the splitting methods of the dataset.
Footnote 1: github.com/ccp-eva/BonobosClassification
### Video Acquisition and Annotation
Videos were recorded using a digital camcorder Panasonic HC-V757 and a cheap Logitech webcam, both with a resolution of 1280×720, recording at 30 fps at Zoo Berlin. The camcorder was operated by researchers familiar with the bonobos present in the zoo, while the webcam was located in the ZACI apparatus, see Figure 1. The videos were then selected and sorted according to the individual present in the video. The videos can be likened to focal observation, a common practice in behaviour analysis that consists of observing (here filming) one particular individual and recording his/her actions and interactions. This may result in several individuals being in the camera's field of view, or none at all, due to obstruction, camera manipulation, or the focal individual simply being outside the webcam's field of view. Indeed, depending on the bonobo's position relative to the apparatus, the webcam may not be able to capture the individual, even when it is performing on the touch screen, as is the case in Figure 1. No spatial information was annotated, nor was it recorded which individuals were present when several appeared in the field of view. In this particular enclosure there are seven individuals of different gender and age (gender/year of birth): Matayo (male/2019), Monyama (female/2010), Opala (female/1998), Santi (male/1981), Limbuko (male/1995), Leki (female/2014) and Samani (female/2020). Samani was not incorporated into this first version of the dataset because of her constant proximity to her mother Monyama. A total of 100 videos, unevenly distributed across six bonobo individuals, is considered here.
### Bonobo Detection
OpenMMLab provides a top-down heatmap pipeline using a Cascade R-CNN X101 [21] coupled with an HRNet [18], trained on MacaquePose [8], to detect macaques and estimate their pose from images and videos. We use the output of the pre-trained R-CNN X101 to assess the bonobo's location in the video. Although bonobos and macaques differ in many aspects, the R-CNN model can still detect bonobos in the recorded videos. If several bonobos are detected in one frame, only the detection with the highest score is considered. Indeed, we assume that the focal observation of one individual captures that individual most prominently, which should correlate with the detector's confidence score. This way, we can build a weakly labelled dataset based on videos associated with individuals, and provide a frame-wise Region Of Interest (ROI) and a confidence score per video.
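For concreteness, the per-frame selection of the highest-scoring detection can be sketched as follows. This is a minimal illustration rather than the project's actual code: the `detect` wrapper around the pre-trained macaque detector is a hypothetical stand-in, and only OpenCV's video reading is assumed.

```python
import cv2  # OpenCV, used only for video decoding here

def detect(frame):
    """Hypothetical wrapper around the pre-trained macaque detector (e.g. the
    Cascade R-CNN X101 from MMDetection). Returns candidate boxes as
    (x1, y1, x2, y2, score) tuples for one frame."""
    raise NotImplementedError

def extract_weak_labels(video_path, individual):
    """Yield one weakly labelled detection per frame: the highest-scoring box
    is attributed to the focal individual named in the video-level label."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = detect(frame)
        if boxes:  # keep only the most confident detection in the frame
            x1, y1, x2, y2, score = max(boxes, key=lambda b: b[-1])
            yield frame_idx, individual, (x1, y1, x2, y2), score
        frame_idx += 1
    cap.release()
```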
### Data Splitting
#### 2.3.1 Splitting according to detection
The early results in detection and classification, described in section 4, motivated a particular splitting of the obtained dataset. From it, four datasets are generated: _noROI,S0_, _ROI,S0_, _noROI,S0.5_ and _ROI,S0.5_. They take into account two criteria: ROI and score. A generated dataset either considers the ROI (marked _ROI_) or the full frame (_noROI_), and either keeps detections regardless of score (_S0_) or keeps only detected bonobos with a score of at least 0.5 (_S0.5_). Considering only higher-scoring detections reduces the number of samples per individual, making the classification task harder; but the quality of the data may be better, which may conversely ease the task. A threshold higher than 0.5 would leave some videos without any detections, so we consider only these two score thresholds. Indeed, the scores certainly remain low because of the difference between macaques and bonobos.
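A minimal sketch of how these four variants could be assembled from the detector output is given below; the data layout (`detections` tuples and a `frames` mapping) is our own assumption for illustration.

```python
def build_variants(detections, frames):
    """detections: iterable of (frame_idx, label, box, score) tuples.
    frames: mapping from frame_idx to the full image array.
    Returns the four variants: (no)ROI crossed with score thresholds 0 / 0.5."""
    variants = {(roi, thr): [] for roi in ("noROI", "ROI") for thr in (0.0, 0.5)}
    for frame_idx, label, (x1, y1, x2, y2), score in detections:
        img = frames[frame_idx]
        crop = img[int(y1):int(y2), int(x1):int(x2)]  # detector ROI
        for thr in (0.0, 0.5):
            if score >= thr:
                variants[("noROI", thr)].append((img, label))
                variants[("ROI", thr)].append((crop, label))
    return variants
```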
In total, 84 841 bonobo detections are considered regardless of score, against 54 345 with a score above 0.5, from the 100 videos corresponding to 129 334 frames. As presented in Table 1, compared to other similar datasets, the generated ones stand out by their number of samples per individual.
#### 2.3.2 Splitting According to Videos per Individual
The dataset is also split following the proportions 0.6, 0.2 and 0.2 of the videos associated with each individual into train, validation and test sets, respectively. This split is fundamental to having dissimilar images across the splits. The number of frames per video was not considered for this splitting method, mainly because it changes according to the detection score. The split is performed once, at random.

Figure 1: Video acquisition using the webcam located in the middle of the top side of the ZACI device. (©2018 Ruben Gralki/Zoo Berlin)
The distribution of the samples per individual across train, validation and test for _S0_ and _S0.5_ is depicted in Figure 2.
Figure 2: Data distribution across train, validation and test sets. Light and dark colors represent the _S0_ and _S0.5_ data distributions, respectively.
## 3 Bonobo Classification Methods
In this work, we are interested in comparing the commonly used machine learning classifier based on handcrafted features and the deep learning models as feature extractors or fine-tuned using pre-trained weights.
### Machine Learning Classifiers
In order to perform classification, we consider seven machine learning algorithms: Logistic Regression (LR), Linear Discriminant Analysis (LDA), Gaussian Naive Bayes (NB), K-nearest neighbours (KNN), Support Vector Machine (SVM), Classification and Regression Trees (CART) and Random Forest (RF). All classifiers use the same basic image feature descriptors concatenated together: Hu moments, Haralick texture features and a color histogram (8 bins) computed on the images. These classifiers were first trained on the whole generated datasets using a 10-fold cross-validation method. After obtaining these first results, we decided to split the dataset into train, validation, and test sets.
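The feature extraction and cross-validation stage can be sketched with OpenCV, mahotas and scikit-learn as follows; hyperparameters such as `max_iter` are illustrative assumptions, not values reported here.

```python
import cv2
import mahotas
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def describe(image):
    """Concatenate Hu moments, Haralick texture and an 8-bin color histogram."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    haralick = mahotas.features.haralick(gray).mean(axis=0)
    hist = cv2.calcHist([image], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    return np.hstack([hu, haralick, hist])

classifiers = {
    "LR": LogisticRegression(max_iter=1000),  # max_iter is an assumption
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "CART": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}

def evaluate(images, labels):
    X = np.array([describe(img) for img in images])
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, labels, cv=10)
        print(f"{name}: {scores.mean():.4f}")
```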
### Deep Learning Classifiers
We compared the previous classifiers with the pre-trained ResNet18 model from [5]. We ran experiments on the split datasets only for this model. The model was used as a feature extractor, where only the last fully-connected layer is trained, and also fine-tuned, where all layers are trained. The same training method is used in both cases: we train over 100 epochs with a learning rate starting at 0.001 and decreased by a factor of 0.1 every 20 epochs. The model is fed with batches of size 64, and the loss uses sum reduction to avoid over-weighting gradients from shorter batches. We tried training with two losses: the usual cross-entropy loss and a weighted cross-entropy loss. The weighted cross-entropy loss uses the frequency of appearance of each individual in the training set:
\[l_{n}=-w_{y_{n}}\log\frac{\exp(x_{n,y_{n}})}{\sum_{c=1}^{C}\exp(x_{n,c})} \tag{1}\]
with \(w_{y}=N_{classes}*\frac{N_{y}}{N}\), where \(N_{classes}\) represents the number of classes considered in our classification problem (6), \(N_{y}\) the number of samples for a particular individual, and \(N\) the total number of samples in the training set. This weight would equal 1 if the samples were distributed evenly across individuals in the training set. Simple data augmentation is performed through random resized cropping with scale \([0.08,1]\) and aspect ratio \([\frac{3}{4},\frac{4}{3}]\), and horizontal flipping with probability 0.5. These augmentation parameters are the same ones used for training the inception modules [20, 19]. The images of all sets are then resized to \(224\times 224\) to fit the original input size of the pre-trained model. The model state performing best on the validation set with regard to classification accuracy is saved for evaluation on the test set.
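A minimal PyTorch sketch of this training setup is given below. The choice of SGD is our assumption (the optimizer is not specified above); the class-weight formula, the learning-rate schedule, the sum reduction and the augmentation parameters follow the description.

```python
import torch
import torch.nn as nn
from collections import Counter
from torchvision import models, transforms

def make_model(num_classes=6, fine_tune=False):
    # pretrained=True is deprecated on newer torchvision; use weights=... there
    model = models.resnet18(pretrained=True)
    if not fine_tune:  # feature extractor: freeze all but the classification head
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def make_loss(train_labels, num_classes=6, weighted=True):
    # w_y = N_classes * N_y / N, which equals 1 for a balanced training set
    counts = Counter(train_labels)
    n = len(train_labels)
    w = torch.tensor([num_classes * counts[c] / n for c in range(num_classes)])
    return nn.CrossEntropyLoss(weight=w if weighted else None, reduction="sum")

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

model = make_model(fine_tune=True)
optimizer = torch.optim.SGD(  # optimizer choice is an assumption
    [p for p in model.parameters() if p.requires_grad], lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
```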
## 4 Results and Analysis
### Detection Results
The ROIs computed from the videos were automatically generated using a pre-trained method, without ground truth at our disposal. Thus we can only have a limited appreciation of the detection results. As depicted in Figure 3, we can point out the limitations of our method for generating the different datasets. Indeed, in Figure 3a, we may notice that the bonobo performing on the ZACI device is Opala (the video is therefore weakly annotated as Opala), but it is instead her son Matayo who has been detected, with a low score of 0.28. However, in Figure 3b, we can notice the effectiveness of the detector on stable images from the camcorder, despite a reflection due to the filming conditions (behind a glass window in the public area of Zoo Berlin).
### Classification Results
According to the cross-validation evaluation results reported in Table 2, one could think the classification methods perform brilliantly and that the classification problem is easy to solve. Indeed, we obtain almost 100% classification accuracy on all the generated datasets, especially with the Random Forest classifier. However, Table 3 shows that this conclusion is inaccurate, since performance drops for the same methods on the validation and test sets. Such behaviour can be explained by the high similarity of the data extracted from the videos when performing 10-fold cross-validation: no separation of the data with respect to the source videos was enforced for this evaluation method.
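This pitfall can be illustrated with scikit-learn: a naive frame-level `KFold` lets near-duplicate frames from the same video appear in both train and test folds, whereas `GroupKFold` with video identifiers enforces the video-level separation used here. The variable names below are illustrative.

```python
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

def compare_splits(X, y, video_ids):
    """X: frame-level features, y: individual labels,
    video_ids: one identifier per frame, marking its source video
    (assumes at least 10 distinct videos for 10-fold grouping)."""
    rf = RandomForestClassifier()
    leaky = cross_val_score(rf, X, y, cv=KFold(n_splits=10, shuffle=True))
    honest = cross_val_score(rf, X, y, groups=video_ids,
                             cv=GroupKFold(n_splits=10))
    print(f"frame-level 10-fold CV (leaky):  {leaky.mean():.3f}")
    print(f"video-level 10-fold CV (honest): {honest.mean():.3f}")
```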
The results reported in Table 3 are less extreme but more complicated to interpret. They stress the complexity and variety of the different generated datasets. That is why we also give the mean accuracy across all classes on the test set (_avgT_). The standard deviation is not reported but is below \(10^{-2}\). The best accuracies for the validation, test and avgT columns are in bold font. Globally, convergence of the trained models was observed within the first 15 epochs.

Figure 3: Bonobo detection results from the webcam in the ZACI device and from the camcorder in the public area of Zoo Berlin.
The best performance using handcrafted features is obtained with the RF method. It performs respectably compared to the ResNet models, although ResNet generally outperforms RF. The feature extractor using the classical cross-entropy loss seems best suited to classify our noROI datasets. Compared to the RF method, it holds similar test accuracy but has much higher validation accuracy. This may be explained by the model's ability to extract meaningful local features from larger images.
The feature extractor trained using the weighted loss is the model performing best on what we may consider the cleanest dataset (ROI with a confidence score above 0.5, _ROI, S0.5_). It gets the highest validation and average test accuracies, while its accuracy on the test set (48%) remains close to the best accuracy obtained by the fine-tuned ResNet (50%). Surprisingly, the weighted loss did not help much to improve the average test accuracy.

\begin{table}
\begin{tabular}{|c|c c c c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{Generated Datasets} \\ \hline Models & noROI,S0 & ROI,S0 & noROI,S0.5 & ROI,S0.5 \\ \hline LR & .989 & .972 & .995 & .993 \\ LDA & .971 & .959 & .987 & .983 \\ NB & .633 & .614 & .709 & .674 \\ KNN & .9995 & .998 & .9993 & .9989 \\ SVM & .993 & .986 & .997 & .997 \\ CART & .998 & .992 & .998 & .996 \\ RF & **.9999** & **.999** & **.9998** & **.9996** \\ \hline \end{tabular}

*: the precision is set to 3 digits but may increase for better comparison.
\end{table}
Table 2: Model accuracies* compared across the different generated datasets with the 10-fold cross-validation method on the whole dataset.

\begin{table}
\begin{tabular}{|c|c c c c|c c c c|c c c c|c c c c|} \cline{2-17} \multicolumn{1}{c|}{} & \multicolumn{16}{c|}{Generated Datasets} \\ \cline{2-17} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{noROI, S0} & \multicolumn{4}{c|}{ROI, S0} & \multicolumn{4}{c|}{noROI, S0.5} & \multicolumn{4}{c|}{ROI, S0.5} \\ \hline Models & Tr. & Val. & T. & avgT & Tr. & Val. & T. & avgT & Tr. & Val. & T. & avgT & Tr. & Val. & T. & avgT \\ \hline LR & .99 & .38 & .34 & .17 & .98 & .38 & .34 & .17 & 1 & .28 & .18 & .17 & .99 & .28 & .18 & .17 \\ LDA & .98 & .38 & .34 & .17 & .97 & .30 & .12 & .17 & .99 & .28 & .18 & .17 & .99 & .28 & .18 & .17 \\ NB & .68 & .38 & .34 & .17 & .63 & .38 & .34 & .17 & .80 & .28 & .18 & .17 & .78 & .28 & .18 & .17 \\ KNN & 1 & .38 & .34 & .17 & 1 & .38 & .34 & .17 & 1 & .28 & .18 & .17 & 1 & .28 & .18 & .17 \\ SVM & .99 & .38 & .34 & .17 & .99 & .38 & .17 & .17 & 1 & .28 & .18 & .17 & 1 & .28 & .18 & .17 \\ CART & 1 & .43 & .61 & **.60** & 1 & .33 & .41 & .32 & 1 & .32 & .42 & **.45** & 1 & .20 & .23 & .37 \\ RF & 1 & **.55** & **.65** & .49 & .99 & **.56** & **.66** & **.49** & 1 & **.37** & **.50** & .42 & 1 & **.44** & **.52** & **.47** \\ \hline ResNet & .93 & **.85** & **.64** & **.49** & .91 & **.79** & .62 & .55 & .95 & **.76** & .49 & **.46** & .97 & .71 & .48 & .53 \\ ResNet* & .90 & .85 & .60 & .47 & .87 & .77 & .60 & .54 & .94 & .70 & .49 & .45 & .92 & **.75** & .48 & **.54** \\ ResNet\(\dagger\) & .86 & .79 & .64 & .48 & .96 & .60 & **.75** & **.63** & .82 & .71 & **.50** & .45 & .64 & .49 & **.50** & .34 \\ ResNet*\(\dagger\) & .99 & .67 & .61 & .45 & .93 & .54 & .58 & .46 & .99 & .47 & .39 & .39 & .83 & .41 & .42 & .41 \\ \hline \end{tabular}

*: with weighted loss; \(\dagger\): fine-tuned
\end{table}
Table 3: Model accuracy comparison across the different generated datasets. The best values in the non-train sets are in bold per model type.
Furthermore, the fine-tuned ResNet on the _ROI, S0_ dataset performs best overall, obtaining the highest test and average test accuracy across all datasets and models. Its confusion matrix on the test set is depicted in Figure 4. The model seems to have captured more discriminant information by updating the convolutional layers' weights with all detected ROIs to solve this classification task. Still, Matayo and Monyama, whose numbers of samples are the lowest in the dataset, are unsurprisingly the hardest to recognise. We may also notice the performance gap with the validation set, underlining the difference between the train, validation and test sets and the difficulty introduced by the separation of the data.
Figure 4: Confusion matrix of the fine-tuned ResNet model on the _ROI, S0_ dataset using the non-weighted cross-entropy loss.
Finally, we observed similar performance when using a fixed image size for feature extraction, and similar results when saving the model state with respect to the lowest loss instead of the highest validation accuracy. We also observed a general tendency of the models to classify individuals as Leki or Santi, undoubtedly because of their higher appearance rate in the training set. This tendency was less pronounced for Limbuko when using the entire frame for classification: despite his high representation in the training set, most of his videos were recorded using the camcorder. When using the whole frame, the classifier may recognise the scene and acquisition conditions rather than focusing on the individual to classify. The integration of negative samples in the dataset did not help the classification process.
Overall, classification scores remain lower than on other similar datasets because the regions of interest used for classification are more challenging, even for our team, which is familiar with the bonobos. This may be overcome by considering the temporal information of the videos, building a track and classifying it as a whole: the multiple appearances of the individual within a track should help the classification decision.
## 5 Conclusion and Future Work
This paper offers a pipeline to generate rich datasets of bonobo individuals using pre-trained detection models and a weak annotation method. We show the importance of video-level separation when performing a classification task, to avoid misleadingly high classification performance. We investigate the most common machine learning classification methods for bonobo individual classification and show the superiority of deep-learned features over handcrafted features. The ResNet performance is analysed and discussed using different training methods on the different generated datasets. To achieve better performance, we believe more complex network architectures are not the priority; instead, we wish to deepen the analysis of the detection performance and of the quality of the generated datasets.
Future work will focus on the annotation of the acquired videos using the CVAT [16] tool in order to evaluate the detection method and the quality of the automatically built datasets. Other classification methods will also be investigated, using direct feature differences to estimate the similarity between individuals; this would allow more flexible applications in different zoos, performing classification on a larger number of individuals without retraining the model. The focus will also be on using temporal information for classification, tracking and activity recognition in primates for further behaviour analysis.
#### Acknowledgements
Thanks to Zoo Berlin and its responsible curator, Dr. Andre Schule; its deputy head keeper and bonobo keeper, Ruben Gralki; and Kathrin Susanne Kopp from MPI EVA, for the acquisition and annotation of the Bonobo dataset.
2309.15013 | Updated Corpora and Benchmarks for Long-Form Speech Recognition | The vast majority of ASR research uses corpora in which both the training and
test data have been pre-segmented into utterances. In most real-world ASR
use-cases, however, test audio is not segmented, leading to a mismatch between
inference-time conditions and models trained on segmented utterances. In this
paper, we re-release three standard ASR corpora - TED-LIUM 3, GigaSpeech, and
VoxPopuli-en - with updated transcription and alignments to enable their use
for long-form ASR research. We use these reconstituted corpora to study the
train-test mismatch problem for transducers and attention-based
encoder-decoders (AEDs), confirming that AEDs are more susceptible to this
issue. Finally, we benchmark a simple long-form training for these models,
showing its efficacy for model robustness under this domain shift. | Jennifer Drexler Fox, Desh Raj, Natalie Delworth, Quinn McNamara, Corey Miller, Migüel Jetté | 2023-09-26T15:32:09Z | http://arxiv.org/abs/2309.15013v1 | # Updated Corpora and Benchmarks for Long-Form Speech Recognition
###### Abstract
The vast majority of ASR research uses corpora in which both the training and test data have been pre-segmented into utterances. In most real-world ASR use-cases, however, test audio is not segmented, leading to a mismatch between inference-time conditions and models trained on segmented utterances. In this paper, we re-release three standard ASR corpora--TED-LIUM 3, GigaSpeech, and VoxPopuli-en--with updated transcription and alignments to enable their use for long-form ASR research. We use these reconstituted corpora to study the train-test mismatch problem for transducers and attention-based encoder-decoders (AEDs), confirming that AEDs are more susceptible to this issue. Finally, we benchmark a simple long-form training strategy for these models, showing its efficacy for model robustness under this domain shift.
Jennifer Drexler Fox\({}^{1}\), Desh Raj\({}^{2}\), Natalie Delworth\({}^{1}\), Quinn McNamara\({}^{1}\), Corey Miller\({}^{1}\), Miguel Jette\({}^{1}\)

\({}^{1}\)Rev.com; \({}^{2}\)Center for Language and Speech Processing, Johns Hopkins University, USA

Keywords: Long-form ASR, datasets, segmentation, transducers
## 1 Introduction
Most ASR research uses corpora in which both the training and test data have been pre-segmented into utterances. Real-world audio, on the other hand, occurs as _long-form_ unsegmented recordings, leading to a mismatch between inference-time conditions and models trained on segmented utterances. This mismatch problem for long-form ASR has been well established in the literature [1, 2], and researchers have sought to tackle it through better segmentation [3, 4], large context acoustic modeling [5, 6], or rescoring with appropriate language models [7, 8].
A significant fraction of these long-form modeling techniques have only been evaluated on in-house or simulated data. In Fig. 1, we present corpus statistics from 36 published papers1 on long-form ASR, showing that 26.9% and 5.7% used in-house and simulated data, respectively. Even when publicly available "true" long-form corpora were used, they were often multi-speaker (21.2%; e.g. AMI [10] and SwitchBoard [11]), non-English (32.6%), or contained missing segments (11.5%; GigaSpeech and TED-LIUM). This obscures the real long-context modeling problem with orthogonal issues such as overlapped speech, tokenization, or incorrect evaluation.
Footnote 1: These papers were manually selected based on an approximate depth-first search on the citation graph of a few seed papers, such as [9].
To enable fundamental research on this problem, we release long-form versions of three English ASR corpora: TED-LIUM 3 [12], Gigapeech [13], and VoxPopuli-en [14]. Although the original releases for these datasets provide full recordings, the completeness of their transcriptions varies significantly, thus creating several challenges towards their use for long-form ASR. For instance, several portions of the recording may be untranscribed, or some segments may have been removed due to alignment problems or non-verbatim transcription. We reconstitute these long-form corpora through _linking_ and _expansion_ techniques (Section 3).
Finally, we use these reconstituted corpora to demonstrate the train/inference mismatch problem using baseline ASR models trained on the original short-form segments, for both transducers [15] and attention-based encoder-decoders (AEDs) [16]. We show that incorporating long-form training can significantly improve performance when using chunk-wise overlapped inference. Our reconstituted versions of the corpora, along with word-level alignments, are publicly available as Lhotse manifests [17].2
Footnote 2: [https://github.com/revdotcom/speech-datasets](https://github.com/revdotcom/speech-datasets)
## 2 Related Work
There is a large body of work addressing long-form ASR from several perspectives. On the modeling front, researchers have extended conventional end-to-end ASR for large context handling through strategies such as: using history (or contextual) utterances to modify the encoder representation [18, 19, 5], context expansion through preceding audio [1, 6, 20], and summarizing context through embeddings [21, 22]. Chiu et al. [9] compared popular end-to-end models for long-form ASR, finding that transducers are more robust than AEDs to the train-test mismatch. Often, this mismatch can be partially alleviated through techniques such as random utterance concatenation [23], minimum word error rate (MWER) training [2], and strong regularization [24]. OpenAI's Whisper model [25] takes a simpler approach to match training and inference conditions: in both cases, all audio is segmented into 30s chunks without any external VAD or diarizer.

Figure 1: Statistics of in-house (gray) and public (colored) datasets used in long-form ASR research. Color shades represent languages: English, Mandarin, and Japanese.
Chunk-wise overlapped inference [9] is commonly used for offline decoding of long recordings. The related problem of segmentation has been addressed by using CTC-predicted blanks [26], a jointly trained continuous-integrate-and-fire (CIF) module [4], or using special tokens to predict segment boundaries [3]. Language models (LMs) trained with expanded context have been used in first-pass decoding [27], or more commonly for second-pass rescoring [7, 8, 28, 29, 30].
Despite such interest, there is little consensus about best practices for training/decoding in long-form ASR, partially because of a lack of common benchmarks. Although Earnings21 [31] and Earnings22 [32] were proposed to bridge this gap, they do not have any training data included, which makes it difficult to perform controlled investigations.
## 3 Reconstituting Long-Form Data
Our premise is that a "true" long-form corpus has long audio files and accompanying transcriptions. We used GigaSpeech, TED-LIUM, and VoxPopuli (en subset) as the base datasets for long-form reconstitution, since they provide such long recordings. These corpora have train, dev, and test partitions, but they are based on short segments and transcriptions (cut from the original recordings). In this section, we describe our reconstitution process for converting an eligible long-form corpus into a true long-form corpus. This process has two possible realizations, _linking_ and _expansion_. We view linking as a long-form repackaging of an existing corpus, whose results are directly comparable with results on the original corpora. In contrast, we view expansion as a new version of an existing corpus, since we have added new audio segments or transcriptions to the existing data.
### Linking
We define linking as concatenating original segments to make longer ones if no speech or transcriptions lie in between.
GigaSpeech comes with sequentially numbered segments that can be joined with their previous and following segments when available. In some cases, segments were missing from the sequence; these were presumed to be untranscribed and could not be linked across.
In contrast, internal resources were insufficient to allow for linking TED-LIUM. We observed several words in the audio that were not included in the transcriptions.3 Most of these missing transcriptions were, however, present in the transcripts from a scrape of ted.org.4 Mapping TED-LIUM talks to the scrape was largely automatic, but a remainder of files needed to be associated by a semi-automatic method. By referring to these externally-sourced complete transcriptions, we were able to link adjacent segments in the original partitions when there was no missing text between the segments. Fig. 2 shows two representative segment pairs. In Example 1, the external transcriptions (at bottom) indicate that there were no missing transcriptions in between, so linking is possible. In Example 2, the external transcriptions indicate that there were in fact missing transcriptions between the segments and thus they cannot be linked, unless the corpus is expanded. We will describe this expansion process in the following section.
Footnote 3: Previous work on long-form ASR using TED-LIUM seems to have missed or ignored this issue [26].
Footnote 4: [http://www.kaggle.com/dataeets/thegupta/ted-talk](http://www.kaggle.com/dataeets/thegupta/ted-talk)
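A simplified sketch of this linking criterion is given below, assuming each segment carries `text`, `start` and `end` fields; the word-level normalization and the exact matching strategy are our assumptions, not the authors' implementation.

```python
import re

def normalize(text):
    """Lowercase and keep only word characters, returning a word list."""
    return re.sub(r"[^a-z' ]+", " ", text.lower()).split()

def can_link(seg_a, seg_b, full_transcript):
    """True if no transcribed words lie between the two segments, i.e. their
    concatenation occurs contiguously in the external full transcript."""
    words = normalize(full_transcript)
    joined = normalize(seg_a["text"]) + normalize(seg_b["text"])
    n = len(joined)
    return any(words[i:i + n] == joined for i in range(len(words) - n + 1))

def link_segments(segments, full_transcript):
    """Greedily merge adjacent linkable segments (assumes a non-empty,
    temporally ordered segment list)."""
    linked, current = [], segments[0]
    for seg in segments[1:]:
        if can_link(current, seg, full_transcript):
            current = {"text": current["text"] + " " + seg["text"],
                       "start": current["start"], "end": seg["end"]}
        else:
            linked.append(current)
            current = seg
    linked.append(current)
    return linked
```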
### Expansion
Expansion is an optional process involving the addition of speech and/or transcriptions to an existing corpus.
In VoxPopuli, purportedly exhaustive transcriptions are present in the original release, but 57% of transcribed segments are not in the partitions. Segments were marked invalid when an ASR system got >20% CER. We listened to several of these "invalid" segments and decided that their audio quality was not markedly different from other segments. For any paragraph used in a particular partition, we resuscitated formerly invalid segments, allowing longer sequences to be reconstituted.
For TED-LIUM, expansion involved using the scraped transcriptions described in 3.1 to replace the original transcriptions which had gaps.
### Statistics of reconstituted data
Table 1 provides summary statistics contrasting the original and reconstituted long-form versions of the corpora. Since GigaSpeech reconstitution is simply linking, the extra corpus size is entirely between-segment silence. We created two long-form versions: M and 200h. The former is obtained by linking segments of the original GigaSpeech-M (GS-M), which is approximately 1000 hours. Since GS-M is a _random_ subset of GS-XL, it does not have many consecutive segments; as a result, the reconstituted long-form version only has an average length of 11.8s. To solve this issue, we created GS-200h specifically out of the longest consecutive segments in GS-XL, resulting in segments that are at least 240s. The dev and test sets have no missing references; therefore, their long-form versions are the full recordings.

Figure 2: Linkability determined by reference to external transcriptions.
For TED-LIUM, we show statistics for both the _linked_ and _expanded_ versions of the reconstituted data (we used the former for ASR experiments in Section 4). The increase in partition size for the expanded corpora results primarily from inclusion of inter-segment silence, not new references.
For VoxPopuli, expansion resulted in the addition of a substantial amount of new data. Compared to GS and TL, long-form segments are relatively short because we used the original paragraph segmentation.
### Alignment
In addition to extended transcriptions, we also provide word-level timestamps obtained through forced alignments. For the linked corpora, we used a HMM-GMM model trained using Kaldi [33] to align the original short segments. For the expanded corpora, we modified the Fairseq aligner [34] to provide start and end times and accompanying scores for each word. This aligner was run on the complete TED-LIUM talks and the VoxPopuli paragraphs. We used these word-level timestamps to create fixed-length chunks for training (e.g., 15 or 30 seconds) by concatenating subsequent words until the segment exceeded the specific length. We have supplied these alignments in our distribution to enable users to experiment with other segment lengths or dynamic segmentation.
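A minimal sketch of how such fixed-length chunks could be derived from the word-level alignments, assuming a temporally ordered list of `(word, start, end)` tuples:

```python
def make_chunks(words, target_len=30.0):
    """Concatenate subsequent words until the chunk exceeds target_len
    seconds, measured from the first word's start time."""
    chunks, current = [], []
    for word, start, end in words:
        current.append((word, start, end))
        if end - current[0][1] >= target_len:
            chunks.append({"text": " ".join(w for w, _, _ in current),
                           "start": current[0][1], "end": current[-1][2]})
            current = []
    if current:  # flush the trailing partial chunk
        chunks.append({"text": " ".join(w for w, _, _ in current),
                       "start": current[0][1], "end": current[-1][2]})
    return chunks
```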
## 4 ASR benchmarks
In conventional ASR, audio features for a segmented utterance \(\mathbf{X}\in\mathbb{R}^{T\times F}\) are provided as input to the system, and we are required to predict the transcript \(\mathbf{y}=(y_{1},\ldots,y_{U})\), where \(y_{u}\in\mathcal{Y}\) denotes output units such as graphemes or word-pieces. ASR systems search for \(\hat{\mathbf{y}}=\text{arg}\max_{\mathbf{y}}P(\mathbf{y}|\mathbf{X})\), often in a constrained search space using greedy or beam search.
### Models
**Neural transducers** are trained by minimizing the conditional log-likelihood, marginalizing over the set of all alignments \(\mathbf{a}\in\mathcal{\bar{Y}}^{T+U}\), where \(\mathcal{\bar{Y}}=\mathcal{Y}\cup\{\phi\}\) and \(\phi\) is called the blank label [15]. Formally, \(P(\mathbf{y}|\mathbf{X})=\sum_{\mathbf{a}\in\mathcal{B}_{\text{RNNT}}^{-1}(\mathbf{y})}P(\mathbf{a}|\mathbf{X})\), where \(\mathcal{B}_{\text{RNNT}}\) removes the blank tokens. The probability \(P(\mathbf{a}|\mathbf{X})\) is computed by factoring the parameters into an encoder, a prediction network, and a joiner. Since transducers are trained using alignments between \(\mathbf{X}\) and \(\mathbf{y}\), they do not need to model the end of sequence explicitly. This _frame-synchronous_ behavior may result in more robustness to train-test length mismatch. Furthermore, it also allows the estimation of token-level time-stamps at inference.
Conversely, we also trained a _label-synchronous_ **attention-based encoder-decoder** (AED) in the joint CTC-attention framework [35]. The attention head is trained with a label-wise cross-entropy loss, whereas the CTC head is trained with a sequence-level alignment-free criterion [36].
### Overlapped chunk-wise decoding
We follow the overlapped chunk-wise decoding strategy [9]. Given a long recording, we chunk it into fixed-length segments of size \(\ell_{\text{ch}}\), and extend them by an additional \(\ell_{\text{ex}}\) on each side to avoid edge effects. These segments are decoded using the transducer/AED model to obtain time-stamped tokens. We discard the edge tokens which belong to the extra regions in each segment. Finally, we concatenate the segment-level tokens to obtain the transcript for the recording. Unlike [37], we do not need to align the overlapped regions of consecutive segments, but the models are required to estimate token-level time-stamps.
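The following sketch illustrates this decoding strategy; `decode_fn` is a hypothetical stand-in for the model's time-stamped decoding, and the bookkeeping details are our own simplification.

```python
def decode_long_form(audio, decode_fn, l_ch=30.0, l_ex=2.0, sr=16000):
    """audio: 1-D array of samples; decode_fn(samples) -> [(token, t_sec)]
    returns tokens with time-stamps relative to the segment start."""
    duration = len(audio) / sr
    tokens, t = [], 0.0
    while t < duration:
        lo = max(0.0, t - l_ex)                    # extend on each side
        hi = min(duration, t + l_ch + l_ex)
        segment = audio[int(lo * sr):int(hi * sr)]
        for token, ts in decode_fn(segment):
            abs_ts = lo + ts                       # map to recording time
            if t <= abs_ts < t + l_ch:             # drop edge-region tokens
                tokens.append((token, abs_ts))
        t += l_ch
    return tokens
```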
### Long-form inference with attention decoding
We make two changes to the standard attention decoding paradigm to enable long-form inference. First, to combat the problem of high deletion rates on longer segments, we remove short hypotheses after beam search: any hypothesis with 10 or more tokens fewer than the longest hypothesis is removed from consideration. We use this setting for all AED decoding results. Second, because token-level time-stamps are required for the overlapped inference method described above, we obtain these from a constrained forward pass with the CTC head after decoding with the attention head.
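The hypothesis-pruning step can be sketched in a few lines; representing hypotheses as plain token sequences is an assumption for illustration.

```python
def prune_short_hypotheses(hypotheses, margin=10):
    """Drop every beam-search hypothesis that is `margin` or more tokens
    shorter than the longest one, countering AED over-deletion."""
    longest = max(len(h) for h in hypotheses)
    return [h for h in hypotheses if len(h) > longest - margin]
```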
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline \multicolumn{2}{c}{\multirow{2}{*}{**Dataset**}} & \multicolumn{2}{c}{**Original**} & \multicolumn{2}{c}{**Long-form**} \\ \cline{3-6} \multicolumn{2}{c}{} & **Size** & **Length** & **Size** & **Length** \\ \hline \multirow{4}{*}{GigaSpeech} & Train (M) & 999:56 & 4.0 & 1077:25 & 11.8 \\ & Train (200h) & 195:26 & 4.0 & 223:10 & 295.0 \\ & Dev & 11:50 & 6.6 & 10:29 & 1510.9 \\ & Test & 39:39 & 5.7 & 39:18 & 1088.2 \\ \hline \multirow{3}{*}{TED-LIUM\({}^{\dagger}\)} & Train & 453:48 & 6.1 & 441:59 & 64.0 \\ & Dev & 1:35 & 11.3 & (1:35) & (815.3) \\ & Test & 2:37 & 8.2 & 2:24 (2:42) & 576.1 (972.7) \\ \hline \multirow{3}{*}{VoxPopuli} & Train & 536:08 & 10.6 & 1111:46 & 143.7 \\ & Dev & 5:06 & 10.5 & 7:31 & 129.5 \\ & Test & 5:04 & 9.9 & 18:01 & 108.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of reconstituted data: total size of the set (HH:MM) and average segment length (seconds). \({}^{\dagger}\)For long-form TED-LIUM, the numbers outside and within parentheses represent linked and expanded versions, respectively.
### Experimental Setup
For our experiments with neural transducers, we modified the standard Zipformer-transducer recipe in icefall5, trained using the pruned RNN-T loss [38]. The encoder consists of 6 Zipformer blocks [39], which are subsampled by up to 8x, and contain multiple self-attention layers (with shared attention weights). The prediction network is a 1D-convolutional layer with bigram context. We used greedy search for the chunk-wise decoding strategy. AEDs are implemented as part of a joint CTC/attention model using the Wenet toolkit6. They consist of a 12-layer conformer [40] encoder and 6-layer bidirectional transformer decoder, although only the 3 forward decoder layers are used for inference. We use beam search for all decoding conditions. For both models, 80-dim log Mel filter-banks are used as acoustic input, and the output units are BPEs. We trained the models using SpecAugment [41] on the original segments as well as on a combined training set containing the original and the long-form segments. Due to GPU memory constraints, the long-form partition was split into 30s segments. We evaluated the models on the original segments to compare word error rate (WER) performance with oracle segmentation, and then on the reconstituted long-form sets to measure robustness to train-test mismatch.
Footnote 5: [https://github.com/k2-fsa/icefall](https://github.com/k2-fsa/icefall)
Footnote 6: [https://github.com/wenet-e2e/wenet](https://github.com/wenet-e2e/wenet)
### Results
Table 2 shows the performance of the different model types when trained and evaluated on the original segments. Across all three corpora, the transducer model performed best, but both model types gave reasonable results. However, when these models were used to decode the reconstituted long-form data, their performance varied significantly, as seen in the "Original" rows of Table 3. As expected, the transducer model degraded only slightly (e.g., 6.38%\(\rightarrow\)7.02% on TED-LIUM dev), whereas **AED degraded significantly** on all test suites, driven predominantly by high deletion rates. The breakdown of AED WERs into insertion, substitution, and deletion errors can be seen in Figure 3.
Table 3 also shows the results for long-form training using the updated train sets. For both TED-LIUM and GigaSpeech, this training led to small improvements in transducer performance and large improvements in AED performance, although the transducer was still significantly better than the AED for long-form inference. From Fig. 3, we see that long-form training resulted in **large reductions in the deletion rate**, leading to better performance.
The inclusion of the long-form VoxPopuli data degraded performance for the transducer model, and only improved the AED model slightly. This is likely due to our being overly permissive with the included data. Some of the VoxPopuli references in the original transcripts were heavily edited (i.e., non-verbatim transcription), including reordering of words. Since the transducer model assumes monotonic alignments, training with such transcripts could potentially deteriorate the model. Future work is needed to find the appropriate balance between including additional data needed for long-form training and rejecting low-quality references. Alternatively, recently proposed techniques such as bypass temporal classification [42], which allow training with imperfect transcripts, could be explored for making the best use of this data.
## 5 Conclusion
In this work, we released updated long-form versions of three popular English datasets -- TED-LIUM, GigaSpeech, and VoxPopuli. This was achieved using a general "reconstitution" recipe comprising linking and expansion stages. To accompany this release, we presented baseline results using two commonly used models, transducers and AEDs. Across all three datasets, we demonstrated that transducers are more robust than AEDs to the train/test mismatch, when trained on segmented utterances. Finally, we showed that a simple strategy of combining original and long-form segments for training is effective at reducing the performance gap. Nevertheless, more research into training and modeling strategies is required to make long-form ASR robust in real scenarios, and we believe our public benchmarks would be important to measure progress.
**Acknowledgments. This work was started during JSALT 2023, hosted at Le Mans University, France, and sponsored by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, and Microsoft. D.R. acknowledges funding by NSF CCRI Grant No. 2120435 and a JHU-Amazon AI2AI fellowship.**
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size (M)**} & \multicolumn{2}{c}{**TL**} & \multicolumn{2}{c}{**GS**} & \multicolumn{2}{c}{**VP**} \\ \cline{3-8} & & **Dev** & **Test** & **Dev** & **Test** & **Dev** & **Test** \\ \hline Transducer & 65.5 & 6.38 & 5.86 & 14.49 & 13.98 & 8.03 & 8.29 \\ AED & 109.8 & 9.11 & 8.48 & 15.34 & 15.31 & 13.63 & 14.07 \\ \hline \hline \end{tabular}
\end{table}
Table 2: WER results on the original test sets.
Figure 3: Error rates on the dev sets for AED long-form inference.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Training data**} & \multicolumn{2}{c}{**TL**} & \multicolumn{2}{c}{**GS**} & \multicolumn{2}{c}{**VP\({}^{\dagger}\)**} \\ \cline{3-8} & & **Dev** & **Test** & **Dev** & **Test** & **Dev** & **Test** \\ \hline \multirow{2}{*}{Transducer} & Original & 7.02 & 6.08 & 17.09 & 17.06 & 16.37 & 19.33 \\ & + Long-form & 6.25 & 5.71 & 16.21 & 16.36 & 26.18 & 29.84 \\ \hline \multirow{2}{*}{AED} & Original & 60.58 & 62.77 & 45.05 & 45.50 & 34.21 & 39.06 \\ & + Long-form & 18.88 & 23.89 & 20.17 & 20.78 & 27.83 & 33.02 \\ \hline \hline \end{tabular}
\end{table}
Table 3: WER results on long-form evaluation. \({}^{\dagger}\)Long-form training for VoxPopuli contains non-verbatim transcripts.
2309.03665 | How adversarial attacks can disrupt seemingly stable accurate
classifiers | Adversarial attacks dramatically change the output of an otherwise accurate
learning system using a seemingly inconsequential modification to a piece of
input data. Paradoxically, empirical evidence indicates that even systems which
are robust to large random perturbations of the input data remain susceptible
to small, easily constructed, adversarial perturbations of their inputs. Here,
we show that this may be seen as a fundamental feature of classifiers working
with high dimensional input data. We introduce a simple generic and
generalisable framework for which key behaviours observed in practical systems
arise with high probability -- notably the simultaneous susceptibility of the
(otherwise accurate) model to easily constructed adversarial attacks, and
robustness to random perturbations of the input data. We confirm that the same
phenomena are directly observed in practical neural networks trained on
standard image classification problems, where even large additive random noise
fails to trigger the adversarial instability of the network. A surprising
takeaway is that even small margins separating a classifier's decision surface
from training and testing data can hide adversarial susceptibility from being
detected using randomly sampled perturbations. Counterintuitively, using
additive noise during training or testing is therefore inefficient for
eradicating or detecting adversarial examples, and more demanding adversarial
training is required. | Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham | 2023-09-07T12:02:00Z | http://arxiv.org/abs/2309.03665v2 | # How adversarial attacks can disrupt
###### Abstract.
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data. Paradoxically, empirical evidence indicates that even systems which are robust to large random perturbations of the input data remain susceptible to small, easily constructed, adversarial perturbations of their inputs. Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data. We introduce a simple generic and generalisable framework for which key behaviours observed in practical systems arise with high probability--notably the simultaneous susceptibility of the (otherwise accurate) model to easily constructed adversarial attacks, and robustness to random perturbations of the input data. We confirm that the same phenomena are directly observed in practical neural networks trained on standard image classification problems, where even large additive random noise fails to trigger the adversarial instability of the network. A surprising takeaway is that even small margins separating a classifier's decision surface from training and testing data can hide adversarial susceptibility from being detected using randomly sampled perturbations. Counterintuitively, using additive noise during training or testing is therefore inefficient for eradicating or detecting adversarial examples, and more demanding adversarial training is required.
\({}^{1}\)Department of Mathematics, King's College London, WC2R 2LS
\({}^{2}\)School of Computing and Mathematical Sciences, University of Leicester, LE1 7RH
\({}^{3}\)School of Mathematics, University of Edinburgh, EH9 3FD
_E-mail addresses: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]._
## 1. Introduction
Adversarial attacks aim to slightly modify a piece of input data in such a way as to significantly change the output of a model. As such, they exploit sensitivities and instabilities in neural networks. Recent work [2] has shown that such instabilities are somewhat inevitable, even in relatively small networks consisting of just two layers where the number of neurons is linear in the input data dimension. On top of this, there exist simple algorithms enabling a malicious attacker to produce adversarial perturbations quite easily in many cases [3]. It is remarkable, therefore, that these same instabilities are rarely triggered by random perturbations to the input data - even when these random perturbations may be much larger than destabilising adversarial perturbations.
This 'paradox of apparent stability' is demonstrated in Figure 1 for a standard convolutional neural network trained on CIFAR-10 images [11]. Although the majority of images in both the training and test data sets are susceptible to small adversarial attacks (Panel (a)), random perturbations even an order of magnitude larger mostly fail to cause the images to be misclassified (Panel (b)). These experiments are discussed in detail in Section 2.
Several explanations for the causes of adversarial examples have been proposed in the literature. An early work on the subject [5] suggested that adversarial images simply live in regions of the data space to which the data distribution assigns low probability. A variant of this idea, discussed in [9], suggests that adversarial attacks perturb inputs in a way that moves them in an orthogonal direction to the local data manifold. This results in adversarial images which exist in a region of data space where no training data could have been sampled, and the decision surfaces of the network are therefore relatively pathological. Other suggested mechanisms include the dimpled manifold hypothesis [15], boundary tilting [18], and the existence of uncountably large families of special distributions for which instabilities are expected [2]. However, none of these frameworks
rigorously account for and explain the paradoxical simultaneous robustness of these classifiers to random perturbations whose size could be several times larger than that of the adversarial ones.
Here, we suggest a resolution to the paradox rooted in ideas from the concentration effects of high dimensional probability distributions. We study the phenomenon in the context of a binary classification problem, and the simple, realistic framework we introduce captures the key features of the paradox which are observed in practice (precise definitions of these terms are given in Section 3):
**Accuracy:** The classifier correctly labels non-perturbed data.
**Apparent stability:** There is a vanishingly small probability that a sampled data point will be misclassified after a large random perturbation is applied to it.
**Vulnerability:** Yet, with high probability any sampled data point is susceptible to a very small adversarial perturbation that changes the predicted class.
**Computability:** The optimal destabilizing perturbation can be computed from knowledge of the loss function gradient.
A further important feature of the framework that we introduce is that it can easily be studied at various levels of generality, revealing that the phenomena we observe are not merely contrived artefacts. We first present the model in the case of two data distributions supported in \(n\)-dimensional balls with a linear classifier in Section 3. This simplified setting allows us to present results which distil the fundamental origins of the paradox without unnecessary technical details. We repeat the analysis in a general setting where data are sampled from two arbitrary distributions and a classifier with a nonlinear decision surface is deployed in Section E.
Our results reveal a tension between different notions of what it means for a classifier to be stable, a subtlety which is rarely discussed in practice. A problem may be _deterministically unstable_ in the sense that there exists a small, destabilising perturbation, while the fact that this instability is extremely unlikely to be triggered by random noise in the data renders the problem _probabilistically stable_. This is a dangerous situation for a performance-critical classifier: even though the performance appears excellent at test time, adversarial instabilities lurk awaiting an unscrupulous attacker and cannot be detected at random. We discuss this concept and further practical implications of our results in Section 4.

Figure 1. Histograms showing the fraction of images which were misclassified after either (a) an adversarial attack (as the fraction of ordinarily correctly classified images) or (b) a random perturbation of different sizes (as the fraction of images which were susceptible to adversarial attacks), measured as the maximum absolute change to an individual pixel channel (the \(\ell^{\infty}\) norm). For adversarial attacks, this represents the smallest misclassifying attack in the adversarial direction, while for the random perturbations we record the smallest \(\ell^{\infty}\) norm among all misclassifying perturbations. The random perturbations applied to each image were normalised to have Euclidean norm equal to a fixed multiple of the Euclidean norm of the smallest successful adversarial attack for that image, shown on the horizontal axis of panel (b). Examples are shown at the size of their respective perturbation norms.
## Notation
We use the following notation throughout:
* \(x\cdot y\) denotes the inner product of \(x,y\in\mathbb{R}^{n}\) and \(\|x\|=\sqrt{x\cdot x}\) denotes the Euclidean norm,
* \(\mathbb{B}_{r}^{n}(c)=\{x\in\mathbb{R}^{n}\,:\,\|x-c\|\leq r\}\) denotes the Euclidean ball in \(\mathbb{R}^{n}\) with radius \(r>0\) centered at \(c\in\mathbb{R}^{n}\), and we use the abbreviation \(\mathbb{B}^{n}=\mathbb{B}_{1}^{n}(0)\),
* \(V^{n}=\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}\) denotes the \(n\)-dimensional volume of \(\mathbb{B}^{n}\), and \(V^{n}_{\text{cap}}(r,h)\) denotes the volume of the cap with height \(h\) of the \(n\)-dimensional ball of radius \(r\), i.e. the volume of the set \(\{x\in\mathbb{R}^{n}\,:\,\|x\|<r\text{ and }x_{1}>r-h\}\), where \(x_{1}=x\cdot\mathbf{e}_{1}\) and \(\mathbf{e}_{1}=(1,0,\ldots,0)^{\top}\in\mathbb{R}^{n}\).
* For a set \(S\subset\mathbb{R}^{n}\), we use \(\mathcal{U}(S)\) to denote the uniform distribution on \(S\), and \(\mathbb{I}_{S}:\mathbb{R}^{n}\to\{0,1\}\) to denote the indicator function of \(S\), such that \(\mathbb{I}_{S}(x)=1\) for \(x\in S\) and \(0\) otherwise.
## 2. The paradox of apparent stability demonstrated on CIFAR-10
The phenomenon of simultaneous susceptibility to adversarial attacks and robustness to random noise can be clearly demonstrated using the CIFAR-10 image classification dataset [11]. To present it in the simple setting of a binary classification problem, we split the 10 classes of the CIFAR-10 dataset into 45 binary classification problems. A separate network (each with an identical convolutional structure in the form of a truncated VGG network [16]) was trained using Tensorflow [1] for each of these problems, and each point in the training and test set was assessed for its susceptibility to adversarial examples using a gradient-based attack algorithm. For images which were susceptible to an adversarial attack with Euclidean norm \(\delta\), we applied 2000 randomly sampled perturbations with Euclidean norm \(\epsilon\) set to 1, 2, 5, and 10 times \(\delta\), to probe the sensitivity of the network to random perturbations around these images. The full experimental setup is described in Section A of the supplementary material.
The central phenomenon is illustrated in Figure 1: while the networks were easily fooled by relatively small adversarial perturbations which appear to make little perceptual difference to the image, they were remarkably robust to randomly sampled perturbations. Here we demonstrate this in the broadly representative case of the 'aeroplane-vs-cat' binary classification problem. Comparing the inset examples in Figures 1(a) and 1(b), it is difficult to spot the modification made by the adversarial perturbation, and it is nearly equally difficult to make out the aeroplane in the randomly perturbed image. Note that, since the original images have pixel channel values in \([0,1]\), a perturbation with \(\ell^{\infty}\) norm greater than 1 represents a drastic change to the contents of the image, yet one which was rarely able to cause the network to misclassify its input. Further details of the experimental setup and full results for this and the remaining classification problems are explored in Section B of the supplementary material.
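A sketch of how such norm-matched random perturbations could be generated (sampling directions isotropically and rescaling to a fixed multiple of the adversarial norm) is given below; the parameter names are illustrative, not taken from the experimental code.

```python
import numpy as np

def random_perturbations(image, adv_norm, scales=(1, 2, 5, 10),
                         n_trials=2000, seed=0):
    """Yield random perturbations of `image` whose Euclidean norm equals a
    fixed multiple of `adv_norm`, the norm of the smallest successful
    adversarial attack found for that image."""
    rng = np.random.default_rng(seed)
    for scale in scales:
        for _ in range(n_trials):
            direction = rng.standard_normal(image.shape)  # isotropic direction
            direction *= scale * adv_norm / np.linalg.norm(direction)
            yield scale, image + direction
```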
We also provide the results of experiments incorporating additive random noise to data at training time to assess the impact this may have on adversarial susceptibility (described in Sections A.4 and B.4). The conclusion we draw from these experiments is that training with even large random perturbations does not seem to significantly decrease the susceptibility to adversarial attacks, and is responsible for a large drop in accuracy.
## 3. An illustrative theoretical model
This puzzle can be understood via a simple yet reasonably generic model problem which captures the main features of the phenomenon. In this section, we focus on the case of two data distributions satisfying a mild non-degeneracy requirement (Definition 1) which are supported on balls and classified using a linear discriminant. A significantly more general version of this model is analysed in Section E of the supplementary material, with fewer constraints on the distributions and a classifier which is permitted to use a more general nonlinear decision surface. The results and
conclusions remain largely qualitatively similar. In particular, the simultaneous co-existence of the high accuracy, the typicality of data susceptible to adversarial attacks, and the rarity of destabilising random perturbations with bounded \(\ell^{2}\) norm all extend to the more general model with nonlinear decision boundary (see Theorems 13, 15, 18 and Corollaries 14, 16, 19 in the Supplementary materials).
We recall the definition of a data distribution satisfying the Smeared Absolute Continuity (SmAC) property [6], which essentially just prevents it from having pathological concentration points. We note that if the growth property is satisfied with \(A=1\), then the distribution is simply the uniform distribution on the ball \(\mathbb{B}_{r}^{n}(c)\).
**Definition 1** (Smeared Absolute Continuity (SmAC) [6]).: _A distribution \(\mathcal{D}\) on \(\mathbb{R}^{n}\) is said to satisfy the smeared absolute continuity condition if it possesses a density \(p:\mathbb{R}^{n}\to\mathbb{R}_{\geq 0}\) and there exists a centre point \(c\in\mathbb{R}^{n}\) and radius \(r>0\) such that \(p(x)>0\) only for points \(x\) in the ball \(\mathbb{B}_{r}^{n}(c)\), and there exists a constant growth parameter \(A>0\) such that_
\[\sup_{x\in\mathbb{B}_{r}^{n}(c)}p(x)\leq\frac{A^{n}}{V^{n}r^{n}}.\]
Suppose that two classes of data are each sampled from data distributions \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}\) on \(\mathbb{R}^{n}\) satisfying the SmAC property. For simplicity, we suppose that these distributions each have radius 1 and centres given by \(c_{0}=-\epsilon\mathbf{e}_{1}\) and \(c_{1}=\epsilon\mathbf{e}_{1}\) respectively. We further suppose that both distributions satisfy the growth bound with the same parameter \(A\). For brevity, we also define the combined distribution \(\mathcal{D}_{\epsilon}\) which samples a point from \(\mathcal{D}_{0}\) with label 0 with probability \(\frac{1}{2}\), and samples a point from \(\mathcal{D}_{1}\) with label 1 with probability \(\frac{1}{2}\). The geometry of this setup is illustrated in Figure 2.
The classification function \(f:\mathbb{R}^{n}\to\{0,1\}\) with the highest accuracy which can be defined for this data model without further knowledge of the distributions is given by the simple linear separator
\[f(x)=\begin{cases}0&\text{ if }x_{1}<0,\\ 1&\text{ otherwise.}\end{cases} \tag{1}\]
This classifier does not necessarily return the correct label in all cases since, for \(\epsilon\in(0,1)\), the two data classes overlap.
The first property of this simple model is that misclassified points are rare in the high dimensional setting, despite the fact that the two balls from which points are sampled have only a small separation between their centres. More precisely, the probability that this classifier is correct converges exponentially to 1 as the data dimension grows. This result is proven in Section C.1 of the supplementary material.
**Theorem 2** (The classifier is accurate).: _For any \(\epsilon>0\), the probability that the classifier applies the correct label to a randomly sampled data point grows exponentially to 1 with dimension \(n\); specifically_

\[P((x,\ell)\sim\mathcal{D}_{\epsilon}:f(x)=\ell)\geq 1-\frac{1}{2}A^{n}(1-\epsilon^{2})^{\frac{n}{2}}.\]

Figure 2. Two unit balls with centres separated by distance \(2\epsilon\), and the decision surface of the classifier \(f\) (dashed).
The sharpness of this result is verified empirically in Figure 3(a), computed for \(A=1\). We observe that by \(n=10,000\), the probability of sampling a point which will be misclassified is virtually 0. To put this and the following results into context, the \(32\times 32\times 3\) images used in CIFAR-10 have 3,072 attributes, while images of the size \(256\times 256\times 3\) commonly used in ImageNet have 196,608 attributes, placing them firmly within the range of dimensionalities where the effects described here are active.
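To make this empirical check concrete, the following minimal NumPy sketch (our illustrative code, not that used for the figures; the function names, sample sizes and dimensions are assumptions) estimates the accuracy of the classifier (1) on the two-ball model with \(A=1\), i.e. with both classes uniform on unit balls, and compares it with the bound of Theorem 2. Only the first coordinate of each sample matters to the classifier, but full vectors are drawn so that the marginal distribution of that coordinate is correct.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_ball(n, size):
    """Draw `size` points uniformly from the unit ball in R^n."""
    g = rng.standard_normal((size, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)        # uniform on the sphere
    return g * rng.uniform(size=(size, 1)) ** (1.0 / n)  # radial CDF is r^n

eps = 0.05
for n in (10, 100, 1000, 5000):
    labels = rng.integers(0, 2, 2000)                    # classes 0/1, prob 1/2
    x1 = sample_unit_ball(n, 2000)[:, 0] + np.where(labels == 0, -eps, eps)
    acc = np.mean((x1 >= 0).astype(int) == labels)       # the classifier f of (1)
    bound = 1 - 0.5 * (1 - eps**2) ** (n / 2)            # Theorem 2 with A = 1
    print(f"n={n:5d}  empirical accuracy={acc:.4f}  bound={bound:.4f}")
```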
On the other hand, even accurately classified points in this model are still close to the decision surface since the ball centres are only separated by distance \(\epsilon\). Because of this, for any \(\delta>\epsilon\), there are points sampled from each class which are susceptible to an adversarial attack \(s\in\mathbb{R}^{n}\) with \(\|s\|\leq\delta\) which causes \(f\) to predict the wrong class. Moreover, in high dimensions, data points sampled from such a distribution concentrate at distance \(\epsilon\) from this decision surface, meaning that the probability of sampling a point which is susceptible to an adversarial attack is high. This may be encapsulated in the following result, which is proven in Section C.2 of the supplementary material.
**Theorem 3** (Susceptible data points are typical).: _For any \(\epsilon\geq 0\) and \(\delta\in[\epsilon,1+\epsilon]\), the probability that a randomly sampled data point is susceptible to an adversarial attack with Euclidean norm \(\delta\) grows exponentially to 1 with the dimension \(n\), specifically_
\[P\big{(}(x,\ell) \sim\mathcal{D}_{\epsilon}\,:\,\text{there exists $s\in\mathbb{B}^{n}_{ \delta}$ such that $f(x+s)\neq\ell$}\big{)}\] \[\geq 1-\frac{1}{2}A^{n}(1-(\delta-\epsilon)^{2})^{\frac{n}{2}}.\]
Although this susceptibility may therefore be viewed as typical in high dimensions, the probability of detecting it by sampling random perturbations of data points is, paradoxically, very small, as shown by the following result, which is proven in Section C.3 of the supplementary material.
Figure 3. Comparison of the theoretical bounds in Theorems 2 and 4 against empirical results computed using 10,000 data points sampled from \(\mathcal{D}_{\epsilon}\), with \(\epsilon=0.05\), and 10,000 perturbations sampled from \(\mathcal{U}(\mathbb{B}^{n}_{\delta})\) for various values of \(\delta\). We see that, even for perturbations 50 times larger than the separation distance between the balls (i.e. \(\delta=2.5\)), the probability of randomly sampling a perturbation which changes the classification of a random data point is very small in high dimensions.
**Theorem 4** (Destabilising perturbations are rare).: _For any \(\delta>\epsilon\geq 0\), the probability that a randomly selected perturbation with Euclidean norm \(\delta\) causes a randomly sampled data point to be misclassified is bounded from above as:_
\[P\big{(}(x,\ell)\sim\mathcal{D}_{\epsilon},s\sim\mathcal{U}(\mathbb{B}_{\delta }^{n})\,:\,f(x+s)\neq\ell\big{)}\leq A^{n}\Big{(}1-\Big{(}\frac{\epsilon}{1+ \delta}\Big{)}^{2}\Big{)}^{\frac{n}{2}}.\]
_In particular, when \(\delta\) is independent of dimension \(n\), this probability converges to 0 exponentially with \(n\)._
This probability bound is compared against empirically sampled data in Figure 3(b). While the bound is not particularly sharp in low dimensions, it accurately describes the key phenomenon which is the convergence of the probability to 0 in high dimensions. This phenomenon is startlingly persistent, even when the magnitude of the sampled perturbations is 50 times greater than the distance between the centres of the spheres (when \(\delta=2.5\)).
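The same experiment can be sketched in a few lines (again illustrative code under the same assumptions as the previous sketch): perturbations are drawn uniformly from \(\mathbb{B}^{n}_{\delta}\) and the empirical misclassification frequency is compared with the bound of Theorem 4 for \(A=1\).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ball(n, size, radius=1.0):
    """Draw `size` points uniformly from the ball of given radius in R^n."""
    g = rng.standard_normal((size, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return radius * g * rng.uniform(size=(size, 1)) ** (1.0 / n)

eps, delta, trials = 0.05, 2.5, 2000
for n in (10, 100, 1000, 5000):
    labels = rng.integers(0, 2, trials)
    x1 = sample_ball(n, trials)[:, 0] + np.where(labels == 0, -eps, eps)
    s1 = sample_ball(n, trials, radius=delta)[:, 0]      # random perturbation
    flipped = np.where(labels == 0, x1 + s1 >= 0, x1 + s1 < 0)
    bound = (1 - (eps / (1 + delta)) ** 2) ** (n / 2)    # Theorem 4 with A = 1
    print(f"n={n:5d}  P(misclassify)={flipped.mean():.4f}  bound={bound:.4f}")
```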
We note that some care needs to be taken when considering perturbations with fixed \(\ell^{\infty}\) norms: the corresponding \(\ell^{2}\) norm of such perturbations scales as \(\sqrt{n}\), which affects the convergence to 0 of the destabilisation probability in Theorem 4.
A further aspect of this model problem is that successful adversarial attacks are universal in high dimensions. We define the _destabilisation margin_ to be the distance by which a destabilising perturbation pushes a data point across the decision threshold of the classifier (1). This is measured by the functions \(d_{\ell}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) associated with each class \(\ell=0,1\), where, for a data point \(x\) and a perturbation \(s\),
\[d_{0}(x,s)=\max\{x_{1}+s_{1},0\},\]
and
\[d_{1}(x,s)=\max\{-x_{1}-s_{1},0\}.\]
The following result then holds, as proven in Section C.4 of the supplementary material.
**Theorem 5** (Universality of adversarial attacks).: _Let \(\epsilon\geq 0\) and suppose that \(x,z\sim\mathcal{D}_{\epsilon}\) are independently sampled points with the same class label \(\ell\). For any \(\gamma\in(0,1]\), the probability that \(x\) is destabilised by all perturbations \(s\in\mathbb{R}^{n}\) which destabilise \(z\) with destabilisation margin \(d_{\ell}(z,s)>\gamma\) converges exponentially to 1 as the dimension \(n\) increases. Specifically, for \(\ell\in\{0,1\}\) and \(z\in\mathbb{R}^{n}\), let \(S_{z}=\{s\in\mathbb{R}^{n}\,:\,d_{\ell}(z,s)>\gamma\}\). Then,_
\[P(x,z\sim\mathcal{D}_{\ell}\,:\,f(x+s)\neq\ell\text{ for all }s\in S_{z})\geq 1-A^{ 2n}\Big{[}1-\Big{(}1-\frac{1}{2}\Big{(}1-\frac{\gamma^{2}}{4}\Big{)}^{\frac{n }{2}}\Big{)}^{2}\Big{]}.\]
The anatomy of this bound is slightly obfuscated at first glance by the presence of the distribution growth parameter \(A\). In the case when \(A=1\), which implies that both distributions are uniform, the lower bound takes the simpler form \((1-\frac{1}{2}(1-\frac{\gamma^{2}}{4})^{\frac{n}{2}})^{2}.\) When \(n\) is large, we may expect that the term \(\frac{1}{2}(1-\frac{\gamma^{2}}{4})^{\frac{n}{2}}\) is close to zero since \(\gamma\in(0,1]\), and the simplified bound is therefore close to 1. The value inside the square brackets of the general bound may therefore be expected to be close to 0, implying that the complete lower bound is close to 1 for sufficiently large \(n\), even if \(A\) is large.
The dependence of the bound in Theorem 5 on the margin \(\gamma\) by which the perturbation destabilises \(z\) is an interesting feature. Roughly speaking, the result suggests that in low dimensions only severe perturbations which push points a long way past the decision threshold may be regarded as universal in the sense of having a high probability of destabilising other sampled points. As the dimension \(n\) increases, however, perturbations which produce smaller and smaller margins on individual points become universal in the sense that they have a constant probability of destabilising other sampled points.
Common algorithms for constructing adversarial attacks work by perturbing the target input in such a way as to increase an appropriate loss function. Gradient-based methods for this, such as the Fast Gradient Sign Method [5], compute the gradient of the loss function with respect to the components of the input, evaluated at the target input with its true class. Perturbing the input in the direction of this gradient therefore moves it in the direction of steepest ascent of the loss function locally, thereby representing a good candidate for an adversarial direction. The minimal
scaling to be applied to this adversarial direction, required to form the final adversarial input, can then be determined via a line search in the adversarial direction.
In the case of this model setup, such an algorithm (with a standard choice of loss function) will successfully provide the optimal direction for an adversarial attack: the most direct path to move the input along in order to cross the decision surface. To show this, we first observe that the classifier \(f\) in (1) can be equivalently defined as \(f(x)=H(g(x))\), where \(H:\mathbb{R}\to\{0,1\}\) denotes the (piecewise constant) Heaviside function, and the linear function \(g(x)=\mathbf{e}_{1}\cdot x\). To construct gradient-based attacks, we use a differentiable version \(\tilde{f}\) of \(f\) constructed as \(\tilde{f}(x)=\sigma(g(x))\), where \(\sigma:\mathbb{R}\to(0,1)\) is a continuously differentiable version of the Heaviside function which is monotonically increasing with \(\sigma(0)=\frac{1}{2}\). An example of such a function is the standard sigmoid function. Then, the following result, proved in Section C.5 of the supplementary material, shows that gradient-based attacks on this classifier will always return the optimal attack direction.
**Theorem 6** (Gradient-based methods find the optimal adversarial attack).: _Let \(L:\mathbb{R}_{>0}\to\mathbb{R}\) denote any differentiable, monotonically increasing loss function. For any \((x,\ell)\sim\mathcal{D}_{\epsilon}\), the gradient of the loss \(L(|\tilde{f}(x)-\ell|)\) with respect to the components of \(x\) corresponds to a positive multiple of the optimal attack direction \((1-2\ell)\mathbf{e}_{1}\)._
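Theorem 6 can be checked numerically in a few lines. The sketch below (illustrative code; we take \(L\) to be the identity and \(\sigma\) the standard sigmoid, and compute the input gradient by hand) shows that the gradient of \(L(|\tilde{f}(x)-\ell|)\) points along \((1-2\ell)\mathbf{e}_{1}\) for either label.

```python
import numpy as np

def grad_loss(x, label):
    """Input gradient of |sigma(x_1) - label| for f~(x) = sigma(e_1 . x)."""
    p = 1.0 / (1.0 + np.exp(-x[0]))          # sigma(x_1), a smooth Heaviside
    g = np.zeros_like(x)
    # d|p - label|/dp is +1 for label 0 (since p > 0) and -1 for label 1 (p < 1)
    g[0] = (1.0 if label == 0 else -1.0) * p * (1.0 - p)
    return g

rng = np.random.default_rng(2)
x = 0.1 * rng.standard_normal(4)
for label in (0, 1):
    print(label, grad_loss(x, label), "optimal direction:", 1 - 2 * label)
```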
### Generalisations
Despite its simplicity, the model presented above covers a variety of settings, including data sampled from many common distributions such as uniform distributions and truncated Gaussian distributions. The model may also be directly generalised to cover an even wider range of settings. In Section E, we present analogous results for classifiers with non-flat decision surfaces separating two general distributions. As corollaries of these results, we obtain variants of the theorems presented above for the more general case when \(r\neq 1\).
Further generalised scenarios in which our results may be applied are depicted in Figure 4. Panel \(a\) simply shows the original setup, with the red dashed line showing the decision surface of the classifier \(f\).
Panel \(b\) visualises a more general case in which the data from only one class is supported in a ball and satisfies the SmAC property, while the other data class belongs to some different class of distributions. A more general non-flat decision boundary is depicted as a green dashed line in this case, which we suppose may be sufficiently well approximated _locally_ by a hyperplane (depicted as a red dashed line). All results derived above (corresponding to the case of panel \(a\)) apply here for samples drawn from \(\mathcal{D}_{1}\).
Panel \(c\) shows the general setting in which both classes are sampled from non-SmAC distributions. However, if there is a subdomain \(\hat{D}\) near the decision surface within which one distribution is locally SmAC and the decision boundary in that domain is sufficiently well approximated locally by a hyperplane, then the problem can be considered as a local version of case \(b\).

Figure 4. Different scenarios to which the simple two ball model may be generalised.
## 4. Discussion and relation to prior work
### Existence of adversarial examples
Since the seminal work [17] reporting the discovery of adversarial examples in deep neural networks, the topic of adversarial examples, as well as their origins and the mechanisms behind their occurrence, has been the focus of significant attention in the theoretical and computational machine learning communities. One hypothesis, expressed in [17], was that the existence of adversarial examples could be attributed to inherent instabilities - i.e., large Jacobian norms leading to large Lipschitz constants for the classification maps. Theorems 3, 4 (see also Theorems 10 and 11 in Section D of the supplementary material) show that whilst the latter mechanism may indeed constitute a feasible route for adversarial examples to occur, our presented framework reveals a simple and under-explored pathway for adversarial data to emerge naturally in systems without large Jacobian norms.
### Fragility of adversarial examples
An interesting additional property of adversarial examples has been empirically observed in [12, 7]. It has been found that the capability of adversarial examples to fool the classifiers they have been designed for could be hindered by perturbations and transformations which are naturally present in real-world environments. Here we show and prove (Theorems 4 and 11) that in the vicinity of the target images, adversarial examples may indeed occupy sets whose Lebesgue measure is exponentially small. Hence, the addition of a small but appropriate perturbation to an example of that type can render it non-adversarial.
### Universality of adversarial examples
Another striking feature of adversarial examples is their potential universality. The phenomenon was first reported in [13] and has since been observed in a wide range of tasks and architectures [3]. Several explanations justifying the existence of universal adversarial examples have been proposed in the literature. This includes the view that universal perturbations may exploit correlated lower-dimensional structures in the classifier's decision boundaries. It has been less clear how to explain the simultaneous existence, fragility, typicality, and universality of adversarial perturbations. Theorems 3, 4, and 5 show that the combination of these correlations with the high dimensionality of data may explain the co-existence of the typicality of adversarial examples, their fragility, and, at the same time, their universality.
### Typicality of adversarial examples
Several works have presented feasible mechanisms explaining the potential typicality and prevalence of adversarial data. In [14], [19] the authors exploited concentration of measure arguments to conclude that small destabilising perturbations can be typical in high dimensional settings. These arguments, however, do not explain the simultaneous rarity of destabilising random perturbations and the typicality of adversarial examples (see Fig. 6 and the discussion below). The connection of these two phenomena is a key feature of our framework.
### Notions of stability
Our present work reveals a new unexplored relationship between stability and the existence of adversarial data. We show that the ubiquitous presence of adversarial data perturbations which destabilise the classifier is not contradictory to the robustness of the classifier to random perturbations of the data. If we view the former as a form of _deterministic instability_ (i.e. there exist destabilising perturbations), and the latter as a form of _probabilistic stability_ (destabilising perturbations are unlikely to be sampled at random), it becomes apparent that the probabilistic stability is in fact masking the underlying instability. Since these two notions of stability are clearly not equivalent, it is imperative to understand the distinctions between the two. To clarify this intriguing relationship, let us first recall two relevant definitions of stability (cf. [8]).
**Definition 7** (\(\epsilon\)-stability).: _The classification map \(f:\mathbb{R}^{n}\to\{0,1\}\) is \(\epsilon\)-stable at \(x\) if_
\[f(x+s)=f(x)\ \text{ for all }\ s\in\mathbb{B}^{n}_{\epsilon}.\]
**Definition 8** (\(\epsilon\)-stability with confidence \(\upsilon\)).: _Let \(\mu\) be a probability distribution on \(\mathbb{B}^{n}_{\epsilon}\). The classification map \(f:\mathbb{R}^{n}\to\{0,1\}\) is \(\epsilon\)-stable at \(x\) with confidence \(\upsilon\) w.r.t. the distribution \(\mu\) if_
\[P(s\sim\mu\,:\,f(x+s)=f(x))\geq\upsilon.\]
At the core of the phenomenon explored in Theorems 3 and 4 is the fact that a "typical" point \(x\) is \(\delta\)-stable with confidence \(\upsilon\) with respect to perturbations sampled from \(\mathcal{U}(\mathbb{B}^{n}_{\delta})\), where \(\upsilon\) approaches 1 exponentially in \(n\). This makes finding adversarial perturbations by adding random samples \(s\sim\mathcal{U}(\mathbb{B}^{n}_{\delta})\) difficult and unlikely.
At the same time, for \(n\) sufficiently large, typical points are located in some \(\Delta<\epsilon\) vicinity of the equators of the \(n\)-dimensional unit balls supporting \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}\). This implies that these typical points are \((\epsilon-\Delta)\)-stable in the sense of Definition 7. This is visualised in the diagram shown in Figure 5. In the absence of the margin \(\epsilon\) separating the centres of \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}\), there is no room to "hide" adversarial examples among random perturbations. This leads to an intriguing observation:
_The existence and prevalence of adversarial examples, which are undetectable via random perturbations, can sometimes be facilitated and even induced, by \(\epsilon^{\prime}\)-stability (for some appropriately chosen \(\epsilon^{\prime}\)) of "typical" data samples._
To illustrate the practical consequences of this, we investigated a model in which two data classes were sampled from complementary half-balls separated by margin \(\epsilon\geq 0\) (see Section D of the supplementary material). This choice of model is motivated by its ability to represent two separable classes without any margin or overlap. Figure 6 shows the results of numerical experiments computing the frequency with which sampled data was misclassified after random perturbations of the form \(s\sim\mathcal{U}(\mathbb{B}_{\delta}^{n})\) when \(\epsilon=0\), for different values of \(\delta\) and \(n\). This demonstrates
Figure 5. Adversarial susceptibility of seemingly stable classifiers. Points \(x\) and \(y\) are in the \(\Delta\)-thickening of the disc intersecting the ball \(\mathcal{D}_{1}\) along one of its largest equators. For \(n\) sufficiently large, most points sampled from \(\mathcal{U}(D_{1})\) belong to this domain. Both \(x\) and \(y\) are \((\epsilon-\Delta)\)-stable. At the same time, they are also \(\delta\)-stable with confidence \(\upsilon\approx 1\).
Figure 6. The empirical probability of sampling a point and perturbation of size \(\delta\) such that the perturbed data point is misclassified, under the two hemispheres model with \(\epsilon=0\). This empirical data was computed by sampling 10,000 points from the hemisphere distribution and 10,000 perturbations from \(\mathcal{U}(\mathbb{B}_{\delta}^{n})\).
that in the absence of margins (which is an admissible case in the setup adopted in [14]) the probability of registering misclassifications due to random perturbation is significant and does not change much with dimension.
## 5. Conclusion
Our new framework for studying the paradox of apparent stability in classification problems allows for rigorous probabilistic bounds that are consistent with empirical observations concerning vulnerability to worst-case (Theorem 3), random (Theorem 4), universal (Theorem 5) and computable (Theorem 6) attacks. The results are generic in the sense that they deal with small perturbations under which any smooth and accurate classifier will behave like the optimal linear classifier (1). As illustrated in Figure 4 and Section 3.1, the setup can be generalised to cover a broad range of input distributions and classification boundaries. We envisage that the results will also extend readily to comparable multiclass problems. In addition to quantifying vulnerabilities, our analysis also raises new issues concerning the most relevant and useful notions of stability in classification.
The overlapping unit ball model that we used, and the two hemispheres model in Section D of the supplementary material, are closely tied to the use of the Euclidean norm. We note that there are several applications where spherical input data arises naturally, including remote sensing, climate change modeling, global ionospheric prediction and environmental governance [4]. It would of course be of interest to establish the extent to which these results can be extended to other choices of norm and input domain. We also note that more customised results could be investigated for specific classification tools by exploiting further information, for example, about the architecture, training regime and level of floating point accuracy.
## Acknowledgements
O.J.S., Q.Z., I.Y.T. and A.N.G. are grateful for financial support by the UKRI and EPSRC (UKRI Turing AI Fellowship ARalSE EP/V025295/1). I.Y.T. is also grateful for support from the UKRI Trustworthy Autonomous Systems Node in Verifiability EP/V026801/1. D.J.H. and A.B. were supported by EPSRC grant EP/V046527/1.
|
2309.12768 | WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the
Annual CVPR Conference | In this paper, we present the details of Women in Computer Vision Workshop -
WiCV 2023, organized alongside the hybrid CVPR 2023 in Vancouver, Canada. WiCV
aims to amplify the voices of underrepresented women in the computer vision
community, fostering increased visibility in both academia and industry. We
believe that such events play a vital role in addressing gender imbalances
within the field. The annual WiCV@CVPR workshop offers a) opportunity for
collaboration between researchers from minority groups, b) mentorship for
female junior researchers, c) financial support to presenters to alleviate
financial burdens and d) a diverse array of role models who can inspire
younger researchers at the outset of their careers. In this paper, we present a
comprehensive report on the workshop program, historical trends from the past
WiCV@CVPR events, and a summary of statistics related to presenters, attendees,
and sponsorship for the WiCV 2023 workshop. | Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang | 2023-09-22T10:15:38Z | http://arxiv.org/abs/2309.12768v1 | # WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the Annual CVPR Conference
###### Abstract
In this paper, we present the details of Women in Computer Vision Workshop - WiCV 2023, organized alongside the hybrid CVPR 2023 in Vancouver, Canada. WiCV aims to amplify the voices of underrepresented women in the computer vision community, fostering increased visibility in both academia and industry. We believe that such events play a vital role in addressing gender imbalances within the field. The annual WiCV@CVPR workshop offers a) opportunity for collaboration between researchers from minority groups, b) mentorship for female junior researchers, c) financial support to presenters to alleviate financial burdens and d) a diverse array of role models who can inspire younger researchers at the outset of their careers. In this paper, we present a comprehensive report on the workshop program, historical trends from the past WiCV@CVPR events, and a summary of statistics related to presenters, attendees, and sponsorship for the WiCV 2023 workshop.
## 1 Introduction
Despite remarkable progress in various computer vision research areas in recent years, the field still grapples with a persistent lack of diversity and inclusion. While the field of computer vision rapidly expands, female researchers remain underrepresented in the area, constituting only a small share of professionals in both academia and industry. Due to this, many female computer vision researchers can feel isolated in workplaces which remain unbalanced due to the lack of inclusion.
The WiCV workshop is a gathering designed for all individuals, irrespective of gender, engaged in computer vision research. It aims to appeal to researchers at all levels, including established researchers in both industry and academia (e.g. faculty or postdocs), graduate students pursuing a Masters or PhD, as well as undergraduates interested in research. The overarching goal is to enhance the visibility and recognition of female computer vision researchers across these diverse career stages, reaching women from various backgrounds in educational and industrial settings worldwide.
There are three key objectives of the WiCV workshop:
Networking and MentoringThe first objective is to expand the WiCV network and facilitate interactions between members of this network. This includes female students learning from seasoned professionals who share career advice and experiences. A mentoring banquet held alongside the workshop provides a casual environment for junior and senior women in computer vision to meet, exchange ideas and form mentoring or research relationships.
Raising VisibilityThe workshop's second objective is to elevate the visibility of women in computer vision, both at junior and senior levels. Senior researchers are invited to give high quality keynote talks on their research, while junior researchers are encouraged to submit their recent or ongoing work, with many of these being selected for oral or poster presentation through a rigorous peer review process. This empowers junior female researchers to gain experience presenting their work in a professional yet supportive setting. The workshop aims for diversity not only in research topics but also in the backgrounds of presenters. Additionally, a panel discussion provides a platform for female colleagues to address topics of inclusion and diversity.
Supporting Junior ResearchersFinally, the third objective is to offer junior female researchers the opportunity to attend a major computer vision conference that might otherwise be financially inaccessible. This is made possible through travel grants awarded to junior researchers who present their work during the workshop's poster session. These grants not only enable participation in the WiCV workshop but also provide access to the broader CVPR conference.
## 2 Workshop Program
The workshop program featured a diverse array of sessions, including 4 keynotes, 6 oral presentations, 34 poster presentations, a panel discussion, and a mentoring session. Consistent with previous years, our keynote speakers were carefully selected to ensure diversity in terms of the topics that were covered, their backgrounds, whether they work in academia or industry, and their seniority. This deliberate choice of diverse speakers is of paramount importance, as it offers junior researchers a multitude of potential role models with whom they can resonate and, in turn, envision their unique career paths.
The workshop schedule at CVPR 2023 featured a diverse range of sessions and activities, including:
* Introduction
* Invited Talk 1: Angel Chang (Simon Fraser University and Canada CIFAR AI), _Connecting 3D and Language_
* Oral Session 1
* Laia Tarres, _Sign Language Translation for Instructional Videos_
* Meghna Kapoor, _Underwater Moving Object Detection using an End-to-End Encoder-Decoder Architecture and GraphSage with Aggregator and Refactoring_
* Invited Talk 2: Devi Parikh (Generative AI Lead at Meta), _Multimodal Generative AI (AI for creativity)_
* Sponsors Exhibition (in person)
* Oral Session 2
* Deblina Bhattacharjee, _Dense Multitask Learning to Reconfigure Comics_
* Maryam Daniali, _Perception Over Time: Temporal Dynamics for Robust Image Understanding_
* Invited Talk 3: Judy Hoffman (School of Interactive Computing at Georgia Tech), _Efficient and Reliable Vision Models_
* Sponsors Exhibition (in person)
* Oral Session 3
* Zoya Shafique, _Nonverbal Communication Cue Recognition: A Pathway to More Accessible Communication_
* Sudha Velusamy, _A Light-Weight Human Eye Fixation Solution for Smartphone Applications_
* Invited Talk 4: Kristen Grauman (University of Texas at Austin and Facebook AI Research (FAIR)), _Human-object interactions in first person video_
* Panel Discussion by Abby Stylianou, Angel Chang, Devi Parikh, Ilke Demir, Judy Hoffman, and Kristen Grauman
* Poster Session (in person as well as virtual)
* Closing Remarks
* Mentoring Session, Talks, and Dinner (in person)
* Invited talk: 1. Abby Stylianou (Saint Louis University and Taylor Geospatial Institute) on _A bit about my research and finding your own path to happiness and success in this field even if it looks different_ 2. Ilke Demir (Intel Corporation) on the intriguing topic _Be you_
* In-Person mentoring session and dinner: Akshita Gupta (University of Guelph), Dima Damen (University of Bristol), Achal Dave (Toyota Research Institute), Federica Arrigoni (Politecnico di Milano), Hilde Kuehne (University on Bonn), Nikhila Ravi (Meta), Katherine Liu (Toyota Research Institute), Lilly Ball (Apple), Dian Chen (Toyota Research Institute), Shuyang Cheng (Cruise LLC), Jeremy Jones (Apple), Zuzana Kukelova (Czech Technical University in Prague).
* Virtual mentoring session on Zoom: Yifan Liu (ETH Zurich), and Dena Bazazian (University of Plymouth).
### Hybrid Setting
This year, our organizational approach underwent slight adjustments due to CVPR 2023 being held in a hybrid setting, accommodating both in-person and virtual attendance. We took deliberate steps to ensure that the virtual WiCV workshop was an engaging and interactive event. To achieve this, we took the following steps: Talks, oral sessions, and the panel were streamed via Zoom for virtual attendees. The poster session was repeated virtually a week after the conference, mirroring the format of the main conference. We
also facilitated online mentoring sessions via Zoom, catering to mentors and mentees who could only participate virtually.
## 3 Workshop Statistics
The first edition of the Women in Computer Vision (WiCV) workshop was held in conjunction with CVPR 2015. Over the years, both the participation rate and the quality of submissions to WiCV have steadily increased. Following the examples of the editions held in previous years [4, 3, 6, 2, 1, 5], we have continued to curate top-quality submissions into our workshop proceedings. By providing oral and poster presenters the opportunity to publish their work in the conference's proceedings, we aim to further boost the visibility of female researchers.
This year, the workshop was held as a half-day in-person event with hybrid options, while the virtual component was hosted via Topia and Zoom. The in-person gathering took place at the Vancouver Convention Centre in Canada. Senior and junior researchers were invited to present their work, including the poster presentations detailed in the previous Section 2.
The organizers for this year's WiCV workshop come from diverse backgrounds in both academia and industry, representing various institutions across different time zones. Their diverse backgrounds and wide-ranging research areas have enriched the organizing committee's perspectives and contributed to a well-rounded approach. Their broad range of research interests in computer vision and machine learning encompasses video understanding, object detection, non-verbal communication, open-source benchmark datasets, activity recognition, anomaly detection, autoencoders, generalization, captioning, 3D point clouds, medical imaging and vision for robotics.
This year, we received 68 high-quality submissions from a wide range of topics and institutions, which is on par with WiCV@CVPR22. The most popular topics included deep learning architectures and techniques, followed by video action and event recognition, segmentation and shape analysis, and medical applications. Out of the 68 submissions, 61 underwent the review process. Six papers were selected for oral presentation and inclusion in the CVPR23 workshop proceedings, while 34 papers were chosen for poster presentations. The comparison with previous years is presented in Figure 1. Thanks to the diligent efforts of an interdisciplinary program committee comprising 41 reviewers, the submitted papers received thorough evaluations and valuable feedback. Additionally, during the mentoring session, 40 mentees received in-person guidance from 8 mentors, and 5 mentees attended virtual sessions with 2 mentors in separate meetings via Zoom.
This year, we continued the WiCV tradition from previous workshops [1, 2, 5, 6, 7] by providing grants to assist the authors of accepted submissions in participating in the workshop. These grants covered a range of expenses, with the specifics varying for each attendee depending on their individual needs, including, for example, conference registration fees, round-trip flight itineraries, and two days of accommodation for all authors of accepted submissions who requested funding.
The total sponsorship for this year's workshop amounted to $30,000 USD, with contributions from 6 sponsors, meeting our target. Figure 2 shows the details in comparison with past years.
## 4 Conclusions
WiCV at CVPR 2023 has once again proven to be a valuable opportunity for presenters, participants, and organizers, providing a platform to unite the community. It continues to address the persistent issue of gender balance in our field, and we believe it has played a significant role in strengthening the community. It provided an opportunity for people to connect from all over the world from the comfort of their personal spaces. With a high number of paper submissions and an even greater number of attendees, we anticipate that the workshop will continue the positive trajectory of previous years, fostering a stronger sense of community, increased visibility, and inclusive support and encouragement for all female researchers in academia and industry. Moreover, WiCV members participated in the Diversity & Inclusion Social event at CVPR. Furthermore, WiCV was also featured in the CVPR 2023 Magazine [8].

Figure 1: **WiCV Submissions. The number of submissions over the past years of WiCV.**

Figure 2: **WiCV Sponsors. The number of sponsors and the amount of sponsorship for WiCV. The amount is expressed in US dollars (USD).**
## 5 Acknowledgments
We express our sincere gratitude to our sponsors, including our Platinum sponsors: Toyota Research Institute and Apple, as well as our Silver Sponsor: Cruise, and Bronze sponsors: Deepmind, Meshcapade, and Snap Inc. Our appreciation also extends to the San Francisco Study Center, our fiscal sponsor, for their invaluable assistance in managing sponsorships and travel awards. We are thankful for the support and knowledge-sharing from organizers of previous WiCV workshops, without whom this WiCV event would not have been possible. Finally, we extend our heartfelt thanks to the dedicated program committee, authors, reviewers, submitters, and all participants for their valuable contributions to the WiCV network community.
## 6 Contact
**Website**: [https://sites.google.com/view/wiccvcvpr2023/home](https://sites.google.com/view/wiccvcvpr2023/home)
**E-mail**: [email protected]
**Facebook**: [https://www.facebook.com/WomenInComputerVision/](https://www.facebook.com/WomenInComputerVision/)
**Twitter**: [https://twitter.com/wicvworkshop](https://twitter.com/wicvworkshop)
**Google group**: [email protected]
|
2309.11261 | Particle interaction with binary-fluid interfaces in the presence of
wetting effects | In this paper, we present an Eulerian-Lagrangian methodology to simulate the
interaction between a fluid-fluid interface and a solid particle in the
presence of wetting effects. The target physical problem is represented by
ternary phase systems in which a solid phase and a drop phase interact inside
an incompressible Newtonian carrier fluid. The methodology is based on an
Eulerian-Lagrangian approach that allows for the numerical solution of the
Continuity and Navier-Stokes equations by using a pseudo-spectral method,
whereas the drop phase is modelled by the Phase Field Method, in which a smooth
transition layer represented by an hyperbolic function is considered both
across the solid-fluid interface and across the drop-fluid interface. Finally,
the solid phase is described in the form of a virtual force using the Direct
Forcing Immersed Boundary approach. The properties of the immersed solid phase
(including wetting effects), the deformability of the drops and the
characteristics of the carrier fluid flow are the main controlling parameters.
To simulate a ternary phase system, the solid phase is coupled to the
binary-fluid phase by introducing a single well potential in the free-energy
density functional, which can also control the solid surface wetting property.
The capabilities of the methodology are proven by examining first 2D and 3D
validation cases in which a solid particle is settling in a quiescent fluid.
Then, the interaction of solid particles with a binary-fluid interface and the
effects of surface wetting on the submergence of a quasi-buoyant body are
discussed. Finally, the equilibrium configuration for a solid particle
interacting with an equally-sized drop at different contact angles and the
relative rotation of two solid particles bridged by a drop are examined in the
case the interaction is induced by shear fluid flow deformations on the drop
interface. | Fernando Kevin Miranda S. Cruz, Cristian Marchioli | 2023-09-20T12:41:39Z | http://arxiv.org/abs/2309.11261v1 | [
###### Abstract
In this paper we present an Eulerian-Lagrangian methodology for the simulation of the interaction between a fluid-fluid interface and a solid particle in the presence of wetting effects. The target physical problem is represented by ternary phase systems in which a solid phase and a drop phase interact inside an incompressible Newtonian carrier fluid. The methodology is based on an Eulerian-Lagrangian approach that allows for the numerical solution of the Continuity and Navier-Stokes equations by using a pseudo-spectral method for the carrier fluid, whereas the drop phase is modelled by the Phase Field Method (PFM), in which a smooth transition layer represented by an hyperbolic function is considered both across the solid-fluid interface and across the drop-fluid interface. Finally, the solid phase is described in the form of a virtual force using the Direct Forcing Immersed Boundary approach (DFIB). The properties of the immersed solid phase (including wetting effects), the deformability of the drops and the characteristics of the carrier fluid flow are the main controlling parameters that the method accounts for. To simulate a ternary phase system, the solid phase is coupled to the binary-fluid phase by introducing a single well potential in the free-energy density functional, which can also control the solid surface wetting property. The capabilities of the implemented tool are proven by examining first 2D and 3D validation case studies in which a solid particle is settling in a quiescent fluid. Then, the interaction of a solid particles with a binary-fluid interface and the effects of surface wetting on the submergence of a quasi-buoyant body are discussed. Finally, the equilibrium configuration for a solid particle interacting with an equally-sized drop at different contact angles and the relative rotation of two solid particles bridged by a drop are examined in the case the interaction is induced by shear fluid flow deformations on the drop interface.
**Keywords:** three-phase flow; fluid-fluid interface; solid-interface interaction; wetting effects

F.K. Miranda and C. Marchioli, University of Udine, Udine, Italy. Corresponding author: [email protected]

Journal: Journal of Computational Physics
## 1 Introduction
Particle and fluid interface interactions are ubiquitous in many natural and engineering systems, including emulsions, foams, and biological fluids. Understanding the behavior of these interactions is crucial for designing and optimizing various processes, such as microfluidics, drug delivery, wastewater treatment, filtering of gas exhaust pollutants and enhanced oil recovery [1, 2, 3, 4, 5, 6]. Numerical simulations have become an important tool for studying these interactions, allowing researchers to explore their behavior under different conditions and with different materials.
Particle behaviour in fluids can be simulated from a point-wise or size-resolved point of view. The point-wise particle approach represents particles as mathematical points without any spatial extent. In this method, particles are treated as mass points with associated properties such as
position, velocity, and mass. This simplified representation allows for efficient calculations and is commonly used when the size or shape of particles is not of primary importance for the simulation. Explicit interaction equations must be imposed to take into account the interaction between particles and binary fluid interfaces [7, 8].
Size-resolved particle simulations refer to a computational approach that explicitly takes into account the size, shape and orientation of particles in the simulation. In size-resolved particle simulations, each particle is typically represented by a discrete volume or shape, such as spheres, ellipsoids, or irregular geometries. The size and shape of the particles are explicitly accounted for in the simulation, allowing for a more detailed characterization of their behavior and interactions with fluids and interfaces. However, this kind of simulation requires careful consideration of computational resources, as the complexity and computational cost increase with the number and complexity of particles considered. Efficient algorithms and parallel computing techniques are often employed to tackle this challenge and enable large-scale simulations. Size-resolved particle simulations can be described using sharp or smoothed solid interface approaches.
Sharp interface approaches use Lagrangian points (tracers) to represent the topological shape of the particles and track their motion as they interact with the surrounding fluid, allowing for accurate modeling of complex fluid-particle and particle-particle interactions. For instance, the Discrete Element Method (DEM) models particles as discrete entities and considers their interactions based on contact mechanics principles. In this method, each particle is represented as a distinct entity with its own physical properties, such as size, shape, mass, and material characteristics. The motion and interactions of particles are determined by solving equations of motion for each individual particle. DEM enables the simulation of particle-particle and particle-wall interactions, as well as the study of particle segregation, mixing, and flow phenomena [9, 10, 11]. Another widely used method is the Immersed Boundary Method (IBM), which employs a force-coupling technique to represent the influence of particles on the fluid, while the fluid flow is solved on a fixed Eulerian grid. One of the key advantages of IBM is its versatility in handling different types of particles or immersed bodies, including rigid particles, deformable particles, or even biological cells. The method can accurately capture fluid-particle interactions, such as drag forces, lift forces, and boundary layer effects. It also allows for the investigation of complex phenomena, such as particle sedimentation, particle transport, or flow-induced deformations. However, IBM also has certain challenges. The force interpolation and back-coupling procedures require careful implementation to ensure accuracy and stability. The method can be computationally expensive, especially when simulating a large number of particles or complex particle shapes [12, 13, 14]. Finally, we can mention Smoothed Particle Hydrodynamics (SPH), which is a meshless Lagrangian method that can be extended to simulate both fluid and particle phases. The fluid domain is discretized into a set of particles. Each particle carries information about fluid properties, such as density, pressure, and velocity. The simulation evolves by tracking the motion of these particles and updating their properties based on local interactions with neighboring particles. SPH naturally handles irregular particle shapes (rigid and deformable particles) and complex particle-fluid interactions [15, 16, 17].
Smoothed solid interface approaches use implicit advection equations to evolve the solid phase dynamics. They belong to the interface-capturing type, i.e. a post-processing step must be done to retrieve the position and velocity of the solid interface at each time step. The solid interface is represented as a smooth transition layer. The solid body is then represented as a region with a distinct phase field parameter within the fluid domain. The phase field parameter is a continuous scalar field that smoothly transitions between values inside and outside the rigid body region. This allows for the description of the body's shape and motion without explicitly tracking its boundary. Usually the Phase Field Method (PFM) is employed as a basis for smooth interface particle simulations, as we can see in the following references [18, 19, 20, 21, 22]. The performance of this approach is enhanced
in particle-binary-fluid interface interactions, where the fluid interface is treated as a diffuse transition layer, especially when describing the dynamic contact line evolution, which does not require extensive and complex modeling but is, most of the time, solved implicitly. Although this approach is relatively new and usually has a complex description, its applicability seems vast and promising, and the research interest in this technique is continuously increasing.
Recently, the development of new approaches on how to numerically simulate ternary phase systems involving binary fluids and solid surface interactions has received a great deal of attention. Based on their complexity, the approaches that are currently available in the literature can be grouped into 3 main categories. The first category includes the approaches developed to treat flat wall boundaries. These are the simplest ternary interactions to implement, where the domain boundaries are treated as solid walls interacting with two distinct phases. The types of simulations allowed by this approach include, among others, channel flow laden with drops, bubbly flows in a tank and droplet impingement on flat surfaces [23, 24, 25, 26]. The second category includes the approaches developed to treat stationary arbitrary-shaped solid bodies. These approaches treat a static solid interface as a wall-boundary condition and are generally used to study problems such as porous media interactions with drops in a carrier fluid, drop impingement onto curved surfaces, contact line evolution on a solid surface, and meso-scale and macro-scale rigid structures immersed in a binary fluid [27, 28, 29, 30]. The third category includes the approaches developed to treat moving size-resolved solid particles in binary flows. In this case, the trajectory of the immersed particle can be altered by the fluid motion and by its own inertia. The studies carried out using this type of approach are fully coupled and some of them present solid surface wetting effects. Therefore, their range of possible applications is vast, ranging from spheres sinking in water, buoyant bodies at the water-air interface and spheres splashing into water, to solid particle capture by drops and self-assembly induced by lateral capillary forces, just to name a few [31, 32, 18, 20].
Although effective in simulating three phase interactions, the size-resolved particles methods usually need complex formulations and sophisticated numerical implementations. In this work, we present a simple and easy-to-implement numerical tool, where a Direct Numerical Simulation (DNS) of the incompressible carrier fluid flow is performed, the Phase Field Method describes the time evolution of the drop phase dynamics and the immersed solid particles are based on a hybrid Eulerian-Lagrangian description. These particles are tracked in a Lagrangian framework and their disturbance into the Eulerian domain of the fluid is spread using the Direct Forcing method. Their size and shape are bounded by a fictitious solid phase with a smooth interface. In addition, the wetting effects are also taken into account during ternary interactions. This allows to investigate two phenomena, neither of which have been previously numerically investigated to the best of the author's knowledge: the wetting effects in the submergence of a quasi-buoyant body and the relative rotation of two solids (bridged by a droplet), induced by shear fluid flow deformations on the drop interface.
## 2 Methodology
In this work the solid body trajectories are treated as point-wise particles in a Lagrangian framework. Each particle position is mapped in the Eulerian domain and linked to a region with a resolved shape and size of the corresponding solid body. Similar to You et al. [38] a Direct Forcing method is applied in this region, however, we describe the solid interface as a transition layer from the solid region to the fluid bulk using a smooth function in order to ensure the compatibility with the PFM [39, 19].
### Single fluid and rigid-solid interaction approach
A generic incompressible Newtonian fluid flow is introduced as the carrier fluid flow, governed by the Navier-Stokes and continuity equations:
\[\nabla\cdot\mathbf{u} =0, \tag{1}\] \[\rho\left[\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u} \cdot\nabla\right)\mathbf{u}\right] =-\nabla P+\mu\nabla^{2}\mathbf{u}+\rho\mathbf{g} \tag{2}\]
Considering that the solid phase is described by a fictitious domain (\(\psi_{s}\)) built up by the union of \(n\) individual body fields:
\[\psi_{s}=\bigcup_{i=1}^{n}\psi_{i}, \tag{3}\]
a phase parameter \(\psi_{s}\) is inserted with constant values in the solid and fluid bulk volumes (\(\psi_{s}=1\) and \(\psi_{s}=0\), respectively). The transition between phases is represented by a smooth layer, where fluid and solid properties coexist in proportions ruled by a hyperbolic tangent profile along the normal direction of the solid interface \(\mathbf{x}\). In order to properly describe the local properties, the grid resolution must be fine enough to resolve a well-defined transition profile across the thin interface. Every individual rigid-solid sphere can then be generated using the following expression:
\[h(\mathbf{x})=\frac{1}{2}[1-tanh(\frac{\mathbf{x}-\mathbf{r}}{\xi_{s}})], \tag{4}\]
which is similar to the formulation used by Nakayama et al. [21], where \(\mathbf{r}\) is the solid radius and \(\xi_{s}\) is the parameter control for the interface width.
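As a concrete illustration, a minimal NumPy sketch of eqs. [3]-[4] could read as follows. This is not the solver used in this work: the variable names, the grid, and the interpretation of \(\mathbf{x}-\mathbf{r}\) as the signed distance \(|\mathbf{x}-\mathbf{X}_{i}|-r\) to each sphere surface are our assumptions, and the union of the individual body fields is realised here with a pointwise maximum.

```python
import numpy as np

def solid_field(grid, centres, radii, xi_s):
    """Union of tanh-profiled spheres; `grid` has shape (3, nx, ny, nz)."""
    psi_s = np.zeros(grid.shape[1:])
    for c, r in zip(centres, radii):
        d = np.linalg.norm(grid - np.asarray(c)[:, None, None, None], axis=0)
        h = 0.5 * (1.0 - np.tanh((d - r) / xi_s))   # eq. [4] for one body
        psi_s = np.maximum(psi_s, h)                # union of bodies, eq. [3]
    return psi_s

ax = np.linspace(0.0, 1.0, 64)
dx = ax[1] - ax[0]
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"))
psi_s = solid_field(grid, centres=[(0.5, 0.5, 0.5)], radii=[0.2],
                    xi_s=2 * dx)                    # interface ~2 cells wide
```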
The fluid-solid coupling is achieved by adding a virtual force into the Navier-Stokes equations following the Direct Forcing Immersed Boundary approach [40, 41, 42]. In this method, the fluid within the solid region is enforced to follow prescribed solid-bodies velocities, ensuring the rigidity and the non-penetration condition [38].
The modified Navier-Stokes equations are:
\[\rho\left[\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u}\cdot\nabla \right)\mathbf{u}\right] =-\nabla P+\mu\nabla^{2}\mathbf{u}+\rho\mathbf{g}+\rho\mathbf{f}_{DF} \tag{5}\]
where \(\mathbf{f}_{DF}\) is the virtual force exerted in the solid region \(\Omega_{s}\) to advance the solid object velocity from an intermediate time level velocity field \(\mathbf{u}^{*}\) (where no influence of the solid is considered for its resolution) to \(\mathbf{u}_{s}^{n+1}\) (calculated in previous steps) [38, 19, 43]. Eq. [6] shows how \(\mathbf{f}_{DF}\) is calculated.
\[\mathbf{f}_{DF}=\frac{\mathbf{u}_{s}^{n+1}-\mathbf{u}^{*}}{\Delta t}, \tag{6}\]
where \(\Delta t\) is the integration time step.
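A sketch of this step (illustrative names; the velocity fields are arrays of shape (3, nx, ny, nz), and localising the force to \(\Omega_{s}\) by masking with the smooth field \(\psi_{s}\) is our assumption for this diffuse-interface setting) is:

```python
import numpy as np

def direct_forcing(u_star, u_solid, psi_s, dt):
    """Virtual force of eq. [6], applied in the smoothed solid region."""
    f_df = psi_s * (u_solid - u_star) / dt   # vanishes where psi_s = 0 (fluid)
    u_new = u_star + dt * f_df               # velocity now matches the solid
    return f_df, u_new
```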
### Equations of motion for a solid immersed in a fluid
The solid phase dynamics is described in a Lagrangian frame. The motion of an immersed rigid body is caused by linear and angular momentum. Consequently, the velocity \(\mathbf{u}_{s}\) of the rigid body can be decomposed into a translational and a rotational component, as shown in eq. [7]:
\[\mathbf{u}_{s}=\mathbf{v}_{s}+\omega_{s}\times\mathbf{r} \tag{7}\]
where \(\mathbf{v}_{s}\) is the immersed-solid linear velocity and \(\omega_{s}\) its angular velocity with respect to the axis passing through its center of mass.
(_i_) We use the equation derived by You et al. [38], shown in eq. [8], in order to obtain the linear velocity of the solid object.
\[m_{s}\frac{d\mathbf{v}_{s}}{dt}=(m_{s}-m_{f})g-\iiint_{\Omega}\psi_{s}\rho_{f} fdV+m_{f}\frac{d\mathbf{v}_{s}}{dt} \tag{8}\]
where \(m_{s}\) and \(m_{f}\) are the solid and the fluid mass, respectively, \(g\) represents the gravity and \(f\) is the average value of the virtual force for the solid. The terms on the RHS of eq. [8] account for the effects of buoyancy, inertia and added mass (read from left to right).

Figure 1: Solid phase generation diagram, starting with the body center point mapping (\(x_{i}\)) and the calculation of the subdomains (\(\psi_{i}\)), continuing with the creation of the individual bodies and finally merging the sub-domains into a unique solid phase in the domain (\(\psi_{s}\)).

Subsequently, this equation is discretized in time considering the following equivalences:
\[m_{s}=\iiint_{\Omega_{s}}\rho_{s}fdV=\iiint_{\Omega}\psi_{s}\rho_{s}fdV \tag{9}\]
and
\[m_{f}=\iiint_{\Omega_{s}}\rho_{f}fdV=\iiint_{\Omega}\psi_{s}\rho_{f}fdV \tag{10}\]
and for the virtual force term we use a 2nd-order-accurate Adams-Bashforth scheme, obtaining as a result the following expression:
\[m_{s}\frac{\mathbf{v}_{s}^{n+1}-\mathbf{v}_{s}^{n}}{\Delta t}=(m_{s}-m_{f})g-( \frac{3}{2}\iiint_{\Omega_{s}}\rho_{f}f^{n}dV-\frac{1}{2}\iiint_{\Omega_{s}} \rho_{f}f^{n-1}dV)+m_{f}\frac{\mathbf{v}_{s}^{n}-\mathbf{v}_{s}^{n-1}}{\Delta t} \tag{11}\]
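A compact sketch of this update for a single particle (illustrative names; the volume integrals over \(\Omega_{s}\) are approximated by \(\psi_{s}\)-weighted sums over the grid times the cell volume \(dV\), and \(g\) is the gravity vector, e.g. `np.array([0., 0., -9.81])`) could read:

```python
import numpy as np

def update_linear_velocity(v_n, v_nm1, f_n, f_nm1, psi_s,
                           rho_s, rho_f, g, dt, dV):
    """One step of eq. [11]; f_n, f_nm1 have shape (3, nx, ny, nz)."""
    vol = np.sum(psi_s) * dV                  # psi_s-weighted solid volume
    m_s, m_f = rho_s * vol, rho_f * vol       # eqs. [9] and [10]
    F_n = rho_f * dV * np.sum(psi_s * f_n, axis=(1, 2, 3))
    F_nm1 = rho_f * dV * np.sum(psi_s * f_nm1, axis=(1, 2, 3))
    rhs = ((m_s - m_f) * g                    # buoyancy
           - (1.5 * F_n - 0.5 * F_nm1)        # Adams-Bashforth virtual force
           + m_f * (v_n - v_nm1) / dt)        # added-mass contribution
    return v_n + dt * rhs / m_s               # v_s at time level n + 1
```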
(_ii_) The angular momentum can be calculated from the intermediate time level velocity field \(\mathbf{u}^{*}\) as follows:
\[\mathbf{J}_{s}\omega_{s}=\iiint_{\Omega_{s}}\rho_{s}\,\mathbf{r}\times\mathbf{u}^{*}\,dV \tag{12}\]
where \(\mathbf{J}_{s}\) is the rotational inertia of the solid body and \(\mathbf{r}=\mathbf{x}-\mathbf{X}_{s}\) is the relative vector of a spatial point (\(\mathbf{x}\)) to the center of mass of the solid body (\(\mathbf{X}_{s}\)). From eq. [12], we can calculate the angular velocity of the body about its center of mass.
Finally, the body trajectory is calculated by integrating the following expression:
\[\frac{d\mathbf{X}_{s}}{dt}=\mathbf{u}_{s}. \tag{13}\]
### Fictitious solid-phase with wettability in immiscible binary fluids
In order to include the wettability effects of solids immersed in a binary fluid model, we modify the free energy density functional (eq. [27]), following the approach presented by Shinto [32]. This is based on the model of Cahn [44], who adds an additional surface term \(\mathcal{F}_{s}\) (eq. 14) to describe the interactions between a binary fluid interface and a solid.
\[\mathcal{F}_{s}[\mathbf{X}_{s},t]=\frac{1}{\beta}\int_{S}(-H\psi_{s})dS \tag{14}\]
where \(\mathbf{X}_{s}\) is the position of the particle, \(S\) is the particle surface and \(H\) is a parameter which controls the wettability; this property can be preconditioned by tuning its value. For example, in the case of a fluid-drop system, where \(\hat{\Phi}_{f}=-1\) and \(\hat{\Phi}_{d}=+1\) represent the value of \(\phi\) in the bulk of each phase, if \(H=0\), the solid surface is neutrally wettable, if \(H<0\), the solid has more affinity to the fluid, and if \(H>0\), the solid has more affinity to the drops. \(\psi_{s}\) is the compositional order parameter of the solid. The binary fluid should evolve near this region, in order to accomplish the minimization of the free energy of the system [32; 28]. The equilibrium contact angle \(\theta_{eq}\) with respect to the affinity value can be calculated with the following expression:
\[\cos(\theta_{eq})=\frac{\mathcal{X}_{S}}{2}(3-\mathcal{X}_{S}^{2}) \tag{15}\]
with
\[\mathcal{X}_{S}=\frac{\bar{\psi}_{s}-\bar{\phi}_{S}}{\bar{\phi}_{d}-\bar{\phi}_{S}} \tag{16}\]
and
\[\bar{\phi}_{S}=\frac{\bar{\phi}_{f}+\bar{\phi}_{d}}{2} \tag{17}\]
where \(\bar{\phi}_{S}\) and \(\mathcal{X}_{S}\) describe the homogeneous solid surface and its affinity (\(-1\leq\mathcal{X}_{S}\leq 1\)) [28]. A similar approach of implicitly imposing the contact angle by using an affinity parameter was developed by Guillaument et al. [24], who impose wetting effects using the penalty method.
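To make the mapping between the preconditioned affinity and the resulting contact angle explicit, the short sketch below evaluates eqs. [15]-[17]; the bulk values \(\phi_{f}=-1\) and \(\phi_{d}=+1\) are taken from the fluid-drop example above, and the function name is illustrative.

```python
import numpy as np

def equilibrium_contact_angle(psi_s_bar, phi_f=-1.0, phi_d=1.0):
    """Equilibrium contact angle (degrees) from the solid affinity, eqs. (15)-(17)."""
    phi_S = 0.5 * (phi_f + phi_d)                   # eq. (17)
    chi_S = (psi_s_bar - phi_S) / (phi_d - phi_S)   # eq. (16), -1 <= chi_S <= 1
    cos_theta = 0.5 * chi_S * (3.0 - chi_S**2)      # eq. (15)
    return np.degrees(np.arccos(cos_theta))

print(equilibrium_contact_angle(0.0))    # neutral wetting  -> 90 degrees
print(equilibrium_contact_angle(0.35))   # hydrophilic case of sec. 3.1 -> ~60 degrees
print(equilibrium_contact_angle(-0.35))  # hydrophobic case -> ~120 degrees
```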
The modified free energy functional considering the solid phase \(\psi_{s}\) and the fluid phase (\(1-\psi_{s}\)) is shown in eq. [18]:
\[\mathcal{F}[\phi,\psi_{s}]=\frac{1}{\beta}\int_{V}d\mathbf{x}[f_{b}(\phi)+ \frac{\kappa}{2}|\nabla\phi|^{2}+\frac{K_{s}}{2}(\phi-\bar{\psi}_{s})^{2}\psi _{s}], \tag{18}\]
where \(\bar{\psi}_{s}\) is a constant value controlling the affinity and \(K_{s}\) is a positive parameter (which has to be chosen large compared with the parameters \(\alpha\) and \(\beta\)) that enforces the affinity value inside the solid region of the phase field by imposing a single-well potential in the free energy functional [32].
The additional solid coupling term in the free energy functional makes the chemical potential (eq. [27]) evolve into the following expression:
\[\mu_{\phi}=\frac{\delta\mathcal{F}[\phi,\psi_{s}]}{\delta\phi}=\alpha\phi^{3}-\beta \phi-\kappa\nabla^{2}\phi+K_{s}(\phi-\bar{\psi}_{s})\psi_{s}. \tag{19}\]
In order to ensure the no-penetration condition, we employ the operator \((\mathbf{I}-\mathbf{n}_{s}\otimes\mathbf{n}_{s})\), which acts directly in the solid diffused interface, with \(\mathbf{n}_{s}=\nabla\psi_{s}/|\nabla\psi_{s}|\) as the solid surface normal vector and \(\mathbf{I}\) as the unit tensor. The advection-diffusion equation, taking into account the solid phase, results as follows:
\[\frac{\partial\phi}{\partial t}+\mathbf{u}\cdot\nabla\phi=\mathcal{M}_{\phi }\nabla\cdot[(\mathbf{I}-\mathbf{n}_{s}\otimes\mathbf{n}_{s})(\nabla\mu_{ \phi})]. \tag{20}\]
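A compact finite-difference sketch of the diffusive right-hand side of eq. [20] is given below. It assumes a uniform 2D grid with periodic boundaries and second-order central differences, and it leaves the advection term \(\mathbf{u}\cdot\nabla\phi\) to the flow solver; the routine names are illustrative, not the actual solver's.

```python
import numpy as np

def grad(a, dx):
    """Central-difference gradient, periodic boundaries, same spacing in x and y."""
    ax = (np.roll(a, -1, 0) - np.roll(a, 1, 0)) / (2 * dx)
    ay = (np.roll(a, -1, 1) - np.roll(a, 1, 1)) / (2 * dx)
    return ax, ay

def div(fx, fy, dx):
    return ((np.roll(fx, -1, 0) - np.roll(fx, 1, 0))
            + (np.roll(fy, -1, 1) - np.roll(fy, 1, 1))) / (2 * dx)

def laplacian(a, dx):
    return (np.roll(a, -1, 0) + np.roll(a, 1, 0)
            + np.roll(a, -1, 1) + np.roll(a, 1, 1) - 4 * a) / dx**2

def cahn_hilliard_rhs(phi, psi_s, psi_s_bar, alpha, beta, kappa, Ks, M_phi, dx,
                      eps=1e-12):
    # chemical potential with the solid coupling term, eq. (19)
    mu = (alpha * phi**3 - beta * phi - kappa * laplacian(phi, dx)
          + Ks * (phi - psi_s_bar) * psi_s)
    gx, gy = grad(mu, dx)
    # solid surface normal n_s = grad(psi_s)/|grad(psi_s)|; eps avoids 0/0 in the bulk
    nx, ny = grad(psi_s, dx)
    norm = np.sqrt(nx**2 + ny**2) + eps
    nx, ny = nx / norm, ny / norm
    # (I - n_s x n_s) grad(mu): remove the normal component (no penetration)
    ndotg = nx * gx + ny * gy
    fx, fy = gx - ndotg * nx, gy - ndotg * ny
    return M_phi * div(fx, fy, dx)   # diffusive part of eq. (20)
```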
### Flow Field Equations
The equations that fully describe the incompressible flow of a generic Newtonian fluid with advected and deformable interfaces are the continuity equation (mass conservation) and the Navier-Stokes equation (momentum conservation) with an interfacial term (representing the coupling with the Cahn-Hilliard equation) and a virtual force term to account for the feedback of rigid-immersed bodies. The dimensional form of the mass conservation equation for incompressible flows is as follows:
\[\nabla\cdot\mathbf{u}=0 \tag{21}\]
In order to couple the two-phase flow field, we use a continuous approach to introduce boundary conditions at the interface [45; 46]. As for the velocity, the transition at the interface should be continuous, avoiding sudden jumps, as shown in the following expression:
\[\mathbf{u}_{1}\cdot\mathbf{n}-\mathbf{u}_{2}\cdot\mathbf{n}=0 \tag{22}\]
where \(\mathbf{n}\) is the unit vector normal to the interface and \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) represent the velocity vectors at each side of the interface. The jump condition for the stress tensor at the interface can be written as follows:
\[\mathbf{T}_{1}\cdot\mathbf{n}-\mathbf{T}_{2}\cdot\mathbf{n}=\mathcal{K}\sigma\mathbf{n}-\nabla_{s}\sigma \tag{23}\]
where \(\mathcal{K}\) is the mean curvature, \(\sigma\) is the surface tension, and \(\mathbf{T}_{1}\) and \(\mathbf{T}_{2}\) are the stress tensors at each side of the interface. The RHS of eq. (23) is composed of a normal (\(\mathcal{K}\sigma\mathbf{n}\)) and a tangential (\(\nabla_{s}\sigma\)) component, with \(\nabla_{s}\) being the surface gradient operator.
The Navier-Stokes equations using the continuous approach in the binary fluid for a divergence-free velocity field is:
\[\begin{split}\rho\left(\phi\right)\left[\frac{\partial\mathbf{u }}{\partial t}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}\right]& =-\nabla P+\nabla\cdot\left[\eta\left(\phi\right)\left(\nabla \mathbf{u}+\nabla\mathbf{u}^{T}\right)\right]+\rho\left(\phi\right)\mathbf{ g}\\ &\quad+\nabla\cdot\left[\tau_{c}\mathcal{K}\sigma\right]+\rho \left(\phi\right)\boldsymbol{f}_{DF}\left(\psi_{s}\right),\end{split} \tag{24}\]
with \(\mathbf{u}=\left(u,v,w\right)\) as the velocity field, \(\rho\left(\phi\right)\) and \(\eta\left(\phi\right)\) as the local density and dynamic viscosity respectively, \(\tau_{c}\) as the Korteweg tensor, \(\sigma\) as the surface tension and \(\boldsymbol{f}_{DF}\) as the virtual force exerted by the solid phase.
### Non-matched properties treatment
In order to avoid numerical discontinuities and jumps across the interface, the thermo-physical properties are defined to depend on the phase field indicator \(\phi\) with smooth transitions across the interface.
We select arbitrarily the carrier phase (\(\phi=-\sqrt{\beta/\alpha}\)) as the reference property value, then the local density and viscosity are defined as:
\[\rho(\phi) =\rho_{c}\left[1+\frac{\rho_{r}-1}{2}(\frac{\phi}{\sqrt{\beta/ \alpha}}+1)\right] \tag{25}\] \[\eta(\phi) =\eta_{c}\left[1+\frac{\eta_{r}-1}{2}(\frac{\phi}{\sqrt{\beta/ \alpha}}+1)\right] \tag{26}\]
with:
\[\rho_{r}=\frac{\rho_{d}}{\rho_{c}},\quad\eta_{r}=\frac{\eta_{d}}{\eta_{c}} \tag{27}\]
where the subscript \(d\) indicates the dispersed phase and \(c\) the carrier phase.
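In code, the smooth blending of eqs. [25]-[26] reduces to one line per property; a minimal sketch (names ours) is:

```python
import numpy as np

def blended_properties(phi, rho_c, eta_c, rho_r, eta_r, alpha, beta):
    """Local density and viscosity, eqs. (25)-(26); phi ranges from
    -sqrt(beta/alpha) (carrier) to +sqrt(beta/alpha) (dispersed)."""
    s = phi / np.sqrt(beta / alpha) + 1.0   # 0 in the carrier, 2 in the dispersed phase
    rho = rho_c * (1.0 + 0.5 * (rho_r - 1.0) * s)
    eta = eta_c * (1.0 + 0.5 * (eta_r - 1.0) * s)
    return rho, eta
```

At \(\phi=-\sqrt{\beta/\alpha}\) this returns exactly \((\rho_{c},\eta_{c})\) and at \(\phi=+\sqrt{\beta/\alpha}\) exactly \((\rho_{d},\eta_{d})\), with a monotone transition in between.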
We display two different dynamic viscosity ratios in fig. 2 (\(\eta_{r}>1\) and \(\eta_{r}<1\)), which shows that the definition of equations (25) and (26) prevents the values from dropping below zero (unphysical values).
## 3 Validation: Immersed solid interacting with a flat binary-fluid interface
In this section we study the interactions between a flat binary-fluid interface and a single solid. The first part (sec. 3.1) focuses on the evolution of the contact line along the 2D cylindrical surface at different wetting conditions. The following part (sec. 4.1) presents the simulation of a heavy cylinder sinking in a binary fluid system, and the last part (sec. 4.2) shows the study of the wetting effects on the submergence of a quasi-buoyant cylinder in a binary fluid domain.
### Contact line equilibrium in curved surfaces
We perform the simulation of the contact line equilibrium for a cylinder in a binary fluid domain at different contact angles.
Similar to Shao et al. [27], our numerical setup consists of a square domain of a binary fluid arranged as two horizontal layers with a cylinder fixed at the center of its interface (refer to fig. 3). The upper and lower boundaries are neutrally wettable walls and the left and right boundaries have a periodic boundary condition.
The cylinder has a radius of \(r_{\mathrm{cyl}}=0.3h\) and the domain size is \(2h\times 2h\), discretized with \(128\) grid cells along the periodic direction and with \(257\) grid cells in the wall-normal direction. The lower fluid is set with the phase parameter value \(\phi=1\) and the upper one with \(\phi=-1\). Depending on the wetting affinity preconditioned on the solid surface (eq. [15]), the contact line moves from the initial configuration (fig. 3) along the cylinder surface until it reaches the equilibrium configuration. A hydrophilic surface with affinity \(\chi=0.35\) leads to an equilibrium contact angle \(\theta_{eq}=60^{\circ}\) and the
Figure 3: Initialization setup of the simulation domain.
Figure 2: Transition profile of the dynamic viscosity when \(\eta_{r}\) is greater than \(1\) (dashed-blue curve) and when \(\eta_{r}\) is smaller than \(1\) (red-plain curve). The interface is identified by the vertical gray line.
contact line reaches the equilibrium above its initial vertical position (see fig. 4). On the other hand, the contact line moves below the initial vertical position when the solid surface is preconditioned to be hydrophobic (affinity \(\chi=-0.35\)), leading to an equilibrium contact angle \(\theta_{eq}=120^{\circ}\), while a neutrally wetting surface with affinity \(\chi=0\) leads to an equilibrium contact angle \(\theta_{eq}=90^{\circ}\), where the interface remains flat and the contact line neither rises nor goes below the initial position. These results match the theoretical angles predicted by eq. [15] and also the qualitative results obtained by other authors using different approaches [27, 36, 34].
## 4 Results
### Sinking of a heavy cylinder
Understanding the fluid dynamics developed by the motion of objects around a free surface is vital for some applications. In marine hydrodynamics, the wave loads produced by an immersed object motion establish the basis of marine structure design; depending on the Froude number, the free surface motion may become violent and trigger several mechanisms like free surface breaking, cavity formation and cavity collapse [47]. A moving body in binary fluids is a complex scenario where the capability of numerical tools is tested in order to reproduce the fully coupled interplay of different parameters like surface tension forces, capillary forces, inertial forces and partial buoyancy forces (for a body migrating or trapped in between two fluids).
In the literature one can find numerical and theoretical studies of rigid-body motion near or through the free surface [48, 49, 50]. The case we are interested in here is the simulation of a heavy cylinder sinking from the free surface of a binary fluid enclosed in a \(2\)D tank. The parameters and conditions used here are taken from the experimental study carried out by Vella et al. [51].
Since the cylinder transits through two liquids, the partial buoyancy is taken into account, at each time step, through the calculation of the solid portion immersed in each fluid. The numerical domain consists of a \(2\)D tank filled with a fluid \(A\) (density \(\rho_{A}\)) up to \(1.4h\) from the bottom wall, while the rest is filled with a fluid \(B\) (density \(\rho_{B}\)) as shown in fig. 5, where \(h\) represents half of the height of the domain
Figure 4: Equilibrium evolution of the contact line vertical position.
and the fluid density and dynamic viscosity ratios are \(\rho_{r}=\rho_{A}/\rho_{B}=833.3\) and \(\eta_{r}=\eta_{A}/\eta_{B}=55.6\) respectively. As a first step, a cylinder with a radius of \(r_{s}=0.125\,h\) (neglecting gravity effects) is initially placed at the interface; using a contact angle of \(\theta_{eq}=105^{\circ}\), the cylinder reaches its equilibrium position at \(h_{0}=0.465\) (see fig. 5). Then, turning on gravity effects and using a density ratio with respect to the fluid A of \(\rho_{s}/\rho_{A}=1.92\), the cylinder is released from \(h_{0}\) with respect to the free surface, letting the heavy cylinder sink. In order to compare with the experimental results [51], we use the characteristic time \(t_{c}=(\sigma/\rho g^{3})^{0.25}\), the time frequently taken by gravity-capillary waves to travel the capillary length \(l_{c}=(\sigma/\rho g)^{0.5}\) (the characteristic length), since the meniscus surrounding the cylinder is in hydrostatic equilibrium. We use a Reynolds number of \(Re=250\) and the Bond number is approximately equivalent to \(Bo\cong(r_{s}/l_{c})^{2}\). The latter value indicates that the contribution of surface tension effects is negligible for this experiment, and a neutral contact angle (\(\theta_{eq}=90^{\circ}\)) is thus assumed for the simulations.
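For orientation, the characteristic scales used here can be evaluated directly; the sketch below uses illustrative air-water values for \(\sigma\), \(\rho\) and \(g\) and a hypothetical cylinder radius, none of which are taken from the experiment.

```python
import numpy as np

sigma, rho, g = 0.072, 1000.0, 9.81     # N/m, kg/m^3, m/s^2 (illustrative air-water)

l_c = np.sqrt(sigma / (rho * g))        # capillary length, characteristic length
t_c = (sigma / (rho * g**3)) ** 0.25    # gravity-capillary time, characteristic time
r_s = 0.5 * l_c                         # hypothetical cylinder radius
Bo = (r_s / l_c) ** 2                   # Bond number estimate, Bo ~ (r_s/l_c)^2

print(f"l_c = {l_c*1e3:.2f} mm, t_c = {t_c*1e3:.1f} ms, Bo = {Bo:.2f}")
```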
Fig. 6 shows a qualitative comparison of our simulations (right panel) against the experimental results (left panel), where we can observe that our simulation results are able to reproduce the main characteristic stages of the cylinder sinking experiment [51]: inflow in the region above the cylinder, cavity formation and jet generation [50].
The shape of the interface in the zone above the cylinder forms a kind of expander shape, which induces an upward jet of the fluid A and the entrainment of a portion of fluid B, attached to the cylinder surface, towards the bottom of the tank. This process is explained by fig. 7. Panel (a) shows that the cavity neck becomes narrow, squeezing the upper fluid out of it. Panels (b) and (c) then show how the neck walls merge, creating a pocket of upper fluid trapped and attached to the cylinder, which expands and coats the solid surface (neutral wetting), while the merged interface portion accelerates upwards, creating a bump above the free surface. When gravity and surface tension overcome the jet, the deformation is dissipated rapidly, turning the upward fluid motion into lateral waves (panel (d)).
Figure 5: Scheme of the initial set-up for a cylinder with \(\rho_{s}=1920\,\,kg/m^{3}\) supported at an air-water free surface.
For a quantitative comparison, we plot the results of the cylinder position evolution in time (both parameters non-dimensionalized with \(t_{\rm c}\) and \(l_{\rm c}\)), as shown in fig. 8, where the numerical results, represented by a solid line, are plotted together with the experimental data [51] (represented by void circles); a satisfactory agreement is reached.
The results obtained in this case study used a 2D domain with a periodic boundary condition in the horizontal direction and a wall boundary condition in the vertical direction. A grid independence test is performed to find the ideal mesh quality to efficiently reproduce the experimental results. Three different mesh qualities are compared (coarse, medium and fine mesh; details can be found in table 1) using the processor model AMD Ryzen Threadripper Pro 3995WX @ 4.2 GHz.
The grid sensitivity test compares the position evolution of the cylinder in time for the three meshes of table 1. We observe that the three curves overlap until time \(t/t_{\rm c}=2.5\), while the buoyancy, the added mass and the viscous effects are negligible. When the cylinder starts feeling the strong change in density and viscosity, the curves take different paths. The simulation results using the medium mesh quality are indistinguishable from those obtained using the fine mesh; this indicates that grid independence is reached using \(128\times 256\) grid cells. Therefore, the medium mesh quality is selected for further simulations with similar configurations.
\begin{table}
\begin{tabular}{||c c c c||} \hline ID & Grid Resolution & Execution time per step [s] & Number of cores \\ \hline Coarse & \(64\times 128\) & \(35\times 10^{-3}\) & 8 \\ \hline Medium & \(128\times 256\) & \(100\times 10^{-3}\) & 8 \\ \hline Fine & \(256\times 512\) & \(280\times 10^{-3}\) & 8 \\ \hline \end{tabular}
\end{table}
Table 1: Mesh quality list and numerical details for the sinking-cylinder grid sensitivity test
Figure 6: Time sequence comparison of the experiment (left panel) against our simulations (right panel) for a sinking cylinder of density \(1920kg/m^{3}\) in a binary phase system (green-blue region respectively). The cylinder is considered immersed when the cavity collapses.
Figure 8: Simulation results of the sinking cylinder center position \((h_{0}/l_{c})\) evolution in time \((t/t_{c})\), compared with experimental data.
Figure 7: Sequence of jet formation after the cavity collapse, represented with the velocity magnitude field in the background. (a) upper fluid expelled from the cavity; (b) cavity neck merged; (c) upper and lower jet formation; (d) the jet raises the interface into the upper fluid.
### Submergence of a light cylinder in a binary fluid domain considering surface wetting effects
Let us consider the same geometrical configuration of the 2D domain employed in sec. 4.1, where the fluid A has a density of \(\rho_{fA}=1000\,kg/m^{3}\). A cylinder (with density \(\rho_{s}=1130kg/m^{3}\), slightly larger than that of fluid \(A\)) released from rest exactly at the interface (refer to configuration (a) in fig. 9) would float indefinitely, because the sum of the capillary and buoyancy forces would exceed the cylinder weight (Kirshman and Sze, 2017). In order to verify this statement, a simulation is carried out using the following parameters: fluid density ratio \(\rho_{r}=\rho_{A}/\rho_{B}=833.3\), fluid viscosity ratio \(\eta_{r}=\eta_{A}/\eta_{B}=55.6\), a cylinder radius of \(r_{s}=0.125h\) and a neutrally wettable solid surface.
The results are reported in fig. 10, where panel (a) shows the time sequence of the cylinder motion, and panel (b) shows its position oscillation over time, until it reaches the equilibrium and remains floating indefinitely.
In order to submerge the cylinder, we increase its inertia by releasing it from a certain height above the fluid interface, as shown in fig. 9(b), considering hydrophobic and hydrophilic solid surfaces. Once the solid is released, it starts accelerating because of gravity, until it impacts the free surface. Due to the strong contrast between fluid densities and the capillary component caused by the deflection of the meniscus, the velocity then decreases.
Subsequently, the cylinder passes to the lower fluid, usually entraining a small drop (remaining from the fluid B after the breakthrough), which is stuck to the upper surface of the cylinder (Kirshman and Sze, 2017).
Figures 11 and 12 show qualitatively the difference in the immersion dynamics of two identical cylinders with different wetting conditions: the first with a contact angle of \(\theta_{eq}=70^{\circ}\) and the second with \(\theta_{eq}=110^{\circ}\). The first characteristic that attracts our attention is that the hydrophilic cylinder reaches a deeper position in the tank, generating an upward jetting stronger than in the case of the hydrophobic cylinder. We can also observe how the velocity wake from the hydrophobic object is dissipated quickly due to the resistance imposed by the surface tension. We may as well note that the interface inflection due to the acute contact angle in fig. 11 helps with the formation of an upper fluid pocket trapped above the cylinder surface. Similar results for the sinking of wetting objects can be found in the literature (Girshman and Sze, 2017; Kirshman and Sze, 2017).
To further compare the wettability effects on the motion of rigid bodies entering a free surface, we plot the position evolution of the cylinder in time for both cases. Fig. 13a shows that the hydrophilic
Figure 9: (a) Initial configuration for a floating cylinder. (b) Initial configuration for a cylinder submergence by adding inertia.
cylinder reaches a deeper immersed position than the hydrophobic one. Fig. 13b shows that the hydrophobic solid decelerates from position \(z_{0}/l_{c}=1.5\) to \(4.1\), where finally the cylinder direction of motion is flipped. These results confirm that varying the wetting conditions on the cylinder surface leads to important effects on the submergence dynamics and the three-phase interactions. The wetting affinity of the solid surface towards one fluid will thus control the ability of the body to submerge.
### Wetting effects on the interaction of solids and drops in three-phase systems
The binary fluids considered so far were two stratified layers of fluids with an initial flat interface (free surface). In this section we introduce a droplet as the second fluid
Figure 11: Time evolution of submerging hydrophilic disk interaction with the free surface depicted over the velocity field magnitude.
Figure 10: (a) Contact line evolution along the simulation. (b) Equilibrium evolution plot of the center position for a floating disk initialized at the free surface.
phase immersed in a carrier fluid. In the first part, we study the evolution of the shape and position of a drop sitting on a cylindrical surface at different wetting conditions. The second part of the section considers two solid bodies interacting with a droplet of the same order of magnitude in a flow field. After the liquid-bridged doublet (LBD) has reached its equilibrium configuration, a shear field is initialized in the carrier flow and the LBD interactions are studied.
#### 4.3.1 Solid-drop pair contact angle equilibrium
A comprehensive work on solid-drop interactions is presented by Smith in [55], where several case studies are covered both experimentally (using one polystyrene sphere and one oil drop in an aqueous medium at different hydrophobic contact angles) and analytically (presenting equations for the prediction of the final equilibrium position of the pair in the range of \(30^{\circ}\) to \(120^{\circ}\) of contact angle for various sphere-drop radius ratios). Another analytical formulation can be found in the wetting/dewetting section presented by Fakhari et al. [29], based on the premise of reaching the minimal free energy of the system by minimization of the peripheral area of a 2D drop sitting on a cylinder. The results obtained by means of the above-mentioned formulation are used in the present work to validate the simulation of the dynamic contact line response to cylinder surfaces with changing wettability. Considering no gravity effects in the system during the simulations, we initialize our numerical domain by fixing a circular cylinder with radius \(r_{s}=r_{d}=0.33h\) at \(0.67h\) from the lower wall. A drop with radius \(r_{d}=0.33h\) is placed at the center of the domain. We consider a fluid-drop density ratio of \(\rho_{d}/\rho_{f}=1000\), a viscosity ratio of \(\eta_{d}/\eta_{f}=100\) and a surface tension value of \(\sigma=0.01N/m\).
The simulation is performed in a 2D domain with a wall boundary condition in the upper and lower limits and a periodic boundary condition in the side limits. A grid sensitivity study is carried out using the parameters listed in table 2 to determine the optimal mesh quality for the set of simulations with different contact angle values.
Figure 12: Time evolution of submerging hydrophobic disk interaction with the free surface depicted over the velocity field magnitude.
Figure 14: (a) Scheme of the initial configuration for the simulations. (b) Grid independence test for a drop-cylinder pair with a contact angle of \(135^{\circ}\) at equilibrium position, using: \(64\times 128\) (red line), \(128\times 256\) (green line) and \(256\times 512\) (blue line) grid cells in \(y\times z\) direction respectively.
\begin{table}
\begin{tabular}{||c c c c||} \hline ID & Grid Resolution & Execution time per step [\(s\)] & Number of cores \\ \hline Coarse & \(64\times 128\) & \(28\times 10^{-3}\) & 8 \\ \hline Medium & \(128\times 256\) & \(56\times 10^{-3}\) & 8 \\ \hline Fine & \(256\times 512\) & \(87\times 10^{-3}\) & 8 \\ \hline \end{tabular}
\end{table}
Table 2: Mesh quality list and numerical details for the simulation of the equilibrium configuration of a cylinder-drop pair.
Figure 13: (a) Evolution of the cylinder center position in time. (b) Evolution of the cylinder velocity against the center position.
Fig. 14b shows the drop interface (iso-surface at \(\phi=0\)) using different mesh qualities. One can observe that grid independence is reached with \(128\times 256\) grid cells, where the difference between the drop interfaces using the medium and fine mesh qualities is imperceptible.
Once the optimal mesh quality is found, a set of 7 simulations is performed. Starting from the initial configuration presented in the initialization scheme of fig. 14a, the system is brought to its equilibrium configuration for a range of different contact angles (from \(45^{\circ}\) to \(135^{\circ}\), as presented in [29]).
The final equilibrium configuration is represented by the variable \(l_{eq}\), which is defined as the distance from the center of the cylinder to the highest point of the drop (refer to fig. 14a).
Fig. 15a shows the results of the final equilibrium configuration of the cylinder-drop pair expressed in terms of \(l_{eq}/r_{s}\) for each contact angle value used. Fig. 15b shows the simulation results (represented by green markers) together with the plot of the analytical solution (represented by the solid red curve). As illustrated in fig. 15b, our results overlap almost perfectly with the analytical solution curve.
#### 4.3.2 Liquid Bridged Doublets (LBD) in shear flow
In contrast to the cases studied above, this section includes a pair of free-moving solid bodies with active surfaces. They interact with a droplet in a carrier shear flow. The droplet deformation triggers the interplay between the inertial forces and the normal capillary forces, inducing the solids' motion from an initial static state.
Several studies on the equilibrium configuration of bridged cylinder/sphere doublets can be found in the literature [56, 57, 58]. They report analytical solutions in terms of the contact angle, the droplet volume and the maximal bridged doublet length \(l_{max}\).
Further studies consider the interactions of an LBD in a shear flow field, where the drop deformation and the contact line slippage may lead to a correlated rotation of the solids around a centered axis in the drop. Experiments on the mentioned relative rotation have been performed by Smith et al.
Figure 15: (a) Cylinder-drop pair equilibrium configuration at different \(\theta_{eq}\). (b) Plot of the final configuration length \(l_{eq}\), normalized by the cylinder radius \(r_{s}\), for different contact angles, plotted over the analytical result curve.
[55], where three configurations of LBD (using spheres) were presented varying the relative volume of the bridging-drop with respect to the spheres.
In this work, due to constraints on computational resources and time, this study has been limited to a simplified 2D numerical experiment with matched density and matched viscosity for the fluids, but the LBD geometric proportions are based on an intermediate bridging-drop volume defined in [55]. The intention of this study is to determine the solid surface wetting parameters, the bridging-drop properties and the shear flow field definition needed to induce a relative rotation [55].
Two identical hydrophilic (\(\theta_{eq}=73^{\circ}\)) disks of radius \(r_{s}=0.20\,h\) and density \(\rho_{s}=1120\,\,kg/m^{3}\), bridged by a droplet (immiscible liquid) of radius \(r_{d}=0.43h\) and density \(\rho_{d}=990\,\,kg/m^{3}\), were initialized and brought to equilibrium by the surface forces (neglecting gravity effects), obtaining a maximal distance of \(l_{max}=1.18\,h\). The LBD system in equilibrium is then released in a shear flow field using a capillary number value of \(0.24\) (fig. 16). This simulation is carried out using \(128\times 128\) grid nodes in the directions \(y\times z\) respectively, leading to a computational time step of \(t_{s}=32.8\times 10^{-3}\) s using \(8\) cores with the processor model AMD Ryzen Threadripper Pro 3995WX @ 4.2 GHz.
Fig. 17 shows the phases of the rotation of the LBD in a shear flow field. The solid motion is started by the bridging-drop deformation into an elliptical shape, as observed in panels (b) and (c); then, due to the shear field, the disks are accelerated horizontally in opposite directions, while the drop capillary forces keep them at the interface, giving them a vertical component of motion. This interplay results in a relative rotation of the disks and a constant deformation in the topology of the bridging drop (since the solids are of comparable order of magnitude with the bridging-drop). Panels (d) and (e) show the asymmetry of the rotation with respect to the vertical axis: while in the former the bridging-drop seems to be compressed, in the latter it seems to be elongated. The LBD reaches the maximal stretch when the disks reach the ellipsoidal vertices of the bridging-drop (panel (f)). There, the solid inertial forces pull the bridging-drop in the directions of the disks against the normal capillary forces. As the pair continues rotating, the capillary forces overcome the inertial ones, bringing the disks closer and decreasing the deformation of the bridging-drop. The rotation is symmetric with respect to the horizontal axis. The period of rotation of the LBD configuration is \(T=0.28\) s.
Figure 16: Schematic of the equilibrium configuration (represented by \(l_{max}\)) of a pair of hydrophilic disks bridged by a drop, and the shear flow field (represented by arrows) on which they are initialized.
Figure 17: Sequence of the LBD rotating in a shear flow field. (a) Equilibrium position of the LBD. (b) The drop is deformed to an elliptical shape and the disks start moving. (c), (d) The elliptical shape is kept almost invariable, but the disks circulate along the drop interface. (e) The disks reach the ellipse vertices. (f), (g) The interplay between the disks' inertia and the surface tension stretches and elongates the droplet. (h) The drop adopts a more rounded shape. (i) The LBD reaches almost a mirrored version of the initialization configuration.
## 5 Conclusions
The dynamics of the interactions between solids and binary fluid interfaces in an incompressible Newtonian fluid have been characterized using multiphase numerical techniques: the Eulerian approach for the continuous liquid phase, the Phase Field Method to describe the evolution of the drop phase topology and the Direct Forcing approach to describe the motion of solids. A fully coupled ternary phase numerical solver was achieved by adding into the carrier liquid a surface tension term (resulting from the dynamic effects of the drop phase) and a virtual force (which couples the effects of the solid phase dynamics into the carrier fluid), and by using a single-well potential to bound the solid region in the free-energy functional of the binary fluid system.
The settling of an immersed solid in a quiescent fluid was investigated for different fluid properties. Two-dimensional and three-dimensional simulations were performed and satisfactorily validated with analytical and experimental data. The contact line evolution was studied on cylindrical surfaces at different wettability conditions. The results showed that the fluid interface was perturbed in different ways: climbing up the solid surface for the hydrophilic case, retreating downwards for the hydrophobic case and staying flat for the neutrally wettable case. In the second part, we performed a simulation of the interaction between a sinking cylinder and a binary fluid interface. The simulation results matched the experimental data with great accuracy, validating the phenomena both qualitatively and quantitatively. In the last part of the section, we investigated the wetting effects on the submergence of a quasi-buoyant cylinder in a binary fluid domain. From the simulations we observed that the _capillary flotation forces_ either help or resist the submergence. For hydrophobic conditions, the solid reached shallow depths; for hydrophilic conditions, on the other hand, it sank deeper and more easily. These results are in agreement with experimental and numerical findings in the literature. The final part of this paper introduces the second fluid phase as a droplet rather than as a stratified layer (as in the preceding case studies). First, we study the evolution of the shape and position of a drop sitting on a cylindrical surface at different wetting conditions. The resulting individual equilibrium configuration of the pair is represented by a solid-drop pair length. This length is then compared with available analytical and numerical data, with which our results match remarkably well. The second part is devoted to the study of two solid bodies interacting with a droplet (both sized with the same order of magnitude) in a flow field. After the LBD has reached its equilibrium configuration (represented by an LBD length) in a stationary fluid, the shear flow field is initialized. The interactions within the LBD originate from the interplay of capillary bridging forces and shear flow field effects. These interactions bring the LBD system into a relative rotation similar to the ones observed experimentally and, to the authors' knowledge, this phenomenon has not yet been addressed numerically in the literature.
A limitation of this numerical implementation is that the solid sub-field must be regenerated at every time step, which increases the CPU calculation effort as we increase the number of solid particles used in the simulation; nevertheless, this limitation can be amended using optimization strategies. The simulations of three-phase interactions work in three dimensions (3D) as well as in 2D; some cases were tested using a 3D setup, but meaningful results required the dedication of more time and computational resources; consequently, they are not shown in this work.
The current version of the code considers the effects of the solid sphere/cylinder rotation as additional values in the solid linear velocity; thus, actual solid body rotation is not performed; however, it can be included at the expense of added computational cost. The work carried out allows a number of potential further developments in terms of both the computational efficiency and the modeling capabilities of the solver. From the point of view of computational efficiency, the solid phase solver is currently designed to handle computations of dozens and even hundreds of solid particles in an optimal way. The parallelization strategy consists in the equitable distribution of the total
number of particle tasks to be computed among all the cores allocated for the computation. The simulation of a larger amount of particles (i.e. thousands or millions) may reach a bottleneck in terms of computational speed. Therefore, for the distribution and the calculation of all the particle tasks to be computed, an optimization study using GPU parallel processing is proposed instead.
From the point of view of modeling capabilities, further developments concerning non-spherical solid dynamics, big solids in drop-laden flows and lateral capillary forces in three-phase flows are suggested in the following lines. Although it is true that for a great number of applications the solid bodies can be modeled as cylinders or spheres in three-phase systems, there are some others (especially for microscopic, mesoscopic and macroscopic solids) where the shape of the solids plays an important role in the dynamics of the whole system. A solid's shape in the latter cases can directly affect several parameters (to mention a few: the solid rotational inertia, the after-collision bounce direction, the partial buoyancy forces and the capillary forces). Therefore, a study of the effects of arbitrarily-shaped solids on the interaction with binary fluid interfaces is encouraged.
Another topic to investigate further is the effect of considering big free-moving particles in drop-laden flows. The use of small particles (point-wise particles) for the stabilization of emulsions is broadly studied, especially in the cosmetic industry (due to the increasing demand for surfactant-free products). On the other hand, study results on the interaction of comparably sized immersed solids and drops in drop-laden flows are still scarce. We therefore propose a study of the wettability effects on drop coalescence and breakage caused by big free-moving particles in drop-laden flows. Taking a closer look at the solid dynamics around the interface, we observe that the capillary forces are the main mechanism driving the three-phase interactions. These forces are responsible for the solids' self-assembly in two-dimensional structures on the free surface of a binary fluid system. Two solid particles attract or repel each other when their interface perturbations overlap. Although there are several numerical studies in the field, just a few of them can handle the lateral capillary forces implicitly and without an extra model. The aim of a future study would be to carry out simulations of the effects of wetting using two identical buoyant solids attached to an interface. The simulation results must be compared with the experimental data on lateral capillary forces to determine the level of accuracy of the numerical tool and decide if a model is needed. The above-mentioned capillary forces, for instance, represent in nature a means of motility for some insects like the Pyrrhalta nymphaeae larvae. This creature has a wetting body circumscribed by a contact line (in the liquid free surface). Therefore, in order to advance to the highest meniscus located at the edge of the liquid vessel, the insect arches its endings, perturbing the interface and forming a meniscus. These interactions generate capillary attraction forces between the insect and the edge of the vessel. Based on this real-world phenomenon, a study of the interaction of a simple flexible wettable membrane with the fluid-fluid interface is encouraged.
## Acknowledgement
This work has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement no. 813948 (COMETE).
## Author contributions
* K. Miranda: Data curation, Visualization, Methodology, Software, Investigation, Writing- Original draft
* C. Marchioli: Conceptualization, Methodology, Investigation, Writing- Original draft |
2308.16627 | Thermodynamic Properties of Regular Phantom Black Hole | The Regular Phantom Black Holes (RPBH)s are of theoretical and observational
importance, and some properties have been studied. In this work, we study some
of the thermodynamical properties, such as entropy and temperature, in
three asymptotic spacetimes: flat, de Sitter (dS), and Anti-de Sitter
(AdS). Many of the RPBH properties, including horizon radius, are (directly or
indirectly) dependent on a scale parameter b. Due to the slightly different
structure from Schwarzschild--metrics, the method to express relations between
thermodynamical variables requires a new function of the scale parameter. We
also examine the local and global thermodynamic stability through the Heat
Capacity (HC) and Gibbs Energy (GB), respectively. The calculations and graphs
show the results, in the flat background, are very similar to Schwarzschild
ones. Also, some results show that the asymptotically AdS-RPBH is more
compatible with physical laws than the dS and flat backgrounds. | Maryam Haditale, Behrooz Malekolkalami | 2023-08-31T10:49:19Z | http://arxiv.org/abs/2308.16627v2 | # Thermodynamic Properties of Regular Phantom Black Hole
###### Abstract
The Regular Phantom Black Holes (**RPBH**s) are of theoretical and observational importance, and some of their properties have been studied. In this work, we study some of the thermodynamical properties, such as entropy and temperature, in three asymptotic spacetimes, that is, flat, de Sitter (**dS**) and Anti-de Sitter (**AdS**). Many of the RPBH properties, including the horizon radius, are (directly or indirectly) dependent on a scale parameter \(b\). Due to the slightly different structure from Schwarzschild-like metrics, the method to express relations between thermodynamical variables requires a new function of the scale parameter. We also examine the local and global thermodynamic stability through the Heat Capacity (**HC**) and the Gibbs Energy (**GB**), respectively.
The calculations and graphs show that the results in the flat background are very similar to the Schwarzschild ones. Also, some of the results show that the asymptotically AdS-RPBH is more compatible with physical laws than the dS and flat backgrounds.
M. Haditale, B. Malekolkalami
Faculty of Science, University of Kurdistan, Sanandaj, P. O. Box 416, Iran
(email: [email protected], [email protected])
_Keywords_: Dark Energy, Phantom Field, RPBH, Black Hole Thermodynamics.
## 1 Introduction
Astronomical observations based on the Type Ia Supernova Project collaboration have revealed the fact that our universe is expanding much faster than
in the past [1]. This has also been confirmed by other projects such as Cosmic Microwave Background (**CMB**) measurements [2, 3, 4] and studies of the large scale structure [5]. Because the acceleration is slowed by gravity, any proposed candidate to explain the acceleration must have a sufficiently negative pressure to counterbalance gravity. It is widely believed that what is causing the expansion is a mysterious entity called Dark Energy (**DE**) [6].
One of the most famous proposed candidates for DE is the cosmological constant. The successful model in this regard is \(\Lambda CDM\), the simplest model that provides a reasonably good account of the properties and behaviors of the universe. However, despite the great welcome of this theory, it faces two challenges, coming from both the theoretical and observational sides: first, the extraction of the vacuum energy from quantum field theory, and second, the near equality of the large amounts of DE and Dark Matter [7]. The existence of these challenges and the motivation to explain the physical nature of DE and its origin have caused a large number of studies to explore other proposed candidates.
Many astrophysical observations constrain the pressure-to-density ratio \(w\) of DE. For example, a model-free data analysis of 172 type Ia supernovae resulted in a range of \(w\) values [8]. The Planck data accumulated over several years also constrain \(w\) [9]. Using Chandra Telescope data, hot gas analysis in 26 X-ray luminous dynamically relaxed galaxy clusters gives a further constraint [10]. The data on SNIa from the SNLS3 sample provide another estimate [11]. In fact, several DE models with ultra-negative equations of state offer a better fit with the above data [12, 13, 14, 15]. All of these approaches are in favor of the Phantom DE scenario, in which a constant state parameter \(w<-1\) is used [16, 17]. The fundamental origin of phantom fields is debatable, but they occur naturally in some models of string theory [18], supergravity [19], and theories in more than 11 dimensions, such as F-theory [20]. Because the phantom field is a candidate for dark energy, studies of the phantom black hole show that the singularity in this black hole is destroyed by dark energy [21]. Bronnikov and Fabris studied the regular BHs with self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials in vacuum, which are free of essential singularities and known as RPBHs [22]. Regularity is not unique to the RPBH; charged or Gaussian BHs can be mentioned as examples.
The thermodynamics of BHs [23] is the field that seeks to apply the laws of thermodynamics in the presence of the BH event horizon. Since the study of the statistical mechanics of blackbody radiation led to the development of the theory of quantum mechanics, the attempt to understand the statistical mechanics of BHs had a profound effect on the understanding of quantum gravity, which led to the development of the holographic principle [24]. Over the past 30 years, research has revealed that there is a very deep and fundamental relationship between gravity, thermodynamics, and quantum theory. The
cornerstone of this relationship is BH thermodynamics, where certain laws of BH mechanics seem to be just ordinary laws of thermodynamics applied to a system containing BHs. In fact, BH thermodynamics, mainly obtained by classical and semi-classical analyses, provides much of our current physical insight into the nature of quantum phenomena occurring in strong gravitational fields [25].
We study the thermodynamic properties of the RPBH. The considered spacetimes for the RPBH are asymptotically flat, dS and AdS, and the results are compared with the Schwarzschild ones.
The paper is organized as follows. In Sect. 2, the regular phantom metric and the conditions needed for the metric to represent a BH are introduced. In Sect. 3, the thermodynamic properties of the RPBH, including entropy and temperature, are studied. The conclusions are given in Sect. 4.
## 2 The Regular Phantom Metric
A convenient action describing a self-gravitating scalar field with an arbitrary potential \(V(\phi)\) can be written as [7, 22]:
\[S=\int\sqrt{-g}dx^{4}\Big{(}R+\varepsilon g^{\mu\nu}\partial_{\mu}\phi\partial_ {\nu}\phi-2V(\phi)\Big{)}, \tag{1}\]
where \(R\) is the scalar curvature, \(\varepsilon=+1\) describes the usual scalar field with positive kinetic energy and \(\varepsilon=-1\) corresponds to the phantom field. Considering static spherically symmetric configuration, a general spacetime metric can be written as:
\[ds^{2}=f(r)dt^{2}-\frac{dr^{2}}{f(r)}-p^{2}(r)\Big{(}d\theta^{2}+\sin\theta^{2 }d\varphi^{2}\Big{)}, \tag{2}\]
where \(f(r)\) and \(p(r)\) are the metric functions which are determined by the field equations. By variation of the action (1) and solving the resulting field equations, the unknown metric functions can be obtained as [7, 22]:
\[f(r)=p^{2}(r)\left(\frac{c}{b^{2}}+\frac{1}{p^{2}(r)}+\frac{3M}{b^{3}}\left[ \frac{br}{p^{2}(r)}+\arctan\Big{(}\frac{r}{b}\Big{)}\right]\right), \tag{3}\]
\[p^{2}(r)=r^{2}+b^{2}. \tag{4}\]
Also the potential \(V(\phi)\) and scalar field \(\phi\) take the following forms:
\[V(\phi(r))=-\frac{c}{b^{2}}\frac{p^{2}+2r^{2}}{p^{2}}-\frac{3M}{b^{3}}\Big{(} \frac{3br}{p^{2}}+\frac{p^{2}+2r^{2}}{p^{2}}\arctan\Big{(}\frac{r}{b}\Big{)} \,\Big{)},\]
\[\phi(r)=\sqrt{2}\,\epsilon\arctan\Big{(}\frac{r}{b}\Big{)}+\phi_{0}. \tag{5}\]
The metric function (3) includes three parameters (\(M\), \(c\), \(b\)), of which the first two are integration constants and the third is a (positive) scale parameter. It determines the coupling strength between the phantom scalar field and gravity.1 Dealing with these parameters requires the following considerations:
Footnote 1: For this reason, it is sometimes called a regular parameter.
1) A look at the metric function (3) reveals that a necessary condition to deal with a BH (namely, spacetime including horizon) is
\[c<0. \tag{6}\]
From here on, we will put \(c=-\alpha\) with \(\alpha>0\), and \(\alpha=4.5\) is considered for the numerical calculations and graphs.
2) The spacetimes described by the metric function (3) include sixteen classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics.2 Corresponding to these three asymptotic cases, there are three bound relations as follows [22]:
Footnote 2: For a detailed discussion about the bounded values of the current parameters, see [22].
for asymptotically flat case:
\[\alpha=\frac{3\pi M}{2b} \tag{7}\]
and for asymptotically AdS or dS cases:
\[\alpha=-\frac{\pm 2b^{3}-3\pi M}{2b}, \tag{8}\]
where the positive (negative) sign corresponds to AdS (dS).
3) In the limit \(b\to 0\), the metric function (3) reduces to the Schwarzschild metric; therefore, \(M\) is interpreted as the usual mass.
When the metric function (3) represents a BH, the horizon radius \(r_{+}\) can be obtained by setting the metric function (3) to zero, that is
\[f(r_{+})=1-\frac{\alpha}{b^{2}}p^{2}(r_{+})+\frac{3Mr_{+}}{b^{2}}+\frac{3M}{b^ {3}}p^{2}(r_{+})\arctan\left(\frac{r_{+}}{b}\right)=0. \tag{9}\]
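Numerically, \(r_{+}\) follows from a one-dimensional root search of equation (9). A minimal sketch for the asymptotically flat case, where \(M=2\alpha b/(3\pi)\) by equation (7), is given below (assuming `scipy` is available); running it for several values of \(b\) shows a constant ratio \(r_{+}/b\), i.e. the linear \(r_{+}-b\) relation of the flat panel of Fig. 2.

```python
import numpy as np
from scipy.optimize import brentq

alpha = 4.5

def f_metric(r, b):
    """Metric function at the horizon, eq. (9), flat case: M = 2*alpha*b/(3*pi)."""
    M = 2.0 * alpha * b / (3.0 * np.pi)
    p2 = r**2 + b**2
    return (1.0 - alpha * p2 / b**2 + 3.0 * M * r / b**2
            + 3.0 * M / b**3 * p2 * np.arctan(r / b))

for b in (0.1, 0.5, 1.0):
    r_plus = brentq(f_metric, 1e-9, 100.0 * b, args=(b,))  # f<0 at 0, f>0 far out
    print(f"b = {b:4.1f}  ->  r_+ = {r_plus:.4f},  r_+/b = {r_plus / b:.4f}")
```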
Figure 1: The graph of the function \(f(r)\) (Equation (3)) for \(b=0.1\) and \(c=-4.5\) for the three asymptotic cases: flat, dS and AdS.
Note that \(M\) is a function of \(b\) through equation (7) or (8). By taking this into account in (9), it turns out that equation (9) describes the horizon radius \(r_{+}\) as an implicit function of \(b\). The graph of this function is illustrated in Fig. 2 for the three asymptotic cases as follows:
1) The flat case (left panel): the horizon is linear in \(b\). In the next Section, we will see that this linearity is equal to a scale change in the area of the horizon (or consequently in the entropy) compared to the Schwarzschild case.
2) dS case (middle panel): First, it is necessary to note that the allowed part of the graph is between the two points O and A. This is for the following two reasons:
I) The horizon radius is a real single-valued function.
II) As mentioned, RPBH \(\rightarrow\) Schwarzschild BH as \(b\to 0\) and note that the Schwarzschild BH is asymptotically flat. In other words, the dS and flat cases are the same as \(b\to 0\).
So, the acceptable part of the graph in Figure 2 indicates that the scale parameter values are limited to a domain, that is
\[0<b<b_{max}=b_{A}\simeq 0.7, \tag{10}\]
note that the horizon is also restricted between a minimum and a maximum, that is \(0<r_{+}<r_{+max}=r_{+A}\). This means that the dS-RPBH can be formed only for limited values of the scale parameter.
3) In the AdS case, the horizon is an increasing function until reaching a maximum (point D); then it decreases monotonically to an asymptotic value (point K), that is \(\lim_{b\rightarrow\infty}r_{+}=r_{+K}\). Note that here, as in the dS case, there is an upper and a lower limit for the horizon radius, with the difference that the scale parameter has no limits.
Figure 2: The graph of \(r_{+}-b\) for three spacetimes.
### New Parameter
In the previous section, the importance of the scale parameter \(b\) was discussed to some extent. As we know, the radius of the horizon is very important in determining the thermodynamical properties of the BH; however, it is impossible to express them in terms of the scale parameter, because the horizon is an implicit function of \(b\) (equation (9)).
To fix this problem, we introduce the new parameter defined by:
\[y=\frac{r_{+}}{b},\]
which also is an implicit function of \(b\); however, as we will see, it is possible to obtain the inverse (explicit) function, that is \(b=b(y)\). This enables us to express the thermodynamical variables as explicit functions of \(y\) (which is not made possible by \(b\)). This, in turn, provides the possibility of illustrating thermodynamic diagrams, which play an important role in describing and understanding the thermal properties.
To this end, we rewrite equation (9) in the new variable by inserting \(M\) from equation (7) or (8), leading to:
\[f(r_{+})=g(y)=1-\alpha-\alpha y^{2}+\frac{2\alpha}{\pi}\lambda\Big{(}y+(1+y^{2 })\arctan(y)\Big{)}=0, \tag{11}\]
where
\[\lambda=\begin{cases}1,&\text{for flat case},\\ 1\pm\frac{b^{2}}{\alpha},&\text{for $AdS(+)$ and $dS(\text{-})$ cases}.\end{cases} \tag{12}\]
For the flat case, equation (11) reads
\[g(y)=1-\alpha-\alpha y^{2}+\frac{2\alpha}{\pi}\left(y+(1+y^{2})\arctan(y) \right)=0, \tag{13}\]
which contains only \(y\) and the parameter \(\alpha>0\). It is not difficult to verify that for \(y\) to be real valued, equation (13) dictates \(\alpha>1\). This equivalently means that there is no real root for \(0<\alpha<1\).3 Also, there is a one-to-one correspondence between \(\alpha\) and \(y\) values; for example, for the numerical value \(\alpha=4.5\), one obtains \(y\approx 1.5\). Thus, an important result is that the new parameter has a constant value \(y_{0}\) (depending on \(\alpha\)), meaning that \(r_{+}\) is proportional to \(b\), that is
Footnote 3: There is only one root corresponding to \(\alpha=1\), that is \(y=0\), which is not applicable.
\[r_{+}=y_{0}b, \tag{14}\]
which confirms Fig.2 (left panel).
For the AdS and dS cases, we obtain \(\lambda\) from equation (11) as:
\[\lambda=\frac{\pi}{2}\left(\frac{y^{2}+1-\frac{1}{\alpha}}{y+(1+y^{2})\arctan(y )}\right), \tag{15}\]
now by equating the right-hand side of (15) with the right-hand side of (12) (second row) and arranging the resulting equation in \(b^{2}\), one gets:
\[b^{2}=\pm\alpha\left(\frac{\pi\Big{(}y^{2}+1-\frac{1}{\alpha}\Big{)}}{2\Big{(} y+(1+y^{2})\arctan(y)\Big{)}}-1\right). \tag{16}\]
The last equation gives the scale parameter as an explicit function of \(y\), which facilitates the expression of the thermodynamical functions in terms of the new parameter.
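As a sketch (with \(\alpha=4.5\) as in the text; the helper name is ours), equation (16) can be evaluated directly for both signs, returning `nan` wherever the right-hand side is negative and no real scale parameter exists:

```python
import numpy as np

alpha = 4.5

def b_of_y(y, branch=+1):
    """Scale parameter from eq. (16); branch=+1 for AdS, -1 for dS."""
    P = (np.pi * (y**2 + 1.0 - 1.0 / alpha)
         / (2.0 * (y + (1.0 + y**2) * np.arctan(y))))
    b2 = branch * alpha * (P - 1.0)
    b2 = np.where(b2 >= 0.0, b2, np.nan)   # mask out unphysical (negative) values
    return np.sqrt(b2)

y = np.linspace(0.05, 10.0, 400)
b_ads, b_ds = b_of_y(y, +1), b_of_y(y, -1)   # explicit b(y) for both branches
```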
## 3 Thermal Properties
The goal of this section is the thermodynamical analysis of the RPBH. Since entropy can play a central role in determining the well-defined thermodynamical quantities, let us examine it first.
### Entropy
According to the Bekenstein formula, the BH entropy is equal to a quarter of the area bounded by the event horizon. For the static and spherically symmetric metric (2), the horizon area is \(A=4\pi p^{2}(r_{+})\), so the entropy becomes:
\[S=\frac{A}{4}=\pi\Big{(}r^{2}+b^{2}\Big{)}_{r=r_{+}}, \tag{17}\]
which in terms of the new parameter \(y=\frac{r_{+}}{b}\), takes the following form:
\[S=\pi\Big{(}1+y^{2}\Big{)}b^{2}. \tag{18}\]
For the next purpose, it is also needful to write the last equation as:
\[S=\pi r_{+}^{2}\Big{(}1+\frac{1}{y^{2}}\Big{)}. \tag{19}\]
**Flat case**
In this case, the new parameter has the constant value \(y_{0}\) (equation (14)), thus equation (19) reads:
\[S=\Big{(}1+\frac{1}{y_{0}^{2}}\Big{)}\pi r_{+}^{2}=\Big{(}1+\frac{1}{y_{0}^{2}} \Big{)}S_{sch}, \tag{20}\]
where \(S_{sch}=\pi r_{+}^{2}\) stands for the entropy of the Schwarzschild BH. Obviously, the entropy of the RPBH is always greater than that of the Schwarzschild BH. As a numerical example, for \(y_{0}\approx 1.5\), equation (20) becomes:
\[S\simeq 1.44S_{sch}, \tag{21}\]
also note that \(S\longrightarrow S_{sch}\) as \(\alpha\longrightarrow\infty\), because from equation (13) it can easily be found that \(y\longrightarrow\infty\) as \(\alpha\longrightarrow\infty\).
**dS(-) and AdS(+) cases**
In these cases, by substituting \(b^{2}\) from equation (16) into (18), we obtain the entropy as the following explicit function of \(y\):
\[S(y)=\pm\pi\alpha\left(1+y^{2}\right)\left(\frac{\pi\Big{(}y^{2}+1-\frac{1}{ \alpha}\Big{)}}{2\Big{(}y+(1+y^{2})\arctan(y)\Big{)}}-1\right). \tag{22}\]
As we will see below, the last equation allows us to plot the graph of the thermodynamic functions versus entropy.
### Mass
In BH thermodynamics, the mass of the BH plays the role of the internal energy and hence, it is important to consider it from the point of view of the system's energy changes. To do this, it is common to find the relation between mass and entropy, which is presented below.
**Flat case**
In this case, the mass is obtained from (7), as
\[M=\frac{2\alpha b}{3\pi}, \tag{23}\]
which by substituting \(b\) from (18) reads
\[M=M(S)=\frac{2\alpha}{3\pi\sqrt{\pi(1+y_{0}^{2})}}\sqrt{S}\simeq 0.299\sqrt{S}, \tag{24}\]
here, in the last step, we have put \(y_{0}=1.5\). The variation of mass versus entropy is shown in Fig. 3 (top left panel). As expected, it is qualitatively similar to the Schwarzschild one (top right panel).
**dS and AdS cases**
In these cases, the mass is obtained from (8) as:
\[M=\frac{2b}{3\pi}(\pm b^{2}+\alpha), \tag{25}\]
here, contrary to the flat case, it is not possible to obtain the mass as an explicit function of entropy; however, by substituting \(b\) from (16) into equation (25), it becomes an explicit function of the new parameter:
\[M=M(y)=\frac{2}{3\pi}\sqrt{\pm\left(\frac{\alpha\pi\left(y^{2}+1-\frac{1}{ \alpha}\right)}{2\left(y+(1+y^{2})\arctan\left[y\right]\right)}-\alpha\right) }\left(\frac{\alpha\pi\left(y^{2}+1-\frac{1}{\alpha}\right)}{2\left(y+(1+y^{ 2})\arctan\left[y\right]\right)}\right). \tag{26}\]
On the other hand, equation (22) gives the entropy as an explicit function of \(y\), that is \(S=S(y)\); thus we can plot the mass variations versus entropy.4 The
Figure 3: The \(M-S\) graph for the flat (24), dS and AdS (26) spacetimes with \(\alpha=4.5\), and the \(M-S\) diagram of the Schwarzschild BH. The results of the three spacetimes are similar to the Schwarzschild one.
\(M-S\) diagram is illustrated in Fig. 3 for the dS (bottom left panel) and AdS (bottom right panel) cases. They are also Schwarzschild-like.
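The parametric construction of these diagrams is straightforward; the sketch below (assuming `matplotlib`, with \(\alpha=4.5\); names ours) traces \(S(y)\) from (22) and \(M(y)\) from (26) over a range of \(y\) for both branches:

```python
import numpy as np
import matplotlib.pyplot as plt

alpha = 4.5

def aux(y):
    """A(y) = alpha*pi*(y^2 + 1 - 1/alpha) / (2*(y + (1+y^2)*arctan(y)))."""
    return (alpha * np.pi * (y**2 + 1 - 1 / alpha)
            / (2 * (y + (1 + y**2) * np.arctan(y))))

def S_M_of_y(y, branch=+1):
    """Parametric entropy (22) and mass (26); branch=+1 AdS, -1 dS."""
    b2 = branch * (aux(y) - alpha)          # eq. (16)
    b2 = np.where(b2 >= 0, b2, np.nan)      # keep only real scale parameters
    S = np.pi * (1 + y**2) * b2             # eq. (18)
    M = 2.0 / (3.0 * np.pi) * np.sqrt(b2) * aux(y)   # eq. (26)
    return S, M

y = np.linspace(0.05, 8.0, 500)
for branch, label in ((+1, "AdS"), (-1, "dS")):
    S, M = S_M_of_y(y, branch)
    plt.plot(S, M, label=label)
plt.xlabel("S"); plt.ylabel("M"); plt.legend(); plt.show()
```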
### Temperature
For the static, spherically symmetric BH spacetime equipped with metric (2), the Hawking temperature of the horizon is given by [22]:
\[T=\frac{f^{\prime}(r_{+})}{4\pi}, \tag{27}\]
where \(f^{\prime}(r_{+})\) denotes the derivative of the metric function (3) computed at \(r=r_{+}\). By calculation of derivative of the metric function (3), the temperature (27) becomes:
\[T=\frac{1}{4\pi}\left(-\frac{2\alpha}{b}\left(\frac{r_{+}}{b}\right)+\frac{6M }{b^{2}}\left(1+\left(\frac{r_{+}}{b}\right)\arctan\left(\frac{r_{+}}{b} \right)\right)\right), \tag{28}\]
which in terms of \(y\), reads
\[T=\frac{1}{4\pi}\left(-\frac{2\alpha}{b}y+\frac{6M}{b^{2}}\left(1+y\arctan\left(y\right) \right)\right). \tag{29}\]
Below, we plot the temperature variations versus entropy (\(T-S\) diagram), using (29).
**Flat case**
In this case, to obtain the temperature, instead of equation (27), we can use the following simpler formula:5
Footnote 5: We note that both formulas (27) and (30) lead to the same result.
\[T=\frac{\partial M}{\partial S}, \tag{30}\]
which by equation (24) gives:
\[T(S)=\frac{\alpha}{3\pi\sqrt{\pi(1+y_{0}^{2})S}}\simeq\frac{0.149}{\sqrt{S}}. \tag{31}\]
The graph of the function (31) (\(T-S\) diagram) is shown in Fig.4 which is Schwarzschild like, as expected.
**dS and AdS cases**
Figure 4: The \(T-S\) graph for the flat (31), dS and AdS (33) spacetimes with \(\alpha=4.5\), and the \(T-S\) diagram of the Schwarzschild BH.
In these cases, first, by substituting the mass \(M\) from equation (25) into equation (29), we obtain
\[T=\frac{1}{4\pi b}\left(-2\alpha y+\frac{4}{\pi}(\pm b^{2}+\alpha)\left(1+y\arctan y \right)\right), \tag{32}\]
then, by inserting (16) into the last equation, the temperature (32) becomes a pure function of the new parameter:
\[T(y)=\frac{-1+\alpha-y\arctan\left(y\right)}{\pi\Big{(}y+\left(1+y^{2}\right) \arctan\left(y\right)\Big{)}\sqrt{\frac{\mp 4y\alpha\pm 2\pi(-1+\alpha+y^{2} \alpha)\mp 4(1+y^{2})\alpha\arctan(y)}{y+(1+y^{2})\arctan(y)}}}. \tag{33}\]
Now, having (22) and (33), we can plot the \(T-S\) diagram illustrated in Fig.4. The diagrams are not Schwarzschild-like (contrary to the flat case).6 The two main differences are:
Footnote 6: The only similarity between them is that the temperature is a decreasing function of entropy.
1) In both dS and AdS cases, the temperature is finite as \(S\longrightarrow 0\) while in the flat case, it goes to infinity.
2) In the dS case, the temperature is a decreasing function, unbounded below: it decreases monotonically from large positive to large negative values. In the AdS case, the temperature is also decreasing, but bounded below: it asymptotes to a certain positive minimum. In the flat case, the temperature asymptotes to zero.
The last difference carries the following physical point: the third law of BH mechanics states that "It is not possible to form a BH with vanishing surface gravity (or equivalently vanishing temperature)". Therefore, the dS-RPBH can violate the third law. The conclusion is that the AdS case is more closely related to the physical world.
### Thermal Stability
The HC and the GE play an important role in determining the stability of BHs; they are usually used to analyze the local and global stability of BHs, respectively. In this subsection, we discuss the stability of the RPBH in the three asymptotic cases.
#### 3.4.1 Heat Capacity
**Flat case**
The HC can be defined by:
\[C=\frac{dM}{dT}, \tag{34}\]
which can be written as:
\[C=\frac{dM}{dT}=\frac{\partial M}{\partial S}\frac{dS}{dT}=T\bigg{(}\frac{dT}{dS }\bigg{)}^{-1}. \tag{35}\]
By equation (31), we obtain:
\[\frac{dT}{dS}=-\frac{\alpha}{6\pi\sqrt{\pi(1+y_{0}^{2})}}S^{-3/2}, \tag{36}\]
now by inserting equations (31) and (36) into equation (35), we get:
\[C(S)=-2S. \tag{37}\]
This result is the same as the HC of the Schwarzschild BH, with the important note that it does not depend on \(\alpha\). In terms of the scale parameter, the HC (37) becomes:
\[C(b)=-2\pi\bigg{(}1+y_{0}^{2}\bigg{)}b^{2}=-20.41b^{2}, \tag{38}\]
where in the last step we put \(y_{0}=1.5\).
The \(C-b\) diagram is illustrated in Fig.5 (left panel), which indicates that the RPBH is a locally unstable system for any value of the scale parameter.
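The flat-case result (37) can also be confirmed symbolically from (30) and (35); a short sympy sketch:

```python
import sympy as sp

S, alpha, y0 = sp.symbols('S alpha y0', positive=True)

M = 2*alpha/(3*sp.pi*sp.sqrt(sp.pi*(1 + y0**2))) * sp.sqrt(S)  # eq. (24)
T = sp.diff(M, S)                                              # eq. (30)
C = sp.simplify(T / sp.diff(T, S))                             # eq. (35)
print(C)  # -> -2*S, independent of alpha and y0, as in eq. (37)
```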
**dS and AdS cases**
In these cases, we rewrite equation (34) as:
\[C=\frac{dM}{dT}=\frac{\frac{dM}{db}}{\frac{dT}{db}} \tag{39}\]
whose numerator can be obtained from equation (25) as follows:
\[\frac{dM}{db}=\frac{2}{3\pi}(\pm 3b^{2}+\alpha). \tag{40}\]
To calculate the denominator in equation (39), we can use the following formula:
\[\frac{dT}{db}=\frac{\partial T}{\partial r_{+}}\frac{\partial r_{+}}{\partial b }+\frac{\partial T}{\partial b}. \tag{41}\]
As a result, by equations (40) and (41), the HC (39) takes the following form:
\[C=\frac{dM}{dT}=\frac{\frac{2}{3\pi}(\pm 3b^{2}+\alpha)}{\frac{\partial T}{ \partial r_{+}}\frac{\partial r_{+}}{\partial b}+\frac{\partial T}{\partial b}}. \tag{42}\]
The three derivatives appearing in (41) can be calculated from equations (28) and (9) as follows:
\[\frac{\partial T}{\partial r_{+}}=-\frac{\alpha-\frac{2\left(\pm b^{2}+\alpha \right)\left(br_{+}+\left(b^{2}+r_{+}^{2}\right)\arctan\left[\frac{r_{+}}{b} \right]\right)}{\pi\left(b^{2}+r_{+}^{2}\right)}}{2\pi b^{2}}, \tag{43}\]
\[\frac{\partial T}{\partial b}=\frac{\pm 2b^{3}+2b\alpha-\pi r_{+}\alpha+2r_{+} \left(\pm b^{2}+\alpha\right)\arctan\left[\frac{r_{+}}{b}\right]}{2b^{2}\pi^{ 2}}. \tag{44}\]
\[\frac{\partial r_{+}}{\partial b}=\frac{r_{+}\Big{(}\pm b^{3}\mp 2b^{4}r_{+}+b\alpha-\pi r_{+}\alpha\Big{)}+\left(\mp 2b^{4}+2r_{+}^{2}\alpha\right)\arctan\left[\frac{r_{+}}{b}\right]}{b\Big{(}\pm b^{3}\pm 2b^{4}r_{+}+b\alpha+2b^{2}r_{+}\alpha-\pi r_{+}\alpha+2r_{+}\left(\pm b^{2}+\alpha\right)\arctan\left[\frac{r_{+}}{b}\right]\Big{)}}. \tag{45}\]
Substituting equations (43) to (45) into equation (42) yields:
\[C=C(r_{+},b)=\frac{2}{3\pi}\left(\pm 3b^{2}+\alpha\right)\Bigg{/}\Bigg{[}\left(-\frac{\alpha-\frac{2\left(\pm b^{2}+\alpha\right)\left(br_{+}+\left(b^{2}+r_{+}^{2}\right)\arctan\left[\frac{r_{+}}{b}\right]\right)}{\pi\left(b^{2}+r_{+}^{2}\right)}}{2\pi b^{2}}\right)\times\]
\[\left(\frac{r_{+}\Big{(}\pm b^{3}\mp 2b^{4}r_{+}+b\alpha-\pi r_{+}\alpha\Big{)}+\left(\mp 2b^{4}+2r_{+}^{2}\alpha\right)\arctan\left[\frac{r_{+}}{b}\right]}{b\Big{(}\pm b^{3}\pm 2b^{4}r_{+}+b\alpha+2b^{2}r_{+}\alpha-\pi r_{+}\alpha+2r_{+}\left(\pm b^{2}+\alpha\right)\arctan\left[\frac{r_{+}}{b}\right]\Big{)}}\right)\]
\[+\left(\frac{\pm 2b^{3}+2b\alpha-\pi r_{+}\alpha+2r_{+}\left(\pm b^{2}+\alpha\right)\arctan\left[\frac{r_{+}}{b}\right]}{2b^{2}\pi^{2}}\right)\Bigg{]}. \tag{46}\]
Since \(r_{+}\) is an implicit function of \(b\), \(C\) also becomes a function of \(b\), that is \(C=C(b)\). To plot the graph of this function, we evaluate it numerically; the resulting graph is shown in Fig.5 (middle and right panels).
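A minimal Python sketch of this evaluation, implementing (42)–(45); note that \(r_{+}(b)\) must be supplied by solving the horizon condition (equation (9), not reproduced in this section), so the sample pair below is only an illustrative placeholder:

```python
import numpy as np

def heat_capacity(rp, b, alpha=4.5, sign=+1):
    """Evaluate eq. (42); sign=+1 selects the upper signs, sign=-1 the lower.
    rp must come from the horizon condition, eq. (9)."""
    at = np.arctan(rp / b)
    dM_db = 2/(3*np.pi) * (sign*3*b**2 + alpha)                       # eq. (40)
    dT_drp = -(alpha - 2*(sign*b**2 + alpha)*(b*rp + (b**2 + rp**2)*at)
               / (np.pi*(b**2 + rp**2))) / (2*np.pi*b**2)             # eq. (43)
    dT_db = (sign*2*b**3 + 2*b*alpha - np.pi*rp*alpha
             + 2*rp*(sign*b**2 + alpha)*at) / (2*b**2*np.pi**2)       # eq. (44)
    num = (rp*(sign*b**3 - sign*2*b**4*rp + b*alpha - np.pi*rp*alpha)
           + (-sign*2*b**4 + 2*rp**2*alpha)*at)
    den = b*(sign*b**3 + sign*2*b**4*rp + b*alpha + 2*b**2*rp*alpha
             - np.pi*rp*alpha + 2*rp*(sign*b**2 + alpha)*at)
    drp_db = num / den                                                # eq. (45)
    return dM_db / (dT_drp*drp_db + dT_db)                            # eqs. (41), (42)

print(heat_capacity(rp=1.0, b=0.5, sign=+1))  # placeholder (rp, b) pair
```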
The following can be deduced from the figure:
1- In the dS case (middle panel), the RPBH is locally unstable in a certain interval \(0<b<b_{0}\), where \(b_{0}\) is a point at which a phase transition occurs, that is \(C(b_{0})=0\). For \(b>b_{0}\), it seems that the RPBH becomes locally stable, meaning that a smooth transition from the unstable to the stable state would be possible; however, there is a subtle point to note here. As we saw, in the dS case the scale parameter values are restricted to an interval like (10) with \(b_{max}\simeq 0.7\). On the other hand, Fig.5 (middle panel) shows that also \(b_{0}\simeq 0.7\), meaning that \(b_{0}=b_{max}\). In other words, the allowed part of the HC graph lies in \(0<b<b_{0}\) (as in the horizon graph, Fig. 2, middle panel); therefore, the BH is always unstable.
2- In the AdS case (right panel), the RPBH is locally unstable in a certain interval \(0<b<b_{0}\), where \(b_{0}\) is a point at which a phase transition occurs, and for \(b>b_{0}\) the RPBH becomes stable. But here the phase transition point is of the infinite discontinuity type, that is, \(\lim_{b\to b_{0}^{\pm}}C(b)=\pm\infty\). In other words, the transition from the stable to the unstable state (or vice versa) is not possible smoothly and requires an infinite jump. Therefore, depending on the value of \(b\), the RPBH is locally always unstable (\(b<b_{0}\)) or always stable (\(b>b_{0}\)).
#### 3.4.2 Gibbs Energy
To study the global stability of BHs, the GE is a useful thermodynamic function [26]. BHs are globally stable (unstable) provided their GE is positive (negative) [27]. Also, in order to investigate and determine the phase transitions, it is necessary to calculate the GE of the new BHs [28, 29, 30]. The roots of \(G=0\) are the phase transition points [29, 31].
The GE formula is defined as:
\[G=M-TS. \tag{47}\]
To discuss the stability of the BH through GE, we study its variations versus temperature, that is the \(G-T\) diagram.
**Flat case**
By considering the equations (24), (31) and (47), it is not difficult to show that:
\[M=2TS,\]
Figure 5: The \(C-b\) graph, for the flat (38), dS and AdS (46) spacetimes in \(\alpha=4.5\).
therefore
\[G=TS=\frac{\alpha^{2}}{9\pi^{3}(1+y_{0}^{2})}\frac{1}{T}, \tag{48}\]
which behaves as a homographic function. The graph of this function is illustrated in Fig.6 (top left panel) for \(\alpha=4.5\), and it shows that the GE is always positive, meaning that the RPBH is globally stable. The result is qualitatively Schwarzschild-like (the GE for the Schwarzschild BH is \(G_{Sch}=\frac{1}{16\pi}\frac{1}{T}\)). The only difference is the numeric coefficient of \(\frac{1}{T}\), which in (48) includes the parameter \(\alpha\). This in turn allows adjustment to possible observational data.
**dS and AdS cases**
In these cases, we replace \(S(y)\), \(M(y)\) and \(T(y)\) in (47) by (22), (26) and (33), respectively, to obtain the GE as a function of \(y\):
\[G(y)=\frac{1-y^{2}\left(-3+\alpha\right)-\alpha+3\left(y+y^{3}\right)\arctan \left(y\right)}{6\sqrt{2}\Big{(}y+\left(1+y^{2}\right)\arctan\left(y\right) \Big{)}}\times\]
\[\sqrt{\frac{\mp 2y\alpha\pm\pi\left(-1+\alpha+y^{2}\alpha\right)\mp 2\left(1+ y^{2}\right)\alpha\arctan\left(y\right)}{y+\left(1+y^{2}\right)\arctan \left(y\right)}}. \tag{49}\]
Now, via a parametric plot, equations (33) and (49) allow us to plot the \(G-T\) diagram illustrated in Fig.6 (bottom left and right panels) for \(\alpha=4.5\). The figure shows that in the dS case (bottom left panel) the GE is a positive and decreasing (asymptotically zero) function of the temperature, so the BH is globally stable. In addition, since the third law of BH thermodynamics prohibits zero temperature, the maximum stability is attained at (positive) temperatures close to zero.
In the AdS case (bottom right panel), at a certain temperature (\(T_{0}\simeq 0.25\)) the GE vanishes, and as a result, in passing this point the BH undergoes a phase transition (\(G=0\)). The diagram also shows that the BH temperature cannot be smaller than \(T_{0}\). For \(T\gtrsim T_{0}\), the GE takes small positive values and goes to zero asymptotically at a slowly decreasing rate. Therefore, in this temperature region, the RPBH is globally stable.
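The parametric \(G-T\) plot is straightforward to reproduce from (33) and (49); a Python sketch (sign=+1 selects the upper signs, sign=-1 the lower; points where the square-root argument is negative lie outside the allowed range of \(y\) and are masked out):

```python
import numpy as np

def radicand(y, alpha=4.5, sign=+1):
    """Square-root argument of eq. (49); eq. (33) carries twice this value."""
    D = y + (1 + y**2)*np.arctan(y)
    r = (-sign*2*y*alpha + sign*np.pi*(-1 + alpha + y**2*alpha)
         - sign*2*(1 + y**2)*alpha*np.arctan(y)) / D
    return r, D

def T_of_y(y, alpha=4.5, sign=+1):            # eq. (33)
    r, D = radicand(y, alpha, sign)
    return (-1 + alpha - y*np.arctan(y)) / (np.pi*D*np.sqrt(2*r))

def G_of_y(y, alpha=4.5, sign=+1):            # eq. (49)
    r, D = radicand(y, alpha, sign)
    pre = (1 - y**2*(-3 + alpha) - alpha
           + 3*(y + y**3)*np.arctan(y)) / (6*np.sqrt(2)*D)
    return pre * np.sqrt(r)

y = np.linspace(0.05, 5, 500)
r, _ = radicand(y)
mask = r > 0                                   # physically allowed range only
T, G = T_of_y(y[mask]), G_of_y(y[mask])        # one branch of the G-T diagram
```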
## 4 Conclusion
In this research, the thermodynamic properties of the RPBH are studied and examined for three asymptotic spacetimes: flat, dS and AdS. Since many
Figure 6: The \(G-T\) graph, for the flat (48), dS and AdS (49) spacetimes in \(\alpha=4.5\) and the \(G-T\) diagram of Schwarzschild.
of the characteristics of the RPBH depend on the scale parameter \(b>0\) and the Schwarzschild BH is the limiting case of the RPBH as \(b\to 0\), the results are compared with the Schwarzschild ones. The results in the asymptotically flat case are qualitatively Schwarzschild-like, but in the asymptotically dS and AdS cases most of the results are somewhat different. Most significantly, the AdS case is more closely related to physical laws. The main conclusions can be stated as follows:
1) The horizon of the RPBH in the flat case is a linearly increasing function of \(b\). In the dS case, the horizon is a monotonically increasing function of \(b\), and the value of the scale parameter is bounded above, \(0<b<b_{max}\). In the AdS case, the scale parameter has no restriction and the horizon approaches an asymptotic value (as \(b\rightarrow\infty\)) which is smaller than the maximum radius of the horizon.
2) The entropy of the RPBH is always greater than the Schwarzschild one.
3) The mass variations versus entropy (\(M-S\) diagrams) are Schwarzschild-like in the three asymptotic cases.
4) The temperature variations versus entropy (\(T-S\) diagram) are Schwarzschild-like in the flat case. In the dS case, the \(T-S\) diagram displays a decreasing function, unbounded below, which vanishes at a certain entropy, violating the third law of BH thermodynamics. In the AdS case, the \(T-S\) diagram displays a decreasing, but bounded below, function which approaches a certain positive minimum asymptotically.8 Therefore, the AdS-RPBH is more physically acceptable.
Footnote 8: Note that in the flat case, this minimum is zero.
5) In the flat case, the HC of the RPBH is the same as the Schwarzschild one (\(C=-2S\)), indicating local instability.
In the dS case, the HC in the allowed interval (\(0<b<b_{max}=b_{0}\)) is negative; hence the RPBH is always unstable.
In the AdS case, the HC undergoes a phase transition, where the phase transition point is of the infinite discontinuity type, that is:
\[\lim_{b\to b_{0}^{\pm}}C(b)=\pm\infty.\]
Thus, depending on whether the scale parameter \(b\) is smaller or greater than the phase transition point \(b_{0}\), the RPBH is always unstable or always stable, respectively.
6) The graph of the GE versus temperature \(T\) shows that in the flat and dS cases the GE is always positive, indicating global stability. However, in the AdS case, the RPBH undergoes a phase transition at a certain temperature \(T_{0}\), that is, \(G(T_{0})=0\). For \(T\gtrsim T_{0}\), it becomes globally stable. Also, in this case, the BH temperature cannot be lower than \(T_{0}\). |
2301.00228 | A Lattice Boltzmann Method for Elastic Solids Under Plane Strain
Deformation | The Lattice Boltzmann Method (LBM), e.g. in [1] and [2], can be interpreted
as an alternative method for the numerical solution of partial differential
equations. Consequently, although the LBM is usually applied to solve fluid
flows, the above interpretation of the LBM as a general numerical tool, allows
the LBM to be extended to solid mechanics as well. In this spirit, the LBM has
been studied in recent years. First publications [3], [4] presented an LBM
scheme for the numerical solution of the dynamic behavior of a linear elastic
solid under simplified deformation assumptions. For so-called anti-plane shear
deformation, the only non-zero displacement component is governed by a
two-dimensional wave equation. In this work, an existing LBM for the
two-dimensional wave equation is extended to more general plane strain
problems. The proposed algorithm reduces the plane strain problem to the
solution of two separate wave equations for the volume dilatation and the
non-zero component of the rotation vector, respectively. A particular focus is
on the implementation of types of boundary conditions that are commonly
encountered in engineering practice for solids: Dirichlet and Neumann boundary
conditions. Last, several numerical experiments are conducted that highlight
the performance of the new LBM in comparison to the Finite Element Method. | Alexander Schlüter, Henning Müller, Sikang Yan, Erik Faust, Ralf Müller | 2022-12-31T15:44:50Z | http://arxiv.org/abs/2301.00228v1 | # A Lattice Boltzmann Method for Elastic Solids
###### Abstract
The Lattice Boltzmann Method (LBM), e.g. in [1] and [2], can be interpreted as an alternative method for the numerical solution of partial differential equations. Consequently, although the LBM is usually applied to solve fluid flows, the above interpretation of the LBM as a general numerical tool, allows the LBM to be extended to solid mechanics as well. In this spirit, the LBM has been studied in recent years. First publications [3, 4] presented an LBM scheme for the numerical solution of the dynamic behavior of a linear elastic solid under simplified deformation assumptions. For so-called anti-plane shear deformation, the only non-zero displacement component is governed by a two-dimensional wave equation. In this work, an existing LBM for the two-dimensional wave equation is extended to more general plane strain problems. The proposed algorithm reduces the plane strain problem to the solution of two separate wave equations for the volume dilatation and the non-zero component of the rotation vector, respectively. A particular focus is on the implementation of types of boundary conditions that are commonly encountered in engineering practice for solids: Dirichlet and Neumann boundary conditions. Last, several numerical experiments are conducted that highlight the performance of the new LBM in comparison to the Finite Element Method.
Lattice Boltzmann Method · solids · plane strain · computational engineering · computational solid mechanics
## 1 Introduction
The mechanical behavior of solid bodies is of interest to both engineering and science. Thus, a large number of numerical methods capable of dealing with elasticity have emerged over time. The more prominent ones among these, finite difference methods (FDM), finite element methods (FEM) and finite volume methods (FVM), work on the principle of discretizing the domain of interest and replacing the governing system of differential equations by algebraic equations. Such methods take a kind of top-down approach, and can therefore be thought of as acting on a macroscopic scale. In contrast, some numerical methods, such as molecular dynamics (MD) or density functional theory (DFT), regard the interactions of a system's most basic constituents, such as individual particles and electrons, on a microscopic scale.
A different approach is taken with _Lattice-Boltzmann methods_ (LBMs). The common principle of this type of methods is to transform the given physical problem into a transport problem. Based on Boltzmann's transport equation from statistical mechanics, distribution functions are transported across phase-space, which is discretized both by a regular lattice and a set of associated lattice velocities. Information is exchanged between neighboring lattice sites in a streaming-like process along links connecting these points. This information is represented by so-called distribution functions, where each distribution function is associated with a different lattice velocity. The distribution functions are subjected to on-site interactions, or collisions, which incorporate the underlying microscopic theory in a probabilistic manner. Thus, LBMs can be said to act mesoscopically, i.e. on an intermediate scale.
LBMs are well established in computational fluid dynamics (CFD) and have subsequently been extended to further scientific fields, such as solving Schrödinger's equation [5] or Wigner's equation [6] in quantum mechanics. Developing an LBM for solid mechanics could mean using a single method on both sides of a fluid–solid interface, which is a topic of interest [7] in CFD.
We approach the topic of an LBM for elastic bodies from the mechanical point of view. This includes a greater focus on finite domains with an appropriate boundary handling, which is of great concern for engineering problems. The advantages over the established methods in computational engineering include the generally great computational efficiency, while still being able to handle the boundary conditions of complex domains. A further improvement of computation times can be achieved by employing parallel computing, which is easy to implement with most LBMs. This opens up possibilities in highly dynamical problems, requiring very fine resolution of the temporal domain.
The dynamic behavior of elastic solids can be described by multiple wave equations, that are superposed to obtain the aggregated deformation. This mathematical description is closer to the transport phenomena for which the LBM was initially conceived, when compared to the original Navier-Cauchy equation. In fact, LBMs have already been developed for the wave equation, see e.g. [8; 9; 10]. Furthermore, LBMs have been proposed for the numerical treatment of mechanical problems in solid bodies [11; 1; 12; 13; 14; 15; 16], and specifically elastic wave propagation [17; 18], which is also a topic of interest in geophysics and seismology. However, an extensive method for the deformation of linear elastic solids under loads, containing appropriate boundary conditions, has still to be accomplished.
In previous works [3; 19] we applied the LBM for wave equations published by Yan [8] to the mechanical problem of anti-plane shear deformation, which we then used for fracture mechanics. This work now regards the two-dimensional problem of plane strain. The fundamental idea is to decompose the plane strain problem governed by the Navier-Cauchy equation into two equivalent wave equations that are solved with the LBM for wave propagation by Chopard et al. [11].
The discussion is structured as follows: First the mechanical problem is reviewed and the relevant equations are derived. The next section introduces the LBM for the wave equation, followed by the presentation of the algorithm for the plane strain case. This includes a treatment of boundary conditions similar to [19]. Lastly three numerical examples show the feasibility of our algorithm, each compared to FEM computations, which act as benchmarks.
## 2 Plane Strain Deformation of a Linear Elastic Solid
We consider a homogeneous, isotropic, and elastic body \(\mathcal{B}\) with boundary \(\partial\mathcal{B}=\partial\mathcal{B}_{u}\cup\partial\mathcal{B}_{t}\), which is subjected to Dirichlet boundary conditions \(\vec{u}=\vec{u}^{*}\) for the displacement \(\vec{u}\) on \(\partial\mathcal{B}_{u}\) and Neumann boundary conditions \(\vec{\sigma}\vec{n}=\vec{t}^{*}\) for the Cauchy stress tensor \(\vec{\sigma}\) on \(\partial\mathcal{B}_{t}\), see Fig. 1. For the plane strain case, the problem is only regarded in two dimensions and under small strain assumptions.
A set of fundamental equations is taken as a basis for the derivation of the mathematical description of the problem. Firstly, the strain-displacement relation for small strains is given by the linearized strain tensor
\[\vec{\varepsilon}=\frac{1}{2}\left(\nabla\vec{u}+\left(\nabla\vec{u}\right)^{ T}\right), \tag{1}\]
where \(\vec{u}=\vec{u}(x,y,t)\) describes the time-dependent displacement field in two dimensions under plane strain assumptions. The general equation of motion for the small strain case, in the absence of a body force, is given by
\[\nabla\cdot\vec{\sigma}=\rho\,\frac{\partial^{2}\vec{u}}{\partial t^{2}}, \tag{2}\]
Figure 1: Body \(\mathcal{B}\) with outer normal vector \(\vec{n}\), subjected to Neumann boundary conditions \(\vec{t}^{*}\) on \(\partial\mathcal{B}_{t}\) and Dirichlet boundary conditions on \(\partial\mathcal{B}_{u}\).
with Hooke's law as the linear stress-strain relation
\[\vec{\sigma}=\lambda\,\mathrm{tr}(\vec{\varepsilon})\,\mathbf{1}+2\mu\vec{ \varepsilon}. \tag{3}\]
Herein \(\lambda\) and \(\mu\) are the Lame parameters of the material, \(\mathbf{1}\) is the second-order identity tensor and the operator \(\mathrm{tr}(*)\) denotes the trace of a second order tensor.
Equation (1) is substituted in equation (3), which is then substituted in (2). The result is the Navier-Cauchy equation
\[\left(\lambda+\mu\right)\nabla\left(\nabla\cdot\vec{u}\right)+\mu\nabla^{2}\, \vec{u}=\rho\,\frac{\partial^{2}\vec{u}}{\partial t^{2}}, \tag{4}\]
which describes the mechanical behavior of an isotropic linear elastic solid. Using the general identity
\[\nabla^{2}\vec{u}=\nabla(\nabla\cdot\vec{u})-\nabla\times(\nabla\times\vec{u}) \tag{5}\]
from vector calculus, equation (4) can be rewritten as
\[c_{d}^{2}\,\nabla(\nabla\cdot\vec{u})-c_{s}^{2}\,\nabla\times(\nabla\times \vec{u})=\frac{\partial^{2}\vec{u}}{\partial t^{2}}, \tag{6}\]
where \(c_{d}=\sqrt{\nicefrac{{(\lambda+2\mu)}}{{\rho}}}\) and \(c_{s}=\sqrt{\nicefrac{{\mu}}{{\rho}}}\).
With regard to equation (6), two fields, \(\phi\) and \(\vec{\psi}\), can be defined as follows:
\[\phi=\nabla\cdot\vec{u}\] and \[\vec{\psi}=\nabla\times\vec{u}. \tag{7}\]
The scalar field \(\phi\) describes the dilatation of the displacement field \(\vec{u}\), whereas the vector field \(\vec{\psi}\) describes the rotation of \(\vec{u}\). In two dimensions, the latter reduces to \(\vec{\psi}=\psi\,\vec{e}_{z}\). The Navier-Cauchy equation can be restated in terms of these fields
\[c_{d}^{2}\,\nabla\phi-c_{s}^{2}\,\nabla\times\vec{\psi}=\frac{\partial^{2} \vec{u}}{\partial t^{2}}. \tag{8}\]
In conjunction with the definitions in equation (7), applying the divergence to both sides of equation (8) results in
\[c_{d}^{2}\,\nabla^{2}\phi=\frac{\partial^{2}\phi}{\partial t^{2}},\] (9a) while applying the curl results in \[c_{s}^{2}\,\nabla^{2}\vec{\psi}=\frac{\partial^{2}\vec{\psi}}{\partial t^{2}}. \tag{9b}\]
Thus, the Navier-Cauchy equation (8) can be reduced to two wave equations, the dilatational wave equation (9a) for \(\phi=\nabla\cdot\vec{u}\) with wave speed \(c_{d}\), and the rotational wave equation (9b) for \(\vec{\psi}=\psi\,\vec{e}_{z}=\nabla\times\vec{u}\) with wave speed \(c_{s}\). Note that there are other 'decompositions' of the displacement field besides (7) that lead to similar wave equations, see e.g. [20].
Figure 2: a) Lattice representation of the elastic solid and b) the associated lattice velocity vectors (lattice links) for a single lattice point.
## 3 Lattice Boltzmann Method for Plane Strain
The proposed numerical strategy for the plane strain case relies on solving the wave equations (9) by means of the LBM by Chopard et al. [11]. In the LBM, a body \(\mathcal{B}\) is typically approximated by a regular lattice with lattice spacing \(\Delta h\) as depicted in Fig. 2 a). The approach by Chopard et al. is based on a D2Q5 scheme, see also Fig. 2, with the lattice velocities
\[\vec{c}^{0} =(c_{x}^{0},c_{y}^{0})=(0,0)\] \[\vec{c}^{1} =(c_{x}^{1},c_{y}^{1})=(c,0)\] \[\vec{c}^{2} =(c_{x}^{2},c_{y}^{2})=(0,c) \tag{10}\] \[\vec{c}^{3} =(c_{x}^{3},c_{y}^{3})=(-c,0)\] \[\vec{c}^{4} =(c_{x}^{4},c_{y}^{4})=(0,-c)\]
where \(c=\nicefrac{{\Delta h}}{{\Delta t}}\) is the speed at which information can travel in the lattice. Thus, the lattice velocities \(\vec{c}^{1},\vec{c}^{2},\vec{c}^{3},\vec{c}^{4}\) allow for information to be transported to each of the four neighbors of a lattice point in a so-called D2Q5 scheme1 in one time step, whereas \(\vec{c}^{0}\) is associated with information remaining at a particular lattice point. Information is represented by distribution functions, e.g. \(f^{\alpha}\) represents information which is transported with lattice velocity \(\vec{c}^{\alpha}\).
Footnote 1: D2Q5 refers to the dimension of the lattice, i.e. two in this case, and the number of lattice velocities, i.e. five in this case.
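In code, the D2Q5 velocity set (10) and the streaming of distribution functions along the lattice links are compact; a minimal numpy sketch, in which periodic wrap-around via np.roll stands in for the boundary handling developed later:

```python
import numpy as np

# D2Q5 lattice velocities, eq. (10), in units of c = dh/dt
C = np.array([[ 0,  0],   # c^0: information remains at the lattice point
              [ 1,  0],   # c^1
              [ 0,  1],   # c^2
              [-1,  0],   # c^3
              [ 0, -1]])  # c^4

def stream(f):
    """Shift each population f[alpha] (shape (5, nx, ny)) one site along its link."""
    return np.stack([np.roll(np.roll(f[a], C[a, 0], axis=0), C[a, 1], axis=1)
                     for a in range(5)])
```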
In order to simulate the wave equations (9), the distribution functions need to be interpreted, i.e. a relation to the macroscopic fields needs to be established. We introduce two sets of distribution functions to simulate both wave equations and relate them to the macroscopic fields through
\[\sum_{\alpha=0}^{4}f_{\psi}^{\alpha}=\psi,\quad\sum_{\alpha=0}^{4}f_{\phi}^{ \alpha}=\phi. \tag{11}\]
The Lattice Boltzmann equation (LBE) models transport as well as the interaction of distribution functions between and at lattice points respectively. Since two wave equations need to be solved, we also introduce two associated LBEs for \(\psi\) and \(\phi\) respectively
\[f_{\psi|\phi}^{\alpha}\left(\vec{x}+\vec{c}^{\alpha}\Delta t,t+\Delta t\right)=\] \[f_{\psi|\phi}^{\alpha}(\vec{x},t)-\frac{\Delta t}{\tau}\left[f_{ \psi|\phi}^{\alpha}(\vec{x},t)\right.\left.-f_{\text{eq},\psi|\phi}^{\alpha}( \vec{x},t)\right], \tag{12}\]
where the notation \((\psi|\phi)\) indicates that either \(\psi\) or \(\phi\) need to be chosen for the whole equation and the common BGK approximation is employed, see [21] and [22]. Equation (12) is universal to many Lattice Boltzmann models. The specific physics can be modeled by choosing the equilibrium distribution functions \(f_{\text{eq},\psi|\phi}^{\alpha}\) and relaxation time \(\tau\) in a certain way. In order to model a wave equation Chopard et al. propose \(\tau=0.5\Delta t\) and
\[f_{\text{eq},\psi|\phi}^{0} =a_{0,\psi|\phi}(\psi|\phi)\] \[f_{\text{eq},\psi|\phi}^{\alpha} =a_{\psi|\phi}(\psi|\phi)+b\frac{\vec{c}^{\alpha}\cdot\vec{J}_{\psi|\phi}}{2c^{2}},\text{ for }\alpha\neq 0,\] \[\text{with }\vec{J}_{\psi|\phi} =\sum_{\alpha=0}^{4}\vec{c}^{\alpha}f_{\psi|\phi}^{\alpha}(\vec{x},t), \tag{13}\]
where again the notation \((\psi|\phi)\) indicates that either \(\psi\) or \(\phi\) need to be chosen for the whole equation. The parameters also need to fulfill the requirements
\[b=1,\quad\text{conservation of }\vec{J}_{\psi|\phi}\] \[a_{0,\psi|\phi}+4a_{\psi|\phi}=1,\quad\text{conservation of }\psi|\phi \tag{14}\] \[a_{0,\psi|\phi}\geq 0,\quad\text{stability}\]
and
\[c_{s}|c_{d}=\frac{\Delta h}{\Delta t}\sqrt{2a_{\psi|\phi}}, \tag{15}\]
see Chopard [1]. Equation (15) allows us to adjust the macroscopic wave speed modeled by the LBM independently of the time step \(\Delta t\) or the lattice spacing \(\Delta h\) by choosing \(a_{\psi|\phi}\) accordingly. We exploit this in order to be able to simulate
both wave equations (9) on the same lattice, i.e. fixed \(\Delta h\), and the same time discretization, i.e. fixed \(\Delta t\). Apart from (15), the requirements (14) still need to be fulfilled. This can be accomplished for the general case \(c_{s}<c_{d}\) by setting
\[0\leq a_{\phi}\leq 0.25,\]
\[a_{\psi}=\frac{c_{s}^{2}}{c_{d}^{2}}a_{\phi}, \tag{16}\]
\[\Delta t=\frac{\Delta h}{c_{s}}\sqrt{2a_{\psi}}=\frac{\Delta h}{c_{d}}\sqrt{2a _{\phi}}.\]
Note that the Courant-Friedrichs-Lewy (CFL) stability condition [23], which is critical for the numerical analysis of hyperbolic PDEs with explicit schemes, for the larger - and more critical - wave speed
\[\frac{c_{d}\Delta t}{\Delta h}\leq 1, \tag{17}\]
is always guaranteed by (16).
The LBE (12) is only part of the overall algorithm that is used to solve a plane strain problem as summarized in Fig. 3. The other parts of the algorithm are discussed in the following.
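Before turning to those parts, the core collide-and-stream update (11)–(16) itself fits in a few lines; a sketch reusing the velocity table C and the stream helper from the snippet above, with tau = 0.5*dt as prescribed (so dt/tau = 2) and periodic streaming assumed for brevity:

```python
import numpy as np

# parameter choice per eq. (16), with cs/cd = 1/sqrt(3) as in Sec. 4
a_phi = 0.2                    # free choice in (0, 0.25]
a_psi = (1.0/3.0) * a_phi      # a_psi = (cs/cd)^2 * a_phi
b = 1.0                        # conservation of J, eq. (14)

def lbm_step(f, a, c=1.0):
    """One collide-and-stream update, eq. (12), for one field (psi or phi).
    f has shape (5, nx, ny); with tau = 0.5*dt the BGK factor dt/tau is 2."""
    field = f.sum(axis=0)                          # zeroth moment, eq. (11)
    J = np.tensordot(C, f, axes=(0, 0))            # current J, eq. (13)
    feq = np.empty_like(f)
    feq[0] = (1 - 4*a) * field                     # a0 = 1 - 4a, eq. (14)
    for alpha in range(1, 5):
        cJ = C[alpha, 0]*J[0] + C[alpha, 1]*J[1]
        feq[alpha] = a*field + b*cJ/(2*c**2)       # eq. (13)
    return stream(f - 2.0*(f - feq))               # collision + streaming
```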
The initial preprocessing step builds the lattice for a given geometry and also computes cells at each boundary lattice point as depicted in Fig. 4. Without going into detail, the algorithm for creating individual cells starts with a quadratic cell of side length \(\Delta h\) centered around a boundary lattice point. This original cell is subsequently modified to match the boundary geometry. The volume \(V_{C}\) of a cell and the surface that such a cell \(C\) shares with the external boundary \(\partial C_{\text{ext}}\subset\partial\mathcal{B}\) are relevant for the computation of the acceleration at boundary lattice points later on.
After preprocessing, the material velocity \(\dot{\vec{u}}\) and the displacement \(\vec{u}\) are initialized first. Subsequently, \(\psi\) and \(\phi\) are initialized by a finite difference approximation of (7). Lastly, the initial distribution functions are set to the value of the equilibrium distribution function
\[f^{\alpha}(\vec{x},0)_{\psi|\phi}=f^{\alpha}_{\text{eq},\psi|\phi}(\psi(\vec{ x},0)|\phi(\vec{x},0)). \tag{18}\]
In the time loop, the acceleration is computed from the Navier-Cauchy equation (8) at interior lattice points, whereas boundary conditions determine the acceleration at the boundary lattice points. Once the acceleration at each lattice point is known, the displacement is computed by explicit integration via the Newmark method, i.e.
\[\vec{u}(\vec{x},t+\Delta t)=\vec{u}(\vec{x},t)+\Delta t\dot{\vec{ u}}(\vec{x},t)+\frac{\Delta t^{2}}{2}\ddot{\vec{u}}(\vec{x},t),\] \[\dot{\vec{u}}(\vec{x},t+\Delta t)=\dot{\vec{u}}(\vec{x},t)+ \Delta t\,\ddot{\vec{u}}(\vec{x},t). \tag{19}\]
Figure 3: Summary of the employed lattice Boltzmann algorithm.
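The explicit integration (19) is a plain Newmark update per lattice point, with the acceleration supplied by (8) in the interior or by the boundary conditions; a minimal sketch:

```python
def newmark_step(u, v, a, dt):
    """Explicit Newmark update, eq. (19); u, v, a are displacement, velocity and
    acceleration arrays of shape (2, nx, ny)."""
    u_new = u + dt*v + 0.5*dt**2 * a
    v_new = v + dt*a
    return u_new, v_new
```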
After the integration step, the displacement field is already updated, i.e. \(\vec{u}(\vec{x},t+\Delta t)\) is determined at all lattice points. The rotation \(\psi(\vec{x},t)\), the dilatation \(\phi(\vec{x},t)\) and the associated distribution functions \(f^{\alpha}_{\psi|\phi}\) have not been updated yet, but they are required to compute the acceleration at interior lattice points in the next time step.
We prepare the required update by computing rotation and dilatation fields as well as distribution functions at the boundary lattice points. All of these must be consistent with the applied boundary conditions as well, see the next section for details. Subsequently, the rotation and dilatation fields are updated in the interior by the LBM. This step includes the update of all interior distribution functions by (12) and the computation of the rotation and the dilatation by (11).
### Boundary Conditions
In this section, the treatment of boundary conditions is explained in more detail since it is the most complex part of the proposed LBM. The overall strategy is to first determine the acceleration \(\vec{\tilde{u}}(\vec{x},t)\), that is consistent with the boundary conditions for each boundary lattice point. Subsequently, the displacement field is updated everywhere via the Newmark integration mentioned above. Last, the rotation and dilatation fields as well as the distribution functions are reconciled with the displacement.
#### 3.1.1 Consistent Acceleration at Boundary Lattice Points
For Neumann type boundary conditions the consistent acceleration is determined by computing cells \(C\) with size \(V_{C}\) and boundary \(\partial C\) around each boundary lattice point \(\vec{x}_{k}\) as shown in Fig. 4. For each of these cells, we consider a balance of momentum
\[\int_{C}\rho\ddot{\vec{u}}(\vec{x}_{k},t)\,\mathrm{d}v=\] \[\int_{\partial C_{\text{int}}}\vec{\sigma}(\vec{x},t)\vec{n}\,\mathrm{d}a+\int_{\partial C_{\text{ext}}}\vec{t}^{\text{*}}(\vec{x},t)\,\mathrm{d}a, \tag{20}\]
where \(\partial C_{\text{int}}\) is the part of the boundary of the cell which is shared with neighboring cells and \(\partial C_{\text{ext}}\) is part of the boundary of the cell that is shared with the boundary of the body. Equation (20) is simplified by assuming that \(\rho\) and \(\vec{\tilde{u}}(\vec{x}_{k},t)\) are constant across the cell and that the stress \(\vec{\sigma}_{kr}\) is constant for each segment of the internal boundary shared with a particular neighbor \(\vec{x}_{r}\),
\[\ddot{\vec{u}}(\vec{x}_{k},t)\approx\] \[\frac{1}{\rho V_{C}}\left(\sum_{r\in\text{Neighbors}}\vec{\sigma}_{kr}\vec{n}_{kr}l_{kr}+\int_{\partial C_{\text{ext}}}\vec{t}^{\text{*}}(\vec{x},t)\,\mathrm{d}a\right)\,. \tag{21}\]
The surface measure, i.e. the length of the boundary segment in 2D, and the normal vector are denoted by \(l_{kr}\) and \(\vec{n}_{kr}\), respectively. The stress tensor at each segment is approximated by
\[\vec{\sigma}_{kr}=\frac{1}{2}\left(\vec{\sigma}(\vec{x}_{k},t)+\vec{\sigma}( \vec{x}_{r},t)\right), \tag{22}\]
Figure 4: Cells are generated around each boundary lattice point in order to apply Neumann boundary conditions.
where the stress at the lattice points \(\vec{x}_{k}\) and \(\vec{x}_{r}\) is computed by a finite difference approximation of (3).
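A sketch of the resulting boundary update (21)–(22), assuming the cell geometry (segment lengths, normals, external surface measure) has been precomputed in the preprocessing step and that the traction is constant along \(\partial C_{\text{ext}}\):

```python
import numpy as np

def neumann_acceleration(sigma_k, neighbors, traction, ext_length, V_C, rho):
    """Cell-averaged acceleration at a boundary lattice point, eq. (21).

    sigma_k    : 2x2 stress tensor at the boundary point x_k
    neighbors  : iterable of (sigma_r, n_kr, l_kr) per internal segment
    traction   : prescribed traction t* on the external segment
    ext_length : length of the external segment dC_ext
    V_C, rho   : cell area (2D) and mass density
    """
    force = ext_length * np.asarray(traction, dtype=float)
    for sigma_r, n_kr, l_kr in neighbors:
        sigma_kr = 0.5 * (sigma_k + sigma_r)   # segment stress, eq. (22)
        force += l_kr * (sigma_kr @ n_kr)
    return force / (rho * V_C)
```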
For Dirichlet type boundary conditions \(\vec{u}=\vec{u}^{*}\) on \(\partial\mathcal{B}_{u}\), where \(\vec{u}^{*}\) is the prescribed displacement value, the acceleration \(\vec{\ddot{u}}(\vec{x}_{k},t)\) at boundary lattice points is determined from the integration scheme (19). Although extrapolation to a non-lattice conforming boundary is also possible, we limit the discussion of Dirichlet boundary conditions to situations in which boundary lattice points lie exactly on the boundary. In this case, we have
\[\vec{u}(\vec{x}_{k},t+\Delta t)=\vec{u}^{*}. \tag{23}\]
Thus, (19) can be solved for the required acceleration
\[\vec{\ddot{u}}(\vec{x}_{k},t)=\frac{2}{\Delta t^{2}}(\vec{u}^{*}(t)-\vec{u}( \vec{x}_{k},t))-\frac{2}{\Delta t}\dot{\vec{u}}(\vec{x}_{k},t). \tag{24}\]
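In code, (24) is a direct rearrangement of the Newmark formula; a sketch:

```python
def dirichlet_acceleration(u_star, u, v, dt):
    """Acceleration enforcing u(x_k, t+dt) = u*, eq. (24)."""
    return 2.0/dt**2 * (u_star - u) - 2.0/dt * v
```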
#### 3.1.2 Consistent Displacement, Rotation, Dilatation and Distribution Functions at Boundary Lattice Points
After the acceleration at time \(t\) is known at all lattice points, the displacement as well as the distribution functions for the next time step need to be computed in a consistent manner. In this context, we regard the distribution functions, and
Figure 5: Updating the displacement \(\mathbf{u}\) as well as the rotation \(\psi\) and dilatation \(\phi\) in a simple square domain. The outer circles represent the state of \(\psi\) and \(\phi\) at the particular lattice points, whereas the inner circles represent the state of \(\mathbf{u}\). Yellow indicates that quantities still have a value associated with the previous time step \(t\), whereas green color indicates that \(\mathbf{u}\) or \(\psi\) and \(\phi\) are already updated to their values at \(t+\Delta t\). a) the state after the previous time step. b) integration is performed at all lattice points which updates the displacement. c) finite differences (stencil is indicated by the red lines) are used to update \(\psi\) and \(\phi\) at the boundary lattice points in a way that is consistent with the new displacement field. d) all boundary points are updated. e) the LBM update, i.e. solving the wave equations, leads to a consistent rotation and dilatation at interior lattice points (red lines indicate from which neighbors information is streamed to an interior lattice point). f) all interior points have consistent fields after the LBM update. g) intermediate ‘second row’ boundary points are more problematic since there is also information streamed from boundary points that does not originate from the LBE for the wave equations, but from the handling of boundary conditions (red lines indicate from which neighbors information is streamed to an interior lattice point). h) Fields are consistent – considering the previous remarks – at all lattice points.
consequently \(\psi\) and \(\phi\), to be consistent with the updated displacement field if
\[\begin{split}\sum_{\alpha=0}^{4}f_{\psi}^{\alpha}(\vec{x},t+\Delta t )=&\psi(\vec{x},t+\Delta t)\\ &\approx(\nabla\times\vec{u})|_{(\vec{x},t+\Delta t)},\\ \sum_{\alpha=0}^{4}f_{\phi}^{\alpha}(\vec{x},t+\Delta t)=& \phi(\vec{x},t+\Delta t)\\ &\approx(\nabla\cdot\vec{u})|_{(\vec{x},t+\Delta t)}.\end{split} \tag{25}\]
Herein, \((*)|_{(\vec{x},t+\Delta t)}\) means that the spatial derivative \(*\) is performed at the lattice point \(\vec{x}\) and time \(t+\Delta t\) via second order accurate finite differences, which is a non-local operation that also involves neighbor lattice points to \(\vec{x}\). Fig. 5 displays the utilized stencils for this operations as red lines.
The starting point for the algorithm is the situation after the previous time step has been completed as depicted for a quadratic domain in Fig. 5 a). In this figure, lattice points are represented as circles. In order to illustrate the strategy of obtaining consistent displacement and distribution functions, the color of the inner circles also represents the state of the displacement field, i.e. the not yet updated state \(\vec{u}(*,t)\) is represented by yellow and the updated state \(\vec{u}(*,t+\Delta t)\) is indicated by green color. Similarly, the color of the outer circle indicates the state of the rotation \(\psi\) and dilatation \(\phi\). A yellow outer circle indicates that rotation and dilatation are not updated yet, i.e. the state \([\psi(*,t),\phi(*,t)]\), whereas a green outer circle indicates the updated state \([\psi(*,t+\Delta t),\phi(*,t+\Delta t)]\).
The acceleration at all lattice points is known from the Navier-Cauchy equation (8) or the boundary conditions (21) and (24), which allows to update the displacement field via (19) as a next step. This leads to an inconsistent situation where the displacement is already updated, but the rotation and the dilatation fields are not, see Fig. 5 b).
The rotation and dilatation fields in the interior are updated via the LBM for the wave equation. This works fine in the interior, where we have pointed out that the Navier-Cauchy equation and the wave equations (9) are equivalent. Consequently the update of the displacement field by (8) and (19) on the one hand, and the update of the distribution functions by (12) and the derived rotation and the dilatation by and (11) on the other hand are consistent within the limits of the LBM by Chopard et al. [11], see Fig. 5 e) and f).
However, at boundary lattice points the displacement field is updated from the boundary conditions and the distribution functions can only partially be updated via the LBE.
Moreover, the neighbors of boundary lattice points, the 'second row' boundary lattice points, also cannot be in a consistent state in the sense of (25), since the finite difference approximation of \(\nabla\times\vec{u}\) and \(\nabla\cdot\vec{u}\) at those points depends on the displacement of boundary lattice points, which in turn is determined only by the boundary conditions and not by the Navier-Cauchy equation.
Thus, in order to model the boundary conditions for the LBM correctly, it is necessary to accomplish two things:
* Setting the distribution functions at boundary lattice points consistent with (25).
* Modifying the LBE at 'second row' lattice points in such a way that consistency is achieved at these points in the sense of (25).
The first requirement is satisfied by setting
\[\begin{split}\psi(\vec{x}_{k},t+\Delta t)=&(\nabla \times\vec{u})|_{(\vec{x}_{k},t+\Delta t)},\\ \phi(\vec{x}_{k},t+\Delta t)=&(\nabla\cdot\vec{u}) |_{(\vec{x}_{k},t+\Delta t)}\end{split} \tag{26}\]
at boundary lattice points \(\vec{x}_{k}\), see Fig. 5 c) and d), and
\[\begin{split} f_{\psi|\phi}^{0}(\vec{x}_{k},t+\Delta t)=& a _{0,\psi|\phi}(\psi|\phi)(\vec{x}_{k},t+\Delta t),\\ f_{\psi|\phi}^{\alpha}(\vec{x}_{k},t+\Delta t)=& a _{\psi|\phi}(\psi|\phi)(\vec{x}_{k},t+\Delta t)\\ &+b\frac{\vec{c}^{\alpha}\cdot\vec{J}_{\psi|\phi}(\vec{x}_{k},t) }{2c^{2}}\\ &\text{for }\alpha\neq 0.\end{split} \tag{27}\]
In order to fulfill the second requirement, we envision that all changes of \(\psi\) and \(\phi\) at a boundary lattice point over one time step \(t\to t+\Delta t\) are transported as waves to its neighbors. Assuming that a linear change in time is a
reasonable approximation, the average state of a boundary lattice point during this transition is given by the average of its distribution functions at two discrete time steps, i.e.
\[\tilde{f}^{\alpha}(\vec{x}_{k},\tilde{t})=\frac{1}{2}\left(f^{\alpha}_{\psi| \phi}(\vec{x}_{k},t+\Delta t)+f^{\alpha}_{\psi|\phi}(\vec{x}_{k},t)\right). \tag{28}\]
The transport of this intermediate state to the neighboring lattice points is obtained by the modified LBE
\[f^{\alpha}_{\psi|\phi} (\vec{x}+\vec{c}^{\alpha}\Delta t,t+\Delta t)= \tag{29}\] \[\hat{f}^{\alpha}_{\psi|\phi}(\vec{x},t)-\frac{\Delta t}{\tau}\left[\hat{f}^{\alpha}_{\psi|\phi}(\vec{x},t)-f^{\alpha}_{\text{eq},\psi|\phi}(\vec{x},t)\right],\]
where
\[\hat{f}^{\alpha}_{\psi|\phi}=\begin{cases}\tilde{f}^{\alpha}_{\psi|\phi}(\vec {x},\tilde{t}),&\text{if }\vec{x}\text{ is boundary lattice point}\\ f^{\alpha}_{\psi|\phi}(\vec{x},t),&\text{otherwise.}\end{cases}\]
Thus, the modification is only employed if \(\vec{x}+\vec{c}^{\alpha}\Delta t\) is a 'second row' boundary lattice point. This is not exact, but it is an approximation that leads to a reasonable state of the 'second row' boundary lattice points, see Fig. 5 g) and h).
### Periodic Synchronization
The proposed LBM for plane strain is susceptible to instabilities if the dilatation and rotation fields \(\psi\) and \(\phi\) become inconsistent with the displacements \(\mathbf{u}\). Since there is no inherent synchronization of these fields, rounding errors are amplified over time and eventually the computed acceleration is sufficiently misaligned with the actual displacement such that the Navier-Cauchy equation (8) is violated. Small inconsistencies originate from the handling of the boundary conditions as described in the last paragraphs of the previous section.
In order to remedy this problem, we introduce another step in the LBM algorithm, that periodically (every \(l\)-th timestep where \(l\gg 1\)) computes \(\psi\) and \(\phi\) directly from the displacement field with a finite difference approximation of (7). As soon as \(\psi(\vec{x}_{k},t_{l})\) and \(\phi(\vec{x}_{k},t_{l})\) are known, the distribution functions are also corrected according to (27) and the algorithm continues normally in the next time step.
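A sketch of this synchronization step, computing \(\psi\) and \(\phi\) on interior points with second-order central differences per (7); boundary values follow the treatment of the previous section:

```python
import numpy as np

def synchronize(u, dh):
    """Recompute psi = curl(u) and phi = div(u) from the displacement field,
    eq. (7), on the interior of a (2, nx, ny) array u."""
    dux_dx = (u[0, 2:, 1:-1] - u[0, :-2, 1:-1]) / (2*dh)
    duy_dy = (u[1, 1:-1, 2:] - u[1, 1:-1, :-2]) / (2*dh)
    duy_dx = (u[1, 2:, 1:-1] - u[1, :-2, 1:-1]) / (2*dh)
    dux_dy = (u[0, 1:-1, 2:] - u[0, 1:-1, :-2]) / (2*dh)
    phi = dux_dx + duy_dy            # dilatation
    psi = duy_dx - dux_dy            # z-component of the rotation
    return psi, phi
```

After this, the distribution functions are reset from \(\psi\) and \(\phi\) according to (27).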
## 4 Numerical Examples
In order to demonstrate the performance of the proposed LBM, we perform several numerical experiments in which the LBM is compared to results obtained via the established Finite Element Method (FEM). The experiments also demonstrate that the proposed LBM successfully solves boundary value problems that are prevalent in engineering practice and is not restricted to often rather academic types of boundary value problems, e.g. with periodic boundary conditions.
In all numerical examples, we formulate the problem in terms of the ratio of wave speeds \(\nicefrac{{c_{s}}}{{c_{d}}}=\nicefrac{{1}}{{\sqrt{3}}}\), the wave speed \(c_{s}\), the shear modulus \(\mu\), the length scale \(L\), and the reference displacement which is also set to \(L\). The parameters of the equilibrium distribution functions are defined by (14) and (16), where a rather extreme value of \(a_{0,\phi}=0.9999\) has been found to be required for sufficient stability. Note that setting \(a_{0,\phi}=0.9999\) also severely reduces the time step.
The benchmark FEM simulations are performed with bi-linear finite elements and implicit time integration via the standard Newmark method.
Figure 6: A square domain subjected to a tensile load. The right plot displays the applied stress \(\sigma_{0}(t)\) as a function of time.
### Tension
For the first numerical example, a square domain is subjected to a time-dependent tensile traction load \(\vec{t}^{*}=\pm\sigma_{0}\vec{e}_{y}\) at the top and bottom edges, see Fig. 6. The load is linearly increased from \(\sigma_{0}(t=0)=0\) to \(\sigma_{0}(t=\nicefrac{{L}}{{c_{s}}})=0.005\mu\) and held constant afterwards. In this simulation, no periodic synchronization, see section 3.2, is employed. In order to study the performance of the LBM algorithm, we compare the LBM results to an FEM simulation. Fig. 7 shows a deformed heat map of the FEM results in the background, whereas black squares indicate the displaced position of the lattice points. Both simulations are evaluated at time \(t=\nicefrac{{L}}{{c_{s}}}\) and the deformation is scaled by a factor of 100. It can be observed that the LBM matches the FEM results well and predicts phenomena such as lateral contraction accurately. Fig. 8 explicitly displays the displacement of the top left corner \(P\), see also Fig. 13, and confirms these findings. We can
Figure 8: Displacement at the top left corner \(P\) of a square domain subjected to a tensile load.
Figure 7: Deformed heat map for a square domain subjected to a tensile load at time \(t=\nicefrac{{L}}{{c_{s}}}\). The deformation is scaled by factor 100. The heat map displays the FEM benchmark results, whereas the black squares indicate the displaced positions of the lattice points.
observe an expected dynamic overshoot and low-frequency oscillations in both displacement components, which both the FEM and LBM simulations predict. Nonetheless, Fig. 8 also reveals that erroneous higher-frequency oscillations occur in the later stages of the LBM simulation, see the green graph in the plot of \(u_{y}\) for \(t>1.5\nicefrac{{L}}{{c_{s}}}\), which indicates that a periodic synchronization may be useful.
As discussed above, we assume that inconsistencies in the sense of violations of (25) are the cause of these instabilities and that they occur primarily at the 'second row' boundary points, see also Fig. 5 g). In order to test this hypothesis, an error measure that is in line with (25) is defined as
\[e(\mathbf{x},t)=\left\|\left(\sum_{\alpha=0}^{4}f_{\psi}^{\alpha}(\vec{x},t)-( \nabla\times\vec{u})|_{(\vec{x},t)}\right)\right\|_{2}. \tag{30}\]
Fig. 9 displays this error at time \(t\approx 0.002\nicefrac{{L}}{{c_{s}}}\) after the corresponding time step has been completely processed. It can be observed that inconsistencies indeed occur at the 'second row' boundary points at the top and bottom edges. Although the error is small, without periodic synchronization it amplifies and eventually manifests as the oscillations that can be observed in Fig. 8.
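Since \(\psi\) is a scalar in 2D, the error measure (30) reduces to an absolute difference; a sketch, with psi_from_u computed as in the synchronization snippet above:

```python
import numpy as np

def consistency_error(f_psi, psi_from_u):
    """Pointwise error e(x, t) of eq. (30): mismatch between the zeroth moment
    of the rotation distributions and the finite-difference curl of u."""
    return np.abs(f_psi.sum(axis=0) - psi_from_u)
```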
### Simple Shear
The second numerical example uses the same geometric configuration, but differs in terms of the applied boundary conditions. The top edge is subjected to a shear traction that is linearly increased over time, i.e. \(\sigma_{0}(t)=0.005\,\mu\,c_{s}t/L\), see Fig. 10. The bottom edge is subjected to homogeneous Dirichlet boundary conditions, \(\vec{u}=\vec{0}\). Furthermore, the LBM simulations are run with a periodic synchronization every \(50^{\rm th}\) time step and without synchronization. Fig. 11 displays a deformed (scaled by factor 100) heat map of the FEM results in the background and the displaced lattice points as black squares of the synchronized LBM simulation in the foreground at time \(t=\nicefrac{{L}}{{c_{s}}}\). The LBM accurately captures the shear deformation as well. However, as can be observed in Fig. 12, in this experiment it is strictly necessary to employ the synchronization step, since the LBM simulations without synchronization differ severely from the FEM benchmark after \(t\approx 0.7\nicefrac{{L}}{{c_{s}}}\).
Figure 10: A square domain subjected to a shear load. The right plot displays the applied stress \(\sigma_{0}(t)\) as a function of time.
Figure 9: Error \(e\) for a square domain subjected to a tensile load at \(t\approx 0.002\nicefrac{{L}}{{c_{*}}}\). The maximum error is \(4.3\cdot 10^{-12}\) which coincides with the maximum value displayed in the legend color scheme, i.e. dark red.
### Plate with a Circular Hole
The last numerical example again considers a square domain that is subjected to a tensile traction load. In order to illustrate the LBM's capabilities to handle non-lattice conforming geometries, the domain includes a circular hole of diameter \(0.266L\), see Fig. 13. As in the previous example, we run LBM simulations with periodic synchronization every \(50^{\rm th}\) time step and without any periodic synchronization. Fig. 14 displays the scaled deformed configuration for the FEM in the background, as well as for the LBM with synchronization as the black squares in the foreground at time \(t=\nicefrac{{L}}{{c_{s}}}\). Again the LBM agrees well with the FEM reference. However, this is only the case if the synchronization is utilized, see Fig. 15, as the simulation becomes unstable quickly if synchronization is omitted. The oscillations
Figure 11: Deformed heat map for a square domain subjected to a shear load at time \(t=\nicefrac{{L}}{{c_{s}}}\). The deformation is scaled by factor 100. The heat map displays the FEM benchmark results, whereas the black squares indicate the displaced positions of the lattice points. For the LBM results a periodic synchronization was performed every \(50^{\rm th}\) time step.
Figure 12: Displacement at the top left corner \(P\) of a square domain subjected to a shear load.
originate from the non-lattice conforming boundaries at the hole as can also be observed in Fig. 15: the displacement field close to the hole at point \(Q\) becomes unstable long before oscillations can be observed at point \(P\).
## 5 Conclusion
In this work, a new Lattice Boltzmann Method (LBM) for solving general plane strain problems is proposed. The plane strain problem is governed by the Navier-Cauchy equation, which can be decomposed into two wave equations with different wave speeds for the rotational part of the displacement field and the dilatational part, respectively. Based on this observation, the new LBM is constructed by employing the established LBM by Chopard et al. [11] to solve the two wave equations separately. Chopard et al.'s approach allows enough flexibility to choose the simulated macroscopic
Figure 14: Deformed heat map for a square domain with a hole subjected to a tensile load at time \(t=\nicefrac{{L}}{{c_{s}}}\). The deformation is scaled by factor 100. The heat map displays the FEM benchmark results, whereas the black squares indicate the displaced positions of the lattice points. For the LBM results a periodic synchronization was performed every \(50^{\rm th}\) time step.
Figure 13: A square domain with a hole subjected to a tensile load. Point \(Q\) is located at (\(-0.175L,0.025L\)) relative to a coordinate system which has its origin in the center of the hole. The right plot displays the applied stress \(\sigma_{0}(t)\) as a function of time.
wave speed rather independently of the time step and lattice spacing. Thus, the proposed method solves both wave equations on the same D2Q5 lattice with the same time discretization. However, this also limits the maximum time step and reduces the computational efficiency of the approach in situations in which larger time step sizes may be feasible. The displacement field is eventually obtained by integrating the Navier-Cauchy equation and making use of rotation and dilatation fields computed by the LBM.
In order to apply Dirichlet and Neumann boundary conditions, a consistent acceleration is computed at boundary lattice points. This is then used for the integration step mentioned above at these points. In order to reconcile the displacements obtained in this way with the LBM quantities such as the rotation and dilatation as well as the distribution functions, the rotation and dilatation fields are computed from a finite difference approximation of the gradient of the displacement field at boundary lattice points. Afterwards, the distribution functions are computed consistently with these rotation and dilatation fields at the boundary points. We mention some of the remaining causes of inconsistencies between rotation and dilatation fields on the one side and the displacement field on the other side.
These inconsistencies manifest as instabilities in the performed simulations. We address this issue by performing a periodic synchronization in which we compute the rotation and dilatation fields from a finite difference approximation of the gradient of the displacement and subsequently set the distribution functions accordingly.
Lastly, several numerical benchmarks highlight the performance of the new method compared to benchmark FEM simulations. The simulation of a square domain without periodic synchronization under tensile loading shows that the LBM accurately captures simple loading cases and simple domains even without the synchronization step. However, this example also reveals that the inconsistencies mentioned above indeed occur. The second numerical example studies a square domain under simple shear loading conditions. Here, the results only accurately match the FEM simulations if the synchronization step is employed every \(50^{\mathrm{th}}\) time step. The third numerical example considers the square domain with a hole under tensile load and illustrates that the developed LBM is indeed capable of solving problems in which the geometry does not conform with the lattice, i.e. the boundary does not exactly match the lattice point positions.
We find the performance of the LBM in relation to the FEM promising. However, the periodic synchronization step as well as the rather fine time discretization that is dictated by the method remain unsatisfactory. In future work, we envision investigating alternative LBM approaches, but we also want to address the shortcomings of the present LBM by refining the treatment of boundary conditions and by exploring the possibility of giving up the same time discretization
Figure 15: Displacement at the top left corner \(P\) (top row) and close to the hole \(Q\) (bottom row) of a square domain with a hole subjected to a tensile load.
for both simulated wave equations. This would allow us to use larger time steps and thus increase computational efficiency, but this approach will also involve an additional interpolation step between time steps.
## Acknowledgments
Open access funding enabled and organized by Projekt DEAL. The authors gratefully acknowledge the funding by the German Research Foundation (DFG) within the project 423809639.
|
2309.03103 | ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation
Following the Metaphor Identification Procedure | This paper presents ContrastWSD, a RoBERTa-based metaphor detection model
that integrates the Metaphor Identification Procedure (MIP) and Word Sense
Disambiguation (WSD) to extract and contrast the contextual meaning with the
basic meaning of a word to determine whether it is used metaphorically in a
sentence. By utilizing the word senses derived from a WSD model, our model
enhances the metaphor detection process and outperforms other methods that rely
solely on contextual embeddings or integrate only the basic definitions and
other external knowledge. We evaluate our approach on various benchmark
datasets and compare it with strong baselines, indicating the effectiveness in
advancing metaphor detection. | Mohamad Elzohbi, Richard Zhao | 2023-09-06T15:41:38Z | http://arxiv.org/abs/2309.03103v2 | ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure
###### Abstract
This paper presents ContrastWSD, a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD) to extract and contrast the contextual meaning with the basic meaning of a word to determine whether it is used metaphorically in a sentence. By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods that rely solely on contextual embeddings or integrate only the basic definitions and other external knowledge. We evaluate our approach on various benchmark datasets and compare it with strong baselines, indicating the effectiveness in advancing metaphor detection.
## 1 Introduction
A metaphor is a rhetorical device that compares, implicitly, two objects or concepts that are seemingly dissimilar but share symbolic or figurative similarities, with the intention of illuminating a fresh perspective and a more elaborate and nuanced comprehension of the world. Metaphors are not only intrinsic to creative writing, but they are also ubiquitous in human communication. Metaphors typically involve employing words in a manner that diverges from their basic definition, and their figurative sense is dependent on the context in which they are used. While novelty is an indicator of greater creativity in metaphors, sometimes they become widely used and established in the language, ultimately entering the lexicon as conventional metaphors, also known as dead metaphors.
Automatic metaphor detection, the process of identifying metaphoric expressions within a given text, is essential for various Natural Language Processing (NLP) tasks, such as sentiment analysis, text paraphrasing, and machine translation [17]. The development of metaphor detection models presents a significant challenge, as it requires the identification and analysis of both the basic and contextual meanings of words within their respective contexts, as recommended by the Metaphor Identification Procedure (MIP) [1, 2] (see Figure 1).
Early approaches relied on extensive manual efforts in feature engineering. However, with the advent of word embedding techniques and neural networks, more efficient and effective methods emerged for this task [20]. Notably, transformer-based models have demonstrated promising capabilities in detecting metaphors [21, 2, 1, 1]. Despite these advancements, there is still scope for further improvement to simulate the Metaphor Identification Procedure effectively. Therefore, the main objective of this study is to investigate the efficacy of transformer-based models in word sense disambiguation and in following the systematic Metaphor Identification Procedure to extract and contrast between the contextual word sense and the basic definitions of a target word to enhance automatic metaphor detection.
Our proposed method is evaluated on established benchmark datasets, and the results demonstrate significant improvements. Comparatively, our model consistently achieves superior (and occasionally comparable) precision, recall, and F1 scores when compared to other recent and robust metaphor detection models.
## 2 Background
Metaphor detection has been a subject of active research in NLP for several years. Traditional approaches to metaphor detection have relied on hand-crafted or automatically acquired linguistic
features, but recent advancements in NLP have resulted in the development of transformer-based pre-trained models that have demonstrated state-of-the-art performance across various NLP tasks [11, 12]. DeepMet [13] formulates metaphor detection as a reading comprehension task and uses a RoBERTa-based model that incorporates global and local text context, question information, and part-of-speech as features to detect metaphors.
MelBERT [11] is a RoBERTa-based model that incorporates linguistic metaphor identification theories. MelBERT captures the context-dependent nature of metaphors and has demonstrated state-of-the-art performance on multiple benchmark datasets. While the authors considered the MIP procedure in their design, their focus was on leveraging contextual and out-of-context word embeddings to represent the word sense and basic definitions of the word. However, utilizing contextual word embeddings may not always accurately represent the word sense definition; instead, it may lean more towards the general contextual meaning. Similarly, out-of-context word embeddings may not necessarily reflect the basic meaning of the word, as they may be influenced by the frequent meaning, which might not align with the word's basic sense [15, 16]. In contrast, we encode both the contextual and basic definitions of the target words, which are extracted from the dictionary. This enables us to provide a more comprehensive understanding of their meanings and better align with the MIP procedure.
Researchers have also explored the use of external knowledge sources, such as definitions, word senses, and frames, to enhance the performance of metaphor detection models. For instance, Wan et al. (2021) used gloss definitions to improve metaphor detection by considering both the contextual embedding and the contextual definition that best fits the context. Similarly, Babieno et al. (2022) explored the integration of the most basic definitions from Wiktionary to improve MelBERT's performance, achieving comparable or superior results. In contrast, our model extracts the contextualized definitions and contrasts them with the basic definitions to align with the MIP procedure. FrameBERT [14] proposed a new approach that incorporates FrameNet [1] embeddings to detect concept-level metaphors, achieving comparable or superior performance to existing models. Although encoding concepts may improve the model's understanding of similarities, it might not fully capture the variations in word meanings in various contexts.
Word-Sense Disambiguation (WSD), which involves identifying the correct sense of a word in context, is a challenging task in NLP with various applications. Our study shows that WSD can aid in the process of identifying metaphors by disambiguating the word sense in given contexts. Multiple state-of-the-art WSD models have been proposed, including a modified version of BERT [12] trained on a combination of gloss selection and example sentence classification objectives. Bevilacqua and Navigli (2020) propose a method for incorporating knowledge graph information into WSD systems to improve their performance. The authors use a large-scale knowledge graph (DBpedia) to provide additional context and semantic information for each word in the text. SenseBERT [10] pre-trains BERT on a large-scale sense-annotated corpus using a modified loss function to incorporate sense-aware training objectives.
Incorporating WSD models to obtain contextual definitions for use in metaphor detection has also been explored. For example, Metaphorical Polysemy Detection (MPD) (Maudslay and Teufel, 2022) has been proposed, which focuses on detecting conventional metaphors in WordNet. By combining MPD with a WSD model, this method can determine whether a target word represents a conventional metaphor within a given sentence. The authors identified the two limitations mentioned earlier regarding MelBERT, which hinder its alignment with the MIP procedure: attempting to implicitly infer the word sense from the target word's contextual representation, and assuming that the out-of-context embedding of the target word represents its basic meaning.

Figure 1: The Metaphor Identification Procedure
To address these issues, the authors trained an MPD model jointly with a WSD model to detect the metaphoricity of a target word, leveraging the word senses predicted by the WSD model for the target word in a given context. However, we argue that this model still lacks full alignment with MIP since it implicitly contrasts the basic definition with the word sense.
To achieve a more explicit alignment with MIP, we propose a different approach. Firstly, we utilize a WSD model to extract the word sense from a lexicon. Secondly, we tackle the second problem by considering the first definition listed in Wiktionary as the basic definition. This choice aligns with the dictionary's recommendation to utilize the logical hierarchy of word senses (Babieno et al., 2022). The explicit contrast between the basic and word sense definitions corresponds better to steps 3 to 5 outlined in the flowchart of the MIP procedure (see Figure 1), as we elaborate on in the following section.
## 3 Methodology
In this section, we present the methodology used to develop and train ContrastWSD. Figure 2 provides an illustration of the data augmentation process and the subsequent metaphor detection process. We commence by introducing the datasets utilized in the study, followed by an overview of the word-sense augmentation procedure. Subsequently, we outline the modifications that were made to the MelBERT model's structure to enhance metaphor detection.
### Data Augmentation
A major contribution of our research is the adherence to the systematic approach of the Metaphor Identification Procedure for detecting linguistic metaphors in the VUA datasets. The MIP procedure (as outlined in Figure 1) involves: (1) comprehending the general meaning of the text, (2) determining lexical units, (3) identifying the contextual meaning of the units, and (4) verifying whether there is a more basic meaning; (5) if the contextual meaning deviates from the basic meaning, (6) the unit is labeled as metaphorical.
To align with MIP, we augmented the existing datasets used in the MelBERT model through a two-step procedure. Firstly, we employed a BERT WSD model fine-tuned on a sequence-pair ranking task (Yap et al., 2020) to extract the word sense contextual definition of the target word from WordNet. To retrieve the contextual word sense, we feed a sentence \(S_{c}=(w_{1},...,\,\texttt{[TGT]},w_{t},\texttt{[TGT]},...,w_{n})\) to the WSD model, where [TGT] is a special token marking the location of the target word \(w_{t}\). The WSD model then performs gloss selection from WordNet and chooses the best definition \(D_{c}\) of \(w_{t}\) that fits the context. Secondly, we retrieved the basic definitions from the datasets compiled by Babieno et al. (2022). The authors selected the first definition listed in the Wiktionary dictionary as the basic definition, following the dictionary's recommendation to utilize the logical hierarchy of word senses in their guidelines.
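To make the gloss-selection step concrete, here is a minimal Python sketch of how such an augmentation pass could look. It assumes the WordNet corpus is available through NLTK, and `score_fn` is a hypothetical placeholder for the sentence-pair ranking WSD model of Yap et al. (2020); all function and variable names are ours, not the authors' code.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def contextual_definition(tokens, target_index, score_fn):
    """Pick the WordNet gloss of the target word that best fits the context.

    `score_fn(marked_sentence, gloss)` is a placeholder for a sentence-pair
    ranking model; higher scores mean the gloss fits the context better.
    """
    target = tokens[target_index]
    # mark the target word with the special [TGT] tokens, as in S_c above
    marked = tokens[:target_index] + ["[TGT]", target, "[TGT]"] + tokens[target_index + 1:]
    marked_sentence = " ".join(marked)
    glosses = [synset.definition() for synset in wn.synsets(target)]
    if not glosses:
        # fall back to the target word itself when no sense is found
        # (mirrors dataset version 1 described in the Experiments section)
        return target
    return max(glosses, key=lambda gloss: score_fn(marked_sentence, gloss))
```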
### Model Structure
Our approach to metaphor detection involves treating it as a binary classification problem based on the target word in a given sentence \(S=(w_{1},...,w_{t},...,w_{n})\). Our aim is to predict whether the target word \(w_{t}\) is being used metaphorically or literally in the sentence. To accomplish this, we leverage the contextual and basic meanings of the target word in the given sentence. Our model utilizes three separate RoBERTa models to encode the sentence \(S\), the isolated target word \(w_{t}\), as well as the contextual and basic definitions \(D_{c}\) and \(D_{b}\).
Following the MelBERT design, we modify the sentence \(S\) by appending the POS tag to its end, and we enclose it with two segment separation tokens [SEP]. Additionally, we employ three types of extra embeddings: (1) The target embedding, used to indicate the target word. (2) The local context embedding, which marks either the words in the clause containing the target word between two commas or the definition and word sense. (3) The POS embedding, used to mark the position of the POS tag. We incorporate the target
word and the definitions by prepending the word to the definitions we retrieve from WordNet (for the word sense) or Wiktionary (for the basic definition). This format is similar to how words appear in a dictionary, and we do this to utilize their hidden representation later in the model. We separate the target word from the definitions using the segment separation token [SEP].
The RoBERTa models produce the hidden representations \(\mathbf{h}_{S}\), \(\mathbf{h}_{c}\), and \(\mathbf{h}_{b}\) encoding the sentence, the contextual definition, and the basic definition, respectively with the extensions as described. \(\mathbf{h}_{c}\) and \(\mathbf{h}_{b}\) are produced by averaging the embedding of all the output tokens.
\[(\mathbf{e}_{0},...,\mathbf{e}_{t},...,\mathbf{e}_{n})=\mathbf{h}_{S}=\texttt{RoBERTa}(S)\]
\[\mathbf{h}_{c}=avg(\texttt{RoBERTa}(D_{c}))\]
\[\mathbf{h}_{b}=avg(\texttt{RoBERTa}(D_{b}))\]
Since the MIP procedure focuses on the semantic contrast between the basic and contextual meanings of a word, we encode the basic and contextual meanings. Thus, our MIP layer uses the word sense embedding \(\mathbf{h}_{c}\) and the basic definition embedding \(\mathbf{h}_{b}\). We also use cosine similarity to measure the semantic gap between the embeddings, similar to the approach employed by Babieno et al. (2022).
\[\mathbf{h}_{MIP}=l(\mathbf{h}_{c}\oplus\mathbf{h}_{b}\oplus cos(\mathbf{h}_{c },\mathbf{h}_{b})) \tag{1}\]
where \(l(.)\) is a fully-connected layer. We have also introduced two additional helper layers to our model. The first layer learns the relationship between the target word's contextual embedding \(\mathbf{e}_{t}\) and the target word embedding adjacent to the word's sense \(\mathbf{e}_{t_{c}}\), while the second layer learns the relationship between the target word's contextual embedding \(\mathbf{e}_{t}\) and the target word embedding adjacent to the word's basic definition \(\mathbf{e}_{t_{b}}\). We believe that these helper layers will assist the MIP layer when the WSD model fails to distinguish between the word sense and the basic definition, particularly in the case of detecting novel metaphors that lack multiple definitions in the dictionary.
\[\mathbf{h}_{1}=l(\mathbf{e}_{t_{c}}\oplus\mathbf{e}_{t}\oplus cos(\mathbf{e}_{t_{c}},\mathbf{e}_{t}))\]
\[\mathbf{h}_{2}=l(\mathbf{e}_{t_{b}}\oplus\mathbf{e}_{t}\oplus cos(\mathbf{e}_{t_{b}},\mathbf{e}_{t}))\]
We concatenate the hidden vectors from the MIP and the two helper layers before feeding them to the final binary classification layer for metaphor prediction.
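As a concrete illustration of the classification head just described, the following PyTorch sketch implements the MIP layer of equation (1), the two helper layers, and the final classifier. The class name, the hidden size of 768 (assuming RoBERTa-base), and the two-way output layer are our own assumptions based on the text, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastWSDHead(nn.Module):
    """MIP layer (Eq. 1), two helper layers, and the binary classifier."""

    def __init__(self, hidden: int = 768):  # 768 assumes RoBERTa-base
        super().__init__()
        fused = 2 * hidden + 1  # two embeddings plus their cosine similarity
        self.mip = nn.Linear(fused, hidden)
        self.helper1 = nn.Linear(fused, hidden)
        self.helper2 = nn.Linear(fused, hidden)
        self.classifier = nn.Linear(3 * hidden, 2)  # metaphorical vs. literal

    @staticmethod
    def _fuse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # concatenate the two embeddings with their cosine similarity
        cos = F.cosine_similarity(a, b, dim=-1).unsqueeze(-1)
        return torch.cat([a, b, cos], dim=-1)

    def forward(self, h_c, h_b, e_t, e_tc, e_tb):
        h_mip = self.mip(self._fuse(h_c, h_b))    # contrast sense vs. basic def.
        h_1 = self.helper1(self._fuse(e_tc, e_t))
        h_2 = self.helper2(self._fuse(e_tb, e_t))
        return self.classifier(torch.cat([h_mip, h_1, h_2], dim=-1))
```

Each layer consumes the concatenation of two embeddings and their cosine similarity, so its input width is \(2\times 768+1\), matching equation (1) and the two helper-layer equations above.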
## 4 Experiments
In this section, we present the datasets, baseline models, and experimental setup used to evaluate our model. We also discuss various aspects of our experimentation process.
Datasets: Our study utilized the VU Amsterdam Metaphor Corpus (VUA) (Steen et al., 2010) in five pre-processed versions: **VUA-18** (Leong et al., 2018), **VUA-20** (Leong et al., 2020), and **VUAverb**, which focuses exclusively on verb metaphors. Additionally, we used **VUA-MPD-All**, which is a subset that focuses on the words that are available in WordNet, and **VUA-MPD-Conv**, a subset that focuses on conventional metaphors (Maudslay and Teufel, 2022). Each dataset comprises sentences with a labeled target word, categorized as either metaphorical or not, along with a corresponding POS tag for the target word. For each dataset, there are separate train and test datasets. The VUA-18 dataset further includes four testing subsets, each corresponding to a different POS: **verb, noun, adjective, and adverb**. We augmented the datasets with word senses extracted from WordNet using the BERT-based WSD model, and we included the basic definitions extracted from Wiktionary, as explained in the previous section.

Figure 2: ContrastWSD overall framework showing both stages: (i) the data augmentation stage and (ii) the metaphor detection stage.
However, we encountered instances where the WSD model failed to find a word sense or cases where there was no definition available in the dataset. To ensure a fair comparison between our model and the baselines, we created two versions of the VUA-18, VUA-20, and VUAverb training and testing datasets:
1. The original datasets were retained without removing any records. In places where no word sense was produced, we substituted it with the target word itself. For these target words, our model behaves comparably to MelBERT, enabling a comparison of the target word's embedding within and outside the context. Only in this case does our model depend on the contextual embeddings of the target word without external knowledge.
2. We removed the records where the WSD model failed to find a word sense and marked the pruned dataset with a minus sign (-). This was done to avoid potential noise arising from incomplete information during the model's learning process and to enable fair comparisons, ensuring that the model trains only on instances where all the necessary information is available.
For the VUA-MPD training and testing datasets, as well as the VUA-18 POS testing subsets, we used only one version which corresponds to how we handled the datasets in version 1.
We trained and evaluated the models on the VUA-18, VUA-20, and VUAverb training and testing datasets (refer to Tables 1 and 3). Additionally, the models trained on the VUA-18 training dataset were tested on the verb, noun, adverb, and adjective testing subsets (refer to Table 2), while the models trained on the VUA-20 training dataset were evaluated on the VUA-verb testing dataset, which is appended by a star (\(\star\)) sign in Table 3. We also train and test our model on the VUA-MPD datasets (refer to Table 4).
All models were trained for three epochs using a learning rate of \(3e-5\), a linear scheduler with a two-epoch warmup, and a dropout ratio of 0.2. To ensure robustness, we repeated the experiments five times using seeds 1, 2, 3, 4, and 5, following the approach in Babieno et al. (2022). The evaluation results from the test datasets were averaged over the 5 runs. We set the metaphor class weight to 3 on the original datasets, while a class weight of 4 was used on the pruned datasets. The experiments were conducted on a 2-GPU NVIDIA Tesla V100 with 16GB memory.
We present the findings from the MPD_WSD model as reported in their paper (Maudslay and Teufel, 2022), trained on both the VUA-MPD-All and VUA-MPD-Conv subsets. Unfortunately, we encountered memory constraints that prevented us from replicating the reported results. To maintain consistency with the MPD_WSD model, which employs the BERT-base-cased model, we also developed an alternative version of our model trained on BERT-base-cased instead of RoBERTa-base, appended with a (\(\beta\)) sign (see Table 4).
## 5 Empirical Results and Case Studies
Statistical Significance: The objective of this analysis is to assess the statistical significance of the performance improvements observed in the ContrastWSD model. We conduct a two-tailed t-test for each dataset, comparing our model to the baseline models, setting a significance level of p = 0.05. The results indicate that the reported differences between our model and the other baseline models are statistically significant at a confidence level of 95%, with only a few exceptions. These exceptions, which are discussed later, are marked in brackets () in Tables 2 and 3. We did not analyze the statistical significance of MPD_WSD's performance relative to our model, as we reported the results from their paper. In the tables, the results in **bold** correspond to the best performing model, while the underlined results indicate the second best performing model.
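For illustration, a minimal version of this significance check can be written with SciPy's two-sample t-test (two-tailed by default); the F1 scores below are hypothetical stand-ins for the five seeded runs of two models, not our actual results.

```python
from scipy.stats import ttest_ind

# hypothetical per-seed F1 scores (5 runs each); replace with real results
contrastwsd_f1 = [77.1, 77.4, 77.2, 77.3, 77.0]
baseline_f1 = [75.8, 75.9, 75.7, 76.0, 75.8]

t_stat, p_value = ttest_ind(contrastwsd_f1, baseline_f1)  # two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```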
Overall Results: Table 1 presents a comparison of evaluation results for the VUA-18, VUA-20, VUA-18 (-), and VUA-20 (-) datasets.
| Model | F1 - All | F1 - Conv |
|---|---|---|
| MPD_WSD | 63.10 | 65.90 |
| ContrastWSD (\(\beta\)) | 73.42 | 72.31 |
| ContrastWSD | **73.84** | **73.07** |

Table 4: Performance comparison on the VUA-MPD-All and the VUA-MPD-Conv subsets.
| Dataset | Model | Rec | Prec | F1 |
|---|---|---|---|---|
| verb | MelBERT | 80.25 | 71.88 | 75.83 |
| verb | MsW_cos | **81.85** | 70.91 | 75.97 |
| verb | FrameBERT | 75.11 | 74.00 | 74.55 |
| verb | ContrastWSD | 78.87 | **75.70** | **77.25** |
| noun | MelBERT | 66.98 | 73.74 | (70.19) |
| noun | MsW_cos | **68.54** | 73.12 | (70.76) |
| noun | FrameBERT | 62.57 | 75.35 | 68.35 |
| noun | ContrastWSD | 66.29 | **76.36** | **70.97** |
| adverb | MelBERT | **69.20** | 71.83 | 70.43 |
| adverb | MsW_cos | 64.57 | 71.43 | 67.80 |
| adverb | FrameBERT | 65.90 | 74.56 | 69.88 |
| adverb | ContrastWSD | 68.52 | **77.45** | **72.63** |
| adjective | MelBERT | **67.06** | 65.97 | 66.47 |
| adjective | MsW_cos | 66.35 | 64.39 | 65.34 |
| adjective | FrameBERT | 64.25 | 68.09 | 66.06 |
| adjective | ContrastWSD | 65.73 | **70.83** | **68.18** |

Table 2: Performance comparison by POS tags. The results in between brackets indicate no statistically significant differences compared to ContrastWSD.
| Dataset | Model | Rec | Prec | F1 |
|---|---|---|---|---|
| VUAverb | MelBERT | **81.08** | 55.24 | 65.57 |
| VUAverb | MsW_cos | 77.88 | 61.49 | 68.68 |
| VUAverb | FrameBERT | 73.33 | **71.95** | **(72.62)** |
| VUAverb | ContrastWSD | 79.18 | 66.97 | 72.54 |
| VUAverb (-) | MelBERT | **81.40** | 51.27 | 62.87 |
| VUAverb (-) | MsW_cos | 79.26 | 59.46 | 67.92 |
| VUAverb (-) | FrameBERT | 74.63 | **70.68** | **(72.56)** |
| VUAverb (-) | ContrastWSD | 79.28 | 66.66 | 72.42 |
| VUAverb (\(\star\)) | MelBERT | 72.22 | 76.45 | 74.27 |
| VUAverb (\(\star\)) | MsW_cos | **75.09** | 78.00 | **(76.51)** |
| VUAverb (\(\star\)) | FrameBERT | 69.96 | 77.60 | 73.55 |
| VUAverb (\(\star\)) | ContrastWSD | 73.81 | **78.39** | 76.03 |

Table 3: Evaluation results on VUA-verb, VUA-verb (-), and on the VUA-verb (\(\star\)) datasets.
The ContrastWSD model consistently outperforms all baseline models in terms of F1-score on both VUA-18 and VUA-20 datasets, even though these datasets had instances where the word sense was absent. Interestingly, the improvement in F1-score for ContrastWSD compared to other models doubled on the VUA-18 (-) and VUA-20 (-) datasets. This suggests that while the presence of word sense in the dataset may have contributed to the model's superior performance, its occasional absence did not hinder its overall effectiveness.
Table 2 presents the performance comparison for the verbs, adjectives, nouns, and adverbs testing subsets. Our model shows a notable advantage in detecting metaphorical adverbs, surpassing the other models by at least \(2\%\) in F1-score. Additionally, it achieves at least a \(1\%\) improvement on verbs and adjectives. Furthermore, it significantly outperforms the recent FrameBERT model by more than \(2\%\) on the noun dataset, while achieving comparable results to MsW_cos and MelBERT, with an F1 increase of at least \(0.21\%\) that was not statistically significant but came with a notable precision gain.
Table 3 presents the comparison between our model and the baseline models on the VUA-verb and VUA-verb (-) datasets. The results demonstrate that our model significantly outperformed MelBERT and MsW_cos on the small training dataset. Moreover, when the missing word senses were removed, our model's performance remained consistently better compared to MelBERT and MsW_cos. On the other hand, while FrameBERT showed a statistically insignificant performance gain over our model on the small training datasets, its performance declined significantly when tested with the model trained on the larger VUA-20 training dataset. This observation suggests that our model maintains good performance even with small datasets, showing no signs of overfitting or noise introduction when trained on a larger dataset. Additionally, the performance of our model improved with considerably higher precision when trained on the larger dataset.
Table 4 compares the performance on the VUA-MPD subsets used by the MPD_WSD model. We compare the F1-score reported in their paper against our model's results, revealing an F1-score gain of at least \(10.32\%\) for our models on the subset encompassing both novel and conventional metaphors, and of at least \(6.41\%\) in detecting conventional metaphors.
Case Studies: As shown in the results, our ContrastWSD model exhibits relatively higher gains compared to other models. To exemplify instances where ContrastWSD correctly labeled examples that were incorrectly labeled by the baselines, we conducted several case studies. Table 5 presents a few cases that demonstrate the benefits of our approach, involving contrasting word senses while considering the context of the target sentence. For these case studies, we selected the models trained on the VUA-20 dataset. For each model, we chose the best performing one among the 5 seeds. The examples shown in the table are drawn from the VUA-20 testing dataset.
For instance, we observed the word "plant" mentioned 15 times in the testing dataset: 14 times in a metaphorical sense and only once in a non-metaphorical sense. Both MsW_cos and MelBERT labeled all of these occurrences as non-metaphorical. In contrast, our model correctly identified \(47\%\) of these occurrences, while FrameBERT only identified \(33\%\) correctly. That is, MsW_cos and MelBERT correctly recognized only the non-metaphorical instance and none of the metaphorical ones. Three of these examples are mentioned in the table.
Another example involves the word "honey", which was mentioned twice as a metaphor in the testing dataset. In the first instance, it was used in a conventional way, and our model correctly annotated it as metaphorical, leveraging the extracted word sense. The other models did not recognize this metaphorical usage. In the second instance, "honey" appeared as a novel metaphor where our model, along with the other baseline models, marked it as metaphorical. Even though the word sense was similar to the basic sense, our model still identified it as metaphorical. This indicates that our model can recognize both novel and conventional metaphors.
Finally, the word "jump" occurs two times in the testing dataset, appearing in two different tenses and senses. In one instance, the word "jumping" was used in the literal sense, and our model correctly identified it as literal, considering that the word sense was similar to the definition. However, the other models did not recognize it as such. In the other occurrence, "jump" was used metaphorically, and our model, along with the other models,
correctly identified it as metaphorical.
## 6 Conclusion and Future Work
In this paper, we presented a RoBERTa-based model for metaphor detection that follows the Metaphor Identification Procedure by utilizing a WSD model to extract and contrast the contextual meaning with the basic meaning of a target word. We evaluated our model on several benchmark datasets and demonstrated that leveraging senses and contrasting them can enhance the performance of metaphor detection models. Our proposed model outperformed other state-of-the-art metaphor detection models. Our work provides compelling evidence for further exploration of the use of WSD models and sense-contrasting techniques to enhance the performance of metaphor detection models.
In future work, we plan to investigate the integration of commonsense models such as COMET
(Bosselut et al., 2019) to extract and utilize the common sense knowledge from the target word. COMET was used extensively in recent metaphor generation models as previously shown in our prior work (Elzohbi and Zhao, 2023). We believe that this integration will enable better differentiation of novel metaphors from nonsensical expressions.
|
2309.15952 | The Discovery of the Zeeman Effect in 38 GHz Class II Methanol Masers | Magnetic fields likely play an important role in star formation, but the
number of directly measured magnetic field strengths remains scarce. We
observed the 38.3 and 38.5 GHz Class II methanol (CH$_3$OH) maser lines toward
the high mass star forming region NGC 6334F for the Zeeman effect. The observed
spectral profiles have two prominent velocity features which can be further
decomposed through Gaussian component fitting. In several of these fitted
Gaussian components we find significant Zeeman detections, with $zB_{\rm los}$
in the range from 8 to 46 Hz. If the Zeeman splitting factor $z$ for the 38 GHz
transitions is of the order of $\sim$1 Hz mG$^{-1}$, similar to that for
several other CH$_3$OH maser lines, then magnetic fields in the regions traced
by these masers would be in the range of 8-46 mG. Such magnetic field values in
high mass star forming regions agree with those detected in the better-known
6.7 GHz Class II CH$_3$OH maser line. Since Class II CH$_3$OH masers are
radiatively pumped close to the protostar and likely occur in the accretion
disk or the interface between the disk and outflow regions, such fields likely
have significant impact on the dynamics of these disks. | E. Momjian, A. P. Sarma | 2023-09-27T19:09:37Z | http://arxiv.org/abs/2309.15952v1 | # The Discovery of the Zeeman Effect in 38 GHz Class II Methanol Masers
###### Abstract
Magnetic fields likely play an important role in star formation, but the number of directly measured magnetic field strengths remains scarce. We observed the 38.3 and 38.5 GHz Class II methanol (CH\({}_{3}\)OH) maser lines toward the high mass star forming region NGC 6334 F for the Zeeman effect. The observed spectral profiles have two prominent velocity features which can be further decomposed through Gaussian component fitting. In several of these fitted Gaussian components we find significant Zeeman detections, with \(zB_{\rm los}\) in the range from 8 to 46 Hz. If the Zeeman splitting factor \(z\) for the 38 GHz transitions is of the order of \(\sim\)1 Hz mG\({}^{-1}\), similar to that for several other CH\({}_{3}\)OH maser lines, then magnetic fields in the regions traced by these masers would be in the range of 8-46 mG. Such magnetic field values in high mass star forming regions agree with those detected in the better-known 6.7 GHz Class II CH\({}_{3}\)OH maser line. Since Class II CH\({}_{3}\)OH masers are radiatively pumped close to the protostar and likely occur in the accretion disk or the interface between the disk and outflow regions, such fields likely have significant impact on the dynamics of these disks.
Interstellar magnetic fields (845), Star forming regions (1565), Star formation (1569), Astrophysical masers (103), Interstellar molecules (849), Interstellar medium (847), High resolution spectroscopy (2096), Spectropolarimetry (1973)
## 1 Introduction
A complete understanding of the role of magnetic fields in star formation remains elusive, even though major strides have been made in understanding how magnetic fields impact the star formation process (Pattle et al., 2023; Tsukamoto et al., 2023). Masers offer the opportunity to observe star forming regions at high angular resolution, because they are bright and compact sources (Richards et al., 2020). In particular, Class II methanol (CH\({}_{3}\)OH) masers are known to form close to protostars, pumped by infrared radiation from the protostar itself. Ellingsen et al. (2018) observed 38.3 and 38.5 GHz Class II CH\({}_{3}\)OH maser transitions at high angular resolution (\(2\farcs 3\times 1\farcs 4\)) with the Australia Telescope Compact Array (ATCA). One of the objects in their sample, NGC 6334 F, shows strong maser lines (\(>100\) Jy) at these frequencies. Consequently, this prompted us to target NGC 6334 F for the Zeeman effect, which is the most direct method for measuring magnetic fields in star forming regions.
There are two classes of CH\({}_{3}\)OH masers: Class I and Class II (Menten, 1991). Class I CH\({}_{3}\)OH masers in star forming regions are known to be collisionally pumped in outflows (see, e.g., Leurini et al., 2016). Class II CH\({}_{3}\)OH masers, meanwhile, are pumped by infrared radiation and are therefore located close to protostars (Cragg et al., 2005). They are exclusively associated with high mass star forming regions (Ellingsen, 2006). The most observed and well known Class II transitions are at 6.7 GHz and 12.2 GHz (see, e.g., Nguyen et al., 2022, and references therein), but about 20 different Class II CH\({}_{3}\)OH maser transitions have been observed to date (Breen et al., 2019, and references therein). Maser emission in the Class II CH\({}_{3}\)OH lines at 38.3 and 38.5 GHz was discovered by Haschick et al. (1989) through single-dish observations. Ellingsen et al. (2018) carried out radio interferometric observations of the 38.3
and 38.5 GHz methanol masers toward a sample of 11 high mass star forming regions that host strong 6.7 GHz CH\({}_{3}\)OH masers and detected 38.3 GHz transitions toward seven sources in their sample, and 38.5 GHz transitions toward six sources. They found that these transitions arise from the same location as the strong 6.7 GHz Class II CH\({}_{3}\)OH maser transitions, although they are less co-spatial with the 6.7 GHz maser spots compared to the 12.2 GHz masers.
NGC 6334 is a well-known molecular cloud complex and star-forming region located at a distance of 1.3 kpc (Chibueze et al., 2014). Strong infrared and mm sources, hypercompact (HC) and ultracompact (UC) H II regions, and powerful outflows reveal evidence of ongoing high mass star formation activity in this region (Rodriguez et al., 2007; Andre et al., 2016; Brogan et al., 2016; Hunter et al., 2021). At 4.9 GHz, Rodriguez et al. (1982) found six discrete continuum sources in NGC 6334. These six sources lie along a ridge of emission parallel to the Galactic Plane and were named A-F in order of increasing Right Ascension (R.A.), as shown in Figure 1. Source F, the target of the observations reported in this paper, is highly obscured at optical wavelengths, but prominent in cold dust traced by 850 \(\mu\)m submm continuum observations (Matthews et al., 2008). The dust clump associated with source F was determined by Matthews et al. (2008) to have a mass of 2000 \(M_{\odot}\), indicating that it is a large reservoir of material for star formation. Also coincident with source F is the far-infrared (FIR) source I (Emerson et al., 1973; Loughran et al., 1986). In the CO \(J=3-2\) transition, Zhang et al. (2014) observed a blueshifted outflow toward the southwest of the FIR source I, and a redshifted outflow toward the northeast. Hunter et al. (2006) discovered four 1.3 mm continuum sources toward NGC 6334 I; these were later renamed as MM1-MM4 by Brogan et al. (2016). Of these, the ultracompact (UC) H II region MM3 overlaps with the peak of NGC 6334 F. An outburst in the mm emission from MM1, accompanied by simultaneous flaring in multiple maser species, was ascribed to episodic accretion in this source (Hunter et al., 2021, and references therein).
In this paper, we report the detection of the Zeeman effect in the 38.3 GHz and 38.5 GHz Class II CH\({}_{3}\)OH masers toward NGC 6334 F. The details of the observations and data reduction are given in § 2. The results are presented in § 3 and discussed in § 4. In § 5, we state our conclusions.
## 2 Observations and Data Reduction
The observations of the Class II CH\({}_{3}\)OH maser emission lines \(6_{2}\to 5_{3}\) A\({}^{-}\) at 38.3 GHz and \(6_{2}\to 5_{3}\) A\({}^{+}\) at 38.5 GHz toward NGC 6334 F were carried out with the Karl G. Jansky Very Large Array (VLA)1 on 2021 March 23 and 2021 April 7. Each of these observing sessions was 2 hr long. The VLA was in its most compact (D) configuration with a maximum baseline length of 1 km.
Footnote 1: The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
The Wideband Interferometric Digital ARchitecture (WIDAR) correlator was configured to deliver 4 MHz sub-bands with dual polarization products (RR, LL) and 1024 spectral channels per sub-band. The resulting channel spacing was 3.90625 kHz, which corresponds to \(\sim 0.0305\) km s\({}^{-1}\) at the observed frequencies. In addition to NGC 6334 F, the source J1331+3030 (3C 286) was observed as the absolute flux density scale calibrator. The uncertainty in the flux density calibration at the observed frequencies, accounting for various observational parameters (e.g., weather, reference pointing, and elevation effects), is expected to be up to 10%. We also observed the source J1720\(-\)3552 as the complex gain calibrator for part of each observing session to derive the absolute position of the masers in the target source. The phase referencing cycle time was six minutes; 5 minutes on the target and 1 minute on the complex gain calibrator. All the data reduction steps, including calibration, imaging, and deconvolution, were carried out independently for each observing session using the Astronomical Image Processing System (AIPS; Greisen 1990). After Doppler correcting the spectral-line data of each transition and observing session independently, the spectral channel with the brightest maser emission in each was split off, and self-calibrated first in phase, then in both phase and amplitude, and imaged in iterative cycles. The final self-calibration solutions were then applied to the corresponding full spectral-line \(uv\) data sets of NGC 6334 F from each session. We note that AIPS calculates the Stokes parameter \(I\) as the average of the right circular polarization (RCP) and left circular polarization (LCP), so that \(I\) = (RCP + LCP)/2, whereas Stokes \(V\) is calculated by AIPS as half the difference between RCP and LCP, so that \(V\) = (RCP \(-\) LCP)/2; henceforth, all values of \(I\) and \(V\) are based on this implementation in AIPS. Also, we note that RCP is defined here in the standard radio convention, in which it is the clockwise rotation of the electric vector when viewed along the direction of wave propagation. Table 1 summarizes the parameters of the VLA observations, including the synthesized beamwidths at full width at half maximum (FWHM), for each observing session.

Figure 1: Contour image of the 1.67 GHz continuum from NGC 6334 taken from Sarma et al. (2000), showing the sources A-F (Rodriguez et al., 1982). The observations reported in this work are toward source F.
## 3 Results
We detected Class II CH\({}_{3}\)OH masers at 38.3 and 38.5 GHz toward the position located at R.A. (J2000) = 17\({}^{\rm h}\) 20\({}^{\rm m}\) 53\(\fs\)37, Decl. (J2000) = \(-\)35\({}^{\circ}\) 47\({}^{\prime}\) 01\(\farcs\)40 in NGC 6334 F. The spectral profile of the Class II methanol maser line detected at 38.3 GHz toward this position is shown in Figure 2. It has two prominent spectral features in velocity. We have labeled them as I and II in the figure, and separated them by a vertical dashed line. Visually, spectral feature I looks like it could be fit by a single Gaussian component, but it did require a second shallower and broader component to fit the line wing on the side nearer to \(-11\) km s\({}^{-1}\). We have labeled these as components I and I-s, respectively; despite the risk of confusion, we have chosen this nomenclature to emphasize that the spectral feature I in Figure 2 is almost a single-component Gaussian. The peak intensities of components I and I-s, together with their center velocities and Full Width at Half Maximum (FWHM) linewidths, are listed in Table 2 and shown in Figure 3. Henceforth, we have tried to be clear from the context whether we are referring to spectral feature I shown in Figure 2, or component I that is listed in Table 2 and shown in Figure 3. The peak intensity of component I listed in Table 2 is 64.49 Jy beam\({}^{-1}\) and its center velocity is \(-\)11.24 km s\({}^{-1}\). It is narrow, with a linewidth of 0.18 km s\({}^{-1}\). Component I-s is shallower, with about 1/4 the peak intensity of component I. It is also broader, with a FWHM linewidth of 0.327 km s\({}^{-1}\). Note that even though component I-s extends into the velocity space of component II, it was not used in the fit for component II. Meanwhile, at 38.5 GHz we observe a spectral profile that resembles the 38.3 GHz profile. As listed in Table 2, the intensity of component I at 38.5 GHz is similar to that at 38.3 GHz; likewise for component I-s. Within the errors, the center velocities and FWHM linewidths at 38.3 and 38.5 GHz are the same for components I and I-s.
| Parameter | 38.3 GHz Value | 38.5 GHz Value |
|---|---|---|
| Date | 2021 Mar 23 & Apr 7 | 2021 Mar 23 & Apr 7 |
| Configuration | D | D |
| R.A. of field center (J2000) | 17\({}^{\rm h}\) 20\({}^{\rm m}\) 53\(\fs\)370 | 17\({}^{\rm h}\) 20\({}^{\rm m}\) 53\(\fs\)370 |
| Dec. of field center (J2000) | \(-\)35\({}^{\circ}\) 47\({}^{\prime}\) 02\(\farcs\)0 | \(-\)35\({}^{\circ}\) 47\({}^{\prime}\) 02\(\farcs\)0 |
| Total bandwidth (MHz) | 4 | 4 |
| No. of channels | 1024 | 1024 |
| Channel spacing (km s\({}^{-1}\)) | 0.0305 | 0.0305 |
| Approx. time on source (hr) | 2.83 | 2.83 |
| Rest frequency (GHz) | 38.293270 | 38.452629 |
| FWHM of synthesized beam | 5\(\farcs\)33 \(\times\) 1\(\farcs\)53, P.A. = 6.99\({}^{\circ}\) | 5\(\farcs\)60 \(\times\) 1\(\farcs\)50, P.A. = 6.63\({}^{\circ}\) |
| Line rms noise (mJy beam\({}^{-1}\))\({}^{a}\) | 10.5 | 12.0 |

\({}^{a}\)The line rms noise was measured from the Stokes \(I\) image cube using maser line free channels.

Table 1: Parameters for VLA Observations
Three Gaussian components were fitted to spectral feature II at 38.3 GHz; we have designated them as IIa, IIb, and IIc. The intensity, center velocity, and FWHM linewidth of these three components are listed in Table 2 and shown in Figure 3. The peak intensities of components IIa and IIb, 100.51 and 78.33 Jy beam\({}^{-1}\) respectively, are larger than that of component I listed in Table 2, and their center velocities are \(-10.59\) and \(-10.75\) km s\({}^{-1}\). Like component I, both components IIa and IIb are narrow, with FWHM linewidths of 0.22 km s\({}^{-1}\) and 0.21 km s\({}^{-1}\) respectively. Meanwhile, component IIc has lower intensity (35.67 Jy beam\({}^{-1}\)), broader FWHM linewidth (0.39 km s\({}^{-1}\)), and a center velocity of \(-10.84\) km s\({}^{-1}\). The resultant Gaussian profile obtained by summing components I and I-s, and separately summing IIa, IIb, and IIc, is shown in Figure 2, together with the residuals from the fit. Again, the situation was similar at 38.5 GHz, where components IIa and IIb had higher intensity than component I (see Table 2). Within the errors, the center velocities and linewidths of IIa and IIb at 38.5 GHz were the same as those at 38.3 GHz. Component IIc also had lower intensity and broader linewidth than IIa and IIb at 38.5 GHz.
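As a generic illustration of this kind of multi-component decomposition (the fits reported here were performed in AIPS, simultaneously with Stokes \(V\), as described below), a sum-of-Gaussians least-squares fit can be sketched in a few lines of Python; the synthetic profile below is built from the component I and I-s values of Table 2, and all names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(v, *params):
    """Sum of Gaussians; params = (amp, v0, fwhm) repeated per component."""
    out = np.zeros_like(v)
    for amp, v0, fwhm in zip(params[0::3], params[1::3], params[2::3]):
        sigma = fwhm / np.sqrt(8.0 * np.log(2.0))  # FWHM -> Gaussian sigma
        out += amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)
    return out

# synthetic Stokes I profile built from the component I and I-s values of Table 2
rng = np.random.default_rng(0)
v = np.linspace(-12.0, -10.0, 200)  # km/s
i_true = sum_of_gaussians(v, 64.49, -11.24, 0.182, 16.62, -10.99, 0.327)
i_obs = i_true + 0.01 * rng.standard_normal(v.size)

p0 = [60.0, -11.2, 0.2, 15.0, -11.0, 0.3]  # initial guesses (amp, v0, fwhm) x 2
popt, pcov = curve_fit(sum_of_gaussians, v, i_obs, p0=p0)
```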
| Frequency (GHz) | Component | Intensity (Jy beam\({}^{-1}\)) | Center Velocity (km s\({}^{-1}\)) | FWHM Linewidth (km s\({}^{-1}\)) |
|---|---|---|---|---|
| 38.3 | I | \(64.49\pm 5.41\) | \(-11.24\pm 0.02\) | \(0.182\pm 0.020\) |
| 38.5 | I | \(67.34\pm 3.89\) | \(-11.25\pm 0.02\) | \(0.181\pm 0.019\) |
| 38.3 | I-s | \(16.62\pm 1.37\) | \(-10.99\pm 0.04\) | \(0.327\pm 0.205\) |
| 38.5 | I-s | \(18.98\pm 3.63\) | \(-10.97\pm 0.09\) | \(0.356\pm 0.219\) |
| 38.3 | IIa | \(100.51\pm 8.54\) | \(-10.59\pm 0.02\) | \(0.223\pm 0.021\) |
| 38.5 | IIa | \(96.81\pm 5.14\) | \(-10.60\pm 0.02\) | \(0.228\pm 0.020\) |
| 38.3 | IIb | \(78.33\pm 5.98\) | \(-10.75\pm 0.02\) | \(0.205\pm 0.020\) |
| 38.5 | IIb | \(73.21\pm 3.55\) | \(-10.76\pm 0.02\) | \(0.202\pm 0.019\) |
| 38.3 | IIc | \(35.67\pm 5.22\) | \(-10.84\pm 0.04\) | \(0.387\pm 0.057\) |
| 38.5 | IIc | \(37.18\pm 2.86\) | \(-10.83\pm 0.02\) | \(0.403\pm 0.037\) |

Table 2: Maser Line Parameters from Gaussian Fits
Figure 2: Observed spectral profile (black histogram-like line) toward NGC 6334 F at 38.3 GHz showing two prominent spectral features in velocity, which we have labeled as I and II. The dashed vertical line separates these two features. The blue curve to the left of the dashed vertical line is the resultant profile obtained by summing components I and I-s listed in Table 2 and shown in Figure 3, and the blue curve to the right of the dashed vertical line is the resultant profile obtained by summing components IIa, IIb, and IIc. The red dashed curve represents the residuals from the fit.
In the approach that was followed, as described above, the two velocity features in both the 38.3 and 38.5 GHz maser transitions were treated separately and independently when fitting Gaussian components to their profiles. This was necessitated by the simultaneous optimization of the Gaussian fits to the Stokes \(I\) spectra and their scaled derivatives to the Stokes \(V\) spectra. This process also took into account the minimization of the residuals while using as few Gaussian components as possible.
The observed spectral profiles shown in Figure 2 and 3 are the Stokes \(I\) profiles at 38.3 GHz. The Stokes \(I\) and \(V\) profiles toward NGC 6334 F at 38.3 GHz are shown together in Figure 4. Following our usual procedure (see, e.g., Momjian & Sarma, 2017), we fit the Stokes \(V\) profile to the derivative of the Stokes \(I\) profile and a scaled replica of the \(I\) profile using the equation (Troland & Heiles, 1982; Sault et al., 1990):
\[V=aI+\frac{b}{2}\,\frac{dI}{d\nu} \tag{1}\]
The fit parameter \(a\) is included to account for small calibration errors in RCP versus LCP and we obtained \(a\lesssim 10^{-3}\). The fit parameter \(b=zB_{\rm los}\), where \(z\) is the Zeeman splitting factor and \(B_{\rm los}\) is the line-of-sight magnetic field strength. We fitted independently for spectral features I and II shown in Figure 2; the dashed vertical line in Figure 4 separates the channels that we included in the fits for each of these two features. The fits were carried out using the AIPS task ZEMAN (Greisen, 2017), which allows for multiple Gaussian components in Stokes \(I\) to be fitted simultaneously to Stokes \(V\) for different values of \(b\) as was done, for example, for the three components IIa-IIc. Figure 5 shows the individual Gaussian components in the upper panel, and the derivatives of each component scaled by the fitted value of \(zB_{\rm los}\) in the lower panel. Unlike in Figure 3, components I and I-s are shown only to the left of the dashed vertical line, and components IIa-IIc are shown to the right of the dashed vertical line; that is, confined to the velocity space in which they were summed to obtain the blue profiles superposed on the black histogram-like profiles in the upper and lower panels of Figure 4. Figure 6 is the equivalent of Figure 4 and Figure 7 is the equivalent of Figure 5, but at 38.5 GHz.
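Schematically, the fit of equation (1) is a linear least-squares problem in the two unknowns \(a\) and \(b\). The sketch below is a generic NumPy stand-in for this step (the actual fits reported here were performed with the AIPS task ZEMAN); the synthetic example injects a splitting of \(-45\) Hz into a Gaussian line and recovers it.

```python
import numpy as np

def fit_zeeman(v_kms, stokes_i, stokes_v, nu0_hz):
    """Least-squares fit of V = a*I + (b/2) dI/dnu; returns (a, b = z*B_los in Hz)."""
    c_kms = 2.99792458e5
    nu = nu0_hz * (1.0 - v_kms / c_kms)  # radio-convention frequency axis
    di_dnu = np.gradient(stokes_i, nu)
    design = np.column_stack([stokes_i, 0.5 * di_dnu])
    coeff, *_ = np.linalg.lstsq(design, stokes_v, rcond=None)
    return coeff[0], coeff[1]

# synthetic check: Gaussian line (component I parameters) with injected b = -45 Hz
v = np.linspace(-12.0, -10.5, 400)                        # km/s
i_spec = 64.5 * np.exp(-0.5 * ((v + 11.24) / 0.077) ** 2)  # sigma from 0.182 FWHM
nu = 38.29327e9 * (1.0 - v / 2.99792458e5)
v_spec = 1e-4 * i_spec + 0.5 * (-45.0) * np.gradient(i_spec, nu)
a_fit, b_fit = fit_zeeman(v, i_spec, v_spec, 38.29327e9)
print(a_fit, b_fit)  # approximately 1e-4 and -45 Hz
```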
The results of the fitting of Stokes \(V\) using equation (1) are given in Table 3. Although Lankhaar et al. (2018) published values of the Zeeman splitting factor for several CH\({}_{3}\)OH maser lines, values of \(z\) for 38.3 and 38.5 GHz lines are not available, so we will leave our results in terms of \(zB_{\rm los}\) in Hz. We obtained three significant detections and two upper limits. Generally, we consider a detection significant if the signal-to-noise ratio in \(zB_{\rm los}\) is 3-\(\sigma\) or higher. Here, we impose a stronger criterion; we will consider the detection significant only if the signal-to-noise ratio in \(zB_{\rm los}\) is 3-\(\sigma\) or higher at both 38.3 GHz and 38.5 GHz. Component I listed in Table 2 shows a significant detection of \(-45.10\pm 3.40\) Hz at 38.3 GHz, and \(-46.08\pm 3.63\) Hz at 38.5 GHz. By convention, a negative value for \(B_{\rm los}\) implies that the line-of-sight magnetic field is pointing toward the observer. Components IIa and IIb also show significant detections, and \(B_{\rm los}\) traced by component IIa points toward the observer, whereas \(B_{\rm los}\) traced by component IIb points away from the observer. Component IIa shows a significant detection of \(-7.48\pm 1.81\) Hz at 38.3 GHz, and \(-12.88\pm 2.55\) Hz at 38.5 GHz. Finally, component IIb shows a significant detection of \(16.26\pm 2.59\) Hz at 38.3 GHz, and \(20.07\pm 3.96\) Hz at 38.5 GHz. Although a value of \(z\) is not available for these two transitions, values published by Lankhaar et al. (2018) for many prominent CH\({}_{3}\)OH transitions are near 1 Hz mG\({}^{-1}\), and \(B_{\rm los}\) values of 8-46 mG appear reasonable in such regions.

Figure 3: Same spectral profile (black histogram-like line) as in Figure 2, but now showing the individual Gaussian components fitted to the observed profile at 38.3 GHz. The solid red and dashed blue curves show components I and I-s listed in Table 2. The magenta, solid blue, and green curves show components IIa, IIb, and IIc respectively. Note that for visual clarity, the entire dashed blue and solid green curves are shown, even though the dashed blue curve is used only in the fit for spectral feature I shown in Figure 2, and the solid green curve is used only in the fit for feature II.
## 4 Discussion
We observed the 38.3 and 38.5 GHz Class II CH\({}_{3}\)OH maser transitions toward NGC 6334 F with the VLA for the Zeeman effect. At both 38.3 and 38.5 GHz, the observed spectral profiles contain two prominent velocity features. We fitted one with a narrow Gaussian component and a shallow broad component, which we have labeled as components I and I-s respectively (Table 2 and Figure 3). The other was fitted with two narrow Gaussian components labeled IIa and IIb, and a lower intensity broad component, which we have labeled as IIc. Our observed maser profiles are consistent with the Australia Telescope Compact Array (ATCA) observations of Ellingsen et al. (2018). However, it is worth noting that their velocity resolution was 0.3 km s\({}^{-1}\), a factor of \(\sim\)10 coarser compared to ours. They observed our two prominent velocity features (which we have labeled as I and II in Figure 2) to be blended in velocity space, but with a significant asymmetry on the left side of the profile (near \(-11\) km s\({}^{-1}\)), consistent with the presence of our spectral feature I. Of interest is that the single dish observations of Haschick et al. (1989) with 0.065 km s\({}^{-1}\) velocity resolution show feature II but only weak emission at the velocity corresponding to feature I. This could be due to variability, but there is no way to directly compare the two observations taken with such different instruments so many years apart. The base of the spectral profile observed by Haschick et al. (1989) at both 38.3 and 38.5 GHz indicates the presence of a broad emission component, consistent with the shallower and broader components I-s and IIc that we have fitted (Figure 3).
Figure 4: Stokes \(I\) (upper panel, black histogram-like line) and Stokes \(V\) (lower panel, black histogram-like line) profiles toward NGC 6334 F at 38.3 GHz. The blue curve in the upper panel is the same as that shown in, and described in the caption to, Figure 2. The blue curve superposed on the Stokes \(V\) profile in the lower panel is the sum of the solid red and dashed blue curves shown in the lower panel of Figure 5 to the left of the dashed vertical line, and the sum of the magenta, solid blue, and green curves in the lower panel of Figure 5 to the right of the dashed vertical line.
Figure 5: Stokes \(I\) (upper panel, black histogram-like line) and Stokes \(V\) (lower panel, black histogram-like line) profiles toward NGC 6334 F at 38.3 GHz, as in Figure 4. The solid red and dashed blue curves in the upper panel are Gaussian components I and I-s, respectively. The magenta, solid blue, and green curves are components IIa, IIb, and IIc, respectively. The fit parameters for all five components are listed in Table 2. The curves superposed on Stokes \(V\) in the lower panel are the derivatives of the corresponding colored curves in the upper panel, scaled by the fitted value of \(zB_{\rm los}\) for each curve listed in Table 3.
| Frequency (GHz) | Component | \(zB_{\rm los}\) (Hz) |
|---|---|---|
| **Significant Detections** | | |
| 38.3 | I | \(-45.10\pm 3.40\) |
| 38.5 | I | \(-46.08\pm 3.63\) |
| 38.3 | IIa | \(-7.48\pm 1.81\) |
| 38.5 | IIa | \(-12.88\pm 2.55\) |
| 38.3 | IIb | \(16.26\pm 2.59\) |
| 38.5 | IIb | \(20.07\pm 3.96\) |
| **Upper Limits** | | |
| 38.3 | I-s | \(-15\pm 49\) |
| 38.5 | I-s | \(-27\pm 41\) |
| 38.3 | IIc | \(68\pm 10\) |
| 38.5 | IIc | \(34\pm 14\) |

Table 3: Zeeman Effect Measurements
In both the single dish observations of Haschick et al. (1989) and the ATCA observations of Ellingsen et al. (2018), the 38.5 GHz spectral profile has higher intensity than the 38.3 GHz profile, consistent with our VLA observations (Table 2).
We have obtained significant detections of the Zeeman effect in components I, IIa, and IIb (Table 3), with \(zB_{\rm los}\) in the range 8-46 Hz. Obtaining the line-of-sight magnetic field values from these is possible only if we know the value of the Zeeman splitting factor \(z\) for the 38.3 and 38.5 GHz transitions. Lankhaar et al. (2018) published values of \(z\) for a wide range of CH\({}_{3}\)OH maser transitions, but not for the 38 GHz transitions. If, however, the values of \(z\) for these transitions are near 1 Hz mG\({}^{-1}\), as they are for many prominent transitions of CH\({}_{3}\)OH, then we would get \(B_{\rm los}\) values of the same order as \(zB_{\rm los}\) in these regions. Values in the range 8-46 mG appear reasonable for high mass star forming regions. For example, from the statistical analysis of a flux-limited sample of 6.7 GHz Class II CH\({}_{3}\)OH masers, Surcis et al. (2022) found line-of-sight magnetic fields in the range 9-40 mG. It is worth noting that the CH\({}_{3}\)OH maser transitions result from a series of hyperfine transitions, each of which has a different \(z\). Therefore, the magnetic field values would be different depending on which hyperfine transition, or combination of hyperfine transitions, is responsible for the maser transition. Note also that the reversal in sign of \(B_{\rm los}\) from component IIa to IIb is usually interpreted by astronomers as a field reversal in the regions traced by these masers. However, Lankhaar et al. (2018) have proposed a different scenario in which the change in sign is caused by the population inversion of two different hyperfine transitions. Values in the range 8-46 mG are also in general agreement with field strengths of 4.4-6.4 mG measured in 1.6 and 6.0 GHz OH maser lines in NGC 6334 F by Chanapote et al. (2019).
Following standard procedure in reporting the Zeeman effect, we now discuss whether the detected signal could be caused by instrumental effects, or by processes other than the Zeeman effect. It is well known that a velocity gradient across an extended source could mimic a Stokes \(V\) signal caused by the Zeeman effect. However, masers are compact sources confined to a narrow velocity range. Therefore, it is unlikely that the observed Stokes \(V\) signal reported in this paper is due to such a velocity gradient. Moreover, the close agreement of the detected \(zB_{\rm los}\) values at 38.3 and 38.5 GHz, i.e., two transitions observed in independent spectral windows, gives us confidence that this is not a spurious detection. Processes other than the Zeeman effect that could contribute to structure in the Stokes \(V\) profile include, for masers with strong linear polarization, changes in the orientation of the magnetic field along the line of sight that could cause rotation of the linear polarization vectors to produce circular polarization (Wiebe & Watson, 1998). In order to optimize our observations for detection of the Zeeman effect, we did not propose full-polarization observations. We note, however, that for a wide array of Class II CH\({}_{3}\)OH maser transitions, Breen et al. (2019) found linearly polarized emission in the range 1.5-7.5%. If the 38 GHz Class II CH\({}_{3}\)OH maser transitions reported in this paper have similar levels of linear polarization, then it is unlikely that the observed Stokes \(V\) could be caused by the rotation of linear polarization vectors due to changes in the magnetic field orientation along the line of sight. Another possible non-Zeeman origin comes from maser radiation scattering off foreground molecules that can enhance antisymmetric spectral profiles in Stokes \(V\) (Houde, 2014). In such cases, if the Stokes \(V\) were considered to be caused solely by the Zeeman effect, then the resulting magnetic field strengths would be too large, as demonstrated for SiO masers by Houde (2014). Unless \(z\) for 38 GHz transitions is determined to be extremely low so that the values of \(zB_{\rm los}\) we have detected translate to very high magnetic fields, it appears unlikely that this effect is of consequence for our observations.
Figure 6: As in Figure 4, but at 38.5 GHz.
Figure 7: As in Figure 5, but at 38.5 GHz.
Finally, if the maser stimulated emission rate \(R\) is larger than the frequency shift due to the Zeeman effect \(g\Omega\), which is the product of the Lande g-value for the upper state of the transition and the gyrofrequency, \(\Omega\), of the electron, a rotation of the axis of symmetry for the molecular quantum states could also cause circular polarization (Vlemmings et al., 2011). In our observations toward NGC 6334 F, we have \(g\Omega\approx 10\) s\({}^{-1}\). The stimulated emission rate \(R\), taken from Vlemmings et al. (2011), is
\[R\simeq\frac{AkT_{b}\Delta\Omega}{4\pi h\nu} \tag{2}\]
where \(k\) and \(h\) are the Boltzmann and Planck constants, respectively, \(\nu\) is the frequency of the maser transition, \(A\) is the Einstein coefficient, equal to \(4.726\times 10^{-8}\) s\({}^{-1}\) for the \(6_{2}\to 5_{3}\) A\({}^{-}\) transition at 38.3 GHz, and \(4.829\times 10^{-8}\) s\({}^{-1}\) for the \(6_{2}\to 5_{3}\) A\({}^{+}\) transition at 38.5 GHz, \(T_{b}\) is the maser brightness temperature, and \(\Delta\Omega/4\pi\approx 10^{-3}\) is a conservative estimate for the maser beaming angle (Nesterenok, 2016). The lower limit on \(T_{b}\) from our observations is \(10^{6}\) K. Using these values, equation (2) gives \(R\approx 10^{-5}\) s\({}^{-1}\), which in turn implies that \(R\ll g\Omega\). Therefore it is unlikely that a rotation of the axis of symmetry is causing the splitting that results in the observed Stokes \(V\) profile. Moreover, such an effect would cause an intensity-dependent polarization, but component IIa with higher intensity than IIb has a lower \(zB_{\rm los}\) than that detected in IIb.
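For a quick numerical check of the argument above, equation (2) can be evaluated directly with the quoted values; the short sketch below uses SI constants and the 38.3 GHz line.

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
A = 4.726e-8          # Einstein A for 6_2 -> 5_3 A- at 38.3 GHz, 1/s
nu = 38.293270e9      # Hz
T_b = 1e6             # K, lower limit on the maser brightness temperature
beaming = 1e-3        # conservative estimate of Delta_Omega / (4 pi)

R = A * k_B * T_b * beaming / (h * nu)
print(f"R ~ {R:.1e} s^-1")  # about 2.6e-5 s^-1, i.e. R << g*Omega ~ 10 s^-1
```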
Class II CH\({}_{3}\)OH masers are exclusively found in high mass star forming regions and are located close to the protostar, unlike Class I CH\({}_{3}\)OH masers which are located in outflows. However, there is as yet no consensus on the structures that they trace in these protostellar environments. One scenario puts them in the accretion disk itself, whereas an alternative places them in the intermediate region between the outflow and the disk (Sanna et al., 2010; Goddi et al., 2011; Sugiyama et al., 2014; Bartkiewicz et al., 2020). We can use our calculated value for \(B_{\rm los}\) to compare the magnetic energy density with other relevant quantities in NGC 6334 F. The magnetic energy density is given by \(B^{2}/8\pi\), where \(B^{2}=3B_{\rm los}^{2}\), as determined by Crutcher (1999) on statistical grounds; we note that this is strictly only valid for an ensemble of measurements. For \(B_{\rm los}=8\) mG reported in Section 3, which is the lowest magnetic field value we measure, the magnetic energy density is equal to \(7.6\times 10^{-6}\) erg cm\({}^{-3}\). The kinetic energy density is a relevant quantity with which to compare this value; it is given by \((3/2)\,mn\sigma^{2}\), where \(m=2.8\,m_{p}\), with \(m_{p}\) being the proton mass, the numerical factor of 2.8 accounting for 10% helium. The particle density is \(n=10^{6}\) cm\({}^{-3}\), from modeling of several Class II CH\({}_{3}\)OH maser lines in NGC 6334 F, including the 38 GHz transitions (Cragg et al., 2001). The velocity dispersion is given by \(\sigma=\Delta v/(8\ln 2)^{1/2}\). Rather than using the narrower \(\Delta v\) from maser observations, which may not be indicative of the thermal motions in the clump of gas where the maser transition arises, we use the broader value of 3 km s\({}^{-1}\), from the velocity gradient in HC\({}_{3}\)N and NH\({}_{3}\) emission lines toward NGC 6334 F (Jackson et al., 1988; Bachiller & Cernicharo, 1990). This gives a kinetic energy density equal to \(1.1\times 10^{-7}\) erg cm\({}^{-3}\). This means that the magnetic energy density in the region traced by the masers in NGC 6334 F is at least comparable to, if not larger than, the kinetic energy density. If the 38 GHz CH\({}_{3}\)OH masers occur in a rotating accretion disk, another relevant quantity of interest is the rotational energy density, given by \((1/2)\,I\omega^{2}/V\), where \(\omega\) is the angular velocity and \(V\) is the volume of the disk. The moment of inertia \(I\) is of the order \(MR^{2}\); \(M\) is the mass and \(R\) is the radius of the disk. Using a rotational velocity of \(v=4\) km s\({}^{-1}\) for methanol maser disks from Norris et al. (1998), where \(v=R\omega\), the expression for the rotational energy density then becomes \((1/2)\,(M/V)\,v^{2}\), where \(M/V=2.8\,m_{p}\,n\), with \(n=10^{6}\) cm\({}^{-3}\), as discussed above. The derived rotational energy density is then about \(3.7\times 10^{-7}\) erg cm\({}^{-3}\). Once again, the magnetic energy density is at least comparable to, if not greater than, the rotational energy density. Therefore, if the 38 GHz Class II CH\({}_{3}\)OH masers are located in the accretion disk around the protostar, then the magnetic field likely plays a significant role in shaping the dynamics in that accretion disk.
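The three energy densities above can likewise be verified with a few lines of Python; this is a sketch of the arithmetic only, in cgs units, using the values quoted in the text.

```python
import numpy as np

m_p = 1.6726e-24   # proton mass [g]
B_los = 8e-3       # lowest measured line-of-sight field [G]
n = 1e6            # particle density [cm^-3]
dv = 3e5           # linewidth from HC3N/NH3 emission [cm s^-1]
v_rot = 4e5        # disk rotational velocity [cm s^-1]

u_mag = 3 * B_los**2 / (8 * np.pi)        # B^2 / 8pi with B^2 = 3 B_los^2
sigma = dv / np.sqrt(8 * np.log(2))       # velocity dispersion
u_kin = 1.5 * 2.8 * m_p * n * sigma**2    # (3/2) m n sigma^2
u_rot = 0.5 * 2.8 * m_p * n * v_rot**2    # (1/2) (M/V) v^2

print(f"magnetic:   {u_mag:.1e} erg cm^-3")  # ~7.6e-6
print(f"kinetic:    {u_kin:.1e} erg cm^-3")  # ~1.1e-7
print(f"rotational: {u_rot:.1e} erg cm^-3")  # ~3.7e-7
```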
## 5 Conclusion
We observed the 38.3 and 38.5 GHz Class II CH\({}_{3}\)OH maser transitions toward the high mass star forming region NGC 6334 F for the Zeeman effect. Both transitions have similar spectral profiles, each with two prominent spectral features. We fitted one of these two features with a single narrow Gaussian and a shallow broad component, with FWHM linewidths of 0.18 km s\({}^{-1}\) and 0.33 km s\({}^{-1}\) respectively at 38.3 GHz. These components were labeled I and I-s respectively. We fitted the other prominent spectral feature with two narrow Gaussian components labeled IIa and IIb, and a broad, lower intensity component IIc; at 38.5 GHz, component IIc with a FWHM linewidth of 0.403 km s\({}^{-1}\) is almost twice as wide as component IIb, which has a FWHM linewidth of 0.202 km s\({}^{-1}\). The center velocities of these components are quite close, ranging from \(-10.59\) km s\({}^{-1}\) to \(-11.25\) km s\({}^{-1}\).
We have obtained significant detections of the Zeeman effect in components I, IIa, and IIb. Values of \(zB_{\rm los}\) in
these masers range from 8 to 46 Hz. The Zeeman splitting factor \(z\) for these 38 GHz transitions is not known, but if it is of the order \(\sim\)1 Hz mG\({}^{-1}\) as it is for a range of CH\({}_{3}\)OH maser transitions, then the magnetic fields in the regions traced by these masers would be in the range 8-46 mG. There is a reversal in the sign of \(zB_{\rm los}\) from component IIa to IIb, usually interpreted as a reversal in \(B_{\rm los}\) from one component to another; alternatively, it may be a result of different hyperfine transitions being responsible for the maser transitions. Magnetic fields in the range 8-46 mG agree well with fields detected in the better-known 6.7 GHz CH\({}_{3}\)OH transition. Class II CH\({}_{3}\)OH masers are known to form close to the protostar in accretion disks or in the interface region between the accretion disk and outflow. Using the lowest magnetic field value of \(B_{\rm los}=8\) mG from Table 3, we find that the magnetic energy density is at least comparable to, if not greater than, the kinetic energy density and the rotational energy density in the disk. Such fields may exert significant influence on the dynamics of the accretion disk and likely play an important role in the star formation process.
We thank an anonymous referee for insightful comments that helped to improve the paper. APS acknowledges a Summer Research Grant from the University Research Council (URC) at DePaul University. VLA AIPS (Greisen, 1990)
|
2309.16283 | Self-supervised Cross-view Representation Reconstruction for Change
Captioning | Change captioning aims to describe the difference between a pair of similar
images. Its key challenge is how to learn a stable difference representation
under pseudo changes caused by viewpoint change. In this paper, we address this
by proposing a self-supervised cross-view representation reconstruction
(SCORER) network. Concretely, we first design a multi-head token-wise matching
to model relationships between cross-view features from similar/dissimilar
images. Then, by maximizing cross-view contrastive alignment of two similar
images, SCORER learns two view-invariant image representations in a
self-supervised way. Based on these, we reconstruct the representations of
unchanged objects by cross-attention, thus learning a stable difference
representation for caption generation. Further, we devise a cross-modal
backward reasoning to improve the quality of caption. This module reversely
models a ``hallucination'' representation with the caption and ``before''
representation. By pushing it closer to the ``after'' representation, we
enforce the caption to be informative about the difference in a self-supervised
manner. Extensive experiments show our method achieves the state-of-the-art
results on four datasets. The code is available at
https://github.com/tuyunbin/SCORER. | Yunbin Tu, Liang Li, Li Su, Zheng-Jun Zha, Chenggang Yan, Qingming Huang | 2023-09-28T09:28:50Z | http://arxiv.org/abs/2309.16283v1 | # Self-supervised Cross-view Representation Reconstruction for Change Captioning
###### Abstract
Change captioning aims to describe the difference between a pair of similar images. Its key challenge is how to learn a stable difference representation under pseudo changes caused by viewpoint change. In this paper, we address this by proposing a self-supervised cross-view representation reconstruction (SCORER) network. Concretely, we first design a multi-head token-wise matching to model relationships between cross-view features from similar/dissimilar images. Then, by maximizing cross-view contrastive alignment of two similar images, SCORER learns two view-invariant image representations in a self-supervised way. Based on these, we reconstruct the representations of unchanged objects by cross-attention, thus learning a stable difference representation for caption generation. Further, we devise a cross-modal backward reasoning to improve the quality of caption. This module reversely models a "hallucination" representation with the caption and "before" representation. By pushing it closer to the "after" representation, we enforce the caption to be informative about the difference in a self-supervised manner. Extensive experiments show our method achieves the state-of-the-art results on four datasets. The code is available at [https://github.com/tuyunbin/SCORER](https://github.com/tuyunbin/SCORER).
## 1 Introduction
Change captioning is a new vision-and-language task, which requires not only understanding the contents of two similar images, but also describing their difference with natural language. In the real world, this task enables a variety of applications, such as generating detailed reports about monitored facilities [8, 10] and pathological changes [18, 14].
While single-image captioning is already regarded as a very challenging task, change captioning carries additional difficulties. Simply locating inconspicuous differences is one such challenge (Fig. 1 (a) (b)). Further, in a dynamic environment, it is common to acquire two images under different viewpoints, which leads to pseudo changes in objects' scale and location (Fig. 1 (c) (d)). As such, change
Figure 1: Examples of change captioning. (a) is from a surveillance scene with underlying illumination change. (b) is from an image editing scene. (c) shows a scene with both an object move and a moderate viewpoint change. (d) shows a scene with both an object move and an extreme viewpoint change. Changed objects and referents are shown in red and green boxes, respectively.
captioning needs to characterize the real change while resisting pseudo changes. To locate the change, the most intuitive way is to subtract the two images [22, 7], but this risks computing difference features with noise if the two images are unaligned [31]. Recently, researchers [25] found that the same objects seen from different viewpoints have similar features, so they match object features between two images to predict difference features. This paradigm has been followed by several recent works [11, 24, 38, 31, 30].
Despite this progress, current match-based methods struggle to learn stable difference features under pseudo changes. Specifically, the matching is modeled directly between two image features, usually by cross-attention. However, the features of corresponding objects might shift under pseudo changes. This effect is more severe under drastic viewpoint changes (Fig. 1 (d)). Such feature shift, appearing in most objects, can overwhelm the local feature change, making it less effective to directly match two images.
For this challenge, we make two new observations. (1) While the feature difference may be subtle between a pair of similar images, it is hard to overlook between two images from different pairs. As such, contrastive difference learning between similar/dissimilar images can help the model focus more on feature changes and resist feature shift. (2) Pseudo changes are essentially different distortions of objects, so they merely introduce cross-view variation between two similar images, rather than destroying their underlying similarity. Motivated by these observations, we study cross-view feature matching between similar/dissimilar images, and maximize the alignment of similar ones, so as to learn two view-invariant image representations. Based on these, we can reconstruct the representations of unchanged objects and learn a stable difference representation.
In this paper, we tackle the above challenge with a novel **S**elf-supervised **C**r**O**ss-view **RE**presentation **R**econstruction (SCORER) network, which learns a stable difference representation while resisting pseudo changes for caption generation. Concretely, given two similar images, we first devise a multi-head token-wise matching (MTM) to model relationships between cross-view features from similar/dissimilar images, via fully interacting different feature subspaces. Then, by maximizing cross-view contrastive alignment of the given image pair, SCORER learns their representations that are invariant to pseudo changes in a self-supervised way. Based on these, SCORER mines their reliable common features by cross-attention, so as to reconstruct the representations of unchanged objects. Next, we fuse the representations into two images to highlight the unchanged objects and implicitly infer the difference. In this manner, we obtain a difference representation that not only captures the change, but also preserves referent information, thus generating a high-level linguistic sentence with a transformer decoder.
To improve the quality of sentence, we further design a cross-modal backward reasoning (CBR) module. CBR first reversely produces a "hallucination" representation with the full representations of sentence and "before" image, where the "hallucination" is modeled based on the viewpoint of "before". Then, we push it closer to the "after" representation by maximizing their cross-view contrastive alignment. Through this self-supervised manner, we ensure that the generated sentence is informative about the difference.
**Our key contributions are**: **(1)** We propose SCORER to learn two view-invariant image representations for reconstructing the representations of unchanged objects, so as to model a stable difference representation under pseudo changes. **(2)** We devise MTM to model relationships between cross-view images by fully interacting their different feature subspaces, which plays a critical role in view-invariant representation learning. **(3)** We design CBR to improve captioning quality by enforcing that the generated caption is informative about the difference. **(4)** Our method performs favorably against the state-of-the-art methods on four public datasets with different change scenarios.
## 2 Related Work
**Change Captioning** is a new task in vision-language understanding and generation [13, 19, 17, 29, 5, 35]. The pioneer works [10, 27] describe the difference between two aligned images (Fig. 1 (a) (b)). Since there usually exist viewpoint changes in a dynamic environment, recent works [22, 11] collect two datasets to simulate moderate (Fig. 1 (c)) and extreme viewpoint changes (Fig. 1 (d)). To describe the difference under viewpoint changes, previous works [22, 15] compute the difference by direct subtraction, which can produce noisy difference features [25]. Recent methods [11, 24, 31, 30, 28, 39] directly match two images to predict difference features. However, due to the influence of pseudo changes, these methods struggle to learn stable difference features. In contrast, our SCORER first learns two view-invariant image representations by maximizing their cross-view contrastive alignment. Then, it mines their common features to reconstruct the representations of unchanged objects, thus learning a stable difference representation for caption generation. We note that the latest work [38] pre-trains the model with three self-supervised tasks, in order to improve cross-modal alignment. Different from it, we enforce the cross-modal alignment by implementing cross-modal backward reasoning in a self-supervised way. Meanwhile, our overall architecture is trained in an end-to-end manner, which improves the training efficiency.
**Token-wise Matching** has been used in recent image/video retrieval works [37, 36] to compute cross-modal interaction between image/video and text features. However, since pseudo changes induce feature shift between object pairs, it is insufficient to only match cross-view features at the token level. Hence, we further design a multi-head token-wise matching for finer-level interaction between different feature subspaces of cross-view images. This is key to learning view-invariant representations.
**Cross-modal Consistency Constraint** verifies the quality of the caption by using it and the "before" image to rebuild the "after" image. This idea has been explored by recent works [7, 11]. However, both works only enforce consistency among the caption and the changed object in the "before" and "after" images, while ignoring constraints on referents. If the changed object is similar to other objects (Fig. 1 (c) (d)), describing both the change and its referent helps convey accurate change information. Considering this, we perform backward reasoning with the full representations of the "before" and "after" images, which helps generate a high-level sentence about the change and its referent.
## 3 Methodology
As shown in Fig. 2, our method consists of four parts: (1) A pre-trained CNN encodes a pair of cross-view images into two representations. (2) The proposed SCORER learns two view-invariant representations to reconstruct the representations of unchanged objects and model the difference representation. (3) A transformer decoder translates the difference representation into a high-level linguistic sentence. (4) The proposed CBR improves the quality of sentence via enforcing it to be informative about the difference.
### Cross-view Image Pair Encoding
Formally, given a pair of images "before" \(I_{bef}\) and "after" \(I_{aft}\), we utilize a pre-trained CNN model to extract their grid features, denoted as \(X_{bef}\) and \(X_{aft}\), where \(X\in\mathbb{R}^{C\times H\times W}\). \(C\), \(H\), \(W\) indicate the number of channels, height, and width. We first project both representations into a low-dimensional embedding space of \(\mathbb{R}^{D}\):
\[\tilde{X}_{o}=\text{conv}_{2}(X_{o})+pos(X_{o}), \tag{1}\]
where \(o\in(bef,aft)\). \(\text{conv}_{2}\) denotes a 2D-convolutional layer; \(pos\) is a learnable position embedding layer.
### Self-supervised Cross-view Representation Reconstruction
The core module of SCORER is the multi-head token-wise matching (MTM). MTM aims to model relationships between cross-view images by performing fine-grained interaction between different feature subspaces, which plays a key role in view-invariant representation learning. In the following, we first elaborate on MTM and then describe how to use it for view-invariant representation learning. Finally, we introduce how to reconstruct the representations of unchanged objects for difference representation learning.
#### 3.2.1 Multi-head Token-wise Matching.
We first introduce the single-head token-wise matching (TM) and then extend it into the multi-head version. Formally, given a query \(Q\in\mathbb{R}^{N\times D}\) and a key \(K\in\mathbb{R}^{N\times D}\), we first compute the similarity of the \(i\)-th query token with all key tokens and select the maximum one as its token-wise maximum similarity with \(K\). Then, we perform average pooling over the token-wise maximum similarity of all query tokens to obtain the similarity of \(Q\) to \(K\). By analogy, we compute the average token-wise maximum similarity of \(K\) to
Figure 2: The architecture of the proposed method, including a pre-trained CNN model, the **self-supervised cross-view representation reconstruction** network, a transformer decoder, and the **cross-modal backward reasoning** module. \(\tilde{X}_{bef}^{\prime}\) and \(\tilde{X}_{aft}^{\prime}\) denote the “before” and “after” image features from different pairs in the training batch. \(B\) is the batch size; \(N\) indicates the feature number in each image.
\(Q\), which ensures capturing correct relationships between them. The above computation is formulated as follows:
\[\text{TM}(Q,K)=\left[\frac{1}{N}\sum_{i=1}^{N}\max_{j=1}^{N}\left(e_{i,j}\right)+ \frac{1}{N}\sum_{j=1}^{N}\max_{i=1}^{N}\left(e_{i,j}\right)\right]/2, \tag{2}\] \[e_{i,j}=\left(q_{i}\right)^{\top}k_{j}.\]
Further, we extend TM into a multi-head version to jointly match different feature subspaces of \(Q\) and \(K\), so as to perform fine-grained interaction between them:
\[\text{MTM}(Q,K)=\text{ Concat }_{i^{\prime}=1...h}\left(\text{ head }_{i^{\prime}}\right), \tag{3}\] \[\text{head }_{i^{\prime}}=\text{TM}\left(QW_{i^{\prime}}^{Q},KW_{i^ {\prime}}^{K}\right).\]
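For concreteness, a minimal PyTorch sketch of TM (Eq. (2)) and its multi-head extension (Eq. (3)) is given below. The per-head linear projections follow Eq. (3); reducing the per-head similarities to a single scalar by averaging is our assumption, since the reduction is not specified here.

```python
import torch
import torch.nn as nn

class MTM(nn.Module):
    """Multi-head token-wise matching; returns one similarity per image pair."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.wq = nn.Linear(dim, dim, bias=False)  # W^Q of Eq. (3)
        self.wk = nn.Linear(dim, dim, bias=False)  # W^K of Eq. (3)

    @staticmethod
    def token_wise_match(q, k):
        # q, k: (..., N, d). Eq. (2): mean over token-wise maximum similarities.
        e = q @ k.transpose(-1, -2)              # (..., N, N) token similarities
        q2k = e.max(dim=-1).values.mean(dim=-1)  # each query token -> best key
        k2q = e.max(dim=-2).values.mean(dim=-1)  # each key token -> best query
        return 0.5 * (q2k + k2q)

    def forward(self, Q, K):
        # Q, K: (B, N, D); split projections into `heads` feature subspaces.
        B, N, _ = Q.shape
        q = self.wq(Q).view(B, N, self.heads, -1).transpose(1, 2)  # (B, h, N, d)
        k = self.wk(K).view(B, N, self.heads, -1).transpose(1, 2)
        sims = self.token_wise_match(q, k)       # (B, h), one scalar per head
        return sims.mean(dim=-1)                 # assumed head reduction
```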
#### 3.2.2 View-invariant Representation Learning
In a training batch, we sample \(B\) image pairs of "before" and "after". For the \(k\)-th "before" image \(\tilde{X}_{k}^{b}\), the \(k\)-th "after" image \(\tilde{X}_{k}^{a}\) is its positive, while the other "after" images serve as negatives in this batch. First, we reshape \(\tilde{X}\in\mathbb{R}^{D\times H\times W}\) to \(\tilde{X}\in\mathbb{R}^{N\times D}\), where \(N=HW\) denotes the number of features. Then, we use MTM to compute the similarity (a \(B\times B\) matrix) of "before" to "after" and of "after" to "before", respectively. Next, we maximize cross-view contrastive alignment between \(\tilde{X}_{k}^{b}\) and \(\tilde{X}_{k}^{a}\) while minimizing the alignment of non-similar images, by the InfoNCE loss [20]:
\[\mathcal{L}_{b2a}=-\frac{1}{B}\sum_{k}^{B}\log\frac{\exp\left(\text{MTM}\left(\tilde{X}_{k}^{b},\tilde{X}_{k}^{a}\right)/\tau\right)}{\sum_{r}^{B}\exp\left(\text{MTM}\left(\tilde{X}_{k}^{b},\tilde{X}_{r}^{a}\right)/\tau\right)}, \tag{4}\] \[\mathcal{L}_{a2b}=-\frac{1}{B}\sum_{k}^{B}\log\frac{\exp\left(\text{MTM}\left(\tilde{X}_{k}^{a},\tilde{X}_{k}^{b}\right)/\tau\right)}{\sum_{r}^{B}\exp\left(\text{MTM}\left(\tilde{X}_{k}^{a},\tilde{X}_{r}^{b}\right)/\tau\right)},\] \[\mathcal{L}_{\text{cv}}=\frac{1}{2}(\mathcal{L}_{b2a}+\mathcal{L}_{a2b}),\]
where \(\tau\) is the temperature hyper-parameter. In this self-supervised way, we can make the representations of \(\tilde{X}_{bef}\) and \(\tilde{X}_{aft}\) invariant to pseudo changes, so as to facilitate the following cross-view representation reconstruction.
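A sketch of the cross-view loss in Eq. (4), built on the MTM sketch above, is shown below; it uses the fact that InfoNCE with a \(B\times B\) similarity matrix and diagonal targets is a temperature-scaled cross-entropy. The temperature default here is our assumption.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(x_bef, x_aft, mtm, tau=0.07):
    # x_bef, x_aft: (B, N, D) reshaped grid features of paired images.
    B = x_bef.shape[0]
    # sim[k, r] = MTM(before_k, after_r), a B x B similarity matrix.
    sim = torch.stack(
        [mtm(x_bef[k : k + 1].expand(B, -1, -1), x_aft) for k in range(B)]
    )
    targets = torch.arange(B, device=sim.device)     # positives on the diagonal
    l_b2a = F.cross_entropy(sim / tau, targets)      # Eq. (4), "before" -> "after"
    l_a2b = F.cross_entropy(sim.t() / tau, targets)  # Eq. (4), "after" -> "before"
    return 0.5 * (l_b2a + l_a2b)
```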
#### 3.2.3 Cross-view Representation Reconstruction
Based on the two view-invariant representations \(\tilde{X}_{bef}\) and \(\tilde{X}_{aft}\), we use a multi-head cross-attention (MHCA) [33] to mine their common features for reconstructing the representations of unchanged objects in each image. Here, representation reconstruction indicates that the unchanged representations of each image are distilled from the other one, _e.g._, the unchanged representations of \(\tilde{X}_{bef}\) are computed by transferring similar features on \(\tilde{X}_{aft}\) back to the corresponding positions on \(\tilde{X}_{bef}\). In this way, we reconstruct the unchanged representations for each image, respectively:
\[\tilde{X}_{bef}^{u}=\text{MHCA }(\tilde{X}_{bef},\tilde{X}_{aft}, \tilde{X}_{aft}), \tag{5}\] \[\tilde{X}_{aft}^{u}=\text{ MHCA}(\tilde{X}_{aft},\tilde{X}_{bef}, \tilde{X}_{bef}).\]
Then, instead of subtracting them from the image representations [25, 31, 30], which leads to loss of information (_e.g._, referents), we integrate them into the image representations to highlight the unchanged objects and deduce the difference information, so as to learn a stable difference representation for each image:
\[\tilde{X}_{o}^{c}=\text{LN}(\tilde{X}_{o}+\tilde{X}_{o}^{u}). \tag{6}\]
Herein, \(o\in(bef,aft)\) and LN is short for LayerNorm [2]. Finally, we obtain the difference representation between two images by fusing \(\tilde{X}_{bef}^{c}\) and \(\tilde{X}_{aft}^{c}\), which is implemented by a fully-connected layer with the ReLU function:
\[\tilde{X}_{c}=\mathrm{ReLU}\left(\left[\tilde{X}_{bef}^{c};\tilde{X}_{aft}^{c} \right]W_{h}+b_{h}\right), \tag{7}\]
where [;] is a concatenation operation.
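Eqs. (5)-(7) map directly onto standard modules; the sketch below shares one cross-attention block for both directions and treats that sharing, like the head count, as an assumption.

```python
import torch
import torch.nn as nn

class CrossViewReconstruction(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.mhca = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln = nn.LayerNorm(dim)
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x_bef, x_aft):
        # x_bef, x_aft: (B, N, D) view-invariant image representations.
        u_bef, _ = self.mhca(x_bef, x_aft, x_aft)  # Eq. (5): unchanged in "before"
        u_aft, _ = self.mhca(x_aft, x_bef, x_bef)  # Eq. (5): unchanged in "after"
        c_bef = self.ln(x_bef + u_bef)             # Eq. (6)
        c_aft = self.ln(x_aft + u_aft)             # Eq. (6)
        return self.fuse(torch.cat([c_bef, c_aft], dim=-1))  # Eq. (7)
```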
### Caption Generation
After learning \(\tilde{X}_{c}\in\mathbb{R}^{N\times D}\), we use a transformer decoder [33] to translate it into a sentence. First, the multi-head self-attention takes the word features \(E[W]=\{E[w_{1}],...,E[w_{m}]\}\) (ground-truth words during training, predicted words during inference) as inputs and computes a set of intra-relation embedded word features, denoted as \(\hat{E}[W]\). Then, the decoder utilizes \(\hat{E}[W]\) to query the most related features \(\hat{H}\) from \(\tilde{X}_{c}\) via the multi-head cross-attention. Afterward, \(\hat{H}\) is passed to a feed-forward network to obtain an enhanced representation \(\tilde{H}\). Finally, the probability distributions of target words are calculated by:
\[\tilde{W}=\mathrm{Softmax}\left(\tilde{H}W_{c}+b_{c}\right), \tag{8}\]
where \(W_{c}\in\mathbb{R}^{D\times U}\) and \(b_{c}\in\mathbb{R}^{U}\) are the parameters to be learned; \(U\) is the dimension of vocabulary size.
### Cross-modal Backward Reasoning
To improve the quality of the generated sentence, we devise the CBR to first reversely model a "hallucination" representation with the sentence and the "before" image. Then, we push it closer to the "after" representation to enforce the sentence to be informative about the difference. Concretely, we first fuse \(\tilde{H}\in\mathbb{R}^{m\times D}\) by mean-pooling to obtain a sentence feature \(\tilde{T}\in\mathbb{R}^{D}\). Then, we broadcast \(\tilde{T}\) as \(\tilde{T}\in\mathbb{R}^{D\times H\times W}\) and concatenate it with \(\tilde{X}_{bef}\), so as to obtain the "hallucination" \(\hat{X}_{hal}\):
\[\hat{X}_{hal}=\text{conv}_{2}([\tilde{X}_{bef};\tilde{T}]),\hat{X}_{hal}\in \mathbb{R}^{D\times H\times W}. \tag{9}\]
\(\hat{X}_{hal}\) and \(\tilde{X}_{bef}\) are kept the same shape to ensure that spatial information is not collapsed. Next, we capture the relationships between different locations in \(\hat{X}_{hal}\) with multi-head self-attention (MHSA), which is essential for backward reasoning and is computed by:
\[\tilde{X}_{hal}=\text{conv}_{2}[\text{MHSA }(\hat{X}_{hal},\hat{X}_{hal},\hat{X}_{ hal})], \tag{10}\]
Since the "hallucination" representation is produced based on the viewpoint of "before" representation, it is less effective to directly match it with the "after" representation.
To this end, we sample unrelated representations of "hallucination" and "after" from different pairs, which serve as negative candidates for CBR. Similarly, in each batch, for the \(k\)-th "hallucination" \(\tilde{X}_{k}^{h}\), the \(k\)-th "after" \(\tilde{X}_{k}^{a}\) is its positive, while the other "after" images are the negatives. Again, we use MTM to capture relationships between positive/negative pairs. Subsequently, we maximize the cross-view contrastive alignment of positive pairs by the InfoNCE loss [20], which is similar to Eq. (4):
\[\mathcal{L}_{\text{cm}}=\frac{1}{2}(\mathcal{L}_{h2a}+\mathcal{L}_{a2h}). \tag{11}\]
In this self-supervised manner, we encourage the sentence to sufficiently describe the difference information.
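Eqs. (9)-(10) can be sketched as below; the 1x1 convolution kernels are an assumption, since only 2D convolutional layers are specified. The resulting map is then matched against the "after" features with MTM and the InfoNCE loss of Eq. (11), as in the cross-view loss sketch above.

```python
import torch
import torch.nn as nn

class CBR(nn.Module):
    """Builds the "hallucination" map from the sentence and "before" features."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.conv_in = nn.Conv2d(2 * dim, dim, kernel_size=1)
        self.mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x_bef, h_words):
        # x_bef: (B, D, H, W); h_words: (B, m, D) decoder hidden states.
        B, D, H, W = x_bef.shape
        t = h_words.mean(dim=1).view(B, D, 1, 1).expand(-1, -1, H, W)
        hal = self.conv_in(torch.cat([x_bef, t], dim=1))        # Eq. (9)
        seq = hal.flatten(2).transpose(1, 2)                    # (B, HW, D)
        attn, _ = self.mhsa(seq, seq, seq)                      # Eq. (10)
        return self.conv_out(attn.transpose(1, 2).reshape(B, D, H, W))
```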
### Joint Training
The proposed overall network is trained in an end-to-end manner by maximizing the likelihood of the observed word sequence. Given the ground-truth words \((w_{1}^{*},\ldots,w_{m}^{*})\), we minimize the negative log-likelihood loss:
\[\mathcal{L}_{cap}(\theta)=-\sum_{t=1}^{m}\log p_{\theta}\left(w_{t}^{*}\mid w _{<t}^{*}\right), \tag{12}\]
where \(p_{\theta}\left(w_{t}^{*}\mid w_{<t}^{*}\right)\) is computed by Eq. (8), and \(\theta\) are the parameters of the network. Besides, the network is self-supervised by the losses of two contrastive alignments. Hence, the final loss function is optimized as follows:
\[\mathcal{L}=\mathcal{L}_{cap}+\lambda_{v}\mathcal{L}_{\text{cv}}+\lambda_{m} \mathcal{L}_{\text{cm}}, \tag{13}\]
where \(\lambda_{v}\) and \(\lambda_{m}\) are the trade-off parameters, which are discussed in the supplementary material.
## 4 Experiments
### Datasets
**CLEVR-Change** is a large-scale dataset [22] with moderate viewpoint change. It has 79,606 image pairs, including five change types, _i.e._, "Color", "Texture", "Add", "Drop", and "Move". We use the official split with 67,660 pairs for training, 3,976 for validation, and 7,970 for testing.
**CLEVR-DC** is a large-scale dataset [11] with extreme viewpoint shift. It includes 48,000 pairs with the same change types as CLEVR-Change. We use the official split with 85% for training, 5% for validation, and 10% for testing.
**Image Editing Request** dataset [27] includes 3,939 aligned image pairs with 5,695 editing instructions. We use the official split with 3,061 image pairs for training, 383 for validation, and 495 for testing.
**Spot-the-Diff** dataset [10] includes 13,192 aligned image pairs from surveillance cameras. Following SOTA methods, we mainly evaluate our model in a single-change setting. The dataset is split into training, validation, and testing with a ratio of 8:1:1, following the official split.
### Evaluation Metrics
Following the current state-of-the-art methods, five metrics are used to evaluate the generated sentences, _i.e._, BLEU-4 (B) [21], METEOR (M) [3], ROUGE-L (R) [16], CIDEr (C) [34], and SPICE (S) [1]. The results are computed based on the Microsoft COCO evaluation server [4].
### Implementation Details
For a fair comparison, we follow the SOTA methods to use a pre-trained ResNet-101 [6] to extract grid features of an image pair, with the dimension of 1024 \(\times\) 14 \(\times\) 14. We first project these features into a lower dimension of 512. The hidden size in the overall model and the word embedding size are set to 512 and 300. The proper head and layer numbers of SCORER are discussed below. The head and layer numbers in the decoder are set to 8 and 2 on the four datasets. During training, we use the Adam optimizer [12] to minimize the overall loss of Eq. (13). During inference, the greedy decoding strategy is used to generate captions. Both training and inference are implemented with PyTorch [23] on an RTX 3090 GPU. More implementation details are described in the supplementary material.
### Performance Comparison
#### 4.4.1 Results on the CLEVR-Change Dataset.
We compare with the state-of-the-art methods in: 1) total performance under both semantic and pseudo changes; 2) semantic change; 3) different change types. The comparison methods are categorized into 1) end-to-end training: DUDA [22], DUDA+ [7], R\({}^{3}\)Net+SSP [31], VACC [11], SRDRL+AVS [32], SGCC [15], MCCFormers-D [24], IFDC [9], BDLSCR [26], NCT [30], and VARD-Trans [28]; 2) reinforcement learning: M-VAM+RAF [25]; 3) pre-training: PCL w/ pre-training [38].
In Table 1, our method achieves the best results on all metrics against the end-to-end training methods. Besides, our method performs much better than these two methods augmented by pre-training and reinforcement learning.
We note that SCORER outperforms MCCFormers-D by a large margin. MCCFormers-D is a classic match-based method that directly correlates two image representations to learn a difference representation, which is then fed into a transformer decoder for caption generation. Different from it, our SCORER first learns two view-invariant image representations by maximizing their cross-view contrastive alignment. Then, SCORER reconstructs the representations of unchanged objects, so as to learn a stable difference representation under pseudo changes for caption generation.
In Table 2, under the detailed change types, our method surpasses the current methods by a large margin in almost every category. Under the most difficult type "Move", our SCORER+CBR achieves a relative improvement of 4.7% over R\({}^{3}\)Net+SSP. This validates the necessity of view-invariant representation learning. Moreover, under different settings, CBR yields an extra performance boost, which shows that it does improve captioning quality.
#### 4.4.2 Results on the CLEVR-DC Dataset
On CLEVR-DC with extreme viewpoint changes, we compare SCORER/SCORER+CBR with several state-of-the-art methods: DUDA/DUDA+CC [22], M-VAM/M-VAM+CC [25], VA/VACC [11], MCCFormers-D [24], NCT [30], and VARD-Trans [28]. For fair comparison, we group them based on the usage of the cross-modal consistency constraint. We implement MCCFormers-D based on the released code on the CLEVR-DC and Image Editing Request datasets.
The results are shown in Table 3. Our SCORER achieves the best results on most metrics. This benefits from learning two view-invariant representations to reconstruct representations of unchanged objects, thus learning a stable difference representation under extreme viewpoint changes. When we implement CBR, the performance of SCORER+CBR is further boosted, especially achieving 16.7% improvement against VACC on CIDEr. This shows that our CBR can calibrate the model to generate a linguistic sentence describing the change and its referent.
#### 4.4.3 Results on the Image Editing Request Dataset
To validate the generalization of our method, we conduct experiments on the challenging Image Editing Request (IER) dataset. We compare with the following SOTA methods: DUDA [22], Dyn rel-att [27], MCCFormers-D [24], BDLSCR [26], NCT [30], and VARD-Trans [28].
\begin{table}
\begin{tabular}{c|c c c c c|c c c c} \hline \hline & \multicolumn{6}{c|}{Total} & \multicolumn{6}{c}{Semantic Change} \\ Method & B & M & R & C & S & B & M & R & C & S \\ \hline PCL w/ Pre-training (AAAI 2022) [38] & 51.2 & 36.2 & 71.7 & **128.9** & - & - & - & - & - & - \\ \hline M-VAM+RAF (ECCV 2020) [25] & 51.3 & 37.8 & 70.4 & 115.8 & 30.7 & - & - & - & - & - \\ \hline DUDA (ICCV 2019) [22] & 47.3 & 33.9 & - & 112.3 & 24.5 & 42.9 & 29.7 & - & 94.6 & 19.9 \\ DUDA+ (CVPR 2021) [7] & 51.2 & 37.7 & 70.5 & 115.4 & 31.1 & 49.9 & 34.3 & 65.4 & 101.3 & 27.9 \\ R\({}^{3}\)Net+SSP (EMNLP 2021) [31] & 54.7 & 39.8 & 73.1 & 123.0 & 32.6 & 52.7 & 36.2 & 69.8 & 116.6 & 30.3 \\ VACC (ICCV 2021) [11] & 52.4 & 37.5 & - & 114.2 & 31.0 & - & - & - & - & - \\ SGCC (ACM MM 2021) [15] & 51.1 & 40.6 & 73.9 & 121.8 & 32.2 & - & - & - & - & - \\ SRDRL+AVS (ACL 2021) [32] & 54.9 & 40.2 & 73.3 & 122.2 & 32.9 & 52.7 & 36.4 & 69.7 & 114.2 & 30.8 \\ MCCFormers-D (ICCV 2021) [24] & 52.4 & 38.3 & - & 121.6 & 26.8 & - & - & - & - & - \\ IFDC (TMM 2022) [9] & 49.2 & 32.5 & 69.1 & 118.7 & - & 47.2 & 29.3 & 63.7 & 105.4 & - \\ NCT (TMM 2023) [30] & 55.1 & 40.2 & 73.8 & 124.1 & 32.9 & 53.1 & 36.5 & 70.7 & 118.4 & 30.9 \\ VARD-Trans (TIP 2023) [28] & 55.4 & 40.1 & 73.8 & 126.4 & 32.6 & - & - & - & - & - \\
**SCORER (Ours)** & 55.8 & 40.8 & 74.0 & 126.0 & 33.0 & 54.1 & 37.4 & 71.5 & 122.0 & 31.2 \\
**SCORER+CBR (Ours)** & **56.3** & **41.2** & **74.5** & **126.8** & **33.3** & **54.4** & **37.6** & **71.7** & **122.4** & **31.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with the state-of-the-art methods on CLEVR-Change under the settings of total performance and semantic change.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline & \multicolumn{6}{c}{CIDEr} \\ Method & CL & T & A & D & MV \\ \hline PCL w/ PT & 131.2 & 101.1 & **133.3** & 116.5 & 81.7 \\ \hline M-VAM+RAF & 122.1 & 98.7 & 126.3 & 115.8 & 82.0 \\ DUDA & 120.4 & 86.7 & 108.2 & 103.4 & 56.4 \\ DUDA+ & 120.8 & 89.9 & 119.8 & 123.4 & 62.1 \\ R\({}^{3}\)Net+SSP & 139.2 & 123.5 & 122.7 & 121.9 & 88.1 \\ SRDRL+AVS & 136.1 & 122.7 & 121.0 & 126.0 & 78.9 \\ BDLSCR & 136.1 & 122.7 & 121.0 & 126.0 & 78.9 \\ IFDC & 133.2 & 99.1 & 128.2 & 118.5 & 82.1 \\ NCT & 140.2 & 128.8 & 128.4 & 129.0 & 86.0 \\
**SCORER** & 143.2 & 135.2 & 129.4 & 132.6 & **91.6** \\
**SCORER+CBR** & **146.2** & **133.7** & **131.1** & **133.9** & **92.2** \\ \hline \hline \end{tabular}
\end{table}
Table 2: A detailed breakdown of evaluation on CLEVR-Change with different change types: “(CL) Color”, “(T) Texture”, “(A) Add”, “(D) Drop”, and “(MV) Move”. PT is short for pre-training.
Table 4 shows that SCORER+CBR outperforms the SOTA methods on most metrics. Especially on BLEU-4, SCORER+CBR obtains a relative improvement of 23.5% over the latest method NCT (TMM 2023). The edited objects are usually inconspicuous. This indicates that the proposed method can fully mine the common features by maximizing cross-view contrastive alignment between two images, so as to accurately describe which part of the "before" image has been edited. Further, the generated sentence is refined in the process of cross-modal backward reasoning.
#### 4.4.4 Results on the Spot-the-Diff Dataset
To further validate the generalization, we conduct the experiment on Spot-the-Diff that includes aligned image pairs from the surveillance cameras. The following SOTA methods are compared: DUDA+ [7], M-VAM/M-VAM+RAF [25], VACC [11], SRDRL+AVS [32], MCCFormers-D [24], IFDC [9], BDLSCR [26], and VARD-Trans [28].
In Table 5, our method achieves superior results on most metrics, which shows its generalization to different scenarios. Besides, our method performs lower on METEOR and SPICE when implementing CBR. Our conjecture is that image pairs in this dataset may actually contain one or more changes. For fair comparison, we conduct experiments mainly under the single-change setup. This makes the "hallucination" representation, which is reversely modeled from the "before" representation and a single-change caption, not fully matched with the "after" representation. As such, SCORER+CBR does not gain a significant improvement.
In short, compared with the state-of-the-art methods in different change scenarios, our method achieves impressive performance. The superiority mainly results from the facts that 1) SCORER learns two view-invariant image representations for reconstructing the representations of unchanged objects, so as to learn a stable difference representation for generating a linguistic sentence; 2) CBR can further improve the quality of the generated sentence.
Besides, when we augment RR and SCORER with CBR, both RR+CBR and SCORER+CBR achieve better performances. This not only validates that CBR improves captioning quality, but also proves that CBR is generalizable.
Table 7 shows the ablation study of each module under semantic change and under only pseudo change, separately. We obtain observations similar to those for the total performance. Besides, we find that SCORER is much better than RR under semantic change, but under only pseudo change, SCORER brings less gain. This is because, in this case, the learned difference representation contains less information, making it difficult for SCORER to align it with words. By contrast, SCORER+CBR significantly improves over RR in both settings, which shows that SCORER and CBR complement each other. More ablation studies on the other datasets are in the supplementary material.
**Ablation Study of MTM.** Instead of using MTM to perform fine-grained matching between different feature subspaces of cross-view images, we use max/mean-pooling to obtain the global feature of each image and compute their similarity. Besides, we implement TM without the multi-head operation. The results in Fig. 3 show that MTM achieves the best results, which demonstrates that it plays a critical role in view-invariant representation learning. Besides, implementing only token-wise matching is not better than simple mean-pooling. Our conjecture is that the changed object commonly appears in a local region with weak features, so it is insufficient to reveal this slight difference by interacting features only at the token level. As such, it is necessary to match two image features at a finer level, _i.e._, the subspace level.
**Effect of Head Number of SCORER.** We further investigate the effect of head number for SCORER, _i.e._, the head number of MTM and MHCA (Eq. (5)). The results are shown in Fig. 4. We find that the best results are achieved on the four datasets when setting the head number as 8.
**Effect of Layer Number of SCORER.** We investigate the effect of the layer number of SCORER in Fig. 5. On the four datasets, we find that increasing the layer number does not bring better performance, because deeper layers can lead to over-fitting. Besides, the optimal layer number is deepest on Spot-the-Diff. Our conjecture is that, in a surveillance scenario, objects lack canonical poses and the background information is more complex. As such, we empirically set the layer numbers to 2, 1, 3, and 2 on the four datasets.
### Captioning and change localization results with varied viewpoints
To intuitively evaluate the efficacy of our method in handling pseudo changes, we show the captioning (Fig. 6 (a)) and change localization (Fig. 6 (b)) results of SCORER+CBR and the SOTA method MCCFormers-D [24] with varied viewpoints. The amount of viewpoint change is measured by the IoUs of objects' bounding boxes across an image pair (lower IoU means higher difficulty).
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline \hline & \multicolumn{6}{c|}{Semantic Change} & \multicolumn{6}{c}{Only Pseudo Change} \\ \hline Method & B & M & R & C & S & B & M & R & C & S \\ \hline Subtraction & 50.2 & 34.1 & 67.1 & 108.0 & 28 & 57.3 & 48.4 & 74.7 & 113.8 & 34.0 \\ RR & 53.3 & 37.1 & 70.8 & 119.1 & 30.4 & 61.1 & 50.7 & 76.4 & 114.9 & 34.6 \\ SCORER & 54.3 & 37.5 & 71.5 & 122.0 & 31.2 & 61.4 & 50.6 & 76.5 & 116.4 & 34.7 \\ RR+CBR & 54.1 & 37.4 & 71.5 & **122.4** & 31.2 & 60.7 & 51.2 & 76.9 & 114.9 & 34.6 \\ SCORER+CBR & **54.4** & **37.6** & **71.7** & **122.4** & **31.6** & **62.0** & **51.7** & **77.4** & **117.9** & **35.0** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation study on CLEVR-Change under the evaluation of semantic change and only pseudo change.
Figure 4: Effect of head number of SCORER on four datasets.
Figure 5: Effect of layer number of SCORER on four datasets.
Figure 3: Ablation studies of MTM on four datasets.
Figure 6: Captioning and change localization of varied viewpoints.
For change localization, the pioneer work DUDA [22] tried the Pointing Game to evaluate attention maps of change localization, where maps are computed by using the captured difference to directly query related regions in each image. In contrast, we consider simultaneously evaluating change localization and cross-modal alignment, _i.e_., attention maps of cross-modal alignment, to check whether the model can locate changed regions when generating the corresponding words. This is more challenging but more reasonable. In Fig. 6, we find that our method outperforms MCCFormers-D and shows better robustness under varied viewpoint changes on both evaluations, which benefits from view-invariant representation learning and cross-modal backward reasoning.
### Qualitative Analysis
To intuitively evaluate our method, we conduct qualitative analysis on the four datasets. Fig. 7 illustrates three cases in different change scenarios. For each case, we visualize the generated caption along with the attention weight at each word. When the weight is higher, the region is brighter. We observe that when generating the words about the changed object or its referents, SCORER+CBR can adaptively locate the corresponding regions. In Fig. 8, we visualize the alignment between unchanged objects under different change scenes. The compared method is the SOTA method MCCFormers-D [24]. We implement it based on the released code. We find that when directly correlating two image features, MCCFormers-D only aligns salient objects between two images. Instead, our SCORER first learns two view-invariant representations in a self-supervised way. Based on these, SCORER can better align and reconstruct the representations of unchanged objects, so as to facilitate subsequent difference representation learning. More qualitative examples are shown in the supplementary material.
## 5 Conclusion
This paper proposes a novel SCORER to learn a stable difference representation while resisting pseudo changes. SCORER first learns two view-invariant image representations in a self-supervised way, by maximizing the cross-view contrastive alignment of two images. Based on these, SCORER mines their common features to reconstruct the representations of unchanged objects. This helps learn a stable difference representation for caption generation. Further, we design the CBR to improve captioning quality by enforcing that the generated caption is informative about the difference in a self-supervised manner. Extensive experiments show that our method achieves state-of-the-art results on four public datasets with different change scenarios.
## Acknowledgements
This work was supported by the National Key Research and Development Program of China under Grant (2018AAA0102000), National Nature Science Foundation of China (62322211, U21B2024, 61931008, 62071415, 62236008, U21B2038), Fundamental Research Funds for the Central Universities, "Pioneer", Zhejiang Provincial Natural Science Foundation of China (LDT23F01011F01, LDT23F01015F01, LDT23F01014F01) and "Leading Goose" R&D Program of Zhejiang Province (2022C01068), and Youth Innovation Promotion Association of Chinese Academy of Sciences (2020108).
Figure 8: Visualization of the alignment of unchanged objects computed by MCCFormers-D [24] and our SCORER.
Figure 7: Three cases from different scenarios, where the generated captions along with the attention weight at each word are visualized. |
2309.10194 | The Kernel Density Integral Transformation | Feature preprocessing continues to play a critical role when applying machine
learning and statistical methods to tabular data. In this paper, we propose the
use of the kernel density integral transformation as a feature preprocessing
step. Our approach subsumes the two leading feature preprocessing methods as
limiting cases: linear min-max scaling and quantile transformation. We
demonstrate that, without hyperparameter tuning, the kernel density integral
transformation can be used as a simple drop-in replacement for either method,
offering protection from the weaknesses of each. Alternatively, with tuning of
a single continuous hyperparameter, we frequently outperform both of these
methods. Finally, we show that the kernel density transformation can be
profitably applied to statistical data analysis, particularly in correlation
analysis and univariate clustering. | Calvin McCarter | 2023-09-18T22:54:05Z | http://arxiv.org/abs/2309.10194v2 | # The Kernel Density Integral Transformation
###### Abstract
Feature preprocessing continues to play a critical role when applying machine learning and statistical methods to tabular data. In this paper, we propose the use of the kernel density integral transformation as a feature preprocessing step. Our approach subsumes the two leading feature preprocessing methods as limiting cases: linear min-max scaling and quantile transformation. We demonstrate that, without hyperparameter tuning, the kernel density integral transformation can be used as a simple drop-in replacement for either method, offering protection from the weaknesses of each. Alternatively, with tuning of a single continuous hyperparameter, we frequently outperform both of these methods. Finally, we show that the kernel density transformation can be profitably applied to statistical data analysis, particularly in correlation analysis and univariate clustering.
## 1 Introduction
Feature preprocessing is a ubiquitous workhorse in applied machine learning and statistics, particularly for structured (tabular) data. Two of the most common preprocessing methods are min-max scaling, which linearly rescales each feature to have the range \([0,1]\), and quantile transformation (Bartlett, 1947; Van der Waerden, 1952), which nonlinearly maps each feature to its quantiles, also lying in the range \([0,1]\). Min-max scaling preserves the shape of each feature's distribution, but is not robust to the effect of outliers, such that output features' variances are not identical. (Other linear scaling methods, such as \(z\)-score standardization, can guarantee output uniform variances at the cost of non-identical output ranges.) On the other hand, quantile transformation reduces the effect of outliers, and guarantees identical output variances (and indeed, all moments) as well as ranges; however, all information about the shape of feature distributions is lost.
In this paper, we observe that, by computing definite integrals over the kernel density estimator (KDE) (Rosenblatt, 1956; Parzen, 1962; Silverman, 1986) and tuning the kernel bandwidth, we may construct a tunable "happy medium" between min-max scaling and quantile transformation. Our generalization of these transformations is nevertheless both conceptually simple and computationally efficient, even for large sample sizes. On a wide variety of tabular datasets, we demonstrate that the kernel density integral transformation is broadly applicable for tasks in machine learning and statistics.
We hasten to point out that kernel density estimators of quantiles have been previously proposed and extensively analyzed (Yamato, 1973; Azzalini, 1981; Sheather, 1990; Kulczycki & DaWidowicz, 1999). However, previous works used the KDE to yield quantile estimators with superior statistical qualities. Statistical consistency and efficiency are desired for such estimators, and so the kernel bandwidth \(h\) is chosen such that \(h\to 0\) as sample size \(N\to\infty\). However, in our case, we are not interested in estimating the true quantiles or the cumulative distribution function (c.d.f.), but instead use the KDE merely to construct a preprocessing transformation for downstream prediction and data analysis tasks. In fact, as we will see later, we choose \(h\) to be large and non-vanishing, so that our preprocessed features deviate substantially from the empirical quantiles.
Our contributions are as follows. First, we propose the kernel density integral transformation as a feature preprocessing method that includes the min-max and quantile transforms as limiting cases. Second, we provide a computationally-efficient approximation algorithm that scales to large sample sizes. Third, we demonstrate the use of kernel density integrals in correlation analysis, enabling a useful compromise between
Pearson's \(r\) and Spearman's \(\rho\). Fourth, we propose a discretization method for univariate data, based on computing local minima in the kernel density estimator of the kernel density integrals.
## 2 Methods
### Preliminaries
Our approach is inspired by the behavior of min-max scaling and quantile transformation, which we briefly describe below.
The min-max transformation can be derived by considering a random variable \(X\) defined over some known range \([U,V]\). In order to transform this variable onto the range \([0,1]\), one may define the mapping \(S:\mathbb{R}\rightarrow[0,1]\) defined as \(x\to S(x):=\frac{x-U}{V-U}\), with the upper and lower bounds achieved at \(x=U\) and \(x=V\), respectively. In practice, one typically observes \(N\) random samples \(X_{1},\dots,X_{N}\), which may be sorted into order statistics \(X_{(1)}\leq X_{(2)}\leq\dots\leq X_{(N)}\). Substituting the minimum and maximum for \(U\) and \(V\) respectively, we obtain the min-max scaling function
\[\hat{S}_{N}(x):=\begin{cases}\frac{x-X_{(1)}}{X_{(N)}-X_{(1)}},&X_{(1)}\leq x \leq X_{(N)}\\ 0,&x\leq X_{(1)}\\ 1,&X_{(N)}\leq x.\end{cases} \tag{1}\]
Quantile transformation can be derived by considering a random variable \(X\) with known continuous and strictly monotonically increasing c.d.f. \(F_{X}:\mathbb{R}\rightarrow[0,1]\). The quantile transformation (to be distinguished from the quantile function, or inverse c.d.f.) is identical to the c.d.f., simply mapping each input value \(x\) to \(F_{X}(x):=P[X\leq x]\). One typically observes \(N\) random samples \(X_{1},\dots,X_{N}\) and obtains an empirical c.d.f. as follows:
\[\hat{F}_{N}(x)=\frac{1}{N}\sum_{n=1}^{N}I\{X_{n}\leq x\}:=\hat{P}[X\leq x]. \tag{2}\]
The quantile transformation also requires ensuring sensible behavior despite ties in the observed data.
### The kernel density integral transformation
Our proposal is inspired by the observation that, just as the quantile transformation is defined by a data-derived c.d.f., the min-max transformation can also be interpreted as the c.d.f. of the uniform distribution over \([X_{(1)},X_{(N)}]\). We thus propose to interpolate between these via the kernel density estimator (KDE) (Silverman, 1986). Recall the Gaussian KDE with the following density:
\[\widehat{f}_{h}(x)=\frac{1}{N}\sum_{n=1}^{N}K_{h}(x-X_{n})=\frac{1}{Nh}\sum_{ n=1}^{N}K\Big{(}\frac{x-X_{n}}{h}\Big{)}, \tag{3}\]
where Gaussian kernel \(K(z)=\exp(-z^{2}/2)/\sqrt{2\pi}\) and \(h>0\) is the kernel bandwidth. Consider further the definite integral over the KDE above as a function of its endpoints:
\[P_{h}(a,b):=\int_{a}^{b}\widehat{f}_{h}(x)dx. \tag{4}\]
If we replace the empirical c.d.f. in Eq. (2) with the KDE c.d.f. using Eq. (4), we obtain the following:
\[\hat{F}_{N}^{\text{KDI,naive}}(x;h):=P_{h}(-\infty,x). \tag{5}\]
However, we change the integration bounds from \((-\infty,\infty)\) to \((X_{(1)},X_{(N)})\), such that we obtain \(\hat{F}_{N}^{\text{KDI}}(x)=0\) for \(x\leq X_{(1)}\) and \(\hat{F}_{N}^{\text{KDI}}(x)=1\) for \(X_{(N)}\leq x\). To do this, while keeping \(\hat{F}_{N}^{\text{KDI}}(x)\) continuous, we define the
final version of the kernel density integral (KD-integral) transformation as follows:
\[\hat{F}_{N}^{\text{KDI}}(x;h):=\begin{cases}\frac{P_{h}(X_{(1)},x)}{P_{h}(X_{(1)},X_{(N)})},&X_{(1)}\leq x<X_{(N)}\\ 0,&x<X_{(1)}\\ 1,&X_{(N)}\leq x.\end{cases} \tag{6}\]
It can be seen that our proposed estimator matches the behavior of the min-max transformation and the quantile transformation at the extrema \(X_{(1)}\) and \(X_{(N)}\). Also, in practice, we parameterize \(h=\alpha\hat{\sigma}_{X}\), where \(\alpha\) is the _bandwidth factor_, and \(\hat{\sigma}_{X}\) is an estimate of the standard deviation of \(X\).
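For reference, Eq. (6) with a Gaussian kernel can be implemented in a few lines; the sketch below is the exact (slow) version, before the speedups discussed in Section 2.3.

```python
import numpy as np
from scipy.stats import norm

def kdi_transform(x_new, x_train, alpha=1.0):
    """Kernel density integral transform, Eq. (6), with h = alpha * std."""
    x_train = np.asarray(x_train, dtype=float)
    lo, hi = x_train.min(), x_train.max()
    h = alpha * x_train.std()

    def P(a, b):  # definite integral of the Gaussian KDE, Eq. (4)
        return np.mean(norm.cdf((b - x_train) / h) - norm.cdf((a - x_train) / h))

    denom = P(lo, hi)
    # Clipping to [lo, hi] gives outputs of 0 and 1 beyond the extrema.
    x_new = np.clip(np.atleast_1d(np.asarray(x_new, dtype=float)), lo, hi)
    return np.array([P(lo, x) for x in x_new]) / denom
```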
As the kernel bandwidth \(h\to\infty\), the KD-integral transformation will converge towards the min-max transformation. Meanwhile, when \(h\to 0\), the KD-integral transformation converges towards the quantile transformation. (Proofs are given in the Appendix.) Furthermore, as noted previously, \(\hat{F}_{N}^{\text{KDI,naive}}(x)\) is exactly the formula for computing the kernel density estimator of quantiles. However, in this paper, we propose (for the first time, to our knowledge) choosing a large kernel bandwidth. Rather than estimating quantiles with improved statistical efficiency as in (Sheather, 1990; Kulczycki & DaWidowicz, 1999), our aim is to carry out a transformation that is an optimal compromise between the min-max transformation and the quantile transformation, for a given downstream task. As we will see in the experiments section, this optimal compromise is often struck at large bandwidths such as \(h=1\cdot\hat{\sigma}_{X}\), even as the sample size \(N\to\infty\).
### Efficient computation
For supervised learning preprocessing, the KD-integral transformation must accommodate separate train and test sets. Thus, it is desirable that it be estimated quickly on a training set, efficiently (yet approximately) represented with low space complexity, and then applied quickly to test samples. For the Gaussian kernel, we compute and store KD-integrals at \(R=\text{min}(R_{\text{max}}=1000,N)\) reference points, equally spaced in \([0,1]\). New points are transformed by linearly interpolating these KD-integrals. This approach has runtime complexity of \(O(RN_{\text{train}})\) at training time and \(O(\log(R)N_{\text{test}})\) at test time, and space complexity of \(O(R)\). While this approach offers tunable control of the error and efficiency at test-time, the tradeoff between precision and training time is not ideal, as shown in Figure 1.
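A sketch of this reference-point scheme, reusing the kdi_transform sketch from Section 2.2, is shown below; the class name is ours, not the package's.

```python
import numpy as np

class KDIInterpolator:
    """Fit exact KD-integrals at R reference points; interpolate at test time."""

    def fit(self, x_train, alpha=1.0, r_max=1000):
        R = min(r_max, len(x_train))
        self.refs_ = np.linspace(np.min(x_train), np.max(x_train), R)
        self.vals_ = kdi_transform(self.refs_, x_train, alpha)  # O(R * N_train)
        return self

    def transform(self, x_new):
        # Binary search + linear interpolation: O(N_test log R).
        # np.interp clamps outside the training range, matching Eq. (6).
        return np.interp(x_new, self.refs_, self.vals_)
```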
To address this, we applied the \(\text{poly}(|x|)e^{-|x|}\) kernel (Hofmeyr, 2019) to our setting. The polynomial-exponential family of kernels, parameterized by the order \(\kappa\) of the polynomial, can be evaluated via dynamic programming with runtime complexity \(O(\kappa N)\), eliminating the quadratic dependence on the number of samples. We use order \(\kappa=4\), which yields a smooth kernel with infinite support that closely approximates the Gaussian kernel with an appropriate rescaling of the bandwidth (Hofmeyr, 2019). More details are given in the Appendix.
As shown in Figure 1, for 10k samples, this results in almost a 1000x speedup compared to exact evaluation of the Gaussian KDE cdf, across a range of bandwidth factors. It also offers a 10x to 100x training time
Figure 1: Precision versus runtime tradeoff for different bandwidth factors \(\alpha\in\{0.1,1,10\}\). The data consisted of \(N_{\text{train}}=10\),000 training points sampled from \(\text{LogNormal}(0,1)\) and \(N_{\text{test}}=10\),000 equally-spaced test points within the range of the training data. The error is computed as the maximum absolute error between estimated values and those from the exact Gaussian KDE cdf. We compare against naively approximating the Gaussian KDE via sampling \(S\in\{30,100,300,10^{3},3\cdot 10^{3},10^{4}\}\) without replacement to form the KDE, followed by interpolating with \(R=1000\) references as usual. The runtime using exact Gaussian cdf computation is depicted on the x-axis (“*”). Runtime was measured on a machine with a 2.8 GHz Core i5 processor.
Our software package, implemented in Python/Numba with a Scikit-learn (Pedregosa et al., 2011) compatible API, is available at [https://github.com/calvinmccarter/kditransform](https://github.com/calvinmccarter/kditransform).
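As a hypothetical usage sketch (the class name `KDITransformer` and its `alpha` argument are assumptions for illustration, not the package's confirmed API; consult the repository README for the actual identifiers):

```python
# Hypothetical usage sketch -- the class and argument names below are
# assumptions, not the package's confirmed API.
import numpy as np
from kditransform import KDITransformer  # assumed class name

rng = np.random.default_rng(0)
X_train = rng.lognormal(size=(1000, 5))
X_test = rng.lognormal(size=(200, 5))

tf = KDITransformer(alpha=1.0)       # assumed bandwidth-factor argument
Z_train = tf.fit_transform(X_train)  # scikit-learn-compatible API
Z_test = tf.transform(X_test)        # values mapped into [0, 1]
```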
### Application to correlation analysis
In correlation analysis, whereas Pearson's \(r\) (Pearson, 1895) is appropriate for measuring the strength of linear relationships, Spearman's rank-correlation coefficient \(\rho\) (Spearman, 1904) is useful for measuring the strength of non-linear yet monotonic relationships. Spearman's \(\rho\) may be computed by applying Pearson's correlation coefficient after first transforming the original data to quantiles or ranks \(R(x):=N\hat{F}_{N}(x)\). Thus, it is straightforward to extend Spearman's \(\rho\): compute the KD-integrals of each of the two variables, then apply Pearson's formula as before. Like Spearman's \(\rho\), ours is a particular case of a general correlation coefficient \(\Gamma\) (Kendall, 1948). For \(N\) samples of random variables \(X\) and \(Y\), the general correlation coefficient \(\Gamma\) may be written as
\[\Gamma=\frac{\sum_{i,j=1}^{N}a_{ij}b_{ij}}{\sqrt{\sum_{i,j=1}^{N}a_{ij}^{2} \sum_{i,j=1}^{N}b_{ij}^{2}}},\]
for \(a_{ij}:=r_{j}-r_{i},b_{ij}:=s_{j}-s_{i}\), where now \(r_{i}\) and \(s_{i}\) correspond to the KD-integrals of \(X_{i}\) and \(Y_{i}\), respectively. Because ranks are robust to the effect of outliers, Spearman's \(\rho\) is also useful as a robust measure of correlation; our proposed approach inherits this benefit, as will be shown in the experiments.
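The following sketch illustrates this correlation coefficient, again using the naive Gaussian form of the transformation with a simple min-max rescale in place of the exact boundary correction; `kdi` and `kdi_corr` are illustrative names:

```python
import numpy as np
from scipy.stats import norm, pearsonr

def kdi(x, alpha=1.0):
    # Naive Gaussian KDE-CDF at the sample points, min-max rescaled
    # (a simplification of the exact estimator in the text).
    h = alpha * x.std()
    u = norm.cdf((x[:, None] - x[None, :]) / h).mean(axis=1)
    return (u - u.min()) / (u.max() - u.min())

def kdi_corr(x, y, alpha=1.0):
    # Pearson's formula applied to KD-integrals.  As alpha -> 0 this
    # recovers Spearman's rho; large alpha approaches Pearson's r
    # (min-max scaling is affine, leaving Pearson's r unchanged).
    return pearsonr(kdi(x, alpha), kdi(y, alpha))[0]
```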
### Application to univariate clustering
Here we apply KD-integral transformation to the problem of univariate clustering (a.k.a. discretization). Our approach relies on the intuition that local minima and local maxima of the KDE will tend to correspond to cluster boundaries and cluster centroids, respectively. However, naive application of this idea would perform poorly because low-density regions will tend to have many isolated extrema, causing us to partition low-density regions into many separate clusters. When we apply the KD-integral transformation, we draw such points in low-density regions closer together, because the definite integrals between such points will tend to be small. Then, when we form a KDE on these transformed points and identify local extrema, we avoid partitioning such low-density regions into many separate clusters.
Our proposed approach thus comprises three steps (see the sketch after this list):
1. Compute \(T_{n}=\hat{F}_{N}^{\text{KDI}}(X_{n}),\ \forall n\in\{1,\ldots,N\}\).
2. Form the kernel density estimator \(\hat{f}_{h}(t)\) for \(T_{1},\ldots,T_{N}\). For this second KDE, select a vanishing bandwidth via Scott's Rule (\(h=N^{-0.2}\sigma_{X}\)) (Scott, 1992).
3. Identify the cluster boundaries from the local minima in \(\hat{f}_{h}(t)\), and inverse-KD-integral-transform the boundaries for \(T_{n}\) to obtain boundaries for clustering \(X_{n}\).
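A minimal sketch of these three steps follows, again substituting the naive Gaussian form with a min-max rescale for the exact estimator, and using a dense grid to locate local minima of the second KDE:

```python
import numpy as np
from scipy.stats import norm

def kdi_cluster_boundaries(x, alpha=1.0, grid_size=2048):
    # Step 1: KD-integral transform (naive Gaussian form, min-max rescaled).
    h1 = alpha * x.std()
    u = norm.cdf((x[:, None] - x[None, :]) / h1).mean(axis=1)
    t = (u - u.min()) / (u.max() - u.min())

    # Step 2: KDE on the transformed points with a vanishing (Scott-rule)
    # bandwidth, evaluated on a dense grid over [0, 1].
    h2 = len(t) ** -0.2 * t.std()
    grid = np.linspace(0.0, 1.0, grid_size)
    dens = norm.pdf((grid[:, None] - t[None, :]) / h2).mean(axis=1) / h2

    # Step 3: local minima of the density are boundaries in t-space;
    # invert the (monotone) transform by interpolating back to x-space.
    interior = np.flatnonzero(
        (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])) + 1
    order = np.argsort(t)
    return np.interp(grid[interior], t[order], x[order])

# Example: two Gaussian components; assign points to inter-boundary bins.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1, 0.5, 300), rng.normal(4, 1.0, 200)])
bounds = kdi_cluster_boundaries(x)
labels = np.searchsorted(bounds, x)
```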
## 3 Experiments
In this section, we evaluate our approach on supervised classification problems with real-world tabular datasets, on correlation analyses using simulated and real data, and on clustering of simulated univariate datasets with known ground-truth. We use the \(\text{poly}(|x|)e^{-|x|}\) kernel with polynomial order \(\kappa=4\) in our experiments.
### Feature preprocessing for supervised learning
#### 3.1.1 Classification with PCA and Gaussian Naive Bayes
We first replicate the experimental setup of (Raschka, 2014), analyzing the effect of feature preprocessing methods on a simple Naive Bayes classifier for the Wine dataset (Forina et al., 1988). In addition to min-max scaling, used in (Raschka, 2014), we also try quantile transformation and our proposed KD-integral transformation. In Figure 2, we illustrate the effect of min-max scaling, quantile transformation, and KD-integral transformation on the MalicAcid feature of this dataset. We see that the KD-integral transform in Figure 2(C) is concave for inputs \(>3\), compressing outliers together while preserving the bimodal shape of the feature distribution. It is also substantially smoother than the empirical quantile transform in Figure 2(B).
We examine the accuracy resulting from each of the preprocessing methods, followed by (as in (Raschka, 2014)) principal component analysis (PCA) with 2 components, followed by a Gaussian Naive Bayes classifier, evaluated via a 70-30 train-test split. For the KD-integral transformation, we show results for the default bandwidth factor of 1, for a bandwidth factor chosen via inner 30-fold cross-validation, and for a sweep of bandwidth factors between 0.1 and 10. The accuracy, averaged over 100 simulated train-test splits, is shown in Figure 3(A). We repeat the above experimental setup for 3 more popular tabular classification datasets: Iris (Fisher, 1936), Penguins (Gorman et al., 2014), and Hawks (Cannon et al., 2019), shown in Figure 3(B-D), respectively.
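The evaluation pipeline can be sketched with scikit-learn built-ins as below; this is an illustration of the protocol rather than the paper's exact harness, and the KD-integral transformer (see Section 2) slots into the same preprocessing position:

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, QuantileTransformer

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)  # 70-30 split

for prep in [MinMaxScaler(), QuantileTransformer(n_quantiles=100)]:
    # Preprocess, project to 2 principal components, then classify.
    clf = make_pipeline(prep, PCA(n_components=2), GaussianNB())
    clf.fit(X_tr, y_tr)
    print(type(prep).__name__, clf.score(X_te, y_te))
```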
Compared to min-max scaling and quantile transformation, our tuning-free proposal wins on Wine; on Iris, it is sandwiched between min-max scaling and quantile transformation; on Penguins, it wins over both; on Hawks, it ties with min-max scaling while quantile transformation struggles. We also compare to \(z\)-score scaling; our tuning-free method beats it on Wine and Iris, loses to it on Penguins, and ties with it on Hawks. Overall, among the tuning-free approaches, ours comes in first, second, second, and second, respectively. Our approach with tuning is always non-inferior to or better than min-max scaling and quantile transformation, but loses to \(z\)-score scaling on Penguins; note, however, that \(z\)-score scaling performs poorly on Wine and Iris.
#### 3.1.2 Supervised regression with linear regression
We compare the different methods on two standard regression problems, California Housing (Pace and Barry, 1997) and Abalone (Nash et al., 1995), with results depicted in Figure 4. Here we measure the root-mean-squared error of linear regression after preprocessing, with the cross-validation setup as before. KD-integral, both with default bandwidth and with cross-validated bandwidth, outperformed the other approaches. Even in CA Housing, where quantile transformation outperforms min-max scaling due to skewed feature distributions, there is evidently a benefit to maintaining a large enough bandwidth to preserve the distribution's shape. In both regression datasets, the performance improvement offered by KD-integral exceeds the difference in performance between linear and quantile transformations. We perform additional experiments on CA Housing, with data subsampling, showing that the chosen bandwidth remains constant regardless of sample size; see the Appendix (A.3).
Figure 2: Comparison of (A) min-max scaling, (B) quantile transformation, and (C) KD-integral transformation on the MalicAcid feature in the Wine dataset. The horizontal density plots depict the distribution of the original data, while the vertical density plots show the distribution after each preprocessing step.
Figure 4: Root mean-squared error (rMSE) on supervised regression problems for different feature preprocessing methods. Results are shown for California housing (A), and Abalone (B); lower rMSE is better. Performance is shown as a horizontal line for min-max scaling, quantile transformation, KD-integral with the default bandwidth factor \(\alpha=1\), and KD-integral with bandwidth selected via CV. In green, we show rMSE as a function of KD-integral bandwidth factor \(\alpha\).
Figure 3: Accuracy on supervised classification problems for different feature preprocessing methods. Results are shown for Wine (A), Iris (B), Penguins (C), and Hawks (D); higher accuracy is better. Accuracy is shown as a horizontal line for min-max scaling, \(z\)-score scaling, quantile transformation, KD-integral with the default bandwidth factor \(\alpha=1\), and KD-integral with bandwidth selected via CV. In green, we show accuracy as a function of the KD-integral bandwidth factor \(\alpha\). Error bars depict the standard deviation over 100 simulations.
#### 3.1.3 Linear classification on Small Data Benchmarks
We next compared preprocessing methods on a dataset-of-datasets benchmark comprising 142 tabular datasets, each with at least 50 samples. We replicated the experimental setup of the Small Data Benchmarks (Feldman, 2021) on the UCI++ dataset repository (Paulo et al., 2015). In (Feldman, 2021), the leading linear classifier was a support vector classifier (SVC) with min-max preprocessing, to which we appended SVC with quantile transformation and SVC with KD-integral transformation. As in (Feldman, 2021), for each preprocessing method, we optimized the regularization hyperparameter \(C\in\{10^{-4},10^{-3},\ldots,10^{2}\}\), evaluating each method via one-vs-rest-weighted ROC AUC, averaged over 4 stratified cross-validation folds. For the KD-integral transformation, we measured results both with the default bandwidth factor \(\alpha=1\) (so as to give equal hyperparameter optimization budgets to each approach) and with cross-validation grid-search to select \(C\) and \(\alpha\in\{3^{-1},3^{0},3^{1}\}\). Our results are summarized in Table 1. We see that KD-integral transformation provides a statistically significant improvement in average ROC AUC, with less variance, at the same tuning budget as the other approaches; tuning the bandwidth provides further improvement. We further analyze the performance of the different methods in terms of the number of samples in each dataset in Figure 5. Plotting the relative ROC AUC against the number of samples \(N\), we see that our proposed approach is particularly helpful in avoiding suboptimal performance on small-\(N\) datasets.
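The per-dataset protocol can be sketched as follows, with quantile transformation standing in for the preprocessing slot; the KD-integral transformer would occupy the same position, with \(\alpha\) added to the search grid:

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import QuantileTransformer
from sklearn.svm import SVC

pipe = Pipeline([("prep", QuantileTransformer(n_quantiles=50)),
                 ("svc", SVC(probability=True))])  # probabilities for AUC
grid = {"svc__C": [10.0 ** k for k in range(-4, 3)]}  # 1e-4 ... 1e2
search = GridSearchCV(pipe, grid,
                      scoring="roc_auc_ovr_weighted",
                      cv=StratifiedKFold(n_splits=4))
# search.fit(X, y); search.best_score_ is then the 4-fold mean of
# one-vs-rest-weighted ROC AUC for the best C.
```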
| **Method** | **Mean (StdDev) ROC AUC** | \(p\) **(vs KDI (\(\alpha=1\)))** | \(p\) **(vs KDI (\(\alpha=\)CV))** |
| --- | --- | --- | --- |
| Min-max | 0.864 (0.132) | \(1.6\times 10^{-3}\) | \(5.1\times 10^{-8}\) |
| Quantile | 0.866 (0.131) | \(7.9\times 10^{-3}\) | \(2.9\times 10^{-4}\) |
| KD-integral (\(\alpha=1\)) | 0.868 (0.129) | | \(3.7\times 10^{-6}\) |
| KD-integral (\(\alpha=\)CV) | 0.869 (0.129) | | |

Table 1: Performance of preprocessing methods on Small Data Benchmarks, as measured by area under the Receiver Operating Characteristic curve (ROC AUC). The columns display the mean and standard deviation of the ROC AUC, computed over the 142 datasets in the benchmark. We also show \(p\)-values from one-sided Wilcoxon signed-rank test comparisons.
Figure 5: Performance of preprocessing methods on Small Data Benchmarks, plotted against dataset size. For each dataset, we compute the relative ROC AUC for a given method as its own ROC AUC, divided by the maximum ROC AUC over all preprocessing methods for that dataset.
### Correlation analysis
To provide basic intuition, we first illustrate the different methods on the synthetic datasets shown in Figure 6, replicating the example from (Wikipedia, 2009). When two variables are monotonically but not linearly related, as in Figure 6(A), the Spearman correlation exceeds the Pearson correlation. In this case, our approach behaves similarly to Spearman's. When two variables have a noisy linear relationship, as in Figure 6(B), both the Pearson and Spearman coefficients indicate moderate correlation, and our approach interpolates between the two. When two variables have a linear relationship corrupted by outliers, as in Figure 6(C), the Pearson correlation is reduced by the outliers, while the Spearman correlation is robust to this effect. In this case, our approach again behaves similarly to Spearman's.
Next, we perform correlation analysis on the California housing dataset, containing district-level features such as average prices and number of bedrooms from the 1990 Census. Overall, the computed correlation coefficients are typically close, with only a few exceptions, as shown in Figure 7. Our approach's top two disagreements with Pearson's are on (AverageBedrooms, AverageRooms) and (MedianIncome, AverageRooms). Our approach's top two disagreements with Spearman's are on (AverageBedrooms, AverageRooms) and (Population, AverageBedrooms). We further observe that KD-integral-based correlations typically, but not always, lie between the Pearson and Spearman correlation coefficients.
Figure 6: Illustration of correlation analysis using Pearson’s \(r\), Spearman’s \(\rho\), and our proposed approach, for simulated data. Three scenarios are depicted: (A) nonlinear yet monotonic relationship, (B) noisy linear relationship, and (C) linear relationship corrupted by outliers.
Figure 7: Correlation coefficients derived from the California housing dataset. The pink line depicts the gap between the Spearman and KD-integral correlations, while the distance from the gray line shows how far each is from the Pearson correlation. Both (A) and (B) contain the same data, but the top disagreements between ours and Pearson's, and between ours and Spearman's, are highlighted in (A) and (B), respectively. The error bars in black depict the standard deviation of each correlation, computed from 100 bootstrap simulations.
We analyze the correlation disagreements for (AverageBedrooms, AverageRooms) in Figure 8. From the original data, it is apparent that AverageBedrooms and AverageRooms are correlated, whether we examine the full dataset or exclude outlier districts. This relationship was obscured by quantile transformation (and thus, by Spearman correlation analysis), whereas it is still noticeable after KD-integral transformation.
We repeat the analysis of disagreement for (MedianIncome, AverageRooms) and (Population, AverageBedrooms) in Figure 9. For the former disagreement, our approach agrees with quantile transformation-based analysis, identifying the typical positive dependence between median income and average rooms, by reducing the impact of districts with extremely high average rooms. For the latter disagreement, our approach agrees with original data-based analysis, identifying the negative relationship one would expect to observe between district population (and therefore density) and the average number of bedrooms.
Figure 8: Correlation analysis for (AverageBedrooms, AverageRooms) in the California housing dataset. The rows, from top to bottom, correspond to original data, quantile transformation, and KD-integral transformation. The full dataset is shown on the left, while outliers are excluded on the right. Each district is colored by its median house value. In parentheses above each row are the Pearson (0.85), Spearman (0.08), and KD-integral (0.31) estimated correlations between the variables.

### Clustering univariate data

In this experiment, we generate five separate synthetic univariate datasets, each sampled according to the following mixture distributions (a sampler sketch follows the list):
* \(0.55*\mathcal{N}(\mu=1,\sigma=0.75)+0.30*\mathcal{N}(\mu=4,\sigma=1)+0.15*\text{ Unif}[a=0,b=20]\)
* \(0.45*\mathcal{N}(\mu=1,\sigma=0.5)+0.45*\mathcal{N}(\mu=4,\sigma=1)+0.10*\text{ Unif}[a=0,b=20]\)
* \(0.67*\mathcal{N}(\mu=1,\sigma=0.5)+0.33*\mathcal{N}(\mu=4,\sigma=1)\)
* \(0.8*\text{Exp}(\lambda=1)+0.2*[10+\text{Exp}(\lambda=4)]\)
* \(0.5*\text{Exp}(\lambda=8)+0.5*[100-\text{Exp}(\lambda=5)]\).
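A sketch of the sampler for these mixtures is given below (shown for the first dataset; `sample_mixture` is an illustrative helper):

```python
import numpy as np

def sample_mixture(n, weights, samplers, rng):
    # Draw a component id per point with the given weights, then sample
    # each point from its component's sampler.
    ids = rng.choice(len(weights), size=n, p=weights)
    return np.array([samplers[i](rng) for i in ids]), ids

rng = np.random.default_rng(0)
# First dataset: 0.55*N(1, 0.75) + 0.30*N(4, 1) + 0.15*Unif[0, 20]
x, true_labels = sample_mixture(
    500,
    weights=[0.55, 0.30, 0.15],
    samplers=[lambda r: r.normal(1, 0.75),
              lambda r: r.normal(4, 1.0),
              lambda r: r.uniform(0, 20)],
    rng=rng)
# true_labels provides the ground-truth component ids used for the ARI.
```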
We compare our approach to six other clustering methods: KMeans with \(K\) chosen to maximize the Silhouette Coefficient (Rousseeuw, 1987), a Gaussian mixture model (GMM) with \(K\) chosen via the Bayesian information criterion (BIC), a Bayesian GMM with a Dirichlet process prior, Mean Shift clustering (Comaniciu & Meer, 2002), HDBSCAN (Campello et al., 2013; McInnes et al., 2017) (with min_cluster_size=5, cluster_selection_epsilon=0.5), and local minima of an adaptive-bandwidth KDE with adaptive sensitivity=0.5 (Abramson, 1982; Wang & Wang, 2007).
In Figure 10, we compare the ground-truth cluster identities of the data with the estimated cluster identities from each of the methods, for \(N=500\) samples. On the top row, we depict the true clusters, as well as the true number of mixture components. On each of the following rows, we show the distributions of estimated clusters for each of the methods. We see that our approach is the only method to correctly infer the true number of mixture components, and that it partitions the space similarly to the ground truth. GMM with BIC performed well on the first three datasets, but not on the last two. Meanwhile, KMeans performed well on the last three datasets but not on the first two. Bayesian GMM tended to slightly overestimate the number of components, while MeanShift and HDBSCAN (even after excluding the samples it classified as noise) tended to aggressively overestimate.
We include results for an ablation of our proposed approach, in which we define separate clusters at the local minima of the KDE of unpreprocessed inputs, rather than on the KDE of KD-integrals. We see that the ablated method fails on the first, second, and fourth datasets, where there is a large imbalance between the mixture weights of the cluster components. Using the adaptive-bandwidth KDE fixes the second and fourth datasets but not the first.
Figure 9: Correlation analysis for (MedianIncome, AverageRooms) (A) and (Population, AverageBedrooms) (B) in the California housing dataset. See Figure 8 for an explanation of the plot.
We repeated the above experimental setup, this time varying the number of samples \(N\in\{100,200,500,1000,2000,5000\}\) and performing 20 independent simulations for each setting of \(N\). For each simulation, we recorded whether the true number of components \(K\) and the estimated number \(\hat{K}\) matched, as well as the adjusted Rand index (ARI) between the ground-truth and estimated cluster labelings. The results, averaged over 20 simulations, are shown in Figure 11. Our approach is the only method to attain high accuracy across all settings when \(N>1000\). Alternative methods behave inconsistently across datasets or across varying sample sizes. For the first two datasets with small \(N\), KD-integral is outperformed by GMM and Bayesian GMM. However, GMM struggles on the first two datasets for large \(N\), and on the fourth and fifth datasets; meanwhile, Bayesian GMM struggles on all datasets for large \(N\). The only approach that comes close to KD-integral is using local minima of the adaptive KDE; however, it struggles with smaller sample sizes on the first and second datasets.
Figure 10: Clustering performance for five datasets (one dataset per column), generated with \(N=500\) samples. On each of the rows, for the different methods, we indicate the estimated number of components (green if correct, red if incorrect).
## 4 Related Work
As far as we know, the use of kernel density integrals to interpolate between min-max scaling and quantile transformation has not been previously proposed in the literature. Similarly, we are not aware of kernel density integrals being proposed to balance between the strengths of Pearson's \(r\) and Spearman's \(\rho\).
Our approach is procedurally similar to copula transformations in statistics and finance (Cherubini et al., 2004; Patton, 2012). But because we have a different goal, namely a generic feature transformation that is to be extrinsically optimized and evaluated, our proposed approach has a markedly different effect. Besides the small adjustment from \(\hat{F}^{\text{KDI,naive}}\) (which is procedurally identical to the kernel density copula transformation (Gourieroux et al., 2000)) to \(\hat{F}^{\text{KDI}}\), our proposal aims at something quite different from copula transforms. The copula literature ultimately aims at transforming to a particular reference distribution (e.g., uniform or Gaussian), with the KDE used in place of the empirical distribution merely for statistical efficiency, thus choosing a consistency-yielding bandwidth (Gourieroux et al., 2000; Fermanian and Scaillet, 2003). We depart from this choice, finding that a large bandwidth that preserves the shape of the input distribution is frequently optimal for settings (e.g. classification) where marginal distributions need not have a given parametric form.
Our proposed approach for univariate clustering is similar in spirit to various density-based clustering methods, including mean-shift clustering (Comaniciu and Meer, 2002), level-set trees (Schmidberger and Frank, 2005; Wang and Huang, 2009; Kent et al., 2013), and HDBSCAN (Campello et al., 2013). However, such methods tend to leave isolated points as singletons, while joining points in high-density regions into larger clusters. To our knowledge, our approach for compressing together such isolated points has not been previously considered.
The kernel density estimator (KDE) was previously proposed (Flores et al., 2019) in the context of discretization-based preprocessing for supervised learning. However, their method did not use kernel density integrals as a preprocessing step, but instead employed a supervised approach that, for a multiclass classification problem with \(C\) classes, constructed \(C\) different KDEs for each feature.
## 5 Discussion
**Practical Recommendations.** We recommend that if one must employ a feature preprocessor without any tuning or comparisons between different preprocessing methods, then KD-integral with a bandwidth factor of 1 is the best one-shot option. If instead tuning is possible, we suggest comparing the performance of KD-integral, with a log-space sweep of \(\alpha\in[0.1,10]\), in addition to the other popular preprocessing methods. Finally, we recommend that one always estimate our transformation with the order-4 polynomial-exponential kernel approximation of the Gaussian kernel, and represent it with \(R_{\max}=1000\) reference points.
Figure 11: Performance at clustering univariate features, for varying numbers of samples \(N\). The top row depicts the fraction of simulations in which the estimated number of mixture components equaled the true number. The bottom row depicts the average ARI between the ground-truth clustering and the estimated clusters. Each of the five columns corresponds to one of the five mixture distributions described in the text and depicted on the top row of histograms.
**Limitations.** Our approach for supervised preprocessing is limited by the fact that, even though the bandwidth factor is a continuous parameter, its selection requires costly hyperparameter tuning. Meanwhile, our proposed approach would benefit from further theoretical analysis, as well as investigation into extensions for multivariate clustering.
**Future Work.** In this paper, we have focused on per-feature transformation. But, especially in genomics, it is common to perform per-sample quantile normalization (Bolstad et al., 2003; Amaratunga and Cabrera, 2001), in which features for a single sample are mapped to quantiles computed across all features; this would benefit from further study. Also, future work could study whether kernel density integrals could be profitably used in place of vanilla quantiles outside of classic tabular machine learning problems. For example, quantile regression (Koenker and Bassett Jr, 1978) has recently found increasing use in conformal prediction (Romano et al., 2019; Liu et al., 2022), uncertainty quantification (Jeon et al., 2016), and reinforcement learning (Rowland et al., 2023).
## 6 Conclusions
In this paper, we proposed the use of the kernel density integral transformation as a nonlinear preprocessing step, both for supervised learning settings and for statistical data analysis. In a variety of experiments on simulated and real datasets, we demonstrated that our proposed approach is straightforward to use, requiring simple (or no) tuning to offer improved performance compared to previous approaches.